
Notes on the lecture

Logical Design of Digital Systems

Prof. Dr.-Ing. Axel Hunger
Dr.-Ing. Stefan Werner

UNIVERSITÄT DUISBURG-ESSEN

© Institute of Computer Engineering, Dr.-Ing. Stefan Werner, April 2007

WARNING: preliminary lecture notes

This text describes, in condensed form, the contents of the lecture "Logical Design of Digital Systems" as it has been given at the University of Duisburg-Essen since the winter semester 2004/2005 for students of the Bachelor courses in the ISE study programme. The predominantly international students have repeatedly expressed the wish for English lecture material. For this reason, I began working on an English lecture manuscript in the summer semester of 2006. The present text is a first version of this manuscript. Although the text has already been revised several times, the final critical review is still outstanding. Likewise, some figures have not yet been translated into English. The reader is therefore explicitly warned against an uncritical use of this manuscript. It is primarily intended to accompany the lecture and to complement one's own lecture notes, but by no means to replace them.

Duisburg, April 2007
Stefan Werner

1. INTRODUCTION

2. LOGIC DESIGN
   2.1 Elementary functions of Switching Algebra
   2.2 Minimization of Functions
   2.3 The Quine/McCluskey algorithm
   2.4 Cost functions
       2.4.1 Petrick's method
   2.5 Proceeding in circuit synthesis
   2.6 Elementary circuits
   2.7 Elementary combinatorial circuits for computation
       2.7.1 Half adder
       2.7.2 Full adder
       2.7.3 Serial-/Parallel-Adder
             2.7.3.1 Serial adder
             2.7.3.2 Parallel adder
   2.8 Elementary combinatorial circuits for data transmission
       2.8.1 Multiplexer
       2.8.2 Demultiplexer
       2.8.3 Buses
       2.8.4 Bidirectional Signal Traffic
             2.8.4.1 Wired Or
             2.8.4.2 Tri-State Technology
   2.9 Elementary combinatorial circuits for encoding / decoding
       2.9.1 Encoder
       2.9.2 Decoder
       2.9.3 Read only memory (ROM)
             2.9.3.1 Word-based addressing
             2.9.3.2 Bit-wise addressing
             2.9.3.3 Comparison of the types of addressing
             2.9.3.4 Address Decoding
             2.9.3.5 Comparative Overview
       2.9.4 Programmable Logic
             2.9.4.1 General structure
             2.9.4.2 Construction of the AND/OR-Matrix
             2.9.4.3 Types of Illustrations
             2.9.4.4 Programming Points
             2.9.4.5 PLD Structures
             2.9.4.6 Combinatorial PLD
             2.9.4.7 Logic Diagram
             2.9.4.8 Functional Block Diagram
             2.9.4.9 Logic-Circuit Symbols
             2.9.4.10 The Programming of the PLD
       2.9.5 Combinatorial PLD with Feedback
             2.9.5.1 Special Features of Feedback
             2.9.5.2 Functional Block Diagram

3. DESIGN OF SEQUENTIAL CIRCUITS
   3.1 State Machines
   3.2 Forms of Describing State Machines
       3.2.1 State machine tables
       3.2.2 State-Transition Diagram
       3.2.3 Timing Diagram
   3.3 State Machine Minimization
       3.3.1 Minimization according to Huffmann and Mealy
       3.3.2 The Moore Algorithm
       3.3.3 Algorithmic Formulation of the Minimization by Moore
   3.4 Conversion of State Machines
   3.5 Basic sequential circuits for data-processing
       3.5.1 Counters
             3.5.1.1 Design of synchronous counters
             3.5.1.2 Design of asynchronous counters
       3.5.2 Shift registers
   3.6 Basic sequential circuits for program-processing
       3.6.1.1 Control unit in ROM-configuration
       3.6.1.2 Control unit in PLA-configuration

4. TESTING DIGITAL CIRCUITS
   4.1 Principles of testing
   4.2 Test as a process
       4.2.1 Overview on test mechanisms
       4.2.2 Important CAD-tools for test generation
             4.2.2.1 Effort-estimation
             4.2.2.2 Application of test-tools in integrated systems
   4.3 Faults and fault-models
       4.3.1 The Stuck-at fault-model
       4.3.2 The Stuck-open/Stuck-on fault-model
       4.3.3 Wiring faults
       4.3.4 Connection between circuit-model and fault-model
   4.4 Test generation
       4.4.1 Boolean Difference
       4.4.2 Path-sensitization

5. FURTHER READINGS


1. Introduction

This introductory chapter is intended to give the student a brief overview of the prerequisites needed to follow the subjects covered in this course. For that purpose, axioms, examples and methods for circuit description & simplification are presented in the following subchapters, without claiming completeness. The student should rather use them as an overview and a brief review of topics that ought to be known already. The subchapters can serve as a checklist showing which chapters from "Fundamentals of Computer Engineering 1" should be reviewed or studied with additional literature.

This course covers a subtopic of the development process of digital systems: the logical design. Physical or electrical design deals e.g. with the dimensioning of transistors or layouts for printed circuit boards, whereas logical design focuses on the functional aspects of digital systems.

The following figure gives an overview of the abstraction layers within the illustration of digital circuits and systems.

Figure 1.1: Abstraction layers


The level of abstraction of the models in figure 1.1 rises, starting from the structure-models (technology, transistor and gate). The most important properties of those circuit-models are summarized in the following definitions.

Technology-Layer

The basis here is the representation of a circuit as a network of equivalent circuits of the single electronic devices, whose physical properties can be controlled by parameters. For these devices, the different voltages & currents can be calculated, but there are economical limits, given by the immense effort of the extremely exact measurements needed for construction (the limit for the number of single electronic devices inside a circuit lies at about 100). Function & behaviour are normally given as partial differential equations.

Transistor-Layer

The transistor layer is used for a structured description based on circuit diagrams. The functional description occurs via differential equations & characteristic diagrams and via the definition of voltage and current levels. Failures in the modelling process of high-density integrated circuits on the transistor layer mostly happen because of the immense effort and missing documentation.

The models of the following layers create the basis for logical design: the gate layer, the functional layer, the register-transfer layer and the automata layer. They establish the essence of this lecture.

Gate-Layer

The gate-layer is the lowest of the logical design layers and simultaneously the best layer for testing the logical behaviour of a circuit. Functional description is made via Boolean equations, whereas the structural representation uses the so called schematics.

The gate representation is adapted to the behaviour of logical bipolar circuits and permits simulating both their physical and logical behaviour (combination of input variables, timing, etc.). The unidirectional signal flow also reflects the behaviour of real bipolar technology.

For MOS circuits, a gate representation might sometimes not be sufficient. However, by considering the specific MOS transistor properties, it is possible to create equivalent gate models that approximate the real behaviour well. Often, groups of MOS transistors are merged into so called "complex logic gates", which can then be described by gate symbols. Those gates no longer have a close relation to the transmission lines and logic inside the MOS circuit; they are only used for the description of operation. Furthermore, it is possible to model circuits in a mixed representation, by gates & switches, to match reality as closely as possible.

Functional-Layer

Because of the economic advantages of a functional description, several methods to represent circuit behaviour by a function have been developed:

• Functional blocks (e.g. register, ALU, etc.)
• Truth tables (for combinatorial circuits)
• Program modules in a high-level language, which describe the behaviour of the blocks
• Graphical illustration (e.g. Petri nets)
• further methods, as well as mixtures of these

These possibilities are used if the appropriate functions

• are not available as structured models, or
• exactly that part of the circuit shall not be considered in detail, while its existence is still important for the operation of other elements or even the whole circuit.

Register-transfer-Layer

The components of the register-transfer layer (ger.: Register-Transfer-Ebene, RTE) are registers, networks, memories and n-bit wide connections. The internal structure of these elements is not discussed; only their behaviour has to be defined. Accordingly, the corresponding data unit is not the single bit but the k-bit word, where k can take different values at the different connections. Normally a hierarchization occurs on the RTE: e.g. 8-bit adders and multiplexers from the RTE are used to assemble fast 32-bit adders, which then are components of the RTE themselves.

Automata-Layer

Delay elements and feedback are used to create sequential circuits, which can be used e.g. as RAM modules. The functional description results from the transition and output functions of the automata; these terms still leave some tolerance for modelling and have to be concretised for the individual case. The structural description occurs via automata models.

System-Layer

On the system layer, the general characteristics of an electronic system are described by semi-autonomous modules, e.g. processors, interface units (I/O), memory (e.g. ROM) etc. These modules are characterized by their functionality (in case of a processor, by the instruction set), by protocols or by stochastic processes. The typical language elements of this layer are object-oriented.


The layers mentioned above do not exhaust all possibilities for abstraction and description; moreover, they can be combined in an application-oriented way. From the view of computer architecture, an all-purpose-computer layer & an operating-system layer could be added as well, or the system layer could be differentiated to a greater extent. The presentation above was shaped by the focus of this lecture; that is why the layers for logical design are described in more detail than the others.


2. Logic Design

The task of logic design is the conversion of the behaviour of combinatorial circuits into structural descriptions on the gate layer. Often the starting point is a description by means of truth tables or Boolean equations, as used in the lectures of "Fundamentals of Computer Engineering 1".

This chapter gives, in intensively compact form, an overview of the prerequisite basic principles. The axioms, examples and procedures introduced in the following subchapters do not claim completeness and should be accompanied by additional literature, or by going through the content of the lecture "Fundamentals of Computer Engineering 1" if necessary.

At the end of the chapter, the spectrum of already known principles for minimization is expanded by the Quine/McCluskey algorithm.

2.1 Elementary functions of Switching Algebra

In Switching Algebra, the possible input and output values are defined as a binary (bi-valued) set:

B= {not true, true} i.e. B= {0,1}.

In total, 16 functions can be defined for two binary variables. Every function can be illustrated graphically by a switching symbol. However, the only elementary functions of switching algebra are:

- AND-operation (conjunction, UND); symbols: ·, ∧, &

  Truth (function) table:

  x y | x·y
  0 0 |  0
  0 1 |  0
  1 0 |  0
  1 1 |  1

- OR-operation (disjunction, ODER); symbols: +, ∨

  Truth table:

  x y | x+y
  0 0 |  0
  0 1 |  1
  1 0 |  1
  1 1 |  1

- Negation (complement, NOT); symbols: x̄, x′, ¬x

  Truth table:

  x | x̄
  0 | 1
  1 | 0

(The corresponding gate symbols are & for AND, ≥1 for OR and 1 for NOT.)
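As a quick cross-check, here is a minimal Python sketch (my own illustration, not part of the notes) that reproduces the three truth tables above:

def AND(x, y):
    return x & y        # conjunction x·y

def OR(x, y):
    return x | y        # disjunction x+y

def NOT(x):
    return 1 - x        # complement

print("x y | x·y  x+y  NOT(x)")
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} {y} |  {AND(x, y)}    {OR(x, y)}    {NOT(x)}")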


All other functions can be described using these 3 functions. If one wants to realize these functions technically using circuits, 2 discrete signal values (e.g. current1/current2 or voltage1/voltage2) have to be assigned to the logic values. If one describes the higher signal value as (H)igh and the lower signal value as (L)ow, two possibilities of assigning the signal values to the logic values result:

                  log. 0   log. 1
positive logic:     L        H
negative logic:     H        L

The use of the terms "positive/negative logic" says something about the agreed assignment of values, but nothing about the sign of the electrical parameters. The signal levels can be

- both positive (e.g. TTL: L = 0.2 V, H = 3.5 V)

- both negative (e.g. ECL: L = -1.8 V, H = -0.8 V)

- one positive, the other one negative

2.2 Minimization of Functions

Complex logic expressions, and therefore also their technical realization via logic gates, can often be minimized. For this, three procedures can be of use:

1. Algebraic (mathematical) simplification by application of Boolean Algebra

2. Graphical simplification (Karnaugh-Veitch-(KV-)Diagram)

3. Algorithmic simplification (e.g. Quine-McCluskey algorithm)

for 1: Basic terms for algebraic simplification

Canonical forms


Every switching expression can be written down in a canonical form. This is often useful during development. There are two canonical forms: the disjunctive normal form (DNF) and the conjunctive normal form (CNF). To understand these forms, we first have to explain literals, minterms and maxterms.

Literals: A literal is either a variable or the complement of a variable.

Minterm: A minterm over n variables is a logical product (conjunction) of exactly n literals with no repeated variables. With n variables we thus have 2^n possible minterms. Example (n=3):

Ā·B̄·C̄, Ā·B̄·C, Ā·B·C̄, Ā·B·C, A·B̄·C̄, A·B̄·C, A·B·C̄, A·B·C

Maxterm: A maxterm over n variables is a logical sum (disjunction) of exactly n literals with no repeated variables. With n variables we thus have 2^n possible maxterms. Example (n=2):

A+B, A+B̄, Ā+B, Ā+B̄

Sum-of-products (SOP)

The sum-of-products is a regular form consisting of a sum of m terms, where every term is a product:

fSOP = A·B + A·B̄·C + B·C̄

Product-of-sums (POS)

The product-of-sums is a regular form consisting of a product of m terms, where every term is a sum:

fPOS = (A + C̄)·(Ā + B + C)·(B̄ + C)

Disjunctive Normal Form (DNF)

The DNF is a sum of products (SOP) consisting only of Minterms. Therefore every variable must appear exactly once in each product.

fDNF = Ā·B̄·C + Ā·B·C̄ + A·B̄·C̄ + A·B·C̄ + A·B·C


Conjunctive Normal Form (CNF)

The CNF is the product of sums (POS) only containing Maxterms. Therefore every variable must appear exactly once in each sum.

fCNF = (A + B + C)·(A + B̄ + C̄)·(Ā + B + C̄)

De Morgan: It is true that: ¬(a + b) = ā·b̄, i.e. ¬(a·b) = ā + b̄

Shannon extended this rule to n variables.
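A short Python sketch (the helper names are my own) makes the construction of both normal forms concrete: every row of the truth table with f = 1 contributes a minterm, every row with f = 0 a maxterm.

from itertools import product

def normal_forms(f, names=("A", "B", "C")):
    bar = "\u0304"                        # combining overline for negation
    minterms, maxterms = [], []
    for row in product((0, 1), repeat=len(names)):
        if f(*row):                       # f = 1 -> minterm (product)
            minterms.append("·".join(n if v else n + bar
                                     for n, v in zip(names, row)))
        else:                             # f = 0 -> maxterm (sum)
            maxterms.append("(" + "+".join(n + bar if v else n
                                           for n, v in zip(names, row)) + ")")
    return " + ".join(minterms), "·".join(maxterms)

f = lambda A, B, C: (A and B) or (B and C)    # example function
dnf, cnf = normal_forms(f)
print("DNF:", dnf)
print("CNF:", cnf)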

for 2: Graphical Minimization

A KV diagram is an arrangement of fields. Every field is assigned exactly one minterm via the indices (input variables!) given at the edge of the diagram. For n input variables, 2^n fields result in this way. The indexing must be such that every field differs in only one variable from the fields lying next to it.

(Figures: KV-Diagram with 3 variables, KV-Diagram with 4 variables, KV-Diagram with 5 variables; each field is labelled with its minterm.)

In a KV map:

1 field represents one minterm (n variables)

2 adjacent fields represent a term with n-1 variables

4 adjacent fields represent a term with n-2 variables

...

2^n fields represent the 1-function (e.g. f(A,B,C) = 1)


Minimization Procedure:

All fields which represent a "1"-minterm of the function (complete DNF) are marked.

Then as many adjacent marked fields as possible are combined, in such a way that they can be described by a minimum number of input variables (groups of 1, 2, 4, 8, ... fields).

The resulting products are OR-combined.

Should the task allow a minterm to be 0 or 1 (don't care), a field marked in this way can be used as if it were "1"; this is, however, not mandatory.

Example:

f2 = a·b + a·c̄·d + a·b̄·d + ā·b·c + ā·c·d + a·c·d + b̄·c·d + b·c·d̄ + b·c·d

The expression can be entered directly into the KV diagram, when one takes into consideration that the term a·b covers 4 fields (the intersection of all "a"- and "b"-fields) and each of the other terms covers 2 fields:

(Figures: KV diagrams showing the entries for a·b, b·c·d, etc., and the completely filled map of f2.)

Reading the minimal covering from the map gives:

f2 = ab + bc + ad + cd = a(b+d) + c(b+d) = (a+c)·(b+d)
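A quick brute-force check in Python (my sketch; the complement pattern of f2 is the one given above) confirms that the nine-term expression equals the minimized form (a+c)·(b+d):

from itertools import product

def f2_original(a, b, c, d):
    return ((a and b) or (a and not c and d) or (a and not b and d)
            or (not a and b and c) or (not a and c and d)
            or (a and c and d) or (not b and c and d)
            or (b and c and not d) or (b and c and d))

def f2_minimized(a, b, c, d):
    return (a or c) and (b or d)

assert all(bool(f2_original(*v)) == bool(f2_minimized(*v))
           for v in product((False, True), repeat=4))
print("f2 = (a+c)·(b+d) verified on all 16 assignments")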

for 3: Algorithmic simplification according to Quine/McCluskey

The Quine/McCluskey procedure is an algorithm performed in two steps. It will be described in detail in the next section.


2.3 The Quine / McCluskey algorithm

The suitability of the minimization methods presented so far decreases with the complexity of the circuit under consideration. The usage of the KV diagram, for example, gets complex for a number of variables n > 4, and for n > 6 the geometric construction gets too complicated. In this chapter an algorithmic minimization method is presented that is suitable for computer-based execution and has no limitation on the number of variables. This method was first introduced by W. Quine in 1952 and enhanced later by E. McCluskey (1956).

The Quine / McCluskey algorithm is split-up into two major parts:

1. Evaluation of prime terms (Prime implicants)

2. Evaluation of the minimum number (DF sum) of prime terms.

1. step: Evaluation of prime terms (Prime implicants)

Definition: A term p of a logic function f is called prime term if it cannot be

combined with another term of f that differs from p.

or: Prime term p of f is a subdomain of f and all variables are needed.

The first task thus is to find pairs of terms that differ in only one variable, starting from the DNF.

For that purpose the following scheme will be used consecutively.

Consecutive procedure: (algorithmic description)

1.1 Establishment of the DNF; list of minterms

1.2 Pairwise combination of terms, as far as possible, in lists

1.3 Repetition of 1.2 with the result in the next list, until

1.4 no further minimization is possible.

At this: noting down the source of the combined terms.

The procedure will now be presented on the following example. Origin is the equation f1 in its DNF.


f1(A,B,C,D) = Ā·B̄·C̄·D + Ā·B̄·C·D̄ + Ā·B·C·D + Ā·B·C·D̄ + A·B·C̄·D + A·B·C·D̄

Every term in f1 represents a minterm of the function f1. To every single one of these minterms a weight can now be assigned, which depends on the number of non-negated variables:

m1 = Ā·B̄·C̄·D    Weight = 1
m2 = Ā·B̄·C·D̄    Weight = 1
m3 = Ā·B·C·D     Weight = 3
m4 = Ā·B·C·D̄     Weight = 2
m5 = A·B·C̄·D     Weight = 3
m6 = A·B·C·D̄     Weight = 3

Now we can construct the following table (minterm table), in which the minterms are organized in ascending weight.

Weight | Nr | A B C D | Minterm
  1    | 1  | 0 0 0 I | m1
  1    | 2  | 0 0 I 0 | m2
  2    | 3  | 0 I I 0 | m4
  3    | 4  | 0 I I I | m3
  3    | 5  | I I 0 I | m5
  3    | 6  | I I I 0 | m6

The task of combining minterms which differ in only one variable is comparable to the combination of two neighboring fields in a KV diagram. In the Quine/McCluskey algorithm, the partners for a term with weight G are sought in the groups with weight G+1 and G-1. Here, the differing variable occurs in one term in negated form and in the other term in non-negated form. Starting from the minterm table we can construct the first minimization table, where we have to note the origin of every new term.

origin | A B C D
m1     | 0 0 0 I
m2,m4  | 0 - I 0
m3,m4  | 0 I I -
m5     | I I 0 I
m4,m6  | - I I 0


This step has to be repeated until no more combining is possible. Next, the terms marked with a "-" that differ in only one further variable are combined. Terms that have been combined are checked off; identical terms may arise, in which case one of them can be deleted. In comparison to the KV map, this combines all neighboring 4-fields, and in the next step all neighboring 2^n fields. If no more combining is possible, all non-marked, non-deleted rows give the prime terms of the function.

In this example, the first minimization table cannot be combined any further and thus directly gives the primeterms of the function f1.

origin | A B C D | Primeterm
m1     | 0 0 0 I | p1
m2,m4  | 0 - I 0 | p2
m3,m4  | 0 I I - | p3
m5     | I I 0 I | p4
m4,m6  | - I I 0 | p5

f1 = p1 + p2 + p3 + p4 + p5
   = Ā·B̄·C̄·D + Ā·C·D̄ + Ā·B·C + A·B·C̄·D + B·C·D̄

With this, step 1 of the Quine/McCluskey algorithm is finished.
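Step 1 lends itself to computer-based execution. The following Python sketch (the function names are mine) combines terms exactly as described and reproduces the prime terms of f1; terms are strings over {'0', 'I' -> '1', '-'}:

def combine(a, b):
    """Return the combined term if a and b differ in exactly one
    position (and not at a '-' position), else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms):
    terms, primes = set(minterms), set()
    while terms:
        combined, used = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c:
                    combined.add(c)
                    used.update({a, b})
        primes |= terms - used      # uncombined terms are prime
        terms = combined
    return primes

# f1(A,B,C,D) = m1 + m2 + m3 + m4 + m5 + m6 from the text:
print(prime_implicants(["0001", "0010", "0111", "0110", "1101", "1110"]))
# -> {'0001', '0-10', '011-', '1101', '-110'}  (set order may vary)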

2. step: Determination of a minimum set (sum DF) of Primeterms/ prime implicants

The intention of the second step is the reduction of the number of primeterms. The starting point is the prime implicant chart: the columns of the table correspond to the minterms, the rows to the primeterms. The allocation of a minterm to a primeterm is indicated by an "x" in the matrix.

In the following, the prime implicant chart of the function f1 is shown. For this example the method terminates here, since the primeterms cannot be combined any further.

        m1  m2  m3  m4  m5  m6
p1       x
p2           x       x
p3               x   x
p4                       x
p5                   x       x


The second step occurs by means of the following scheme:

2.1) Determination of the essential primeterms: only one x per column (p ≡ m) means the minterm is associated to exactly one primeterm.

2.2) Labeling of the essential rows ⇒ partial solutions

2.3) a) Cancellation of all columns with an x in an essential row.
     b) Cancellation of arisen empty rows.

Until now:
• explicit simplification (no choice)
• determination of the essential primeterms
• determination of a remainder matrix by means of canceling columns and rows.
The method can terminate here!

In general: choice of primeterms to cover the remaining minterms in the remainder matrix by:

2.4) In case of identical rows: Choice of a row, canceling the remaining identical rows.

2.5) Canceling dominated rows

2.6) Canceling dominating columns

The search for the essential primeterms of the function is now being clarified on the example of function f2.

        m1  m2  m3  m4  m5  m6
p1       x       x
p2           x   x
p3               x   x
p4                       x   x
p5       x       x       x   x
p6                           x

2.1 Determination of the essential primeterms of the function

Find the columns with only one “x” and mark the row with a “*”. These are the so called essential primeterms of the function. Essential primeterms are an essential part of the solution, as the assigned minterms don’t belong to any of the other primeterms.


m2 is covered only by p2 => p2 is an essential primeterm. Equally, m4 is covered only by p3; thus p3 is an essential primeterm, too.

2.2) Marking of essential rows ⇒ partial solutions

        m1  m2  m3  m4  m5  m6
p1       x       x
p2 *         x   x
p3 *             x   x
p4                       x   x
p5       x       x       x   x
p6                           x

2.3) a) Canceling all columns with “x” in essential row.

This is done, because these minterms are already covered by the essential primeterm.


        m1  m2  m3  m4  m5  m6
p1       x       x
p2 *         x   x
p3 *             x   x
p4                       x   x
p5       x       x       x   x
p6                           x

Canceling the columns m2, m3 and m4 gives:

        m1  m5  m6
p1       x
p2 *
p3 *
p4           x   x
p5       x   x   x
p6               x

2.3) b) Canceling created empty rows.

Here, these are the rows of the essential primeterms. With this we get the following reduced prime implicant chart:

        m1  m5  m6
p1       x
p4           x   x
p5       x   x   x
p6               x

Up to now, the algorithm was deterministic and issued the essential primeterms and the remainder matrix without any choice. It can terminate now. In general, however, this is not the case, and a choice must be made from the remaining primeterms, which must cover the remaining minterms as well. This can be done by:

2.4) In case of identical rows: Choice of a row, canceling the remaining identical rows

2.5) Canceling dominated rows

2.6) Canceling dominating columns


For this example it holds:

        m1  m5  m6
p1       x
p4           x   x
p5       x   x   x
p6               x

p4 dominates p6; p5 dominates p1 and p4 (row dominance)

=> dominating primeterm: p5

So the minimized function is: f2 = p2 + p3 + p5

Example for column dominance based on 2.6):

        m1  m2
p1       x   x
p2           x

Column m2 dominates column m1 (m1 ⊂ m2), i.e. p2 is omitted and p1 remains.

In general it holds for the solution:

Solution: Σmin pi: Disjunction of all essential primeterms (from 2.1 - 2.3) and one choice (from 2.4 - 2.6)

The obtained DF is an expression of minimum length. The minimization offers choices in the steps 2.4-2.6. These choices can be supported via cost function by means of:

• Minimization of the terms => Minimization of gates

• Minimization of literals (Variables) => Minimization of transmission lines
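The following Python sketch (the chart data is taken from the f2 example above; the representation is mine) carries out steps 2.1-2.3 and the row-dominance rule 2.5:

chart = {
    "p1": {"m1", "m3"},
    "p2": {"m2", "m3"},
    "p3": {"m3", "m4"},
    "p4": {"m5", "m6"},
    "p5": {"m1", "m3", "m5", "m6"},
    "p6": {"m6"},
}
minterms = set().union(*chart.values())

# 2.1/2.2: a column with a single cross marks an essential primeterm
essential = set()
for m in sorted(minterms):
    covering = [p for p, cov in chart.items() if m in cov]
    if len(covering) == 1:
        essential.add(covering[0])

# 2.3: cancel covered columns and the emptied rows -> remainder matrix
covered = set().union(*(chart[p] for p in essential))
remainder = {p: cov - covered for p, cov in chart.items()
             if p not in essential and cov - covered}

# 2.5: cancel dominated rows (cover is a strict subset of another row's)
dominated = {p for p in remainder for q in remainder
             if p != q and remainder[p] < remainder[q]}

print("essential:", sorted(essential))                 # -> ['p2', 'p3']
print("choice:", sorted(set(remainder) - dominated))   # -> ['p5']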


2.4 Cost functions

Circuits are normally subject to specific objectives, which are written down in a specification. According to these objectives, designs can be optimized and choices can be controlled. Objectives for optimization could be e.g.: minimum effort for realization, maximum speed, minimum power consumption, or easy testability. The formulation of a cost function is therefore often unavoidable. There exist, however, multiple cost functions, and which one is best to choose depends on the target technology.

For the realization of multi-layered functions there are different possibilities, e.g.:

1. The cost function of the lines (KL) is to be minimized: KL =! min

2. The cost function of the gates (KG) is to be minimized: KG =! min

It holds:

G := number of gates in the circuit
L := number of transmission lines in the circuit

e.g. KL = L, KG = G and, combined,

KG,L = KG + KL =! min

Due to the fact that gates are normally a lot more expensive than transmission lines, a common choice is:

KG,L = 10³·G + L

The task thus is primarily to obtain KG and KL. In advance, however, the totality of valuable solutions has to be found. This totality of all solutions, together with a weight for the cost function, can be found with the help of Petrick's method.


2.4.1 Petrick’s method

The Quine/McCluskey algorithm offers choices for the selection of primeterms in steps 2.4-2.6. Petrick developed an algebraic method for this purpose in 1956. The Petrick expression used for this is a propositional-logic formulation that leads to the terms that have to be chosen, or might be chosen, for a solution. The method uses a matrix-based description, as with the Quine/McCluskey algorithm.

Origin is the prime implicant chart of a function f. Assume all essential primeterms are already found.

 f     m1  m2  m3
 p1     x       x
 p2     x   x
 p3         x   x

For every primeterm pi, Petrick defines a Boolean variable ei, for which holds:

ei := I if pi covers the minterm mj, 0 if pi does not cover the minterm mj.

Petrick's method now indicates the alternative choices of covering primeterms pi for every minterm mj. For the example it thus holds:

PAm1 = e1 + e2
PAm2 = e2 + e3
PAm3 = e1 + e3

As every minterm has to be covered, the Petrick expressions of the minterms are combined by conjunction. For the example it holds:

PA = (e1 + e2)·(e2 + e3)·(e1 + e3)
   = (e1e2 + e1e3 + e2e2 + e2e3)·(e1 + e3)
   = e1e2e1 + e1e3e1 + e2e2e1 + e2e3e1 + e1e2e3 + e1e3e3 + e2e2e3 + e2e3e3
   = e1e2 + e1e3 + e1e2 + e1e2e3 + e1e2e3 + e1e3 + e2e3 + e2e3
     (1)    (2)    (1)    (3)      (3)      (2)    (4)    (4)
   = e1e2 + e1e3 + e1e2e3 + e2e3


With this, there exist 4 solutions in total (this number was not known beforehand):

PA = p1p2 + p1p3 + p2p3 + p1p2p3
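Petrick's expansion is easy to automate. In the following sketch (the set representation is mine), each product of variables ei is a frozenset, so the idempotence rule ei·ei = ei is applied automatically while the conjunction is expanded:

def petrick(column_alternatives):
    solutions = {frozenset()}
    for alternatives in column_alternatives:
        solutions = {s | {e} for s in solutions for e in alternatives}
    return solutions

# PA = (e1 + e2)·(e2 + e3)·(e1 + e3) from the example:
result = petrick([("e1", "e2"), ("e2", "e3"), ("e1", "e3")])
for sol in sorted(result, key=lambda s: (len(s), sorted(s))):
    print("·".join(sorted(sol)))
# -> e1·e2, e1·e3, e2·e3 and e1·e2·e3: the 4 solutions found above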

These solutions now have to be weighted for the cost function. For this, consider the following example.

Example: Given a function f of 4 variables with the prime implicant chart below.

Objective of minimization — minimum length:

L(f) = Σi (Literale(pi) + 1)

where the "+1" accounts for the output of pi (only one output is considered here).

The reference number of mi equals its binary value: 0 = 0000, 2 = 00I0, etc.

 pk | m0 | m2 | m4 | m11 | m12 | m14 | m15 | ck
 p1 |    |    |    |     |  x  |     |  x  |  2
 p2 |    |  x |    |     |  x  |     |     |  2
 p3 |    |    |    |     |     |  x  |  x  |  3
 p4 |    |    |    |  x  |     |     |     |  3
 p5 |  x |    |  x |     |     |     |     |  3
 p6 |  x |  x |    |     |     |     |     |  3
 p7 |    |    |  x |     |     |  x  |     |  3

ck: number of (input) variables in pk

The Petrick expression can be computed (practice at home) to:

PA = (p5 + p6)·(p2 + p6)·(p5 + p7)·p4·(p1 + p2)·(p3 + p7)·(p1 + p3) =! I

PA = p1p4p6p7 + p2p3p4p5 + p1p3p4p5p6 + p2p3p4p5p7 + p1p2p4p5p7 =! I

⇒ There exist 5 solutions, namely (interpretation: KG = number of primeterms, KL = Σ ck):

L1: p1 + p4 + p6 + p7        KL1 = 11   KG1 = 4
L2: p2 + p3 + p4 + p5        KL2 = 11   KG2 = 4
L3: p1 + p3 + p4 + p5 + p6   KL3 = 14   KG3 = 5
L4: p2 + p3 + p4 + p5 + p7   KL4 = 14   KG4 = 5
L5: p1 + p2 + p4 + p5 + p7   KL5 = 13   KG5 = 5

The solutions L1 and L2 are therefore the best choice, and they are equivalent.
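A small script (the ck values are taken from the chart above) reproduces this weighting:

c = {"p1": 2, "p2": 2, "p3": 3, "p4": 3, "p5": 3, "p6": 3, "p7": 3}
solutions = {
    "L1": ("p1", "p4", "p6", "p7"),
    "L2": ("p2", "p3", "p4", "p5"),
    "L3": ("p1", "p3", "p4", "p5", "p6"),
    "L4": ("p2", "p3", "p4", "p5", "p7"),
    "L5": ("p1", "p2", "p4", "p5", "p7"),
}
for name, sol in solutions.items():
    print(name, "K_L =", sum(c[p] for p in sol), " K_G =", len(sol))
# L1 and L2 are the cheapest with K_L = 11 and K_G = 4.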


2.5 Proceeding in circuit synthesis

In circuit synthesis (combinatorial circuit) the following scheme can be used:

1. Evaluate the number of input- & output-variables from the specification of the problem.

2. Describe the relations between the inputs & outputs of the circuitry. => Set up the truth table.

3. From the truth table, derive the CNF or DNF and simplify: set up a function, a KV map, or use the Quine/McCluskey algorithm; then evaluate the minimization result (optimization), e.g. with a cost function.

4. If necessary, transform the circuit to NAND/NAND or NOR/NOR structure, respectively.

5. Draw the circuit.

2.6 Elementary circuits

In general, two types of logical (digital) circuits can be distinguished (compare Figure 2.1):

• Combinatorial circuits

• Sequential circuits (Schaltwerke)

While combinatorial circuits are composed only of logic gates, one key characteristic of sequential circuits is feedback. Because of feedback, an output signal no longer depends only on the input signals, but also on the internal state of the circuit, which is fed back to the circuit's inputs.

Combinatorial circuits are further subdivided by the number of gate layers (see below).¹

Sequential circuits, however, are specified in more detail by the attribute synchronous or asynchronous. Transitions in synchronous sequential circuits are controlled by one central master clock. In contrast, asynchronous circuits don't follow such a master clock; their flip-flops are controlled by signals inside the sequential circuit, or signal propagation times are used for quasi-storage of states.

¹ The examples of combinatorial circuits given in the figure are explained in one of the following chapters. Their meaning is not necessary for the further understanding of this introductory chapter.


Figure 2.1: subdivision of logic circuits

2.7 Elementary combinatorial circuits for computation

For internal computation, often basic arithmetic operations like counting, adding or complementing are needed. As it is not very meaningful to recreate these basic operations for every new digital circuit, standard circuits have been developed for this purpose, which are available as ready-to-use devices. These devices will be presented here in short.

2.7.1 Half adder

Adders are logic circuits for addition of two dual numbers. As all types of calculations can be reduced to additions, the adder is the basic circuit for every arithmetic operation.

The addition of two single-digit dual numbers can lead to one of the following results:

0 + 0 = 0
0 + I = I
I + 0 = I
I + I = I0

The addition I + I gives the sum 0 and a carry (ger.: Übertrag) Ü = I. The adder's truth table gives a value for both sum and carry for every addition. It therefore looks as follows:


A B | S C
0 0 | 0 0
0 1 | 1 0
1 0 | 1 0
1 1 | 0 1

This circuit is called half adder, because it is limited to the addition of two single-digit dual numbers. The according circuit symbol is shown in the following figure.

Figure 2.2:

From the truth table, the according function can be derived in its DNF.

For S (rows 2, 3):

S = Ā·B + A·B̄ = A ⊕ B

The function for the carry can be derived from row 4 of the truth table:

C = A·B

It follows, that this circuit can be constructed by two logic gates (xor, and) as shown:

Figure 2.3:
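The two equations translate directly into a small Python sketch (my illustration), checked against the truth table above:

def half_adder(a, b):
    return a ^ b, a & b        # (sum S = A xor B, carry C = A·B)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b} -> S={s} C={c}")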

An alternative realization can be found for S as a function of A, B and C:

S = f(A, B, C)

In this case, S is an incompletely defined function, which can be represented by the following truth table and KV map:


C A B | S
0 0 0 | 0
0 0 1 | I
0 1 0 | I
0 1 1 | -
1 0 0 | -
1 0 1 | -
1 1 0 | -
1 1 1 | 0

(In the KV map, the don't-care fields may be used as "1" where convenient.)

S = A·C̄ + B·C̄ = C̄·(A + B), with C = A·B

The following figure shows the appropriate circuit:

A B

1

>1

Ü

S

Figure 2.4:

2.7.2 Full adder

For the addition of dual numbers with more than one digit, the carry bit resulting from the addition at the lower digit position has to be considered too, as it is "carried" to the next higher digit. For that purpose, the following truth table can be used:


A B C1 | S C2
0 0 0  | 0 0
0 1 0  | 1 0
1 0 0  | 1 0
1 1 0  | 0 1
0 0 1  | 1 0
0 1 1  | 0 1
1 0 1  | 0 1
1 1 1  | 1 1

with C1: carry from the lower addition, S: sum, C2: carry from the current addition

This circuit is called full adder, as it can process not only its input variables (A & B) but also a carry from previous stages. The according symbol is shown in the figure:

Figure 2.5:

The functional equations for S and Ü2 can be derived in DNF from the truth table.

S = (Ā·B + A·B̄)·C̄1 + (A·B + Ā·B̄)·C1   (eq. 1.1)

  = (A ⊕ B)·C̄1 + (A ⊕ B)′·C1   (eq. 1.2)

  = A ⊕ B ⊕ C1   (eq. 1.3)

Ü2 = Ā·B·C1 + A·B̄·C1 + A·B·C̄1 + A·B·C1   (eq. 2.1)

   = A·B·(C̄1 + C1) + (Ā·B + A·B̄)·C1   (eq. 2.2)

   = A·B·(C̄1 + C1) + (A ⊕ B)·C1   (eq. 2.3)

   = A·B + (A ⊕ B)·C1   (eq. 2.4)

or


Ü2 = Ā·B·C1 + A·B̄·C1 + A·B·C̄1 + A·B·C1 = A·B + A·C1 + B·C1   (eq. 2.5)

For technical realization, the structure known from the half adder can be used. For the outputs of a half adder it holds:

S1 = A ⊕ B   (eq. 3)
C = A·B   (eq. 4)

A comparison of eq. 1.3 with eq. 3 shows that for the sum output of the full adder it can be written:

S = S1 ⊕ C1   (eq. 6)

Similarly, by comparing eq. 2.4 with eq. 3 & eq. 4, for the carry of the full adder it can be written:

C2 = C + S1·C1   (eq. 7)

Thereby it has to be considered that there are 3 carries in total,

C1:= Carry from previous level

C2:= Carry as result from the current addition

C:= Carry that is caused by the fact that the FA consists of 2 HA. C here is the carry of the first HA of this structure. Take a look at the following figure:

A further comparison of eq. 6 & 7 with the output equations of an HA shows that S1 and C can be used again as inputs for an additional, downstream half adder HA2. According to that, the following circuit can be derived:


Figure 2.6:


By using the circuit symbols of a half adder, one gets:


Figure 2.7:
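The construction from two half adders can again be checked in Python (a sketch mirroring Figure 2.7: S = S1 ⊕ C1, C2 = C + S1·C1):

def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

def full_adder(a, b, c1):
    s1, c = half_adder(a, b)         # HA1
    s, c2_part = half_adder(s1, c1)  # HA2
    return s, c | c2_part            # C2 = C + S1·C1

# verify against the arithmetic sum of the three input bits:
for a in (0, 1):
    for b in (0, 1):
        for c1 in (0, 1):
            s, c2 = full_adder(a, b, c1)
            assert a + b + c1 == 2 * c2 + s
print("full adder verified")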

2.7.3 Serial-/Parallel-Adder

The addition of multi-digit numbers can be done bit-serially or bit-parallel. For that purpose, the serial adder & the parallel adder are introduced in the following.

2.7.3.1 Serial adder

By use of a serial adder, arbitrarily long words can be added with only one full adder, by stepwise addition and consideration of the carry of the preceding stage. For this task, not only a full adder is needed, but also shift registers for buffering the inputs, the carries and the result.

Figure 2.8:

If the accumulator-principle is used, one of the input-registers is also used as the result-register at the same time. The advantage of this is that only one register is needed. A disadvantage is however, that one of the input terms is lost after computation.

2.7.3.2 Parallel adder

A reduction of the computation time can be achieved by connecting adders in parallel. The parallel adder uses one full adder per input/output bit position.


Figure 2.9:

As the parallel adder is only a chain of full adders, it is a combinational circuit. In comparison, the serial adder is a sequential circuit, as it contains a combinational circuit and one or more memory devices.

If the parallel adder is used with the accumulator-principle, it gives the following circuit:

Figure 2.10:

This circuit has a critical timing condition, because the propagation of the carry can take longer than the clock period that the parallel addition alone would need.
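The chain of full adders, and the carry path responsible for the critical timing, can be sketched as follows (a Python illustration of the principle, not a timing model):

def full_adder(a, b, c):
    s = a ^ b ^ c
    c_out = (a & b) | ((a ^ b) & c)
    return s, c_out

def ripple_carry_add(a_bits, b_bits):
    """Add two equally long bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)   # carry "ripples" through
        result.append(s)
    return result, carry

# 6 (0110) + 7 (0111) = 13 (1101), given LSB first:
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0]))  # -> ([1, 0, 1, 1], 0)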

2.8 Elementary combinatorial circuits for data transmission

Inside a computer system, the data transfer plays a significant role especially in the operational part. So called transmission networks connect the single units in a computer and switch the necessary information to them without manipulation. According to this, data transmission is an operation that is not dependent on data-types. Multiplexers and demultiplexers are used for selection of paths, functions or devices. The actual transmission occurs on bus-lines.


2.8.1 Multiplexer

Multiplexers have 2^n data inputs, n control inputs and one output. They are used e.g. to transmit parallel data serially over a single bus line. Which line is connected to the bus line depends on the active control inputs. In the following, the block diagram and the switch diagram of a multiplexer (MUX) are shown.


Figure 2.11:

example: 4:1 multiplexer

The technical realization of a multiplexer will be explained in the following by an example of a 4:1 multiplexer.

The multiplexer consists of 4 inputs (I0, I1, I2, I3) in total. To be able to choose one of those 4 input signals, 2 control inputs (C0, C1) are necessary. When an input is chosen, the output O takes its logic value. So it holds:

O = I0·C̄0·C̄1 + I1·C̄0·C1 + I2·C0·C̄1 + I3·C0·C1

In the following the truth table of the MUX is shown.

I0 I1 I2 I3 | C0 C1 | O
 0  X  X  X |  0  0 | 0
 1  X  X  X |  0  0 | 1
 X  0  X  X |  0  1 | 0
 X  1  X  X |  0  1 | 1
 X  X  0  X |  1  0 | 0
 X  X  1  X |  1  0 | 1
 X  X  X  0 |  1  1 | 0
 X  X  X  1 |  1  1 | 1

Table 1: Truth table of a 4-1 multiplexer
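The SOP equation can be checked against the simple selection behaviour (a Python sketch, the names are mine):

from itertools import product

def mux4_sop(i0, i1, i2, i3, c0, c1):
    # literal transcription of the SOP equation above
    return ((i0 & ~c0 & ~c1) | (i1 & ~c0 & c1)
            | (i2 & c0 & ~c1) | (i3 & c0 & c1)) & 1

def mux4_select(inputs, c0, c1):
    return inputs[2 * c0 + c1]   # C0 C1 pick I0..I3

for i0, i1, i2, i3, c0, c1 in product((0, 1), repeat=6):
    assert mux4_sop(i0, i1, i2, i3, c0, c1) == mux4_select((i0, i1, i2, i3), c0, c1)
print("4:1 MUX verified")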


2.8.2 Demultiplexer

The counterpart of the MUX is the demultiplexer. This device distributes serial input data to one of several parallel outputs. It therefore has one data input, n control inputs and 2^n data outputs.


Figure 2.12: Demultiplexer a) schematic and b) functionality

C0 C1 I | O0 O1 O2 O3
 0  0 0 | 0  0  0  0
 0  0 1 | 1  0  0  0
 0  1 0 | 0  0  0  0
 0  1 1 | 0  1  0  0
 1  0 0 | 0  0  0  0
 1  0 1 | 0  0  1  0
 1  1 0 | 0  0  0  0
 1  1 1 | 0  0  0  1

Table 2: Truth table for a 1-4 demultiplexer

O0 = C̄0·C̄1·I
O1 = C̄0·C1·I
O2 = C0·C̄1·I
O3 = C0·C1·I
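The same behaviour in a Python sketch (my illustration): the input I is routed to exactly one of the four outputs.

def demux4(i, c0, c1):
    outputs = [0, 0, 0, 0]
    outputs[2 * c0 + c1] = i      # O0..O3 selected by C0, C1
    return outputs

print(demux4(1, c0=1, c1=0))      # -> [0, 0, 1, 0]  (O2 = I)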

2.8.3 Buses

Buses connect spatially distributed information sources (senders) and sinks (receivers) via decentralized multiplexers and demultiplexers, often combined with decentralized coding and decoding. A bus is therefore a component for the transportation of information.

functional: a node with switches arranged in a star topology

technical: a line with switches for the connection (of pairs) of Senders and Receivers.


Buses are found on all levels of a computer system. They fulfill different tasks, from which different properties and construction characteristics result:

a) on the chip: fast data traffic,

b) on the circuit board: compromise between wiring and velocity,

c) in the system: standardisation (exchangeability of components),

d) in the computer network: little wiring, protocol for securing the data traffic.

The principle mode of operation will be explained based on the example in figure 2.13.

left: technical structure (due to wired logic, bidirectional information flow)

right: logic equivalent (without wired logic, unidirectional information flow)

Figure 2.13: Principle Circuit and Functionality of Bus Systems

Figure 2.13 shows on the left side a bus which connects the senders (index S) and receivers (index E) of six system components (A to F) with each other. Due to the multiplexer function of this bus, only one source at a time is allowed to send, i.e. to switch its information onto the bus. The sinks are, depending on their function, either not equipped with gates, i.e. they always receive the information, or they are equipped with gates and receive the information only when they are chosen.

Buses are categorized in unidirectional and bidirectional buses. Unidirectional buses only have one source or one sink, i.e. information is forwarded only in one direction along the transmission line. In case of a bidirectional bus, information can be forwarded in both directions.

2.8.4 Bidirectional Signal Traffic

Buses allow bidirectional data traffic between several participants via a shared bundle of lines. Serial buses are merely a special case, in which the otherwise bit-parallel transmission is carried out serially.


Special Property:

For any arbitrary point in time

- only one Sender is allowed to be active,

- arbitrarily many of the connected receivers may receive this message.

For this, special switching techniques are necessary, which are described in the following.

2.8.4.1 Wired Or

If several devices are connected to the bus line and all of them can send data, several "writing" outputs have to be connected together. Assuming logic devices (gates) that possess only one switch per variable, the problem can be solved by removing, at all outputs, the output resistor against the operating voltage and connecting one such resistor externally to the line instead. Each switching level can then pull the bus line to ground potential. The connection of several drivers to the bus thus occurs, as an exception, via direct connection of the gate outputs, combined with an external pull-up resistor.


Fig. 2.14: Wired-Or

- As soon as at least one transistor is active, A = 0 (UA ≈ 0.2 V)

- When all transistors block, A = I (UA ≈ VCC)

- A transistor is active when UBE > 0.7 V

- A transistor is blocked when UBE ≤ 0.7 V

- UBE results from the logic connection of the inputs of the individual gates

As a further agreement, the definition of the bus circuit must fix that non-active senders block their output transistors (i.e. put a logic I on the bus), so that the active driver alone determines the state of the bus line.

Advantage: - simple switching technique

Disadvantages: - The drive capability for logic I is low (solely via R).

- Small values of R, or a large number of attached inputs, lead to slow signal edges and therefore to delays.


2.8.4.2 Tri-State Technology

Another option to ensure that only one component controls the voltage of the bus line is to modify the output stages of all components in such a way that all of them are disconnected from the bus line, except the one that is currently controlling it. In that case all output stages have to be modified so that an additional signal OE (Output Enable) separates the output stage from the bus line when the unit is not selected. The output line then shows none of the defined voltage levels that would assign a logical "0" or "I" to the output. In doing so, a third state is defined: the high-impedance state. If exactly one component is selected at a time, short circuits are reliably avoided. Since every output line can now be in exactly one of three possible states (writing a "0" on the bus, writing a "I" on the bus, cut off from the bus), we speak of tri-state outputs and tri-state drivers, respectively.

The choice of the component that may write on the bus can, for example, occur via a decoder component, since by construction a "I" only ever lies on one of its outputs. The respective signals must be made available by the bus management.

Buses with tri-state drivers have 3 states: "0" (Low), "I" (High) and "Z" (high impedance). In contrast to open-collector buses, the states L and H are handled symmetrically. The tri-state bus is faster than the open-collector bus, but it requires a higher implementation complexity.

Fig. 2.15: Tri-State Gate as Circuit diagram

OE D | O
 0 0 | Z      OE = 0 disconnects the output line: O = Z
 0 I | Z
 I 0 | I      OE = I enables the output line; the gate operates in inverting mode: O = D̄
 I I | 0


In a tri-state bus it is never allowed to have two participants simultaneously active. Otherwise the bus can be damaged, when one participant wants to drive a bus line to H (connecting it to the operating voltage) and another wants to drive it to L (connecting it to ground). Therefore, tri-state drivers are used for bus lines that only become active after arbitration, e.g. address and data lines.

Advantages :

- simple switching technology for the user

- Actively operated 0- and I-states (high fan out).

- Also for numerous drivers per line, no disadvantages in the time behaviour.

Disadvantages:

If two drivers are mistakenly activated simultaneously,

- an undefined voltage level can appear on the bus line,

- there is a danger of destruction due to impermissibly high cross currents.

Tri-state technology has established itself in computer manufacturing in preference to open-collector technology.

Application Example:

(Three systems connected to a common bus line via tri-state drivers with enable inputs E1, E2, E3.)

Figure 2.16:

Ei: centrally controlled by the main system


The Enable-Signals Ei are in most cases under central control of a main system.

A combination of tri-state-technology and direction switching results in the frequently used bidirectional bus drivers:

(Bidirectional bus driver: two tri-state drivers in opposite directions between the bus line A and the internal lines D1/D2, controlled by E and R.)

E R | path          | function
0 0 | A -> D2       | receive
0 I | D1 -> A       | send
I 0 | A = Z, D2 = Z | passive
I I | A = Z, D2 = Z | passive

Table 2.4

Figure 2.17:

E : shared Enable; R: definition of direction

2.9 Elementary combinatorial circuits for encoding / decoding

Encoding and decoding are among the most common tasks in digital data processing: binary coded data words are converted into another code. Such code transformations can be designed as combinational circuits. Practical uses of encoders & decoders are the addressing of functional blocks, data storage and the implementation of Boolean functions (array logic).

2.9.1 Encoder

Encoders are used to encode information given in a 1-out-of-n code (exactly one of n bits is active) into the code words of a binary code of fixed word length m (m << n). The encoder outputs the code word assigned to the input that is enabled.


Figure 2.18:


Example: Code converter, which transforms 3-bit dual numbers to gray code.

Gray Code (E. Gray 1835-1901):

Representation of the decimal numbers 0 to 9, where at every transition only one bit changes.

Decimal | Gray code
   0    | 0 0 0 0
   1    | 0 0 0 I
   2    | 0 0 I I
   3    | 0 0 I 0
   4    | 0 I I 0
   5    | 0 I I I
   6    | 0 I 0 I
   7    | 0 I 0 0
   8    | I I 0 0
   9    | I I 0 I

At the transition from 9 to 0 three bits change, therefore this is called a non-cyclical code. A cyclical code is e.g. the Glixon gray code.

Decimal | Glixon gray code
   0    | 0 0 0 0
   1    | 0 0 0 I
   2    | 0 0 I I
   3    | 0 0 I 0
   4    | 0 I I 0
   5    | 0 I I I
   6    | 0 I 0 I
   7    | 0 I 0 0
   8    | I I 0 0
   9    | I 0 0 0   <- one change only at the transition back to 0


3-bit dual code D2 D1 D0 | Glixon gray code G2 G1 G0
        0 0 0            |        0 0 0
        0 0 I            |        0 0 I
        0 I 0            |        0 I I
        0 I I            |        0 I 0
        I 0 0            |        I I 0
        I 0 I            |        I I I
        I I 0            |        I 0 I
        I I I            |        I 0 0

Searching for a calculation rule:

By "good observation" one can see:

G2 = D2
G1 = D2 ⊕ D1
G0 = D1 ⊕ D0

(Circuit: the inputs D2, D1, D0 are combined by two XOR gates (=1) to give the outputs G2, G1, G0.)

Figure 2.19:
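The derived rule can be checked quickly in Python (a sketch; the identity g = n xor (n >> 1) is the usual word-level formulation of the same rule):

def dual_to_gray(d2, d1, d0):
    return d2, d2 ^ d1, d1 ^ d0    # G2 = D2, G1 = D2 xor D1, G0 = D1 xor D0

for n in range(8):
    d2, d1, d0 = (n >> 2) & 1, (n >> 1) & 1, n & 1
    g = n ^ (n >> 1)                # same rule applied to the whole word
    assert dual_to_gray(d2, d1, d0) == ((g >> 2) & 1, (g >> 1) & 1, g & 1)
    print((d2, d1, d0), "->", dual_to_gray(d2, d1, d0))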

2.9.2 Decoder

The decoder is the counterpart of the encoder and is used to decode words of a binary code of fixed length into a 1-out-of-n code. The decoder activates the output whose code word is present at the input.

An encoder with a specific mapping between inputs and outputs cannot be used as a decoder for the reverse process, as the signal flow of logic circuits is not invertible.



Figure 2.20:

2.9.3 Read only memory (ROM)

Semiconductor memories make up a significant segment of the spectrum of microelectronics. They can be subdivided into memory blocks that are embedded in the logic of a circuit (e.g. a microprocessor) and chips that are used solely as storage devices. The latter are not the subject of this lecture. An overview of semiconductor memory devices is given in figure 2.21.

Figure 2.21: Overview Semiconductor memory devices [Keil 87]

Semiconductor memories can be subdivided into three groups by their access mode:

Random access

Random access means that the access time is independent of the physical position of the data inside the memory. All memory positions are addressed and written or read, respectively, in the same time. Random access is of outstanding importance compared to the two following categories.

Serial access

If accessing single memory positions is only possible serially, it is e.g. a FIFO memory (first in, first out). Such devices are normally needed only for very specific tasks.

Associative access

In case of associative memory, the stored data itself plays a major role in the assignment of addresses. This is normally used rather for exotic applications.

For memory with random access, an arbitrary data word can be read or stored at any point in time. Figure 2.22 shows the basic principle of memory with random access.

Figure 2.22: Basic Principle of Memory with Random Access

Random access is the most important memory structure in computer technology and will be closely looked at in the following section.

2.9.3.1 Word-based addressing

For all memories with random access, the problem to be dealt with is how to address one of 2^n memory positions with an address of n bits. The basis for this is the decoding of the address. In this process, not single memory (bit) positions are considered, but vectors of 8, 16, 32 or 64 bits. These vectors are called memory words. Such memories work word-oriented, and the addressing is done word-wise: for a write or read access, exactly one word is selected via its address. This results in the following scheme of a word-wise addressed memory with random access.


Figure 2.23: Word-wise addressed memory with random access scheme [Pelz]

The horizontal lines, each of which addresses one word, are also called word lines. The vertical lines carry the read or write data and are called bit lines.

2.9.3.2 Bit-wise addressing

There exists also the possibility to store more than one data word in a single memory word. When writing or reading, a complete memory word is however selected first; in a further selection step, the chosen data word is then identified. For this, the memory possesses a second decoder, the column decoder; see figure 2.24.

Fig. 2.24 Bit-wise addressed memory with random access scheme

If the memory possesses m rows, each consisting of n cells (columns), then R data words of length N can be stored per row, where:


R = n / N

The number of required address bits r for the column decoder can be determined from:

r = ld R    (ld: logarithmus dualis, the logarithm to base 2)

The address for the row decoder can be reduced by r positions in this way.
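A small numeric sketch of the two formulas (the values n = 64 and N = 8 are assumed here purely for illustration, they do not appear in the text):

from math import log2

n, N = 64, 8        # cells per row and data word length (assumed values)
R = n // N          # data words per row: R = n / N         -> 8
r = int(log2(R))    # column-decoder address bits: r = ld R -> 3
print(R, r)

The row address is accordingly shorter by these r = 3 positions.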

2.9.3.3 Comparison of the types of addressing

a) Word-wise addressing
   Advantages:    suitable for 4- or 8-bit computers
   Disadvantages: for big memories difficult to realize (m and n must be of almost equal size);
                  write/read logic necessary n times

b) Bit-wise addressing
   Advantages:    quadratic array; flexible in the construction of different memory sizes
   Disadvantages: several components necessary for storing words;
                  higher internal wiring complexity (but there are tricks, e.g. using data lines for addressing)

Table 2.5: Types of addressing

Example: Calculation of the number of lines necessary for a memory (8k x 8).

Thus m = 8k memory words of n = 8 bits each.

Word-wise addressing

fig. 2.25: Word-wise addressing

internal: LWi = m + 2n
external: LWe = ld m + n = 13 + 8 = 21


Bit-wise addressing

fig. 2.26: Bit-wise addressing

internal: LBi = 2·√(m·n) = 512
external: LBe = ld(m·n) + 1 = 17 (stat.), 7 (dyn.)

2.9.3.4 Address Decoding

The decoder circuit itself can be compared to a collection of AND gates to which the address bits are provided both inverted and non-inverted. The following diagram illustrates the principle of decoding two address lines to four word lines. Of the four output signals, exactly one is active in principle.

fig. 2.27: Simple decoder circuit from two to four Bit [Pelz]

If more address lines are needed, AND gates with more inputs are required. In static CMOS technology, as a rule, not more than four inputs per NAND/AND gate are possible. Accordingly, AND gates are cascaded. The same effect can be achieved by using NAND/NOR levels instead of AND/AND levels; see the following diagram.


fig. 2.28: Cascading in decoder circuits [Pelz]

2.9.3.5 Comparative Overview:

Type       Word width    Contents change   Capacity [bit]   Access time [ns]   Capacity [bit]   Access time [ns]
RAM stat.  1, 4, 8, 9    yes               64 - 16k         5 - 50             4k - 64k         25 - 200
RAM dyn.   1, 4, 8       yes               -                -                  4k - 1M          100 - 200
ROM        4, 8          no                -                -                  - 1M             very fast
PROM       4, 8          conditional       64 - 64k         -                  16k - 64k        a few 10 ns
EPROM      8             in minutes        -                -                  8k - 512k        average speed
EEPROM     1, 4, 8       in ms             -                -                  - 64k            a few 100 ns

Table 2.6: Overview of memory types

Selection criteria:

• speed
• capacity
• word length
• possibilities to change / secure the contents

Trends:

• bigger capacities / shorter access times
• "intelligent" memory with
  o internal refresh
  o video RAM
  o integrated address calculation
  o integrated fault recognition and correction
  o associative memory
• further specialisations


2.9.4 Programmable Logic

2.9.4.1 General structure

Programmable logic devices (PLD) are semi-custom ICs of low complexity with an AND and an OR matrix that can be programmed by the user or the manufacturer. Components of higher complexity with a matrix architecture of simple function blocks are called Field Programmable Gate Arrays (FPGA).

Figure 2.29 illustrates the general structure of all PLDs. In it, the following elements are recognisable:

• a programmable AND/OR - Matrix, • the programmable feedback, • an Input block, • an Output block.

figure. 2.29: General PLD-Construction [Auer 1994]

The heart of every PLD is its programmable AND/OR matrix. The remaining elements are not necessarily realised in every PLD.

Within the programmable matrix, the outputs of logic AND-Gates lead to a matrix of logic OR-Gates as in figure 2.30

figure: 2.30: The structure of programmable AND/OR-matrices [Auer 1994]


The PLD types illustrated in fig. 2.30 are differentiated

• by the programming possibilities of the AND and OR matrices;
• by who carries out the programming, which can take place either
  o by the user (also called field programmable) or
  o by the manufacturer (factory programmed).

The following components belong, among others, to the group of PLD ICs:

PROM: A Programmable Read Only Memory contains a fixed AND matrix, in which the addressing of the individual memory cells is realised. Only the OR matrix is programmable by the customer; data or logic functions, respectively, are stored in it. The well-known EPROM memories also belong to this group; their AND matrix for addressing the memory cells is likewise fixed by the manufacturer.
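The remark that a PROM stores "data or logic functions" can be made concrete: the fixed AND matrix acts as the address decoder, and every programmable OR column holds the truth table of one output function. A sketch in which a ROM, modelled as a plain list, implements y = x1 XOR x0 (the function is an assumed example, not taken from the text):

# ROM as a stored truth table: the address (x1 x0) selects the word.
rom = [0,  # address 00 -> y = 0
       1,  # address 01 -> y = 1
       1,  # address 10 -> y = 1
       0]  # address 11 -> y = 0   (y = x1 XOR x0)

def read_rom(x1, x0):
    address = (x1 << 1) | x0   # fixed AND matrix = address decoder
    return rom[address]        # programmable OR matrix = stored bits

print(read_rom(1, 0))          # 1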

FPLA: Field Programmable Logic Array components consist of a customer-programmable AND and OR matrix. This increases not only the flexibility during the design but also the level of exploitation of the structure.

PAL: Programmable Array Logic components contain a fixed OR-matrix. Only the AND-matrix is electrically programmed by the customer. PAL is a registered trade mark of the company Monolithic Memories Inc. United States of America.

HAL components (Hardware Array Logic) are a manufacturer-programmed version of a PAL. Both the AND and the OR matrix are to be seen by the user as given and fixed.

GAL components (Generic Array Logic) are structurally similar to PAL components. Here we are dealing with electrically erasable and electrically programmable logic arrays. GAL is a trade mark of Lattice Semiconductor.

EPLD components (Erasable Programmable Logic Device) are also structurally similar to PAL components. Instead of the fuse programming used for "standard" PALs, floating-gate technology is used for EPLD components: the component can be erased by UV light and is thereby available for new programming. Programming errors can be corrected in this way without losing any components.


figure 2.31: Summary of the PLD-Variations

In the summary of variations illustrated above, FPLAs are given as representatives of components built upon Integrated Fuse Logic. This is a designation of the PLD ICs distributed by the company Valvo. The programming takes place by separating the fusible links (fuse links) at the crossing points of the AND/OR matrices. By complexity, a total of four types are distinguished:

FPLA: freely programmable Logic Array; see above.

FPGA: Field Programmable Gate Array (freely programmable Gate Array) with programmable AND-matrix;

FPLS: Field Programmable Logic Sequencer (freely programmable logic sequencer) with register functions at the output of the programmable matrices;

FPRP: freely programmable ROM-Patch with a fixed programmed AND-matrix as address decoder and programmable OR-matrix as data memory .

2.9.4.2 Construction of the AND/OR-Matrix

The structure of the AND/OR matrices of the PLD components can be illustrated in such a way that the principal construction is immediately recognisable. Two AND/OR matrices, each realized in bipolar technology, are combined with each other. The general structure is illustrated once again in figure 2.32.


fig. 2.32: general construction of the AND/ OR- Matrices [Auer 1994]

Figure 2.33 shows an example of a programmed device. Here exactly one of the three word lines is addressed via a 1-out-of-m decoder, and the stored data is delivered through the bit lines.

fig. 2.33: Example of a programmed device

In the circuit illustrated above, the following values are delivered when the respective row is selected:

     x   y   z
a    I   0   I
b    0   I   I
c    0   0   0

Table 2.7


The following generalisation should further clarify the example of figure 2.32. First, only the circuit sections which combine the inputs with AND are illustrated.

figure 2.34: AND-combinations matrix with diodes [Auer 1994]

In the circuit of figure 2.34, the voltage U reaches the level U = Vcc only when the voltage Vcc is also applied to all inputs I0 to In. The connections represented by wavy lines in the circuit are the programming points of the component. These connections can be cut off electrically; the corresponding input then has no influence on the logic combination.

In the circuit parts in which the OR combinations are realised, bipolar transistors working on a shared resistor R0 are controlled by the voltages of the AND-combined inputs, as in figure 2.35.

fig. 2.35: Circuit part for the realisation of the OR-combinations [Auer 1994]

The voltage across the resistor R0 reaches the level UR0 = Vcc when at least one transistor is active. There also exist circuit variants with multi-emitter transistors and an active L level.

2.9.4.3 Types of Illustrations

It is hardly practical to illustrate the full electronic circuit of the matrix. For the multiple AND and OR combinations built into the matrix, simplified illustrations are therefore introduced.


A first convention for simplification concerns the illustration of the electrical connection points, which can be separated during programming. These connections are denoted by wavy lines in complete circuits; see figure 2.36a. Alternative, simplified illustrations show such a connection as a point (figure 2.36b) or as a star (figure 2.36c). Two lines crossing each other without a point or star represent "not connected".

2.36a        2.36b        2.36c

Figure 2.36: Types of illustrations of the Programming Points

The detailed electrical connection at the crossing points of the matrices is graphically illustrated once again in figure 2.37, where the symbolic illustration is contrasted with the technical realisation.

figure 2.37: Technical Realisation of the Connections [Auer 1994]

A second convention concerns the multiple combination of the n inputs at the AND or OR gates, respectively. Figure 2.38a shows the electronic illustration, and figure 2.38b a simplified illustration in which the logic function with the multiple inputs and the separable connections is highlighted. In figure 2.38c the illustration is further simplified: only one horizontal line leads to the gate, and the input signals cross this horizontal line as vertical lines. A point where two lines cross implies an electrical connection of an input signal to the gate input. These crossing points also symbolise the separable connections denoted by wavy lines in the complete circuit (figure 2.38a).


figure 2.38a-c: Illustration of the multiple combinations

2.9.4.4 Programming Points

The technical realisation of the programming points depends on the chosen technology. In bipolar technology, diodes or transistors are inserted at the crossing points. During programming, poly-silicon bridges are physically destroyed ("burned"). These separation bridges are also known by the term "fuse link".

Instead of fuse programming, EPLDs use memory transistors with a floating gate. In the non-programmed state there is no charge on the (electrically isolated) floating gate, whereby an intact connection of the matrix nodes exists. A programmed cell marks an "open" node in the programmable matrix. The charge stored on the floating gate can be removed by irradiation with UV light of a particular wavelength (EPROM eraser), and the component is thereby erased.

In all known PLD components, the input signals are fed both directly and inverted into the AND matrix. This results in exactly four connection possibilities for each input to an individual AND gate; see figure 2.39.

figure 2.39: Connection possibilities of the inputs to the AND-Gates [Auer 1994]

An AND gate is constantly set to the 0 level when the connections shown in fig. 2.39a remain non-programmed (intact). The influence of the corresponding input on the AND gate is ruled out by separating both connections. Should one of the two connections remain, as in fig. 2.39c or fig. 2.39d respectively, then the input acts directly or negated, respectively, on the AND gate.

Finally, figure 2.40 shows an example of a programmed AND-matrix and the combinations realized by it.

I1 · I2 · I3
I1 · I3
I1 · I2

fig. 2.40: Example of a programmed AND matrix
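The effect of such a fuse pattern can be modelled compactly: every product line is described by which input lines it takes directly and which inverted; inputs that are disconnected have no influence on the term. The following sketch evaluates the three product terms above (the helper function and the input values are assumptions for illustration):

def product_term(inputs, direct, inverted):
    # AND over the selected direct and inverted input lines
    term = 1
    for i in direct:
        term &= inputs[i]
    for i in inverted:
        term &= 1 - inputs[i]
    return term

I = {1: 1, 2: 0, 3: 1}                               # example values I1, I2, I3
p1 = product_term(I, direct=[1, 2, 3], inverted=[])  # I1 * I2 * I3
p2 = product_term(I, direct=[1, 3], inverted=[])     # I1 * I3
p3 = product_term(I, direct=[1, 2], inverted=[])     # I1 * I2
print(p1, p2, p3)                                    # 0 1 0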

2.9.4.5 PLD Structures

In correspondence to the demands of circuit development the following PLD-structures are offered :

• combinational PLD structure,
• combinational PLD structure with feedback,
• PLD with registered outputs and feedback,
• PLD with programmable output polarity,
• exclusive-OR function combined with registered outputs,
• programmable registered inputs,
• PLD with product term sharing,
• PLD with asynchronous registered outputs,
• GAL with programmable macro cells for signal outputs.

From this multitude of structures, a few interesting architectures are illustrated more closely in the following.


2.9.4.6 Combinatorial PLD

Characteristic of the combinational PLD is the AND/OR matrix structure, in which the feedback branch and the storage possibilities at the in- and outputs are missing. Furthermore, a programmable AND matrix is available. This structure is illustrated in figure 2.41.

figure 2.41: Example of a PLD with combinational logic

In the non-programmed state, all inputs as well as their negations are connected to all eight AND gates. The outputs of the eight AND gates are connected in pairs to one OR gate each.

In contrast to this structure, in the memory components (EPROM, EEPROM or PROM, respectively) the AND matrix for decoding the addresses is programmed and fixed, and the OR matrix is initially un-programmed; see figure 2.42.


figure 2.42: Exemplary Structure of a PROM- or EPROM memory respectively

In the PROM memory, connections in the OR matrix are burned out during programming, so the component is not reprogrammable. In the EPROM, all connections in the OR matrix are reactivated by UV light (in the EEPROM electrically, respectively), and the unwanted connections are removed again when programming.

An example of a combinational PAL-Structure is shown in figure 2.43.

figure 2.43: Example of a combinational PAL-structure

In PAL components, only the AND matrix is programmable. The AND gates are, in contrast, OR-combined in fixed groups; no programming is therefore possible in the OR matrix.


To illustrate the complexity of a PLD, the data sheets use

• logic diagrams,
• functional block diagrams, or
• logic symbols.

2.9.4.7 Logic Diagram

In figure 2.44 a section from a logic diagram is illustrated as an example. The illustrated section distinguishes itself by the following characteristics:

• The input signal goes directly and inverted into the AND-matrix.

• Four AND gates are connected to the output via fixed-wired NOR gates. The programmable AND matrix is illustrated by horizontal and vertical lines; the matrix section shows the non-programmed state. After programming, the crossings in the AND matrix which have not been separated are denoted by dots (points).

• The input signals are connected to the AND matrix via the so-called INPUT or OUTPUT lines, respectively. All INPUT and OUTPUT lines cross the horizontal lines connected to the AND gates. By programming these crossing points, the product terms are built. For this reason, these horizontal lines in the AND matrix are named PRODUCT lines.

figure 2.44: a section from a logic diagram

Fig. 2.45 shows the logic diagram of a 10H8 component from Monolithic Memories. A substantial disadvantage of the illustration by a logic diagram lies in the large surface area occupied by the diagram and the bulky nature of the graphics.


fig. 2.45: Logic diagram of the PAL 10H8

2.9.4.8 Functional Block Diagram

Functional block diagrams are a graphical simplification of the logic diagram without any loss of information. The functional block diagram for the section illustrated in figure 2.44 is given in figure 2.46.

fig. 2.46: Functional block diagram for the logic diagram in fig. 2.44

Functional block diagrams use the graphical symbols known from digital technology. Only one line is drawn for the transmission of signals between the blocks, even in cases with several lines; the number of lines is then annotated.


The size of the matrix is annotated as the number of input lines times the number of AND gates. Furthermore, a half wave marked in the left field of the block shows that the combinations in the AND matrix are programmable.

Based on this scheme the functional block diagram of the PAL 10H8 is illustrated in figure 2.45

figure 2.45: Functional block diagram of the PAL 10H8

2.9.4.9 Logic-Circuit Symbols

A further possible illustration method is the logic circuit symbol, which shows the functional plan in conjunction with the connection points (pins) of the device housing. For the PAL 10H8 in a DIL housing, the logic circuit symbol in figure 2.46 results, with the following proving to be important:

- the number and assignment of the input pins;
- the number and assignment of the output pins;
- the form of the OR combinations between the AND matrix and the outputs;
- it is implied that both the input signals and their inverted states go into the AND matrix.


figure 2.46: Logic circuit symbol of the combinational PAL 10H8

2.9.4.10 The Programming of the PLD

For circuit development with PLD components, it is important, when accommodating the logic function in the IC, to know which functions are realisable at all. In principle, the four elementary programmable signal paths in the AND matrix of PLD components illustrated in figure 2.47 are possible with combinational logic.

figure 2.47: Programmable elements of the AND matrix and their logic-functions

The connections that have not been cut off are denoted by a dot at the crossing of the matrix lines. Should both connections from the input lines to the product line of an AND gate remain intact (figure 2.47a), the output of the AND gate is constantly programmed to L. Should only one of the two connections remain, as in figure 2.47b or 2.47c respectively, then the input signal goes directly or complemented, respectively, into the AND gate. The influence of the input signal on the AND matrix is ruled out by cutting both connections (figure 2.47d).

2.9.5 Combinatorial PLD with Feedback

Combinatorial PLDs with feedback offer, in comparison to the basic PLDs of the sections above, the possibility to program the signal output; see figure 2.48.

fig. 2.48: sector of a logic diagram of a PAL with feedback [Auer 1994]

The essential differences to the combinational structure without feedback are: the controllable tri-state inverter at the output; the connection from the output back into the AND matrix.

The outputs A and B are routed through a tri-state inverter with enable inputs (active high). The tri-state function is programmed via the appropriate PRODUCT lines AOE (A Output Enable) and BOE.

The output A is additionally fed back into the AND matrix. Should all fuses of the product term AOE be destroyed during programming, then A works as an output; should all fuses be left intact, the output driver becomes high-impedance and A is programmed as an input. In this way, depending on the programming of the product term AOE, output A can be used as an output, as an input, or as a programmable I/O port for bidirectional data traffic.

From the output pins' point of view, three signal paths are therefore possible, depending on the state of the tri-state drivers:


Tri-state driver continuously active: the pin works exclusively as an output, with a feedback of the signal into the AND matrix (internal feedback). Should all fuses in the product term AOE be cut off, the associated AND gate always lies at the H level (compare also figure 2.47d), and its output therefore enables the output driver. Pin A is accordingly operated continuously as an output. The signal appearing at the output is fed back, inverted or not, into the AND matrix via the internal feedback.

Tri-state driver continuously in the high-impedance state: the pin works exclusively as an input. Should all programmable fuses on the PRODUCT line AOE remain unchanged, i.e. both fuses remain intact for at least one input on this line, the respective AND gate is always inactive. The output driver is switched to the high-impedance state and interrupts the connection from the AND matrix to the output pin. Pin A can then only be operated as an input.

Tri-state driver changes its function: the pin is operated alternately as an input or as an output. The output driver is controlled via a programmable logic combination on the PRODUCT line AOE. This mode of operation of a pin is suitable for bidirectional data traffic.

2.9.5.1 Special Features of Feedback

Feedback on the same Product Line

A feedback from the output onto the same product line is shown in figures 2.49a and b. In figure 2.49a, a feedback to the same product line is programmed. When all further programmed connections on the product line concerned are at H level, the output oscillates between H and L via the feedback, taking the signal propagation times into account. The frequency of the oscillation depends on the propagation times of the participating gates and cannot be influenced from outside the IC.

Flow in figure 2.49a: the value C lies at the output of the AND gate. After the propagation time of the inverter, it appears inverted at the output A, and after the propagation time of the feedback path it is fed back into the AND matrix. The logic value C' (the negation of C) then lies at the intact crossing point, i.e. at the input of the AND gate, while the value C still lies at the output. The output of the gate therefore changes its value again in accordance with the propagation time of the gate.

figure 2.49: Feedback paths

Feedback onto another Product Line

The signal feedback from the output to another product line in the AND matrix is denoted in figures 2.49c and d. In both cases the output A1 feeds a signal back into the AND matrix, where it is forwarded via the product line to the output A2. If a product line does not transmit a signal fed back to it (figure 2.49d), the transmission path for the feedback becomes transparent.

Example

Data is to be transmitted bidirectionally via pin 14 of the PAL 16L8: when E1=H (pin 6), E2=L (pin 7) and E3=H (pin 8), pin 14 shall work as an output; otherwise pin 14 works as an input. The signal paths programmed for this are denoted in bold in the logic diagram of the PAL 16L8 (figure 2.50).


figure 2.50: Logic diagram of the PAL 16L8

2.9.5.2 Functional Block Diagram

Figure 2.51 shows the functional block diagram of a combinational PLD with a feedback based on the example of the PAL 16L8.

fig. 2.51: Functional block diagram of the PAL 16L8


3. Design of sequential circuits

The circuits discussed up to now were combinatorial circuits only. In these circuits the outputs at a certain time depend (apart from propagation delay times) only on the inputs at the same time. The outputs of sequential circuits, however, also depend on former inputs. In addition to combinational elements, sequential circuits also contain memory elements like flip-flops. The stored information characterises the state of the sequential circuit. A circuit with n binary storage elements can be in one of 2^n possible states.

Sequential circuits can be constructed synchronously or asynchronously. The state of synchronous sequential circuits only changes at well-defined points in time, controlled by one clock signal. Asynchronous sequential circuits do not behave like that: there, the function of the circuit depends on certain additional boundary conditions which can vary with construction or operation. Such circuits are much more complex and difficult to design; bigger sequential circuits are therefore normally designed as synchronous circuits. In the following we will only deal with synchronous circuits; for the treatment of asynchronous ones, the referenced literature may be used.

3.1 State Machines

State machine theory is suitable for the synthesis of synchronous circuits. The general state machine model will be determined by the following parameters:

X: input set / vector
Y: output set / vector
Z: state set / vector

or as an illustration of components :

figure 3.1: General State Machine Model

The changes in state are described by a transition function (e.g. g). The output vector Y is derived from the output function (e.g. f). For clarification of the time sequence, superscript indices are used. With this follows, for the description of a state machine in vector form:


Input vector: X^n
State vector: Z^n
State transition function: g(X^n, Z^n)
Next state vector: Z^(n+1) = g(X^n, Z^n)
Output function: f(X^n, Z^n)
Output vector: Y^n = f(X^n, Z^n)

Mealy State Machine

A Mealy state machine is defined through its output function f: Y^n = f(X^n, Z^n) as well as its state transition function g: Z^(n+1) = g(X^n, Z^n).

figure 3.2: Mealy-State Machine Model

Moore-State Machine

A Moore state machine is defined through its output function h: Y^n = h(Z^n) and its state transition function g: Z^(n+1) = g(X^n, Z^n).

figure 3.3: Moore-State Machine Model

Another notation: Y^n = f(Z^(n+1)); replacing Z^(n+1) results in Y^n = f(g(X^n, Z^n)).


Comparison of the Mealy- and Moore-State Machines

1. In a stable state, a Mealy state machine can produce different output vectors, a Moore state machine only one output vector.

2. Mealy and Moore state machines can be transformed into each other.

3. Y^n = Z^n: no separate output function => Medwedjew state machine

3.2 Forms of Describing State Machines

3.2.1 State machine tables

The state table (also: state machine table) is a common form of illustrating state machines. It defines all details of the behavior of a state machine and consists of three column areas. The first column area contains a list of all possible states. The second column area contains, in its first row, a list of all possible input combinations; all other elements inside this matrix give the next states as a function of the actual state and the input. It is therefore a representation of the state transition function; see table 3.1. The same layout can be used to assign the output values resulting from the output function to the transitions, see table 3.2. The combination of both tables leads to the full state machine table as given in table 3.3 (uncoded illustration):

Transition table

      x1   x2   ...  xi   ...  xk
z1
z2
.
zj              zij = g(xi, zj)
.
zl

Table 3.1

Output table

      x1   x2   ...  xi   ...  xk
z1
z2
.
zj              yij = f(xi, zj)
.
zl

Table 3.2

State table

      x1   x2   ...  xi   ...  xk
z1
z2
.
zj              zij / yij
.
zl

Table 3.3

The state table illustrated above corresponds to a Mealy state machine. In the state table of a Moore state machine, only the next state is entered in the transition part. As the output function is a function of the actual state only and is independent of the input values xi, the output is noted in a further column (the third column area):

State table of a Moore State Machine

State table of a Moore state machine

      x1   x2   ...  xi   ...  xk   |  Y
z1                                  |  y1
z2                                  |  y2
.                                   |  .
zj              zij                 |  yu
.                                   |  .
zl                                  |  yv

Table 3.4

If certain elements zij and/or yij are missing, one speaks of an incompletely determined state machine; otherwise the machine is completely determined.


Application of the state table for the
a) analysis of circuits,
b) synthesis of circuits.

Revision "Fundamentals of Computer Engineering 1":
1. Definition of the input and output variables
2. Choice of the type of state machine (Moore, Mealy, ...)
3. State coding
4. Choice of the type of flip-flop and calculation of the flip-flop input functions
5. Design of the circuit for the state transition function
6. Design of the circuit for the output function
7. If necessary, transformation of the logical expressions into suitably structured expressions
8. Drawing of the circuit diagram

3.2.2 State-Transition Diagram

A state transition diagram is used for the graphical representation of a state machine. A graph is composed of nodes and edges; the nodes are assigned the states of the state machine. A state transition diagram therefore consists of

- a finite number of nodes (circles);
- the connecting lines between the nodes, the edges. Each edge is a transition between two states; arrows on the edges show the direction of the transition (directed graph).

A sequence (chain) of edges is called a path. In a connected graph, every node is reachable from every other node by at least one path. Looking at a single node, the number of edges leading from that node to other nodes is limited by the number of possible input combinations. The edges are labeled with "input/output".

Rule 3.1: With three state variables, a maximum of eight states can be coded; the diagram therefore uses at most eight nodes. k state variables => ≤ 2^k nodes.

Rule 3.2: With three input variables, eight input combinations can be coded; at most eight edges therefore leave each node. m input variables => ≤ 2^m edges per node.


Example 3.1: RS-Flip flop

S   R   Q^(n+1)   Function
0   0   Q^n       save
0   1   0         reset
1   0   1         set
1   1   X         not allowed

Table 3.5: Truth table of the RS flip-flop

Resulting state table:

      Inputs SR
      00      0I      I0
z0    z0/0    z0/0    z1/I
z1    z1/I    z0/0    z1/I

Table 3.6

figure 3.4: State transition diagram of the RS flip-flop (edge labels SR/Q; z0: loop 0x/0, edge I0/I to z1; z1: loop x0/I, edge 0I/0 to z0)
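Table 3.6 can be executed directly as a table-driven state machine. A minimal sketch (Python; the dictionary representation is an implementation choice, not part of the lecture):

# state table 3.6: (state, input SR) -> (next state, output Q)
table = {
    ("z0", "00"): ("z0", "0"), ("z0", "0I"): ("z0", "0"), ("z0", "I0"): ("z1", "I"),
    ("z1", "00"): ("z1", "I"), ("z1", "0I"): ("z0", "0"), ("z1", "I0"): ("z1", "I"),
}

state = "z0"
for sr in ["I0", "00", "0I", "00"]:   # set, hold, reset, hold
    state, q = table[(state, sr)]
    print("SR =", sr, "->", state, " Q =", q)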

3.2.3 Timing Diagram

For the description of state machine behavior, one can also use impulse (timing) diagrams. They offer a clear illustration in which the variables are directly plotted.

Example 3.2: Design of a State Machine for the control of a processing circuit .

The design of a processing circuit resulted in the following impulse diagram for the control of the processing part of the synchronous circuit to be designed. In the diagram, LR is a LOAD signal which, with LR = I, causes parallel loading of the operand registers with valid data. CL is the CLEAR signal (active low) for the register holding the result.


figure 3.5 a) : Impulse diagram

It now remains to develop the synchronous circuit which is started via the input STRT and produces the signal sequences for LR and CL shown in the impulse diagram above. The memory elements of the circuit are to be synchronised with the positive edges of the clock. For the moment, only the transition from STRT=0 to STRT=I is decisive; the state with STRT=0 is defined as Z0. Once STRT is set to I, its value is negligible for the further course. In this way, the individual clock pulses for STRT=I can be assigned the states Z1 to Z4; see figure 3.6b.

figure 3.6b): Impulse diagram with the assigned states Z0, Z1, Z2, Z3, Z4

With Z4 the state machine reaches its final state and LR=CL=0. Before a new sequence of states can run, STRT must be set to 0 for at least one clock pulse, i.e. the state machine must change once into the state Z0; see figure 3.6c. In this respect Z0 acts as the "arming" state: from Z4 the circuit changes, depending on STRT, into Z0 or remains in Z4. With this, the state table can now be set up.

Z     STRT=0   STRT=I   LR   CL
Z0    Z0       Z1       0    0
Z1    Z2       Z2       I    0
Z2    Z3       Z3       0    I
Z3    Z4       Z4       I    0
Z4    Z0       Z4       0    0

Table 3.7

3.3 State Machine Minimization

The aims of state machine minimization are discussed based on the Mealy state machine example illustrated in figure 3.7. In the example, K1 and K2 denote the combinatorial circuit parts of the state machine; the states are realized in block Z.

figure 3.7: Mealy-State Machine

The numbers of lines |X^n| and |Y^n| are in most cases defined by the application and are difficult to influence. For cost reasons, the number |Z^n| of states is decisive. The aim of state machine minimization is therefore the minimization of |Z^n|.

Example 3.1: Trivial Simplification

figure 3.8

(Entire frame contains the state machines discussed above.) R: Reset or Starting Point


In the first step eliminate:

• Non-reachable States

• isolated States

• isolated sub-graphs

Remark: R: A/W-Reset in State Table

figure 3.9

3.3.1 Minimization according to Huffman and Mealy

Minimizing a state machine means reducing the number of its states (if possible). The number of states can be reduced when states can be eliminated or merged with other states.

According to Huffman and Mealy, two states can be merged into one state if they are equivalent. The principal requirement for equivalence is that, for identical input values, they have the same next state with identical output vectors.

Example 2.2:

figure 3.10: Exemplary state machine (states 0 to 7; edges labeled input/output)


Here the following are equivalent:

State 5:  X = 0 -> Z^(n+1) = 2, with Y^(n+1) = I;  X = I -> Z^(n+1) = 0, with Y^(n+1) = 0
State 6:  as state 5

That means the state transition diagram can be simplified to:

figure 3.11: State transition diagram after merging state 6 into state 5

Minimization is likewise possible (and more systematic) in the state table.

Z^n   Z^(n+1)        Y^n   Y^(n+1)
      X=0    X=I           X=0    X=I
0     0      1       0     0      0
1     3      7       V     I      0
2     6      0       I     I      0
3     1      4       I     I      0
4     5      0       0     I      0
5     2      0       I     I      0
6     2      0       I     I      0
7     5      0       0     I      0

Table 3.7

V: dependent on the previous state

From the table it follows immediately that:

Page 75: Lecture Notes

75

State 6 is identical to state 5; state 7 is identical to state 4.

In this way, the states 6 and 7 can be eliminated and the state table is updated accordingly:

Z^n   Z^(n+1)        Y^n   Y^(n+1)
      X=0    X=I           X=0    X=I
0     0      1       0     0      0
1     3      4       V     I      0
2     5      0       I     I      0
3     1      4       I     I      0
4     5      0       0     I      0
5     2      0       I     I      0

Table 3.8

Due to the update, it is now recognisable that the states 2 and 4 are also equivalent. After striking off state 4, it follows:

Z^n   Z^(n+1)        Y^n   Y^(n+1)
      X=0    X=I           X=0    X=I
0     0      1       0     0      0
1     3      2       V     I      0
2     5      0       I     I      0
3     1      2       I     I      0
5     2      0       I     I      0

Table 3.9

Minimized State Transition Diagram:

figure 3.12: Minimized state transition diagram (states 0, 1, 2, 3, 5)


3.3.2 The Moore Algorithm

In addition to the equivalence mentioned above, there exists a further form of equivalence. Assume that the next-state rows of two states contain states Zk, Zl that can themselves be merged. Then, after merging Zk and Zl, row similarities may appear once again.

Example 3.5: The following state table is given.

Z    a    b    c    Y
0    1    4    0    1
1    4    2    3    0
2    3    4    2    1
3    4    0    1    0
4    1    0    2    0

Table 3.10

The principal requirement for every merge is, as before, that the associated outputs of the states are equal. We initially consider the states 0 and 2, for which this condition is fulfilled.

- Input b: equal next state 4.
- Input c: if 0 and 2 can be merged (hypothesis), equal next state (each remains in its own state).
- Input a: equal next state only if 1 and 3 can be merged. => This is the case under the same conditions.

The equivalence identified in this way is called "1"-equivalence and may occasionally be found by the method of "looking sharply". It is however better to aim for a procedure of general validity. For searching the k-equivalent states, the Moore algorithm exists. This procedure works iteratively and finds the minimal partition (end classes), whose blocks represent the minimal state set.

1st step: set up the 0-equivalence.

0-equivalent are, in the example, all states with the same output. Here:

P0 = {B1^0, B2^0} with B1^0 = (0, 2) and B2^0 = (1, 3, 4)

(The states 0 and 2, as well as 1, 3 and 4, are each 0-equivalent.)

2nd step: iteration

- To find the k-equivalences, search the blocks of the (k-1)-equivalence for the next states reached on entering the same input.
- A block Bi^(k-1) disintegrates into Bx^k and By^k when the next states of the block Bi^(k-1) fall into different blocks of the (k-1)-equivalence; otherwise its states remain together (they are k-equivalent as well).
- The iteration is aborted when no further fragmentation is possible.

Example 3.5 (continuation): Investigating the 1-equivalence

Block       a         b         c
(0, 2)      1, 3      4, 4      0, 2
(1, 3, 4)   4, 4, 1   2, 0, 0   3, 1, 2

For B1^0 = (0, 2), the next states of both rows always lie in the same block Bi^0. For B2^0 = (1, 3, 4), input c leads into different blocks Bi^0, so this block splits.

Table 3.11

Iteration: investigating the 2-equivalence

Block     a      b      c
(0, 2)    1, 3   4, 4   0, 2
(1, 3)    4, 4   2, 0   3, 1
(4)       1      0      2

Z1 = (0, 2),  Z2 = (1, 3),  Z3 = (4)

Table 3.12

The states (0, 2), (1, 3) and (4) are 1-equivalent, and no block splits any further. The minimized state machine consists of three states.

Recovery of the state machine table:

     Z1 (0,2)   Z2 (1,3)   Z3 (4)
a    Z2         Z3         Z2
b    Z3         Z1         Z1
c    Z1         Z2         Z1

Table 3.13

00

1123

2132

1321

ZZZZZZZZ

IZZZZycba

Table 3.14

Coding, and so on.

3.3.3 Algorithmic Formulation of the Minimization by Moore:

Definition:

Two states Zm and Zn of a Moore state machine are k-equivalent when, for every subsequence α of the possible input sequences with |α| ≤ k vectors, it holds that:

g(Zm, α) = g(Zn, α)

Conceivable would be a systematic comparison of all subsequences with variable k. However, a systematic search for the k-equivalence classes, beginning with the 0-equivalence, can be constructed.

− All states with identical outputs form the 0-equivalence classes.
− If states are k-equivalent, they are also (k-1)-equivalent (Mk ⊆ Mk-1). Once (k-1)-equivalence is proven, the k-equivalent states are exactly those subsets of the (k-1)-equivalent states which, for every input vector, again pass into (k-1)-equivalence classes.
− The search for higher k-equivalences must be continued until it is proven that
  − a state set is k-equivalent and (k+1)-equivalent (and therefore also (k+2)-equivalent, ...); these states are then equivalent with respect to arbitrary input sequences; or/and
  − a state set contains only a single element; this state is then equivalent to no other.


figure 3.13
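The iteration of the Moore algorithm is a partition refinement and can be written down compactly. The following sketch minimizes the machine of table 3.10 (states 0 to 4, inputs a, b, c); the data structures are free implementation choices:

g = {0: {"a": 1, "b": 4, "c": 0}, 1: {"a": 4, "b": 2, "c": 3},
     2: {"a": 3, "b": 4, "c": 2}, 3: {"a": 4, "b": 0, "c": 1},
     4: {"a": 1, "b": 0, "c": 2}}         # transition function (table 3.10)
y = {0: 1, 1: 0, 2: 1, 3: 0, 4: 0}         # output function (table 3.10)

# 0-equivalence: group the states by their output.
blocks = {}
for s, out in y.items():
    blocks.setdefault(out, set()).add(s)
partition = list(blocks.values())           # [{0, 2}, {1, 3, 4}]

def block_of(state, partition):
    return next(i for i, b in enumerate(partition) if state in b)

# Iteration: split every block whose states lead into different blocks.
changed = True
while changed:
    changed = False
    refined = []
    for b in partition:
        groups = {}
        for s in b:
            key = tuple(block_of(g[s][x], partition) for x in "abc")
            groups.setdefault(key, set()).add(s)
        refined.extend(groups.values())
        changed |= len(groups) > 1
    partition = refined

print(partition)    # [{0, 2}, {1, 3}, {4}] -> the three minimized states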


3.4 Conversion of State Machines

In some tasks, a conversion into one of the two types of state machines, Moore or Mealy, offers further advantages:

Example 3.4:

figure 3.14: Moore-State Machine with 2 states

Upon conversion into a Mealy state machine, the state machine in figure 3.15 behaves in the same way but consists of only one state.

figure 3.15: Equivalent Mealy-State Machine

For the transformation of both state machine types into each other, it is valid that:

1. Every Moore-State Machine is at the same time a Mealy-Machine.

2. For every Mealy-State Machine there exists an equivalent Moore-State Machine.

(Formal proof via the introduction of the Markings function (Markierungsfunktion), see [Stürz/Cimander])

for 1.: A comparison of tables 3.3 and 3.4 shows that the state set Z and the input set X of both types of state machines are in principle identical. In both cases the next state is calculated from the transition function Z^(n+1) = g(X^n, Z^n).

The outputs of the Moore state machine are, in contrast to the Mealy state machine, not assigned to the state transitions but to the states themselves, and are therefore independent of the inputs xi. For the transformation of a Moore state machine into a Mealy state machine, the outputs Y must be assigned directly to the states Z.

Example 3.7:

Given is the Moore State Machine according to table 3.15.

Z     x1    x2    x3    Y
z1    z2    z1    z3    y3
z2    z3    z1    z2    y1
z3    z1    z3    z2    y2

Output function: Y^n = h(Z^n); state transition function: Z^(n+1) = g(X^n, Z^n)

table 3.15

The individual next-state elements of the equivalent Mealy state machine are now obtained by taking the output function of the Moore state machine into the state transition table. For the Moore state machine according to table 3.15, Y^n = h(Z^n) holds, i.e. in component notation y1 = h(z2), y2 = h(z3), y3 = h(z1). Herewith the equivalent Mealy state machine results according to table 3.16.

Z     x1       x2       x3
z1    z2/y1    z1/y3    z3/y2
z2    z3/y2    z1/y3    z2/y1
z3    z1/y3    z3/y2    z2/y1

table 3.16
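The step from table 3.15 to table 3.16 is purely mechanical: every next-state entry zk is annotated with its Moore output h(zk). A sketch of this substitution (dictionaries serve as an illustrative representation):

g = {("z1", "x1"): "z2", ("z1", "x2"): "z1", ("z1", "x3"): "z3",
     ("z2", "x1"): "z3", ("z2", "x2"): "z1", ("z2", "x3"): "z2",
     ("z3", "x1"): "z1", ("z3", "x2"): "z3", ("z3", "x3"): "z2"}  # table 3.15
h = {"z1": "y3", "z2": "y1", "z3": "y2"}                          # Moore outputs

# Equivalent Mealy machine: the output of a transition is h(next state).
mealy = {(z, x): (zn, h[zn]) for (z, x), zn in g.items()}
print(mealy[("z1", "x1")])    # ('z2', 'y1'), the first entry of table 3.16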

for 2.: Not every Mealy state machine is at the same time a Moore state machine: if, for example, in the state transition diagram of a Mealy machine the outputs on the edges ending at the same node do not match, then this state machine is not a Moore state machine (since that node would have to be assigned several outputs). In order to obtain an equivalent Moore state machine, each node of the Mealy machine must be split into as many new nodes as there are different outputs on the edges ending at it. In general, a Moore state machine equivalent to a Mealy state machine therefore has more states than the original Mealy machine.

Example 3.8:

The Mealy-State Machine according to Table 3.17 should be transformed into an equivalent Moore-State Machine.


Z     x1       x2
z1    z2/y2    z1/y3
z2    z1/y1    z2/y4

Table 3.17

The next states zk^(n+1) of the Mealy state machine are contained in the matrix elements [xi, zj] and can be calculated from the transition function of the Mealy state machine:

zk^(n+1) = g(xi, zj) ≡ [xi, zj]

In the Mealy state machine, the same next state thereby appears with different outputs; these entries have to be decomposed into new states z* with the according outputs in order to transform the machine into a Moore state machine.

For the assignment of the new states of the Moore-State Machine to the original states of the Mealy-State Machine the following correlation holds:

Z* = {z1*, z2*, z3*, z4*} = {[x1,z1], [x1,z2], [x2,z1], [x2,z2]}

Or by components:

z1* = [x1,z1] = z2/y2 with assigned output y2
z2* = [x1,z2] = z1/y1 with assigned output y1
z3* = [x2,z1] = z1/y3 with assigned output y3
z4* = [x2,z2] = z2/y4 with assigned output y4

This results in the following first structure of an equivalent Moore state machine:

                  x1    x2    Y
z1* = [x1,z1]                 y2
z2* = [x1,z2]                 y1
z3* = [x2,z1]                 y3
z4* = [x2,z2]                 y4

Table 3.18

The next states of the Moore state machine can now be determined via the transition function g*(xi, zj*) of the Moore state machine. For the state variables zj*, the correlation to the original Mealy state machine indicated above holds.


The next states of the Moore state machine are therefore determined as:

zk*^(n+1) = g*(xi, zj*)   with   zj* = [xi, zj] = g(xi, zj) ≡ zk

Determination of the next states in table 3.18:

Next state [x1, z1*]:
[x1, z1*] = g*(x1, z1*)   with   z1* ≡ [x1, z1] = g(x1, z1) = z2
=> [x1, z1*] ≡ [x1, z2] = z2*

Next state [x1, z2*]:
[x1, z2*] = g*(x1, z2*)   with   z2* ≡ [x1, z2] = g(x1, z2) = z1
=> [x1, z2*] ≡ [x1, z1] = z1*

Next state [x1, z3*]:
[x1, z3*] = g*(x1, z3*)   with   z3* ≡ [x2, z1] = g(x2, z1) = z1
=> [x1, z3*] ≡ [x1, z1] = z1*

In the same manner, the remaining next states of the Moore state machine can be determined.

The equivalent Moore state machine depicted in table 3.19 follows.

Z*     x1     x2     Y
z1*    z2*    z4*    y2
z2*    z1*    z3*    y1
z3*    z1*    z3*    y3
z4*    z2*    z4*    y4

table 3.19

3.5 Basic sequential circuits for data-processing

Digital systems basically consist of two sequential circuit units, the control unit (Steuerwerk, Programmwerk) and the arithmetic logic unit (ALU, Operationswerk, Datenwerk).


The control unit passes information about operators and operands to the ALU, and the ALU processes this data. In this context the control unit is a processor that controls a process, and the ALU is a processor that executes this process. This definition however seems somewhat blurred, which may result from the fact that the tasks of the control unit and the ALU cannot be separated from each other that precisely. As an example, both "types of processors" consist of the same basic circuits, e.g. counters and registers. These are considered in more detail in the following.

figure 3.16

3.5.1 Counters

Counters are used in many applications in data processing. Whenever a large number of events in a long period of time, or a fast sequence of events, has to be measured, electronic counters are very suitable.

Electronic counters are capable of counting a sequence of pulses at their input, regardless of the type of pulse generator.

Counters are circuits with a well-defined allocation between the number of pulses at their input and the states of their outputs. With n outputs, 2^n combinations are possible, each representing a specific state. These outputs can be used to display the information or to process it further.

A counter that adds incoming pulses, counts upwards. Correspondingly a counter counts downwards if it subtracts incoming pulses. Counters are subdivided into synchronous and asynchronous counters. In case of synchronous counters, all elements are controlled by a parallel clock line. Asynchronous counters pass the clock signal from the outputs of one component to the inputs of the subsequent


one. The code used distinguishes binary counters from BCD counters; BCD counters can also count in Aiken code or excess-3 code.

Counter design generally follows this scheme:

• Determination of the number of flip-flops • Setting up the state table • Determination of the input functions for the flip-flops • Creation of the circuit • Determination of possible initial conditions at initiation of the circuit

3.5.1.1 Design of synchronous counters

Synchronous counters are sequential circuits that consist of a sequential part (flip-flops) and a combinational part. In case of a synchronous counter, all flip-flops are activated at the same time by a clock signal. The combinational part generates the input functions for the flip-flops. For clarification the following figure illustrates the block-diagram of a 4-bit upwards-counter.

figure 3.17: 4-bit upwards-dual-counter

Counters can be represented in the Moore or Mealy model. The starting point for the design of a counter is normally a truth table with the sequence to be counted.

Example: Design of a synchronous 4-bit upwards-counter. A 4-bit upwards-dual-counter passes through the following 16 states:

figure 3.18:

The truth-table for the according count sequence is given in the following.


D C B A
0 0 0 0
0 0 0 1
0 0 1 0
0 0 1 1
0 1 0 0
0 1 0 1
0 1 1 0
0 1 1 1
1 0 0 0
1 0 0 1
1 0 1 0
1 0 1 1
1 1 0 0
1 1 0 1
1 1 1 0
1 1 1 1

Table 3.20

From the number of states (Z = 16), the number n of flip-flops needed can be determined: n = ld(Z); here n = 4.

With this, the state table for the counter can be specified.

Current state        Next state
Q3 Q2 Q1 Q0          Q3+ Q2+ Q1+ Q0+
0  0  0  0           0   0   0   1
0  0  0  1           0   0   1   0
0  0  1  0           0   0   1   1
0  0  1  1           0   1   0   0
0  1  0  0           0   1   0   1
0  1  0  1           0   1   1   0
0  1  1  0           0   1   1   1
0  1  1  1           1   0   0   0
1  0  0  0           1   0   0   1
1  0  0  1           1   0   1   0
1  0  1  0           1   0   1   1
1  0  1  1           1   1   0   0
1  1  0  0           1   1   0   1
1  1  0  1           1   1   1   0
1  1  1  0           1   1   1   1
1  1  1  1           0   0   0   0

Table 3.21


From the state table, the equations for the next states of the flip-flops can now be derived. For this purpose the use of KV maps is suggested:

KV map for Q0+:

I 0 0 I
I 0 0 I
I 0 0 I
I 0 0 I

From the KV map it can be read off:

Q0+ = Q0'     (Q0' denotes the negation of Q0)

For the realization by means of JK flip-flops, the coefficients of this function have to be compared with the characteristic equation of the flip-flop:

Q+ = J·Q' + K'·Q

From this it follows: J0 = 1 and K0' = 0, i.e. K0 = 1 = J0.

Accordingly for the remaining next states it holds:

KV map for Q1+:

0 I I 0
I 0 0 I
I 0 0 I
0 0 0 0

Q1+ = Q1·Q0' + Q1'·Q0

A new coefficient comparison gives: J1 = Q0 and K1' = Q0', i.e. K1 = Q0 = J1.


KV map for Q2+:

I I 0 0
I 0 I 0
I 0 I 0
I I 0 0

Q2+ = Q2'·Q1·Q0 + Q2·Q1' + Q2·Q0' = Q2'·Q1·Q0 + Q2·(Q1' + Q0')

Comparison of coefficients:

J2 = Q1·Q0
K2' = Q1' + Q0'

DeMorgan: K2 = Q1·Q0 = J2

KV map for Q3+:

I I I I
I 0 I I
0 I 0 0
0 0 0 0

Q3+ = Q3'·Q2·Q1·Q0 + Q3·Q2' + Q3·Q1' + Q3·Q0' = Q3'·Q2·Q1·Q0 + Q3·(Q2' + Q1' + Q0')

Comparison of coefficients:

J3 = Q2·Q1·Q0
K3' = Q2' + Q1' + Q0'

DeMorgan: K3 = Q2·Q1·Q0 = J3

With it the circuit of the synchronous 4-bit upwards-counter can be shown.

figure 3.19
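The derived input functions can be verified by a short simulation: every JK flip-flop follows the characteristic equation Q+ = J·Q' + K'·Q, and with J0 = K0 = 1, J1 = K1 = Q0, J2 = K2 = Q0·Q1 and J3 = K3 = Q0·Q1·Q2 the register must run through table 3.21. A sketch:

def jk(q, j, k):
    # characteristic equation of the JK flip-flop: Q+ = J*Q' + K'*Q
    return (j & (1 - q)) | ((1 - k) & q)

q3 = q2 = q1 = q0 = 0
for _ in range(17):                      # one full cycle plus wrap-around
    print(q3, q2, q1, q0)
    j0 = k0 = 1
    j1 = k1 = q0
    j2 = k2 = q0 & q1
    j3 = k3 = q0 & q1 & q2
    q0, q1, q2, q3 = jk(q0, j0, k0), jk(q1, j1, k1), jk(q2, j2, k2), jk(q3, j3, k3)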

3.5.1.2 Design of asynchronous counters

Asynchronous counters differ from synchronous counters in that not all flip-flops are driven by the same clock signal. The first flip-flop is always controlled by the master clock; the clock of each remaining flip-flop is taken from the output of the preceding one. As a consequence:

• not all flip-flops have to be designed for the maximum clock input frequency;
• since not all flip-flops are switched by the master clock, the control functions for those flip-flops simplify.

In total this leads to less complex combinational circuits for controlling the flip-flops, as can be seen when the following example of a 4-bit asynchronous counter is compared to the synchronous counter.


figure 3.20

If a clock line is connected to the input T of the first flip-flop, the following pulse diagram results.

figure 3.21

This function can also be exhibited in a truth-table. If a value is assigned to every output (e.g. E0=1, E1=2, E2=4, E3=8), the dual-code is found and it can be proven that the counter runs through all numbers from 0000|2 to 1111|2 .


clock   E3 E2 E1 E0   number
0       0  0  0  0    0
1       0  0  0  1    1
2       0  0  1  0    2
3       0  0  1  1    3
4       0  1  0  0    4
5       0  1  0  1    5
6       0  1  1  0    6
7       0  1  1  1    7
8       1  0  0  0    8
9       1  0  0  1    9
10      1  0  1  0    10
11      1  0  1  1    11
12      1  1  0  0    12
13      1  1  0  1    13
14      1  1  1  0    14
15      1  1  1  1    15
16      0  0  0  0    0
17      0  0  0  1    1

Table 3.22
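The ripple behaviour itself can be sketched in a few lines, assuming falling-edge triggered T stages (each stage toggles when its predecessor's output changes from 1 to 0), which yields exactly the upward count of the table:

e = [0, 0, 0, 0]                  # outputs E0..E3

def clock_pulse(e):
    # one master clock pulse: E0 toggles; a falling edge ripples onward
    for i in range(4):
        old = e[i]
        e[i] ^= 1                 # stage i toggles
        if not (old == 1 and e[i] == 0):
            break                 # no falling edge -> the ripple stops here

for n in range(18):
    print(n, e[3], e[2], e[1], e[0])
    clock_pulse(e)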

3.5.2 Shift registers

In digital data processing it is often reasonable to shift a piece of information stepwise, e.g. inside a memory chain. Such a memory chain is called a shift register. Data is shifted by clock pulses by one or multiple positions, but only one position per pulse. Shift registers are needed e.g. for basic arithmetic operations like multiplication and division: both can be realized by an addition or subtraction, respectively, combined with a shift operation. Even a shift operation alone represents a mathematical operation: if position values are assigned to the outputs of a shift register, then a shift of a dual number to the right corresponds to a division by 2, and a shift to the left corresponds to a multiplication by 2. From a circuit point of view, the shift register, like the counter, consists of a pure sequential part and a pure combinational part. The data in the register is shifted by the clock pulses from one memory cell to the next. In most cases a conversion of the data format is possible, such that serial inputs/outputs can be converted into parallel inputs/outputs.


Example: Design of a serial shift register

A 3-staged shift register for right-shifting input sequences x = {0, I} is to be designed. The first input is constantly I, and the flip-flops are clocked synchronously. The state table for the shift register can now be set up.

Current state      Next state
I  Q2 Q1 Q0        Q2+ Q1+ Q0+
1  0  0  0         1   0   0
1  0  0  1         1   0   0
1  0  1  0         1   0   1
1  0  1  1         1   0   1
1  1  0  0         1   1   0
1  1  0  1         1   1   0
1  1  1  0         1   1   1
1  1  1  1         1   1   1

Table 3.23

From the state table the equations for the next states can be determined:

Q2+ = I
Q1+ = Q2
Q0+ = Q1

If the realization occurs by use of D-flip-flops, one gets the following circuit for the 3 staged serial shift register:

figure 3.22

This layout of the register takes its inputs serially and outputs them serially, too. At every clock pulse the data is shifted one cell to the right. The shift register can therefore be employed as a FIFO (First In - First Out) memory.
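A sketch of the 3-staged register with the constant serial input I = 1, directly implementing Q2+ = I, Q1+ = Q2, Q0+ = Q1:

q2 = q1 = q0 = 0
I = 1                          # constant serial input, as in the example
for _ in range(4):
    print(q2, q1, q0)
    q2, q1, q0 = I, q2, q1     # Q2+ = I, Q1+ = Q2, Q0+ = Q1
# prints 000, 100, 110, 111: the input fills the register from the left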


Example: Realization with JK flip-flops

Comparison of coefficients (the equations have to be expanded by (Qx + Qx')) gives:

J2 = 1,  K2 = 0   (serial input constantly I = 1)
J1 = Q2, K1 = Q2'
J0 = Q1, K0 = Q1'

Circular shift registers

If the output of the last flip-flop of a shift register is connected to the input of the first flip-flop, the result is called a circular shift register (ger.: Ringschieber). A signal travelling through this register returns to its origin when it reaches the end. Such a register can be used as a ring counter.

figure 3.23

Series/Parallel-Converters

Other special types of shift registers are the so-called series/parallel converters. These are needed when information arrives in series but has to be processed further in parallel. Their counterpart is the parallel/series converter, which outputs in series information that was originally input in parallel.

In the shift register considered now, data can be input in parallel and shifted out in series. Parallel loading of the register is done via the input TP, while serial shifting is done via the clock input TS. This shift register is often referred to as PISO (Parallel In - Serial Out).


figure 3.24

The counterpart to the PISO is a shift register that inputs data serially and outputs them parallel. This is also referred to as SIPO (Serial In - Parallel Out). The input TP for parallel loading of the register is not required here any more.

figure 3.25

The corresponding symbols for the described shift registers look as follows:

figure 3.26


3.6 Basic sequential circuits for program-processing

Control units can in general be designed as finite state machines with the help of the Mealy model. One major task is the design of microprogram control units for instruction control in modern microprocessors. A given microprocessor instruction is in that case decomposed into a sequence of so-called micro-instructions; the according sequence is stored in a microprogram store.

figure 3.27

The control unit given in the above figure essentially consists of a register and a combinational circuit. The register contains the state of the control unit; the combinational circuit stores the program. The combinational circuit can be built from discrete logic or in integrated technology, e.g. by the use of ROMs or PLAs.

3.6.1.1 Control unit in ROM-configuration

figure 3.28


• The decoder is already a part of the ROM.
• Saving of n − ld(n) feedback lines and flip-flops in the register.
• Analogous saving of ROM bits.
• Simple determination of memory contents and programming.
• RAM instead of ROM allows quick reprogramming, i.e. a change of function while the circuit is installed! But: the utilization of the fully coded ROM is mostly rather low.

A toy model of this ROM-based organization is sketched below.
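The following sketch treats the ROM as a lookup table addressed by state and input; all addresses, contents and the input sequence are invented for illustration:

# (state, x) -> (next_state, y): one ROM word per address, the register
# holds the state between clock pulses.
ROM = {
    (0, 0): (0, 0),
    (0, 1): (1, 0),
    (1, 0): (2, 0),
    (1, 1): (0, 1),
    (2, 0): (0, 1),
    (2, 1): (1, 0),
}

state = 0                         # register contents at reset
for x in [1, 1, 0, 1]:            # input sequence
    state, y = ROM[(state, x)]    # one lookup = one micro instruction
    print(f"x={x} -> state={state}, y={y}")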

3.6.1.2 Control unit in PLA-configuration

figure 3.29

Configuration and complexity here are similar to those of the ROM, since the code space in the feedback is fully utilized (consequently, the AND plane constitutes a 2-to-4 decoder).

If

- the code space is not fully utilized, and/or
- additional inputs for control signals are available,

then the effort to be spent on a PLA is less than on a ROM.


Technical comparison:

m: number of address lines (current state Zn plus inputs Xn)
k: length of the data word (next state Zn+1 plus outputs Yn)

• ROM: number of bits: always k · 2^m
• PLA: number of conjunctions and disjunctions in the equations: L, where often L << k · 2^m, especially for large values of m and k

A small numeric comparison follows below.
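Worked numbers under invented example values (m, k and L below are assumptions, not figures from the notes):

# 4 state bits + 6 control inputs -> m = 10 address lines;
# 4 next-state bits + 8 outputs   -> k = 12 data bits.
m, k = 10, 12
rom_bits = k * 2**m               # ROM always stores k * 2^m bits
L = 40                            # assumed number of PLA product terms
print(rom_bits)                   # 12288 bits
print(L, "product terms, i.e. L <<", rom_bits)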


4. Testing digital circuits

4.1 Principles of testing

Faults may occur in various development steps, e.g.:

Design faults like:
- the implementation does not fulfill the specification
- the logical design is wrong, i.e. the wrong function is implemented
- misinterpretation of the specification; special cases are not considered

Implementation faults like:
- fault-free components do not work when combined
- interface and timing behavior misunderstood
- wiring faults

Component faults like:
- the system is properly designed and wired, but still not working
- not all components always work properly (damaged on arrival, etc.)

The principle of testing

The justification for tests of all kinds is the fact that neither design faults are totally avoidable nor is fault-free production possible. In manufacturing, defects occur with a statistical probability and have to be found by tests. These defects cannot be prevented even by better production processes, but their propagation from one process stage to the next can be stopped by continuous testing of the single production steps. Figure 4.1 exhibits the process of testing and the setup of an ATE (Automatic Test Equipment) with its most important components. The block DUT is the Device Under Test, which in general has m inputs, n outputs and q internal states.


Figure 4.1: Principle of testing

The intention of testing is to find input sequences (also called test patterns or stimuli) and apply them to the DUT, such that the output signals can be compared with the outputs expected from a fault-free DUT. The stimuli are saved in the test-pattern memory together with the correct outputs (responses) of the fault-free circuit. The stimuli are then forwarded to the DUT, while the outputs of the DUT are forwarded to the comparator. The comparator compares the present outputs with the expected (stored) responses and produces a go/no-go decision. The control unit (Steuerung) contains the test program, which among other things generates the addresses under which the stimuli and the responses are stored in the test-pattern memory. Even though this process runs automatically, it should not be forgotten that both the test patterns and the test program have to be delivered by the test engineer.

In the following, the class of DUTs is restricted to digital circuits, which behave in either a combinational or a sequential way. For a complete test of a combinational circuit one would need up to 2^m stimuli; for a sequential circuit even 2^(m+q) stimuli would be needed.

Example: memory requirements

a) Assumption: combinational circuit with m = 24 inputs
s(timuli) = r(esponses) = 2^24 = 16M words = 48 MByte (with 1 word = 24 bit = 3 byte)
(All stimuli are independent of each other, as it is a combinational circuit.)
The required memory size thus amounts to


s + r = 96 MByte

b) Assumption: sequential circuit, here e.g. a microprocessor with m = 24 and q = 100. That leads to s = r = 2^124 words ≈ 2 · 10^37 words, i.e. ≈ 6 · 10^37 byte.

These patterns have to be created as a sequence, as the DUT has to be considered a sequential circuit; with it, a test program for the time-accurate running of the test sequence is needed.

Time considerations

Control in the ATE runs at 100 MHz ⇒ 10 ns per stimulus, and with it the duration of the test for the sequential circuit (see the sketch after the following list for the arithmetic):

10^37 · 10^-8 s ≈ 10^29 s ≈ 3.17 · 10^21 years = 1.16 · 10^24 days (for comparison: 2000 years = 7.3 · 10^5 days)

⇒ A test applying all possible patterns is normally not possible.

Consequences:

• Determination of minimum sets of test patterns ⇒ test-pattern generation
• Evaluation of given test patterns ⇒ fault simulation (a fault simulator is a tool that weighs the stimuli and calculates the fault coverage of a set of given test patterns)
• Reduction of the problem in the DUT, i.e. decreasing the values of m and q ⇒ easily testable design (the evaluation of a circuit can be made via testability analysis)
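The two estimates above, redone with Python used purely as a calculator (constants as in the text):

m, q = 24, 100
words = 2**m                          # combinational case: 16,777,216 stimuli
bytes_per_word = 3                    # 24 bit = 3 byte
print(words * bytes_per_word / 2**20) # 48.0 MByte each for s and r

print(f"{2**(m + q):.1e} words")      # sequential case: 2^124 ~ 2.1e37

stimuli = 1e37                        # the notes round 2^124 to 10^37
seconds = stimuli * 1e-8              # 10 ns per stimulus at 100 MHz
print(f"{seconds / 3.156e7:.2e} years")   # ~3.17e21 years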

The consequences given above also reflect the historical development that followed from the increasing complexity of DUTs. Nowadays the trend goes more and more towards a combination of the available methods. An additional problem is the fact that the ATE is normally at least one generation older than the DUT.


4.2 Test as a process

The relationship mentioned above can be generalized to a process model in which the design of a circuit, and especially the single steps of testing, are considered as one entire process. When producing or processing integrated circuits and PCBs, or wiring those components, process-specific faults occur with certain probabilities. Accordingly, the same statistical dependence between usable production and waste has to be taken into account in every production phase.

From that consideration it follows that tests are essential in the single steps of production and design. Especially in the areas of series production and routine tests, a further refinement of the phases of testing and producing will occur. It is easy to see that multiple successive production steps lead to a relatively small productive output if no test mechanism is applied. In that case a final test with extremely good fault detection is necessary to keep the product quality at a high level. Compared to multiple smaller tests between the single stages of production, which would be adapted to specific faults, such a final test would lead to very high costs, if it is technically realizable at all.

4.2.1 Overview of test mechanisms

Fully automated test generation

Starts from the circuit diagram and is possible in general. From an economical point of view it is justifiable only for relatively small circuits. One uses:

• the Boolean difference
• the D-algorithm
• enhancements of the D-algorithm

Heuristic test-pattern-generation

Test patterns are obtained from the experience and intuition of the test engineer. This method is used for complex circuits that cannot be tested by algorithmic means, or if the exact circuit structure is unknown.

Test-friendly design

All details about the circuit are known, and the design can even be changed. Testability can be improved considerably by DFT (Design For Testability) or BI(S)T (Built-In (Self-)Test).


Testability analysis

Both of the newer methods, partitioning test-pattern generation and the inclusion of test aids, are inherently heuristic, i.e. their result depends strongly on the user's intuition. For that reason, testability analysis has been developed as a further decision criterion for the test engineer.

4.2.2 Important CAD-tools for test generation

Logic simulations are mainly used in the design phase to verify that the circuit design behaves according to the original specification. Within the scope of test generation, logic simulations are used to determine the behavior of the fault-free circuit while certain stimuli are applied to the inputs. These patterns are used later on for comparison with the real output of the circuit.

Fault simulations are used to determine how many of all assumed faults are detectable by a set of given test patterns, and therefore they deliver the fault coverage for this set. Normally the stuck-at fault model is chosen. There, one assumes that all production faults manifest as transmission lines jammed at the logic value 0 or I. To limit computing time it is assumed that only one fault occurs at a time.

Automated test-pattern generation:

While logic and fault simulation only have to propagate signals from input to output, popular methods for test-pattern generation have to compute two phases:

- fault-signal propagation to the outputs of the circuit, and
- backward simulation from the point of failure to the inputs, to "inject" the appropriate fault.

Thereby a number of conflicts normally occur, i.e. single paths turn out not to be applicable and have to be replaced by others.

4.2.2.1 Effort-estimation

From the causes mentioned above one can imagine that the effort in using these tools varies greatly. While in logic simulation there is a linear relationship between computation time and the size of the circuit (n: number of gates), this relation rises to n^2 in fault simulation and to n^3 in automated test-pattern generation. The effort for testability analysis of a circuit rises approximately linearly with the size of the circuit. Opposed to that apparently optimal behavior lies the fact that the


testability analysis provides only an approximation of the complexity; it therefore complements the tools mentioned, but cannot replace them.

4.2.2.2 Application of test-tools in integrated systems

Facing the complexity of modern circuits, a combination of the classic tools has turned out to be the most effective solution (see figure 4.2).

figure 4.2: Application of test tools in integrated systems
a) Iterative procedure assisted by a test-pattern generator and a fault simulator
b) Possible applications of testability analysis:
ba) for redesign
bb) for controlling the ATPG algorithm
bc) as a substitute for fault simulation

Due to the above-mentioned guideline values for the effort of every tool, it is obvious that the process in figure 4.2 a) requires a huge amount of computation capacity. Process 4.2 b) shows the different possibilities for the use of testability analysis. The three alternatives can be employed in many different ways in practice, where bc) has the least importance. The relevance of the other ways lies in the fact that the automated test-pattern generation either works on much smaller problem sets (alternative ba) ), or can be controlled very efficiently by knowing the complexity (alternative bb) ). Proceeding by


means of figure 4.2 can therefore be done with less computation time, and additionally this goes in the direction of the ideal process model.

Following the conceptual steps for building up a process model mentioned here, single tasks have to be solved for the implementation. One aspect is the integration of design and test with regard to the test equipment, where especially the limited possibilities of test machines compared to the high demands of the developers have to be considered. Furthermore, a consistent database has to be created, as both data storage and data transfer play an important role in a closed process model.

4.3 Faults and fault-models

Testing means differentiating between faulty and fault-free circuits. Recognition of faults in a production line requires continuous observation of each step in the production process.

Faults in the circuits can thereby occur in various phases of product design:

Sources of faults:

• in raw material: crystal faults
• in design: logic faults (these are not covered in this section)
• in production: non-uniform doping, non-uniform etching, masking faults, bonding faults
• after production: electrical overload / surge (while operating), thermal overload, static discharge (especially in CMOS circuits), atom migration
• in operation: environmental factors, corrosion, micro-cracks, electromigration (material transport)

The systematic acquisition of those and other effects occurs in fault-models.


figure 4.3: fault-classification / source: [MUT 75]

A modification of the behavior of a circuit caused by a physical defect is called a fault:

• logic faults: A logic fault manifests itself in the logic behavior of a circuit.

• parametric faults: Parametric faults change the operating parameters of a circuit, like speed, power-consumption or temperature behavior, etc.

• delay faults: Delay faults are malfunctions of the circuit, regarding the transition-time. They can be caused by parameter changes.

• intermittent faults: Faults, that occur only from time to time.

• permanent faults: Permanent faults are time-invariant. Logic faults are always permanent.


One distinguishes:

Fault-detecting tests: compare the fault-free circuit with all possible types of erroneous circuits.

Fault-diagnosing tests: do the same as fault-detecting tests, but can additionally distinguish between the faulty circuits and allow statements about the type of fault.

4.3.1 The Stuck-at fault-model

The origin of the stuck-at fault model is bipolar technology, whose most important branches are Transistor-Transistor Logic (TTL) and Emitter-Coupled Logic (ECL). The stuck-at model is the most widely used fault model, as it describes the most frequently occurring symptom, namely that transmission lines stay in a permanent state and no longer follow any signal change. One distinguishes stuck-at-0 and stuck-at-I (s-a-0; s-a-I), meaning that a node is fixed to the low or the high logic level, respectively.

figure 4.4: Assembly of a NAND gate in TTL

Possible cause of a stuck-at fault: e.g. R1 missing ⇒ T3 conducts, T4 blocks ⇒ x = I

Idea of the single stuck-at fault model

The stuck-at model is a simple binary representation which can easily be used in a program. A small sketch of this fault injection follows after the list.
• All assignment functions in the gates remain in place.
• All lines can take the faulty logic states 0 and I (permanently).
• Faults propagate from the inputs to the outputs.
• Single-fault assumption: in one circuit only one fault is considered at a time.
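A minimal sketch of the fault-injection idea (the circuit, net names and helper function are invented for illustration):

# Every named net of a gate-level model may be forced to 0 or I (= 1),
# one net at a time (single-fault assumption).
def nand(a, b):
    return 1 - (a & b)

def circuit(x1, x2, fault=None):
    """fault = (net_name, stuck_value) or None for the fault-free circuit."""
    def net(name, value):
        if fault and fault[0] == name:
            return fault[1]            # line jammed permanently at 0 or I
        return value
    a = net("x1", x1)
    b = net("x2", x2)
    return net("y", nand(a, b))

print(circuit(1, 1))                   # fault-free: 0
print(circuit(1, 1, fault=("x1", 0)))  # x1 stuck-at-0: 1 -> fault observable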


4.3.2 The Stuck-open/Stuck-on fault-model

This type of fault typically occurs in MOS circuits and, in contrast to the stuck-at model, relates to single transistors. These faults can lead to sequential behavior in a combinational circuit. Additionally, it is not always possible here to assign a fixed state to a transmission line, so that the effort for a simulation increases strongly. Furthermore, very detailed circuit models are necessary for considering those faults. It is also possible to use stuck-open and stuck-on faults at the gate level, by assigning a stuck-at-0 or a stuck-at-I fault for a line interruption to the following gate. Because of the different meanings of those faults, however, one should consider this behavior as a separate fault group and use the stuck-open/stuck-on model mainly at the transistor level. MOS circuits are widely used in n-channel, (C)MOS and SOS technology. Figure 4.5 shows the basic structure of an n-channel transistor in MOS technology.

figure 4.5: Assembly of a MOS transistor

A further subdivision of MOS transistors is made via their behavior in the idle state. One distinguishes self-conducting (depletion) and self-locking (enhancement) types.

figure 4.6: Types of MOS transistors


Gate logic arises from a combinational circuit where, in NMOS or PMOS technology, only n-channel or only p-channel transistors make up the circuit. The circuit is connected to the supply voltage via a load transistor, which acts as a pull-up. PMOS circuits are hardly found today, as they always operate more slowly than corresponding NMOS circuits, caused by the lower hole mobility in silicon. In today's NMOS technology, a logic circuit in the form of a combinational circuit is realized by a set of input variables which drive a set of switches which, depending on the variables, can pull the potential of the output down to ground (see figure 4.7). If the circuit has no conductive path to ground, the output node takes positive potential via the load transistor. The inputs of MOS transistors are driven almost current-free. A current flow inside the NMOS circuit thus only occurs if a conductive path through the switch assembly exists.

figure 4.7: NMOS combinational circuit

Gate logic in CMOS technology consists of pairwise arranged n-channel and p-channel transistors which are both driven by the same signal. Accordingly, there is one combinational network made of p-channel transistors and one made of n-channel transistors. In contrast to NMOS or PMOS technology there is no load transistor in CMOS. Line capacitance has a special meaning in CMOS technology (see figure 4.8), as it can lead to a retaining (sequential) behavior in case of a fault.


figure 4.8: CMOS-combinational circuit

figure 4.9: Stuck-open fault          figure 4.10: Stuck-on fault

Different types (self-conducting, self-locking) affected by the same fault (e.g. a line interruption at the gate) lead to different behavior.

Possible consequences of line interruptions:

NMOS:
- f: logically false
- f = Z (high impedance) in the absence of a load resistor
- f = U (unknown voltage level) if the load resistor is shorted

CMOS:
- f: logically false
- f_{n+1} = f_n: f stores the preceding state f_n on the capacitive line


4.3.3 Wiring faults

The reason for most faults is the interruption of a transmission line or a short circuit (short) between transmission lines. On an unpopulated PCB these effects are easy to determine. If, however, the PCB is populated with analog (linear) elements, changed parameters and possibly limited output signals can normally be observed. The consequences in digital circuits are described by the models in figure 4.11. If PCBs are populated with mixed elements (analog and digital, or different digital device groups), faults are no longer easy to find.

figure 4.11: Consequences of open inputs and signal shorts for digital elements

Technology   Open (Leerlauf):             Short (Kurzschluß):         Fault model
             open input acts as           shorted outputs
TTL          I                            0 dominates                 stuck-at fault; shorts act
ECL          0                            I dominates                 expanding on the structure
                                                                      (not possible to illustrate
                                                                      with stuck-at alone)
(C)MOS       mostly undetermined (open    mostly undetermined:        stuck-open, stuck-on,
             DS, shorted DS, storage      undefined voltage level,    stuck-at
             effect); crosstalk; antenna  danger of destruction
             behavior; storage effect


(Open)

The behavior of interrupted lines or non-soldered points depends strongly on the technology used, so this information has to be considered when modeling the fault. The critical point is which value the succeeding gate assigns to the open line. Figure 4.12 exhibits the model for bipolar technologies.

Figure 4.12: Consequences of opens on circuits in bipolar technology

In CMOS circuits an open acts like a stuck-open fault; however, it may affect multiple transistors at the same time.

(Short)

Shorts between neighboring transmission lines can only be considered if the exact topology of the circuit is known (see figure 4.13). For simplification, one therefore normally restricts such observations to neighboring pins of the IC. Additionally, for a given topology it should be possible to generate the set of faults automatically; e.g. the same consideration as for neighboring pins can be applied to the wiring of an IC in standard-cell technology. It is also important to define how a short behaves, as the behavior differs depending on the technology used (TTL, CMOS, NMOS, ...).

figure 4.13: short circuit


In TTL and NMOS circuits the logic value 0 is driven at the gate output by a lower-ohmic switching transistor than the value I; thus the logic 0 dominates in case of a short. In ECL technology the resistance relationships at the gate outputs are opposite, and thus the logic I dominates in case of a short. In CMOS circuits comparable resistances exist for the switched n- and p-networks; thus, in case of a short the logic values at the outputs are mostly undefined. In each of the technologies the fault expands the structure. In figure 4.14 the short (example from figure 4.13) is expanded by a logic function which represents the dominance of the logical value in case of a short. The function of the gate is denoted in general by an O (for operator). Depending on the elements used, the operator has to be chosen in the following way:

O = AND in case of TTL and NMOS
O = OR in case of ECL

figure 4.14: Structural expansion in case of short circuit

4.3.4 Connection between circuit-model and fault-model

A requirement for the use of computers for testing is an appropriate model of the circuit under consideration. For that purpose one needs a circuit model and a definition of the fault model that accounts for the relevant technological fault mechanisms. Typical simulation tools contain a set of basic elements, the so-called "primitives", from which every system can be built up and which also give the reference for the chosen fault model. The use of models to describe the DUT and its faults brings simplifications on the one hand, but blurred test statements on the other. The desire for maximum simplification of the model with a maximum number of details at the same time permanently leads to new abstractions of the model, driven by the evolution of circuit technologies. In figure 4.15 some important steps of this evolution are shown.


figure 4.15: Circuit and fault models on different abstraction layers

The level of abstraction of the models in figure 4.15 rises starting from the structural models (technology, transistor, gate). Real faults occur only on the technology layer and are only represented on the higher layers. The significance of these fault models thus always has to be evaluated in combination with the circuit model used. In the following overview the most important properties of the circuit models from figure 4.15 are summarized.


Technology layer

Here the basis is the representation of a circuit as a network of equivalent circuits of the single electronic devices, whose physical properties can be controlled by parameters. For these devices, different voltages and currents can be calculated, but there are economic limits, given by the immense effort of extremely exact simulations (the limit for the number of single electronic devices inside a circuit lies at about 100). Tests related to the semiconductor structure are so device-specific or manufacturer-specific that they do not lead to a general model and can only be performed by the manufacturers themselves. The gained conclusions relate to the behavior on the physical / technological layer and are inapplicable for the development and analysis of test patterns for ordinary PCBs.

Transistor layer

On this layer the user executes his parametric tests. Thereby compliance with threshold values, such as voltage levels or leakage currents for input and output transistors, is checked against the corresponding data sheets. Modeling a complex integrated circuit on the transistor layer normally fails because of the immense effort and missing documentation.

Gate layer

This is the lowest layer for the logical design of digital systems and with it the layer best suited for tests of the logical behavior of a circuit. The gate representation is based on the behavior of logic circuits realized in bipolar technology and permits exhibiting logical and physical behavior (combining input variables, timing, etc.). The unidirectional signal flow used also complies with the real behavior of bipolar technology. For MOS circuits a pure gate representation is not always possible. However, when the special properties of MOS transistors are considered, it is possible to combine them into equivalent gate models, called "complex logic gates", which can then be expressed easily via gate logic. These gates normally do not have a direct relation to the transmission lines and combinations inside the MOS circuit; they only serve as a description of the unit. Furthermore, it is possible to describe circuits in a mixed representation of gate and switch models to represent reality best.

Functional layer

By reason of the economic advantages of a functional description, several methods to represent circuit behavior by a function have been developed:


• functional blocks (e.g. registers, ALUs, etc.)
• truth tables (for combinational circuits)
• program modules in a high-level language which describe the behavior of the blocks
• graphical illustrations (e.g. Petri nets)
• further methods as well as mixtures of these

These possibilities are chosen if the appropriate functions
• are not available as structural models, or
• exactly that part of the circuit shall not be considered in detail, but its existence is still important for the operation of other elements or even the whole circuit.

Register-transfer layer

On the register-transfer layer the DUT is reduced to an appropriate set of registers and operators. The function of the device is to store data and to modify data while transferring them.

Automata layer

This most abstract model allows multiple simple representations of the DUT. Its function is described by the transfer and output functions of the state machine, where those terms still leave a lot of leeway for modeling and have to be concretized for every individual case.

For practical purposes it follows:
• one always has to check whether a given fault model fits the present circuit, or an adequate fault model has to be created;
• details of a test (e.g. "detects 90% of ...") always have to be stated together for a circuit model and a fault model.

4.4 Test generation

The objective of test generation (test-pattern computation) is to find test patterns for the inputs of the circuit, based on an exactly defined formulation, which allow observing the circuit's outputs and then distinguishing between correct and erroneous output data.


The main problem in generating tests for complex circuits is to find a set of input data (stimuli) that is smaller than the set of all possible input combinations. In general, a fault is only detectable if the wrong behavior it causes is observable at the output.

Example: fault matrix of an AND gate with three inputs

figure 4.16: AND gate with inputs e1, e2, e3 and output a

In this circuit 8 single faults can occur: Fi, 1 ≤ i ≤ 8. They are characterized by one input remaining at the logic state 0 or I, or by the output stuck at 0 or I. For these faults the following fault matrix can be constructed. The columns e1, e2, e3 contain all possible input patterns, the column a holds the responses to all input patterns for the fault-free circuit, and the columns F1...F8 contain the output values in the presence of the fault indicated below the fault number Fi.

                          Output at a specific fault
No.  e1 e2 e3  a   F1      F2      F3      F4      F5      F6      F7     F8
                   sa0/e1  sa0/e2  sa0/e3  saI/e1  saI/e2  saI/e3  sa0/a  saI/a
0    0  0  0   0   0       0       0       0       0       0       0      I
1    0  0  I   0   0       0       0       0       0       0       0      I
2    0  I  0   0   0       0       0       0       0       0       0      I
3    0  I  I   0   0       0       0       I       0       0       0      I
4    I  0  0   0   0       0       0       0       0       0       0      I
5    I  0  I   0   0       0       0       0       I       0       0      I
6    I  I  0   0   0       0       0       0       0       I       0      I
7    I  I  I   I   0       0       0       I       I       I       0      I

Table 4.1

Some of the columns are identical, as the corresponding faults create the same erroneous behavior at the output, e.g. F1, F2, F3, F7. These faults can indeed be observed, but they cannot be distinguished. It is therefore sufficient to let only one of them remain in the matrix and to delete the others. With this one gets the following reduced matrix.


                          Output at a specific fault
No.  e1 e2 e3  a   F1      F4      F5      F6      F8
                   sa0/e1  saI/e1  saI/e2  saI/e3  saI/a
0    0  0  0   0   0       0       0       0       I
1    0  0  I   0   0       0       0       0       I
2    0  I  0   0   0       0       0       0       I
3    0  I  I   0   0       I       0       0       I
4    I  0  0   0   0       0       0       0       I
5    I  0  I   0   0       0       I       0       I
6    I  I  0   0   0       0       0       I       I
7    I  I  I   I   0       I       I       I       I

Table 4.2

In general: multiple non-distinguishable faults can be combined into classes of equivalent faults, each of which can then be covered by one test. This generally reduces the set of faults from 2m + 2 to m + 2 (with m = number of inputs). It is sufficient to find a test pattern for one member of each fault class.

Another question is whether it is necessary to employ all input combinations to recognize all possible faults. E.g.:

Combination 1 (0,0,I): recognizes only F8
Combination 3 (0,I,I): recognizes F4 and F8

Thus combination 3 covers F4 in addition to F8, and all combinations that cover only F8 can be deleted (combinations 0, 1, 2 and 4). By employing the combinations 3, 5, 6 and 7, all faults of the gate can be recognized. This set of test patterns cannot be reduced any further. The sketch below recomputes the fault matrix and the fault classes.
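A mechanical recomputation of Table 4.1 and its collapse into equivalence classes (our sketch, not the authors' tool):

from itertools import product

def and_gate(e, fault=None):
    """fault = (position, value); positions 0..2 are inputs, 3 is the output."""
    e = list(e)
    if fault and fault[0] < 3:
        e[fault[0]] = fault[1]          # input jammed at 0 or I
    a = e[0] & e[1] & e[2]
    if fault and fault[0] == 3:
        a = fault[1]                    # output jammed at 0 or I
    return a

faults = [(pos, v) for pos in range(4) for v in (0, 1)]   # F1..F8
columns = {f: tuple(and_gate(e, f) for e in product((0, 1), repeat=3))
           for f in faults}

# Equivalent faults produce identical output columns (e.g. all sa0 faults):
classes = {}
for f, col in columns.items():
    classes.setdefault(col, []).append(f)
for col, fs in classes.items():
    print(fs, "->", col)
# The 8 faults collapse to m + 2 = 5 classes, as stated above.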

More definitions:

Def. 1: In a combinational circuit that contains only (N)AND, (N)OR and inverters it is sufficient to test all stuck-at faults at the primary inputs (PI) and at fan-out nodes. The distinguishability needed for diagnostic usage is, however, lost by this reduction.


figure 4.17: Example circuit for Def. 1

4.4.1 Boolean Difference

Originally this method is based on the circuit's equation and is therefore independent of the implementation. In this case it delivers all tests for stuck-at faults at the primary inputs and outputs. In principle, however, it can also be employed depending on the implementation, by determining the functional equation of the circuit. The limits of the method are given by the size of the circuit.

The method of the Boolean difference
• is independent of the implementation in principle — it delivers the complete set of test patterns for all input variables;
• can also be employed depending on the implementation — it provides the set of test patterns with all possible paths and can also be used for internal nodes;
• is unhandy, as in the form treated here it works with formula manipulation (alternative: Bochmann/Steinbach, logic design with XBOOLE; this method is an example of a computer-based implementation for the solution of Boolean differences with the help of ternary vector lists);
• is in general too extensive, as all solutions have to be evaluated, although normally one test per fault is enough.


Derivation:

The point of origin for these considerations is the expansion theorem:

f(x1, x2, ..., xn) = xi · f(x1, ..., xi-1, I, xi+1, ..., xn) + ¬xi · f(x1, ..., xi-1, 0, xi+1, ..., xn)

The Boolean difference is the partial derivative of the function f with respect to the variable xi or, when taken with respect to all variables, the total derivative. It is constructed by inserting first I and then 0 for the variable xi into the function f and combining the two results with XOR:

df(x)/dxi = f(x)|xi=I ⊕ f(x)|xi=0
          = f(x1, ..., xi-1, I, xi+1, ..., xn) ⊕ f(x1, ..., xi-1, 0, xi+1, ..., xn)

The Boolean difference thus is I exactly when the behavior of f for xi = I differs from the behavior for xi = 0. This describes exactly the fault-free case. Establishing the Boolean difference thus enables a path from the PI to the PO such that changes at the input xi can be observed at the output. In total, the following propositions can be made:

df/dxi = Ti (non-trivial solution)  ⇒  gives all possible test patterns
df/dxi = 0                          ⇒  f(x) is independent of xi: the node xi is redundant
df/dxi = I                          ⇒  f(x) depends only on xi: all variables except xi are redundant

Specific calculation rules:

a) d(¬f(x))/dxi = df(x)/dxi

b) d/dxi (df(x)/dxj) = d/dxj (df(x)/dxi)

Example:

figure 4.18: Example circuit for test-pattern determination with the help of the Boolean difference

From the circuit in figure 4.18 the following equation can be derived:

f = b + a·c

In the following, the Boolean difference is established for the three inputs a, b and c. The crossover from the antivalence (XOR) system to the AND/OR system occurs by AND-combining each side with the complement of the other side of the original system; after that, the XOR is replaced by an OR (x ⊕ y = x·¬y + ¬x·y). The corresponding proof can be done by the reader, if required.

∂f/∂a = f|a=I ⊕ f|a=0
      = (b + c) ⊕ b
      = (b + c)·¬b + b·(¬b·¬c)
      = ¬b·c

∂f/∂b = f|b=I ⊕ f|b=0
      = I ⊕ a·c
      = ¬(a·c)
      = ¬a + ¬c

As the circuit concerned is symmetrical in a and c, the Boolean difference for c can be found by taking the solution for a and exchanging the variables:

∂f/∂c = ¬b·a


Every solution of ∂f/∂xi = I with xi ∈ {a, b, c} describes a signal path from xi to f. To accomplish a test for a specific fault, xi additionally has to be set to the inverse of the faulty value. A test for a stuck-at-0 fault at the primary input a thus is:

T(a/sa0) = a · ∂f/∂a

From ∂f/∂a = I we get ¬b = I (i.e. b = 0) and c = I; from a/sa0 we get a = I. From this, the stimuli for the corresponding test pattern have to be a = I, b = 0, c = I. By examining all Boolean differences, the test patterns for all possible stuck-at faults at the primary inputs can be established:

         c b a            c b a            c b a            c b a
Ta/sa0   I 0 I   Tb/sa0   0 I 0   Tb/saI   0 0 0   Tc/sa0   I 0 I
Ta/saI   I 0 0            I I 0            I 0 0   Tc/saI   0 0 I
                          0 I I            0 0 I

Table 4.3

The set found is the complete test-pattern set, which can be further reduced for application.

Minimum test-pattern sets: A minimum test-pattern set has to include at least one test for every possible fault. In the above example the test for s-a-0 at a is the same as that for s-a-0 at c, and the tests for s-a-I at a and s-a-I at c each also cover s-a-I at b. Additionally, one of the three patterns for s-a-0 at b has to be chosen. The set found in this way can be used for a functional test.

a) Functional test (go / no-go test):

Table 4.3 The found set is the complete test-pattern set which can be further reduced for application. Minimum test-pattern sets: A minimum test-pattern set has to include at least one test for every possible fault. In the above example the test for s-a-0 at a is same as for stuck-at-0 at c and also s-a-I at c tests s-a-0 at b. Additionally one of the three patterns of s-a-I at b has to be chosen. The so found set can be used for a functional test. a) Functional test (Go / No go Test):

Test (c b a)      Faults detected
0 I 0   ⇒   {Tb/sa0}
0 0 I   ⇒   {Tc/saI, Tb/saI}
I 0 0   ⇒   {Ta/saI, Tb/saI}
I 0 I   ⇒   {Ta/sa0, Tc/sa0}

Table 4.4


There are 4 resulting patterns; the set is thus relatively small, but offers only limited possibilities for further investigation or diagnostics.

b) Diagnostics

When "maximum differentiability of faults" is taken into account while selecting test patterns from the complete set, a diagnostic test can be derived. This test is interesting especially for precise analysis, e.g. for repair or for the systematic improvement of production processes, as it can indicate the location of a fault. The method for setting up a diagnostic test is:

1. List all tests with a unique fault assignment.
2. Select test patterns from the set such that preferably only additional tests are listed.

By selecting different patterns for the faults at b it becomes possible to locate a fault in addition to merely recognizing it. The patterns selected here are:

Test (c b a)      Fault
I 0 I   ⇒   {Ta/sa0, Tc/sa0}
I 0 0   ⇒   Ta/saI
0 0 I   ⇒   Tc/saI
0 I 0   ⇒   Tb/sa0
0 0 0   ⇒   Tb/saI

Table 4.5

4.4.2 Path-sensitization

Looking back at the Boolean difference as a method for test-pattern generation, one sees that it always provides all patterns for a fault and thus also all paths; it is also a very costly method. In this chapter it is shown how to develop a path for only a single fault. This method is very general and is used in the construction of various tools for test generation. The first question is: how can a sensitive path be created? A sensitive path is defined such that a fault at the input of a circuit can be observed at the output. As an example, an AND and an OR gate are considered. Here the variable a is considered switchable and b is a signal variable.


AND gate:

a  b  y
0  b  0
I  b  b

OR gate:

a  b  y
0  b  b
I  b  I

figure 4.19: Path sensitization at an AND and an OR gate

If the input a of the AND gate in figure 4.19 is set to I, the output switches according to the variable b; with it, a sensitive path from b to y has been established. One thereby considers b the path variable and a the control variable.

Equivalent to that is the assignment for switching a sensitive path in an OR gate: assigning 0 to the control variable enables a sensitive path from b to y.

If these considerations are expanded to complete circuits, single gates can be recognized as switches. A path is then a chain of sensitized gates. This procedure can be explained with the following example.

figure 4.20: Circuit example for path sensitization

The assumed fault shall be F = a/sa0.

A first look at the circuit shows that y is the end of the path. Choosing a as the origin of the path, one sees that the path has to go through gate G2 and after that through G4. Thus the path is a → g → y. The following 5 steps establish a sensitive path from a to y; the single steps are indicated by encircled numbers in figure 4.20.


       PI       internal  PO
Step   a  b  c  d  e  f  g  y   Action
1.     I  -  -  -  -  -  -  -   Fault activation by I at node a
2.     I  0  -  -  -  -  I  -   Sensitize G2 (OR gate) by 0 at b
3.     I  0  -  -  -  I  I  I   Sensitize G4 (AND gate) by I at f; the requirement f = I has to be fulfilled by backtracing to the PIs
4a)    I  0  I  -  -  I  I  I   Justify f = I, e.g. by c = I
5a)    I  0  I  X  X  I  I  I   Complete assignment of the nodes for the test of a/sa0

figure 4.21: Switching of a critical path a → y for the circuit of figure 4.20

Comparison of this table with the circuit shows that step 4a) is not the only possibility for justifying f = I; alternatively, e = I could have been chosen. Figure 4.22 takes this alternative and continues the sensitization from that point.

Step   a  b  c  d  e  f  g  y   Action
4b)    I  0  -  -  I  I  I  I   Alternative justification for f = I
5b)    I  0  I  X  I  I  I  I   A contradiction is possible here, if e.g. b = I is demanded

figure 4.22: Alternative proceeding for the justification of the critical path a → y

y Apart from the fault F = a/sa0 and respectively F = a/sa1 the following are recognized, too: 1. all signal-values complementary to the assignment of the path 2. Possibly additional faults caused by the justification assignment (This

especially holds at circuits with multiple POs, as additional errors are oberservable at the other outputs.)

a g y → → is a free path. I.e. that the path is sensitive to both error types a/sa0 and a/sa1 in the same way.
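The following sketch evaluates the sensitized path; note that figure 4.20 is only partly specified in the text, so G2 = OR(a, b), f = OR(c, e) and G4 = AND(g, f) are our assumptions about the circuit:

def circuit(a, b, c, e, a_stuck=None):
    if a_stuck is not None:
        a = a_stuck                    # inject a/sa0 or a/saI
    g = a | b                          # G2, sensitive for b = 0
    f = c | e                          # justification: f = I via c = I or e = I
    return g & f                       # G4, sensitive for f = I

pattern = dict(a=1, b=0, c=1, e=0)     # step 5a) of figure 4.21
print(circuit(**pattern))              # fault-free: 1
print(circuit(**pattern, a_stuck=0))   # a/sa0: 0 -> fault observable at y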


Now we consider a fault at b in the circuit of figure 4.20. As b is used to control the other gates in the circuit and at the same time a path b → g → y exists, it can happen that only one fault is recognized, as b is not only the start value of the path but also the control variable for sensitizing it. In the following, the general steps necessary for path sensitization are listed.

1. Elementary operations
a) Choose type and position of the fault ⇒ demand the complementary signal at the fault position for the given fault type.
b) Assign all remaining inputs such that a sensitive path through the following gate is established.
c) Justify all control assignments.

2. Procedure (two alternatives)
d1) Choose by a), assign by b), justify by c) whenever demanded ⇒ path assembly starting from the PI (Wojtkowiak uses this variant).
d2) Choose by a), assign by b) for all gates along the path, then justify by c) ⇒ the path demand is completed first, assembly from the PO.

e) Choice of a path at a fan-out node.
f) Choice of an input when justifying via a dominating value (0 at (N)AND, I at (N)OR).

Because of these possibilities to choose, this method is controlled heuristically (heuristic = reasonable suggestion).

for e): try to choose the longest path, as then probably a lot of additional faults are recognized.


for f): try to choose the shortest path, as then the probability of contradictions during the justification is rather small.

Often only a trivial heuristic is utilized: "choose all inputs x1...xn in sequence".

Additionally, the inputs can be rated by measurement values from testability analysis or structural analysis.

Overview of available methods

figure 4.23: path sensitization (without backtracing)


5. Further readings

Wirth, N.: Digital Circuit Design. Springer, 1995. [43 YGQ 3218]
Friedman, A.D.: Logical Design of Digital Systems. Computer Science Press. [41 TVE 1180]
Hill, F.J.: Introduction to Switching Theory and Logical Design. Wiley, 1974. [43 YGQ 4175]
Wilkinson, B.: Digital System Design. Prentice-Hall, 1st edition 1987; reprinted 1993, 1994.
McCluskey, E.J.: Logic Design Principles. Prentice Hall, 1986. [43 YGQ 3991]
Liebig, H.: Logischer Entwurf digitaler Systeme. 3rd edition, Springer, 1996. [43 YGQ 3420]
Giloi, W.: Logischer Entwurf digitaler Systeme. 2nd edition, Springer, 1980. [43 YGQ 3420]
Scarbata, G.: Synthese und Analyse digitaler Schaltungen. Oldenbourg, 1996.
Eschermann, B.: Funktionaler Entwurf digitaler Schaltungen. Springer, 1993. [43 YGQ 3632]
Wojtkowiak, H.: Test und Testbarkeit digitaler Schaltungen. Teubner, 1988. [43 YGQ 2474]
Daehn, W.: Testverfahren in der Mikroelektronik. Springer, 1997. [YGQ 3535]
Miczo, A.: Digital Testing and Simulation. John Wiley & Sons, 1986. [43 YGQ 2393]

Web Resources:

Lecture Notes “Fundamentals of Computer Engineering 1” http://www.fb9dv.uni-duisburg.de/ti/de/education/ws0506/fce1/index.php

Online Learning Environment “Modulo” http://www.fb9dv.uni-duisburg.de/modulo_lernwelt

This learning environment is specially implemented to support the lectures, exercises and lab in LDDS.