PRINCIPLES OF COMPILER DESIGN
1. Introduction to compilers:- A compiler is a program that reads a program written in one language (the source language, or high level language) and translates it into an equivalent program in another language (the target language, or low level language).
Source Program (High Level Language) → COMPILER → Target Program (Low Level Language)
Compiler:- It converts the high level language into an equivalent low level language program.
Assembler:- It converts an assembly language (low level language) program into machine code (binary representation).
PHASES OF COMPILER
There are two parts to compilation. They are (i) Analysis Phase (ii) Synthesis Phase
Source Program → COMPILER → Target Program
Analysis Phase:- The analysis phase breaks up the source program into constituent pieces. The analysis phase of a compiler performs,
1.Lexical Analysis (or) Linear Analysis (or) Scanning:-
The lexical analysis phase reads the characters in the program and groups them into
tokens that are sequence of characters having a collective meaning.
Such as an identifier, a keyword, a punctuation character, or a multi-character operator like ++.
“The character sequence forming a token is called a lexeme.”
For Eg. pos = init + rate * 60

Lexeme    Token    Attribute value
rate      ID       pointer to symbol table entry
+         ADD      -
60        NUM      60
init      ID       pointer to symbol table entry
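The grouping of characters into tokens can be sketched in Python. This is an illustration only, not part of the original notes; the token names ID, ASSIGN, ADD, MUL and NUM follow the table above.

```python
import re

# Token specification (order matters: NUM before ID so digits are tried first).
TOKEN_SPEC = [
    ("NUM",    r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("ASSIGN", r"="),
    ("ADD",    r"\+"),
    ("MUL",    r"\*"),
    ("SKIP",   r"\s+"),          # whitespace is discarded
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(source):
    """Group the character stream into (token, lexeme) pairs."""
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("pos = init + rate * 60"))
# [('ID', 'pos'), ('ASSIGN', '='), ('ID', 'init'), ('ADD', '+'),
#  ('ID', 'rate'), ('MUL', '*'), ('NUM', '60')]
```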
2. Syntax Analysis (or) Hierarchical Analysis :-
Syntax analysis processes the string of descriptors (tokens), synthesized by the lexical
analyzer, to determine the syntactic structure of an input statement. This process is known as
parsing.
i.e., the output of the parsing step is a representation of the syntactic structure of a statement.
Example:- pos = init + rate * 60

        =
      /   \
   pos     +
         /   \
      init    *
            /   \
        rate     60

3. Semantic Analysis:-
The semantic analysis phase checks the source program for semantic errors. Processing performed by the semantic analysis step can be classified into,
a. Processing of declarative statements
b. Processing of executable statements
During semantic processing of declarative statements, items of information are added to the lexical tables (symbol table or lexical table). Example:- real a, b;
Synthesis Phase:- The synthesis phase constructs the target program from the results of the analysis phase. It performs,
1. Intermediate code generation:-
After syntax and semantic analysis some compilers generate an explicit intermediate representation of the source program. This intermediate representation should have two important properties.
a. It should be easy to produce
b. It should be easy to translate into the target program.
We consider the intermediate form called “Three Address Code”. It consists of a sequence of instructions, each of which has at most three operands.
Example:- pos = init + rate * 60, that is, pos = init + rate * inttoreal(60), might appear in three address code as,
temp1 = inttoreal(60)
temp2 = id3 * temp1
temp3 = id2 + temp2
id1 = temp3
The corresponding syntax tree is,
        =
      /   \
   id1     +
         /   \
      id2     *
            /   \
        id3      60
2. Code Optimization:- The code optimization phase attempts to improve the intermediate code, so that faster running machine code will result.
3. Code Generation:-
The final phase of the compiler is the generation of the target code: machine code or assembly code. Memory locations are selected for each of the variables used by the program. Then intermediate instructions are translated into a sequence of machine instructions that perform the same task.
Example:-
MOVF id3, R2
MULF #60.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1
Translation of the statement pos = init + rate * 60 through the phases:

Lexical Analyzer:
    id1 = id2 + id3 * 60
Syntax Analyzer:
        =
      /   \
   id1     +
         /   \
      id2     *
            /   \
        id3      60
Semantic Analyzer:
    (the same tree, with 60 converted by inttoreal)
Intermediate Code Generator:
    temp1 = inttoreal(60)
    temp2 = id3 * temp1
    temp3 = id2 + temp2
    id1 = temp3
Code Optimizer:
    temp1 = id3 * 60.0
    id1 = id2 + temp1
Code Generator:
    MOVF id3, R2
    MULF #60.0, R2
    MOVF id2, R1
    ADDF R2, R1
    MOVF R1, id1
Symbol table management:-A symbol table is a data structure containing a record for each identifier, with fields for
the attributes of the identifier. The data structure allows us to find the record for each identifier quickly and to store or retrieve data from that record quickly.
Error Handler:- Each phase can encounter errors.
The lexical phase can detect errors where the characters remaining in the input do not form any token of the language.
The syntax analysis phase can detect errors where the token stream violates the structure rules of the language.
During semantic analysis, the compiler detects constructs that have the right syntactic structure but no meaning to the operation involved.
The intermediate code generator may detect an operator whose operands have incompatible types.
The code optimizer, doing control flow analysis, may detect that certain statements can never be reached.
The code generator may find a compiler-created constant that is too large to fit in a word of the target machine.
Role of Lexical Analyzer:-The main task is to read the input characters and produce as output a sequence of
tokens that the parser uses for syntax analysis.
Source Program → Lexical Analyzer → token → Parser; the parser sends “get next token” requests back to the lexical analyzer, and both consult the Symbol Table.
After receiving a “get next token” command from the parser, the lexical analyzer reads input characters until it can identify a next token.
Token:-Token is a sequence of characters that can be treated as a single logical entity. Typical
tokens are, (a) Identifiers (b) Keywords (c) Operators (d) Special symbols (e) Constants
Pattern:- A set of strings in the input for which the same token is produced as output; this set of strings is called a pattern.
Lexeme:- A lexeme is a sequence of characters in the source program that is matched by the pattern for a token.
Finite Automata
Definition:- A recognizer for a language is a program that takes as input a string x and answers “yes” if x is a sentence of the language and “no” otherwise. A better way to convert a regular expression to a recognizer is to construct a generalized transition diagram from the expression. This diagram is called a finite automaton.
A finite automaton can be,
1. Deterministic finite automata
2. Non-deterministic finite automata
1. Non-deterministic Finite Automata:- [NFA]
An NFA is a mathematical model that consists of,
1. a set of states S
2. a set of input symbols Σ
3. a transition function δ
4. a state S0 distinguished as the start state
5. a set of states F distinguished as accepting states; an accepting state is indicated by a double circle.
Example:- The transition graph for an NFA that recognizes the language (a/b)*a has states 0 (the start state) and 1 (the accepting state), with edges 0 → 0 on both a and b, and 0 → 1 on a.
The transition table is,

State    a      b
0        0,1    0
1        -      -
2. Deterministic Finite Automata:- [DFA]
A DFA is a special case of non-deterministic finite automata in which,
1. No state has an ε-transition
2. For each state S and input symbol a, there is at most one edge labeled a leaving S.
PROBLEM:-
1. Construct a non-deterministic finite automaton for the regular expression (a/b)*
Solution:-
r = (a/b)*. Decomposition of (a/b)* (parse tree):
r5 = r4*
r4 = (r3)
r3 = r1 / r2
r1 = a, r2 = b
For r1 = a construct the NFA: states 2 and 3, with edge 2 → 3 on a (start 2, accepting 3).
For r2 = b construct the NFA: states 4 and 5, with edge 4 → 5 on b (start 4, accepting 5).
NFA for r3 = r1/r2: add a new start state 1 with ε-edges 1 → 2 and 1 → 4, and a new accepting state 6 with ε-edges 3 → 6 and 5 → 6.
NFA for r4, that is (r3), is the same as that for r3.
NFA for r5 = (r3)*: add a new start state 0 and a new accepting state 7, with ε-edges 0 → 1, 0 → 7, 6 → 1 and 6 → 7.
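The NFA just constructed can be simulated directly. The following Python sketch is an illustration, not part of the original notes; the state numbers 0 to 7 and the edges are the ones from the construction above, with 7 the accepting state.

```python
# Epsilon edges and labeled edges of the (a/b)* NFA built above.
EPS = {0: {1, 7}, 1: {2, 4}, 3: {6}, 5: {6}, 6: {1, 7}}
MOVE = {(2, "a"): {3}, (4, "b"): {5}}

def eps_closure(states):
    """All states reachable from `states` using epsilon edges alone."""
    stack, closure = list(states), set(states)
    while stack:
        for t in EPS.get(stack.pop(), ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def accepts(word):
    current = eps_closure({0})
    for ch in word:
        moved = set().union(*(MOVE.get((s, ch), set()) for s in current))
        current = eps_closure(moved)
    return 7 in current            # accepting state reached?

print(accepts("abba"))   # True: any string over {a, b} is in (a/b)*
print(accepts("abc"))    # False: c is not in the alphabet
```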
2. Construct a non-deterministic finite automaton for the regular expression (a/b)*abb
Solution:-
r = (a/b)*abb. Decomposition of (a/b)*abb (parse tree):
r11 = r9 r10, r9 = r7 r8, r7 = r5 r6
r5 = r4*, r4 = (r3), r3 = r1 / r2, r1 = a, r2 = b
r6 = a, r8 = b, r10 = b
For r1 = a construct the NFA with edge 2 → 3 on a; for r2 = b, the NFA with edge 4 → 5 on b.
NFA for r3 = r1/r2: new start state 1 with ε-edges 1 → 2 and 1 → 4, and new accepting state 6 with ε-edges 3 → 6 and 5 → 6.
NFA for r4, that is (r3), is the same as that for r3.
NFA for r5 = (r3)*: new start state 0 and accepting state 7, with ε-edges 0 → 1, 0 → 7, 6 → 1 and 6 → 7.
NFA for r6 = a: edge 7 → 8 on a.
NFA for r7 = r5.r6: concatenation; the accepting state 7 of r5 gains the edge 7 → 8 on a, and 8 becomes the accepting state.
NFA for r8 = b: edge 8 → 9 on b.
NFA for r9 = r7.r8: the edge 8 → 9 on b is appended, and 9 becomes the accepting state.
NFA for r10 = b: edge 9 → 10 on b.
NFA for r11 = r9.r10 = (a/b)*abb: the edge 9 → 10 on b is appended, and 10 is the accepting state.
The complete NFA has states 0 to 10: ε-edges 0 → 1, 0 → 7, 1 → 2, 1 → 4, 3 → 6, 5 → 6, 6 → 1 and 6 → 7; labeled edges 2 → 3 on a, 4 → 5 on b, 7 → 8 on a, 8 → 9 on b and 9 → 10 on b.
CONVERSION OF NFA INTO DFA
1. Convert the NFA for (a/b)* into a DFA.
Solution: The NFA for (a/b)* is the one constructed above (states 0 to 7, accepting state 7).
ε-closure{0} = {0,1,2,4,7} ------ A
Transition on input symbol a from A = {3}
Transition on input symbol b from A = {5}
ε-closure{3} = {3,6,1,2,4,7} ------ B
Transition on input symbol a from B = {3}
Transition on input symbol b from B = {5}
ε-closure{5} = {5,6,1,2,4,7} ------ C
Transition on input symbol a from C = {3}
Transition on input symbol b from C = {5}
A is the start state. Since every one of the subsets A, B and C contains the NFA accepting state 7, all three DFA states are accepting. The transition table is,

State    a    b
A        B    C
B        B    C
C        B    C

The DFA is: start state A, with transitions A → B on a, A → C on b, B → B on a, B → C on b, C → B on a and C → C on b.
2. Convert the NFA for (a/b)*abb into a DFA.
Solution: The NFA for (a/b)*abb is the one constructed above (states 0 to 10, accepting state 10).
ε-closure{0} = {0,1,2,4,7} ------ A
Transition on input symbol a from A = {3,8}
Transition on input symbol b from A = {5}
ε-closure{3,8} = {3,6,7,1,2,4,8} ------ B
Transition on input symbol a from B = {8,3}
Transition on input symbol b from B = {5,9}
ε-closure{5} = {5,6,7,1,2,4} ------ C
Transition on input symbol a from C = {8,3}
Transition on input symbol b from C = {5}
ε-closure{5,9} = {5,6,7,1,2,4,9} ------ D
Transition on input symbol a from D = {8,3}
Transition on input symbol b from D = {5,10}
ε-closure{5,10} = {5,6,7,1,2,4,10} ------ E
Transition on input symbol a from E = {8,3}
Transition on input symbol b from E = {5}
Since A is the start state and state E is the only accepting state, the transition table is,

State    a    b
A        B    C
B        B    D
C        B    C
D        B    E
E        B    C

The DFA is: start state A, with transitions A → B on a, A → C on b, B → B on a, B → D on b, C → B on a, C → C on b, D → B on a, D → E on b, E → B on a and E → C on b.
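The subset construction used above can be carried out mechanically. This Python sketch is an illustration, not part of the original notes; the states 0 to 10 and edges are those of the (a/b)*abb NFA, with 10 the accepting state.

```python
# Epsilon edges and labeled edges of the (a/b)*abb NFA.
EPS = {0: {1, 7}, 1: {2, 4}, 3: {6}, 5: {6}, 6: {1, 7}}
MOVE = {(2, "a"): {3}, (4, "b"): {5}, (7, "a"): {8}, (8, "b"): {9}, (9, "b"): {10}}

def eps_closure(states):
    """All states reachable via epsilon edges alone."""
    stack, closure = list(states), set(states)
    while stack:
        for t in EPS.get(stack.pop(), ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def subset_construction():
    """Build the DFA whose states are sets of NFA states (A, B, C, D, E)."""
    start = frozenset(eps_closure({0}))
    dfa, worklist = {}, [start]
    while worklist:
        state = worklist.pop()
        if state in dfa:
            continue
        dfa[state] = {}
        for ch in "ab":
            moved = set().union(*(MOVE.get((s, ch), set()) for s in state))
            target = frozenset(eps_closure(moved))
            dfa[state][ch] = target
            worklist.append(target)
    return start, dfa

start, dfa = subset_construction()
print(len(dfa))   # 5 distinct subset states: A, B, C, D and E as in the text
```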
MINIMIZATION OF STATES
Problem 1: Construct a minimum state DFA for the regular expression (a/b)*abb
Solution:-
1. The NFA of (a/b)*abb is the one constructed above (states 0 to 10, accepting state 10).
2. Construct a DFA:
ε-closure{0} = {0,1,2,4,7} ------ A
Transition on a from A = {3,8}; transition on b from A = {5}
ε-closure{3,8} = {3,6,7,1,2,4,8} ------ B
Transition on a from B = {8,3}; transition on b from B = {5,9}
ε-closure{5} = {5,6,7,1,2,4} ------ C
Transition on a from C = {8,3}; transition on b from C = {5}
ε-closure{5,9} = {5,6,7,1,2,4,9} ------ D
Transition on a from D = {8,3}; transition on b from D = {5,10}
ε-closure{5,10} = {5,6,7,1,2,4,10} ------ E
Transition on a from E = {8,3}; transition on b from E = {5}
Since A is the start state and state E is the only accepting state, the transition table is,

State    a    b
A        B    C
B        B    D
C        B    C
D        B    E
E        B    C
3. Minimizing the DFA
Let Π = ABCDE. The initial partition Π consists of two groups:
Π1 = ABCD (the non-accepting states)
Π2 = E (the accepting state)
So, (ABCD) (E)
For AB: on a, A → B and B → B; on b, A → C and B → D.
For AC: on a, A → B and C → B; on b, A → C and C → C.
For AD: on a, A → B and D → B; on b, A → C and D → E.
On input a each of these states has a transition to B, so they could all remain in one group as far as input a is concerned. On input b, however, A, B and C go to members of the group Π1 (ABCD), while D goes to Π2 (E). Thus the group Π1 is split into two new groups:
Π1 = ABC, Π2 = D, Π3 = E. So, (ABC) (D) (E)
For AB: on a, A → B and B → B; on b, A → C and B → D. Here B goes to the group containing D. Thus the group Π1 is again split into two new groups:
Π1 = AC, Π2 = B, Π3 = D, Π4 = E. So, (AC) (B) (D) (E)
Here we cannot split any of the groups consisting of a single state. The only possibility is to try to split (AC).
For AC: on a, A → B and C → B; on b, A → C and C → C.
A and C go to the same state B on input a, and to the same state C on input b, so (AC) cannot be split. Hence the final partition is (AC) (B) (D) (E). We choose A as the representative for the group AC. Thus A is the start state, and E is the only accepting state of the minimum-state DFA.
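The group splitting above is exactly partition refinement, which can be sketched in Python. This is an illustration, not part of the original notes; the transitions are those of the DFA table above, with E the accepting state.

```python
# Transitions of the five-state DFA for (a/b)*abb.
TRANS = {"A": {"a": "B", "b": "C"}, "B": {"a": "B", "b": "D"},
         "C": {"a": "B", "b": "C"}, "D": {"a": "B", "b": "E"},
         "E": {"a": "B", "b": "C"}}
ACCEPTING = {"E"}

def minimize(states):
    """Refine {non-accepting, accepting} until no group can be split."""
    partition = [set(states) - ACCEPTING, ACCEPTING & set(states)]
    while True:
        def group_of(s):
            return next(i for i, g in enumerate(partition) if s in g)
        new_partition = []
        for group in partition:
            # split a group by which groups its members' a/b moves land in
            buckets = {}
            for s in group:
                key = tuple(group_of(TRANS[s][ch]) for ch in "ab")
                buckets.setdefault(key, set()).add(s)
            new_partition.extend(buckets.values())
        if len(new_partition) == len(partition):
            return new_partition          # no group was split: done
        partition = new_partition

groups = minimize("ABCDE")
print(sorted(sorted(g) for g in groups))  # [['A', 'C'], ['B'], ['D'], ['E']]
```

The run mirrors the text: (ABCD)(E) first splits off D, then B, leaving (AC)(B)(D)(E).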
Definition of Context-Free Grammar:- [CFG]
A CFG has four components.
1. a set of tokens known as terminal symbols
2. a set of non-terminals
3. a start symbol
4. productions
Notational Conventions:-
a) These symbols are terminals (Ts):
(i) Lower case letters early in the alphabet, such as a, b, c
(ii) Operator symbols such as +, -, etc.
(iii) Punctuation symbols such as parentheses, comma, etc.
(iv) The digits 0, 1, 2, 3, …, 9
(v) Boldface strings
b) These symbols are non-terminals (NTs):
(i) Upper case letters early in the alphabet, such as A, B, C
(ii) The letter S, which is the start symbol
(iii) Lower case italic names such as expr, stmt
c) Uppercase letters such as X, Y, Z represent grammar symbols either NTs or Ts.
PARSER: A parser for a grammar G is a program that takes a string W as input and produces as output either a parse tree for W, if W is a sentence of G, or an error message indicating that W is not a sentence of G.
There are two basic types of parsers for CFGs:
1. Bottom-up Parser
2. Top-down Parser
1. Bottom-up Parser:-
The bottom-up parser builds parse trees from the bottom (leaves) to the top (root). The input to the parser is scanned from left to right, one symbol at a time. This is also called “Shift-Reduce Parsing” because it consists of shifting input symbols onto a stack until the right side of a production appears on top of the stack.
There are two kinds of shift-reduce parser (bottom-up parser):
1. Operator Precedence Parser
2. LR Parser (a more general type)
Designing of Shift Reduce Parser(Bottom up Parser) :-
Here let us “reduce” a string w to the start symbol of a grammar. At each step a string matching the right side of a production is replaced by the symbol on the left.
For example, consider the grammar,
S → aAcBe
A → Ab | b
B → d
and the string abbcde.
We want to reduce the string to S. We scan abbcde, looking for substrings that match the right side of some production. The substrings b and d qualify.
Let us choose the leftmost b and replace it by A, using the production A → b. So,
abbcde becomes aAbcde (A → b)
We now see that Ab, b and d each match the right side of some production. Suppose this time we choose to replace the substring Ab by A, the left side of the production A → Ab. We obtain,
aAbcde becomes aAcde (A → Ab)
Then, replacing d by B,
aAcde becomes aAcBe (B → d)
Now we can replace the entire string by S.
Right sentential form    Position    Reducing production
abbcde                   2           A → b
aAbcde                   2           A → Ab
aAcde                    4           B → d
aAcBe                    -           S → aAcBe
Thus we reach the start symbol S.
Each replacement of the right side of a production by the left side in the process above is
called a reduction.
In the above example, abbcde is a right sentential form whose handle is A → b at position 2; aAbcde has handle A → Ab at position 2; aAcde has handle B → d at position 4.
Example:- Consider the following grammar
E → E+E
E → E*E
E → (E)
E → id
and the input string id1+id2*id3. Reduce it to the start symbol E.
Solution:-
Right sentential form    Handle    Reducing production
id1 + id2 * id3          id1       E → id
E + id2 * id3            id2       E → id
E + E * id3              id3       E → id
E + E * E                E*E       E → E*E
E + E                    E+E       E → E+E
E
Stack implementation of shift reduce parsing:
Initialize the stack with $ at the bottom of the stack, and place a $ at the right end of the input string.
Stack Input String
$ w$
The parser operates by shifting one or more input symbols onto the stack until a handle β
is on the top of a stack.
Example:- Reduce the input string id1+id2*id3 according to the following grammar.
1. E → E*E
2. E → E+E
3. E → (E)
4. E → id
Solution:-
Stack      Input String    Action
$          id1+id2*id3$    shift
$id1       +id2*id3$       reduce E → id
$E         +id2*id3$       shift
$E+        id2*id3$        shift
$E+id2     *id3$           reduce E → id
Stack      Input String    Action
$E+E       *id3$           shift
$E+E*      id3$            shift
$E+E*id3   $               reduce E → id
$E+E*E     $               reduce E → E*E
$E+E       $               reduce E → E+E
$E         $               accept
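The stack-based loop above can be sketched in Python. This is an illustration only: a real shift-reduce parser chooses handles from a parsing table, while here the choice is hard-coded (reduce E → id immediately, and delay reducing E+E while * is still pending, mimicking operator precedence).

```python
# Productions of the example grammar, tried in this order when reducing.
PRODUCTIONS = [("E", ["id"]), ("E", ["E", "*", "E"]), ("E", ["E", "+", "E"])]

def parse(tokens):
    """Shift-reduce loop; returns (accepted, trace of actions)."""
    stack, trace = ["$"], []
    tokens = tokens + ["$"]
    while True:
        reduced = True
        while reduced:                       # reduce as long as a handle is on top
            reduced = False
            for head, body in PRODUCTIONS:
                if stack[-len(body):] == body:
                    # don't reduce E+E while * is still pending (precedence)
                    if body == ["E", "+", "E"] and tokens[0] == "*":
                        continue
                    del stack[-len(body):]
                    stack.append(head)
                    trace.append(f"reduce {head} -> {' '.join(body)}")
                    reduced = True
                    break
        if tokens[0] == "$":
            break
        stack.append(tokens.pop(0))          # shift the next input symbol
        trace.append(f"shift {stack[-1]}")
    return stack == ["$", "E"], trace

ok, trace = parse(["id", "+", "id", "*", "id"])
print(ok)   # True: the stack ends as $E with input exhausted
```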
The actions of a shift-reduce parser are,
1. Shift: shift the next input symbol onto the top of the stack.
2. Reduce: the parser knows the right end of the handle is at the top of the stack; it replaces the handle by the left side of the production.
3. Accept: announce the successful completion of parsing.
4. Error: a syntax error has been detected; call an error recovery routine.
Operator Precedence Parsing;-
In operator precedence parsing we use three disjoint relations:
a < b means a “yields precedence to” b
a = b means a “has the same precedence as” b
a > b means a “takes precedence over” b
There are two common ways of determining precedence relation hold between a pair of
terminals.
1. Based on associativity and precedence of operators
2. Using operator precedence relation.
For example, * has higher precedence than +, so we make + < * and * > +.
Problem 1:- Create the operator precedence relations for id + id * id $

        id    +    *    $
id      -     >    >    >
+       <     >    <    >
*       <     >    >    >
$       <     <    <    -
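The relation table can be stored as a simple lookup, as in this Python sketch (an illustration, not part of the original notes; the "-" entries, which signal errors, are simply omitted from the dictionary).

```python
# Operator precedence relations for id + id * id $, from the table above.
PREC = {
    ("id", "+"): ">", ("id", "*"): ">", ("id", "$"): ">",
    ("+", "id"): "<", ("+", "+"): ">", ("+", "*"): "<", ("+", "$"): ">",
    ("*", "id"): "<", ("*", "+"): ">", ("*", "*"): ">", ("*", "$"): ">",
    ("$", "id"): "<", ("$", "+"): "<", ("$", "*"): "<",
}

# + yields precedence to *, while * takes precedence over +:
print(PREC[("+", "*")], PREC[("*", "+")])   # < >
```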
Problem 2: Tabulate the operator precedence relations for the grammar
E → E+E | E-E | E*E | E/E | E↑E | (E) | -E | id
Solution:- Assuming
1. ↑ has the highest precedence and is right associative
2. * and / have the next higher precedence and are left associative
3. + and - have the lowest precedence and are left associative
DERIVATIONS:-
The central idea is that a production is treated as a rewriting rule in which the non-terminal on the left side is replaced by the string on the right side of the production.
For Ex, consider the following grammar for arithmetic expression,
E → E+E | E*E | (E) | -E | id
That is, we can replace a single E by -E. We describe this action by writing
E ⇒ -E, which is read “E derives -E”.
E → (E) tells us that we could also replace E by (E).
So, E*E ⇒ (E)*E or E*E ⇒ E*(E)
We can take a single E and repeatedly apply productions in any order to obtain a sequence of replacements.
E => -E
E => -(E)
E => -(id)
We call such a sequence of replacements a derivation.
If αAβ ⇒ αγβ, then A → γ is a production and α and β are arbitrary strings of grammar symbols.
If α1 ⇒ α2 ⇒ … ⇒ αn, we say α1 derives αn.
Principles of Compiler Design21
The symbol ⇒ means “derives in one step”
⇒* means “derives in zero or more steps”
⇒+ means “derives in one or more steps”
Example:- E → E+E | E*E | (E) | -E | id. The string -(id+id) is a sentence of the above grammar.
E ⇒ -E ⇒ -(E) ⇒ -(E+E) ⇒ -(id+E) ⇒ -(id+id)
The above derivation is a leftmost derivation, and it can be rewritten as,
E => -E
=> -(E)
=> -(E+E)
=> - (id+E)
=> -(id+id)
We can write this as E ⇒* -(id+id)
Example of a Rightmost Derivation:- A rightmost derivation is otherwise called a canonical derivation. At each step the rightmost non-terminal is replaced:
E ⇒ -E
⇒ -(E)
⇒ -(E+E)
⇒ -(E+id)
⇒ -(id+id)
Parse Trees & Derivations:-
A parse tree is a graphical representation of a derivation that filters out the choices regarding replacement order.
For a given CFG a parse tree is a tree with the following properties.
1. The root is labeled by the start symbol
2. Each leaf is labeled by a token or ε
3. Each interior node is labeled by a NT
Ex. The parse tree grows with the derivation:

E ⇒ -E
      E
     / \
    -   E

E ⇒ -(E)
      E
     / \
    -   E
      / | \
     (  E  )

E ⇒ -(E+E)
      E
     / \
    -   E
      / | \
     (  E  )
      / | \
     E  +  E

E ⇒ -(id+E): the left operand E is replaced by the leaf id.
E ⇒ -(id+id): the remaining E is replaced by id, completing the parse tree for -(id+id).
Top-Down Parsing:-
Top down parser builds parse trees starting from the root and creating the nodes of the
parse tree in preorder and work down to the leaves. Here also the input to the parser is scanned
from left to right, one symbol at a time.
For Example,
S → cAd
A → ab | a, and the input string w = cad.
To construct a parse tree for this sentence in top down, we initially create a parse tree consisting
of a single node S.
An input pointer points to c, the first symbol of w.
w = cad
      S
    / | \
   c  A  d
The leftmost leaf, labeled c, matches the first symbol of w. So we now advance the input pointer to ‘a’, the second symbol of w.
w = cad
and consider the next leaf, labeled A. We can then expand A using the first alternative for A to obtain the tree,
      S
    / | \
   c  A  d
     / \
    a   b
We now have a match for the second input symbol. Advance the input pointer to d,
w = cad
We now consider the third input symbol, and the next leaf labeled b. Since b does not match d,
we report failure and go back to A to see whether there is another alternative for A.
In going back to A we must reset the input pointer to position 2.
W = cad
We now try the second alternative for A to obtain the tree,
      S
    / | \
   c  A  d
      |
      a
The leaf a matches the second symbol of w, and the leaf d matches the third symbol. We have now produced a parse tree for w = cad using the grammar S → cAd, A → ab | a. This is successful completion.
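The backtracking behaviour just described can be sketched in Python for this particular grammar. This is an illustration hard-coded to S → cAd, A → ab | a, not a general top-down parser.

```python
# Alternatives for A, tried in order; on failure the input pointer is
# implicitly reset to position 2 (just after the c) and the next one is tried.
A_ALTERNATIVES = ("ab", "a")

def parse_S(s):
    """Return True if the whole string s is derivable from S -> cAd."""
    if not s.startswith("c"):              # match the leading c
        return False
    for alt in A_ALTERNATIVES:             # expand A, first alternative first
        if s[1:1 + len(alt)] == alt and s[1 + len(alt):] == "d":
            return True                    # A matched and the trailing d matched
        # mismatch: backtrack and try the next alternative for A
    return False

print(parse_S("cad"))    # True: 'ab' fails against 'ad', backtrack, 'a' succeeds
print(parse_S("cabd"))   # True: the first alternative matches directly
print(parse_S("cbd"))    # False: no alternative for A fits
```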
Difficulties of Top – Down Parsing (or) Disadvantages of Top - Down Parsing
1. Left Recursion:-
A grammar G is said to be left recursive if it has a non-terminal A such that there is a derivation A ⇒+ Aα for some α.
This grammar can cause a top-down parser to go into an infinite loop.
Elimination of left Recursion:-
Consider the left recursive pair of production
A → Aα | β,
Where β does not begin with A.
This left recursion can be eliminated by replacing this pair of production with,
A → βA′
A′ → αA′ | ε
Parse tree of the original grammar A → Aα | β:
        A
       / \
      A   α
     / \
    A   α
    |
    β
Parse tree for the new grammar that eliminates left recursion:
        A
       / \
      β   A′
         / \
        α   A′
           / \
          α   A′
              |
              ε
Example 1:
Consider the following grammar
a. E → E+T | T
b. T → T*F | F
c. F → (E) | id
Eliminate the immediate left recursions.
Solution:- These productions are of the form A → Aα | β, which becomes
A → βA′, A′ → αA′ | ε
(a) E → E+T | T. The productions after eliminating left recursion are,
E → TE′
E′ → +TE′ | ε
(b) T → T*F | F:
T → FT′
T′ → *FT′ | ε
(c) F → (E) | id. This is not left recursive, so the productions remain F → (E) | id.
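The transformation A → Aα | β into A → βA′ and A′ → αA′ | ε can be sketched in Python. This is an illustration, not part of the original notes; productions are represented as lists of symbol strings, and "eps" stands for ε.

```python
def eliminate_left_recursion(head, alternatives):
    """Remove immediate left recursion from the productions head -> alternatives."""
    # split alternatives into A -> A alpha (recursive) and A -> beta (others)
    recursive = [alt[1:] for alt in alternatives if alt and alt[0] == head]
    others = [alt for alt in alternatives if not alt or alt[0] != head]
    if not recursive:
        return {head: alternatives}        # nothing to do, e.g. F -> (E) | id
    new = head + "'"                       # the fresh non-terminal A'
    return {
        head: [alt + [new] for alt in others],          # A  -> beta A'
        new: [alt + [new] for alt in recursive] + [["eps"]],  # A' -> alpha A' | eps
    }

# E -> E + T | T   becomes   E -> T E'  and  E' -> + T E' | eps
print(eliminate_left_recursion("E", [["E", "+", "T"], ["T"]]))
# {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], ['eps']]}
```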
A compiler, while translating a source program into a functionally equivalent object code representation, may first generate an intermediate representation.
Advantages of generating intermediate representation
1. Ease of conversion from the source program to the intermediate code
2. Ease with which subsequent processing can be performed from the intermediate code
INTERMEDIATE LANGUAGES: There are three kinds of intermediate representation. They are,
1. Syntax Trees
2. Postfix Notation
3. Three Address Code
1. Syntax Tree:-
A syntax tree depicts the natural hierarchical structure of a source program. A DAG
(Direct Acyclic Graph) gives the same information but in a more compact way because common
sub expressions are identified.
A syntax tree and a DAG for the assignment statement a := b * -c + b * -c:

Syntax Tree:
          assign
         /      \
        a        +
               /   \
              *     *
             / \   / \
            b   u b   u
                |     |
                c     c
        (u = uminus)

DAG: the same structure, but the common sub-expression b * -c is represented once; both operands of + point to the single shared * node, whose children are b and uminus(c).
Parser → Intermediate Code Generator → Code Generator
2. Postfix Notation:-
Postfix notation is a linearized representation of a syntax tree. It is a list of the nodes of the tree in which a node appears immediately after its children.
The postfix notation for the syntax tree is,
a b c uminus * b c uminus * + assign
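Postfix notation is exactly a post-order traversal of the syntax tree. The following Python sketch is an illustration, not part of the original notes; the tree is written as nested tuples of the form (operator, child, ...) with strings as leaves.

```python
def postfix(node):
    """Post-order traversal: children first, then the node itself."""
    if isinstance(node, str):
        return [node]                  # a leaf (identifier or constant)
    op, *children = node
    out = []
    for child in children:
        out.extend(postfix(child))
    out.append(op)                     # node appears right after its children
    return out

# the syntax tree for a := b * -c + b * -c
tree = ("assign", "a", ("+", ("*", "b", ("uminus", "c")),
                             ("*", "b", ("uminus", "c"))))
print(" ".join(postfix(tree)))
# a b c uminus * b c uminus * + assign
```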
3. Three Address Code:-
Three Address code is a sequence of statements of the general form
x := y op z
where x,y and z are names, constants or compiler generated temporaries.
op stands for any operator, such as a fixed- or floating-point arithmetic operator, or a logical operator on Boolean-valued data.
The Three Address Code for the source language expression like x+y*z is,
t1:= y * z
t2 := x + t1
Where t1 and t2 are compiler generated temporary names
So, three address code is a linearized representation of a syntax tree or a dag in which explicit
names correspond to the interior nodes of the graph.
Three Address Code Corresponding to the syntax tree and DAG is,
Code for Syntax Tree
t1 := -c
t2 := b * t1
t3 := -c
t4 := b * t3
t5 := t2 + t4
a := t5
Code for DAG
t1 := -c
t2 := b * t1
t5 := t2 + t2
a := t5
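Generating the three-address code above from the syntax tree can be sketched in Python. This is an illustration, not part of the original notes; temporaries are numbered t1, t2, … in order of creation, matching the listing for the syntax tree.

```python
import itertools

def gen_tac(node, code, temps):
    """Emit three-address statements for a subtree; return the name holding its value."""
    if isinstance(node, str):
        return node                    # leaf: a name, no code needed
    op, *children = node
    args = [gen_tac(c, code, temps) for c in children]
    t = f"t{next(temps)}"              # fresh compiler-generated temporary
    if op == "uminus":
        code.append(f"{t} := -{args[0]}")
    else:
        code.append(f"{t} := {args[0]} {op} {args[1]}")
    return t

def compile_assign(target, expr):
    code, temps = [], itertools.count(1)
    result = gen_tac(expr, code, temps)
    code.append(f"{target} := {result}")
    return code

# the right-hand side of a := b * -c + b * -c
expr = ("+", ("*", "b", ("uminus", "c")), ("*", "b", ("uminus", "c")))
for line in compile_assign("a", expr):
    print(line)
# t1 := -c
# t2 := b * t1
# t3 := -c
# t4 := b * t3
# t5 := t2 + t4
# a := t5
```

Note that this walks the tree, so the common sub-expression is emitted twice; generating code from the DAG instead would share t1 and t2, as in the listing for the DAG.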
Types of Three Address Statements:-
1. Assignment statements of the form x := y op z
2. Assignment instructions of the form x := op y, where op is a unary operation
3. Copy statements of the form x := y, where the value of y is assigned to x
4. The unconditional jump GOTO L
5. Conditional jumps such as if x relop y GOTO L
6. param x and call p, n for procedure calls, and return y
7. Indexed assignments of the form x := y[i] and x[i] := y
8. Address and pointer assignments x := &y, x := *y and *x := y
Implementations of Three Address Statements:
There are three types:
1. Quadruples
2. Triples
3. Indirect Triples
Quadruples:-
A Quadruple is a record structure with four fields, which we call op, arg1, arg2, and
result. The op field contains an internal code for the operator.
For example, the three address statement x := y op z is represented by placing op in op, y in arg1, z in arg2 and x in result.
The quadruples for the assignment a:=b* -c + b* -c are,
        op        arg1    arg2    result
(0)     uminus    c               t1
(1)     *         b       t1      t2
(2)     uminus    c               t3
(3)     *         b       t3      t4
(4)     +         t2      t4      t5
(5)     :=        t5              a
Triples:-
A triple is a record structure with three fields: op, arg1, arg2. This method is used to
avoid entering temporary names into the symbol table.
Ex. Triple representation of a:= b * -c + b * -c
        op        arg1    arg2
(0)     uminus    c
(1)     *         b       (0)
(2)     uminus    c
(3)     *         b       (2)
(4)     +         (1)     (3)
(5)     assign    a       (4)
Indirect Triples:-
Listing pointers to triples, rather than listing the triples themselves, is called indirect triples.
Eg. Indirect triple representation of a := b * -c + b * -c:

statement
(0)     (10)
(1)     (11)
(2)     (12)
(3)     (13)
(4)     (14)
(5)     (15)

        op        arg1    arg2
(10)    uminus    c
(11)    *         b       (10)
(12)    uminus    c
(13)    *         b       (12)
(14)    +         (11)    (13)
(15)    assign    a       (14)
BASIC BLOCKS & FLOW GRAPHS
Basic Blocks:
A block of code means a block of intermediate code with no jumps in except at the
beginning and no jumps out except at the end.
A basic block is a sequence of consecutive statements in which flow of control enters at
the beginning and leaves at the end without halt or possibility of branching except at the end.
Algorithm for Partition into Basic Blocks : -
Input: - A sequence of Three Address statements.
Output:- Basic blocks with each three address statement in exactly one block.
Method:-
1. We first determine the set of leaders, the first statement of basic blocks.
The rules we use are the following,
(i) The first statement is a leader.
(ii) Any statement that is the target of a conditional or unconditional GOTO is a
leader.
(iii) Any statement that immediately follows a conditional or unconditional GOTO statement is a leader.
2. For each leader, its basic block consists of the leader and all statements up to but not
including the next leader or the end of the program.
Example:-
Consider the fragment of code, it computes the dot product of two vectors A and B of
length 20.
Begin
    PROD := 0
    I := 1
    Do Begin
        PROD := PROD + A[I]*B[I]
        I := I + 1
    End
    While I <= 20
End
A list of three address statements performing the computation of above program is, (for a machine with four bytes per word)
So the three address statements of the above Pascal code are,
1. PROD := 0
2. I := 1
3. t1 := 4*I
4. t2 := A[t1]
5. t3 := 4*I
6. t4 := B[t3]
7. t5 := t2*t4
8. t6 := PROD+t5
9. PROD := t6
10. t7 := I+1
11. I := t7
12. if I <= 20 GOTO (3)
The leaders are statements 1 and 3, so there are two basic blocks: statements 1 to 2 and statements 3 to 12.
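The leader rules (i) to (iii) can be sketched in Python for the twelve statements above. This is an illustration, not part of the original notes; each statement carries its number, its text, and its jump target, if any.

```python
# The twelve three-address statements: (number, text, jump_target_or_None).
stmts = [
    (1, "PROD := 0", None),        (2, "I := 1", None),
    (3, "t1 := 4*I", None),        (4, "t2 := A[t1]", None),
    (5, "t3 := 4*I", None),        (6, "t4 := B[t3]", None),
    (7, "t5 := t2*t4", None),      (8, "t6 := PROD+t5", None),
    (9, "PROD := t6", None),       (10, "t7 := I+1", None),
    (11, "I := t7", None),         (12, "if I<=20 GOTO (3)", 3),
]

def find_leaders(stmts):
    leaders = {stmts[0][0]}                      # rule (i): first statement
    for i, (num, _, target) in enumerate(stmts):
        if target is not None:
            leaders.add(target)                  # rule (ii): target of a GOTO
            if i + 1 < len(stmts):
                leaders.add(stmts[i + 1][0])     # rule (iii): statement after a GOTO
    return sorted(leaders)

print(find_leaders(stmts))   # [1, 3]: two basic blocks, 1 to 2 and 3 to 12
```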