Page 1: Compiler presentation
Page 2: Compiler presentation

A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code).

Page 3: Compiler presentation

[Diagram: Source code → Optimizing compiler → Object code]

Page 4: Compiler presentation

The term decompiler is most commonly applied to a program which translates executable programs (the output from a compiler) into source code in a (relatively) high-level language which, when compiled, will produce an executable whose behavior is the same as the original executable program.

Page 5: Compiler presentation
Page 6: Compiler presentation

1. Lexical analysis: in a compiler, linear analysis is called lexical analysis or scanning; a small scanner sketch follows below.

2. Preprocessor: in addition to a compiler, several other programs may be required to create an executable target program. A source program may be divided into modules stored in separate files. The task of collecting the source program is sometimes entrusted to a distinct program called a preprocessor.
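
To make point 1 concrete, here is a minimal scanner sketch in C; the token kinds, the token structure, and the toy input are illustrative assumptions, not something taken from the slides.

    #include <ctype.h>
    #include <stdio.h>

    /* Hypothetical token kinds for a toy source language. */
    typedef enum { TOK_NUM, TOK_IDENT, TOK_OP, TOK_EOF } TokenKind;

    typedef struct {
        TokenKind kind;
        char text[32];
    } Token;

    /* Scan the next token from *src and advance the pointer past it. */
    static Token next_token(const char **src) {
        const char *p = *src;
        Token t = { TOK_EOF, "" };
        size_t n = 0;

        while (isspace((unsigned char)*p)) p++;        /* skip whitespace */
        if (*p == '\0') { *src = p; return t; }

        if (isdigit((unsigned char)*p)) {              /* number literal */
            t.kind = TOK_NUM;
            while (isdigit((unsigned char)*p) && n < sizeof t.text - 1)
                t.text[n++] = *p++;
        } else if (isalpha((unsigned char)*p)) {       /* identifier */
            t.kind = TOK_IDENT;
            while (isalnum((unsigned char)*p) && n < sizeof t.text - 1)
                t.text[n++] = *p++;
        } else {                                       /* single-character operator */
            t.kind = TOK_OP;
            t.text[n++] = *p++;
        }
        t.text[n] = '\0';
        *src = p;
        return t;
    }

    int main(void) {
        const char *program = "n := n + 1";            /* toy input */
        for (Token t = next_token(&program); t.kind != TOK_EOF;
             t = next_token(&program))
            printf("kind=%d text=%s\n", (int)t.kind, t.text);
        return 0;
    }

Running it lists one token per line, which is exactly the "linear" view of the source text that later phases consume.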

Page 7: Compiler presentation

3. Parsing: hierarchical analysis is called parsing or syntax analysis; a minimal parser sketch follows below.

4. Semantic analysis: is the phase in which the compiler adds semantic information to the parse tree and builds the symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object binding (associating variable and function references with their definitions), or definite assignment (requiring all local variables to be initialized before use), rejecting incorrect programs or issuing warnings.
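
As a rough illustration of point 3, the fragment below sketches a recursive-descent parser in C for a tiny expression grammar; the grammar and the sample input are assumptions chosen for the example.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Assumed toy grammar:
     *   expr   -> term   { '+' term }
     *   term   -> factor { '*' factor }
     *   factor -> DIGIT | '(' expr ')'
     */
    static const char *p;                 /* current position in the input */

    static void expr(void);

    static void syntax_error(const char *msg) {
        fprintf(stderr, "syntax error: %s at '%c'\n", msg, *p ? *p : '$');
        exit(1);
    }

    static void factor(void) {
        if (isdigit((unsigned char)*p)) {
            p++;                          /* accept a single digit */
        } else if (*p == '(') {
            p++;
            expr();
            if (*p != ')') syntax_error("expected ')'");
            p++;
        } else {
            syntax_error("expected digit or '('");
        }
    }

    static void term(void) {
        factor();
        while (*p == '*') { p++; factor(); }
    }

    static void expr(void) {
        term();
        while (*p == '+') { p++; term(); }
    }

    int main(void) {
        p = "1+2*(3+4)";                  /* toy input */
        expr();
        if (*p != '\0') syntax_error("unexpected trailing input");
        puts("parse OK");
        return 0;
    }

Each grammar rule becomes one function, so the call tree of the parser mirrors the hierarchical structure of the input.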

Page 8: Compiler presentation

5. Code generation: the final phase of the compiler is the generation of target code, normally consisting of relocatable machine code or assembly code (a small code-generation sketch follows below).

6. Code optimization: the code optimization phase attempts to improve the intermediate code, so that faster-running machine code will result.
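
To illustrate point 5, the sketch below walks a small expression tree in post-order and prints stack-machine style instructions; the node layout and the LOADL/CALL mnemonics are assumptions chosen to resemble the TAM-style code shown later in the slides, not an actual code generator.

    #include <stdio.h>
    #include <stdlib.h>

    /* A hypothetical expression-tree node; names are illustrative only. */
    typedef struct Node {
        char op;                 /* '+' or '*' for operators, 0 for a leaf */
        int value;               /* constant value when the node is a leaf */
        struct Node *left, *right;
    } Node;

    static Node *leaf(int v) {
        Node *n = calloc(1, sizeof *n);
        n->value = v;
        return n;
    }

    static Node *binop(char op, Node *l, Node *r) {
        Node *n = calloc(1, sizeof *n);
        n->op = op; n->left = l; n->right = r;
        return n;
    }

    /* Emit stack-machine code: post-order walk of the tree. */
    static void gen(const Node *n) {
        if (n->op == 0) {
            printf("LOADL %d\n", n->value);     /* push a constant */
            return;
        }
        gen(n->left);
        gen(n->right);
        printf(n->op == '+' ? "CALL add\n" : "CALL mult\n");
    }

    int main(void) {
        /* Generate code for the expression (2 + 3) * 4 */
        Node *tree = binop('*', binop('+', leaf(2), leaf(3)), leaf(4));
        gen(tree);
        return 0;
    }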

Page 9: Compiler presentation

Compilers bridge source programs in high-level languages with the underlying hardware.

A compiler requires:

1) Determining the correctness of the syntax of programs.

2) Generating correct and efficient object code.

3) Run-time organization.

4) Formatting output according to assembler and/or linker conventions.

Page 10: Compiler presentation

The front end

The middle end

The back end

Page 11: Compiler presentation

1. The front end:

checks whether the program is correctly written in terms of the programming language syntax and semantics. Here legal and illegal programs are recognized. Errors are reported, if any, in a useful way. Type checking is also performed by collecting type information. The front end then generates an intermediate representation (IR) of the source code for processing by the middle end.
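
As a rough sketch of front-end work, the fragment below keeps a tiny symbol table and type-checks a variable-to-variable assignment; the types, the table contents, and the checked statements are illustrative assumptions.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative types and a fixed symbol table; not the slides' design. */
    typedef enum { TY_INT, TY_CHAR } Type;

    typedef struct {
        const char *name;
        Type type;
    } Symbol;

    static Symbol table[] = {
        { "n", TY_INT },
        { "c", TY_CHAR },
    };

    static const Symbol *lookup(const char *name) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].name, name) == 0)
                return &table[i];
        return NULL;                       /* undeclared identifier */
    }

    /* Type-check an assignment "lhs := rhs" where rhs is another variable. */
    static void check_assign(const char *lhs, const char *rhs) {
        const Symbol *l = lookup(lhs), *r = lookup(rhs);
        if (!l || !r)
            printf("error: undeclared identifier\n");
        else if (l->type != r->type)
            printf("type error: cannot assign %s to %s\n", rhs, lhs);
        else
            printf("%s := %s  type-checks\n", lhs, rhs);
    }

    int main(void) {
        check_assign("n", "n");   /* int := int   -> ok          */
        check_assign("n", "c");   /* int := char  -> type error  */
        return 0;
    }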

Page 12: Compiler presentation

2. The middle end:

Is where optimization takes place. Typical transformations for optimization are removal of useless or unreachable code, discovery and propagation of constant values, relocation of computation to a less frequently executed place (e.g., out of a loop), or specialization of computation based on the context. The middle end generates another IR for the following back end. Most optimization effort is focused on this part.
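
A minimal sketch of one such middle-end transformation, constant folding and propagation over straight-line three-address code; the IR layout and register model are assumptions made for this example.

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        char op;        /* 'c' = load constant, '+' = add, '*' = multiply */
        int dst, a, b;  /* virtual register numbers */
        int imm;        /* constant operand when op == 'c' */
    } Instr;

    int main(void) {
        /* IR for:  t0 = 4;  t1 = 5;  t2 = t0 + t1;  t3 = t2 * t0 */
        Instr code[] = {
            { 'c', 0, 0, 0, 4 },
            { 'c', 1, 0, 0, 5 },
            { '+', 2, 0, 1, 0 },
            { '*', 3, 2, 0, 0 },
        };
        int n = sizeof code / sizeof code[0];

        bool known[16] = { false };   /* which registers hold known constants */
        int  value[16] = { 0 };

        for (int i = 0; i < n; i++) {
            Instr *ins = &code[i];
            if (ins->op == 'c') {
                known[ins->dst] = true;
                value[ins->dst] = ins->imm;
            } else if (known[ins->a] && known[ins->b]) {
                /* Both operands are compile-time constants: fold the
                 * operation into a single constant load. */
                int v = ins->op == '+' ? value[ins->a] + value[ins->b]
                                       : value[ins->a] * value[ins->b];
                ins->op = 'c';
                ins->imm = v;
                known[ins->dst] = true;
                value[ins->dst] = v;
            } else {
                known[ins->dst] = false;
            }
        }

        for (int i = 0; i < n; i++)
            if (code[i].op == 'c')
                printf("t%d = %d\n", code[i].dst, code[i].imm);
            else
                printf("t%d = t%d %c t%d\n", code[i].dst, code[i].a,
                       code[i].op, code[i].b);
        return 0;
    }

For this input the pass rewrites the whole sequence to constant loads (t2 = 9, t3 = 36), so no arithmetic is left for run time.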

Page 13: Compiler presentation

3. The back end:

Is responsible for translating the IR from the middle end into assembly code. Target instructions are chosen for each IR instruction. Register allocation assigns processor registers to the program variables where possible. The back end utilizes the hardware by figuring out how to keep parallel execution units busy, how to fill delay slots, and so on.
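
A small sketch of one back-end decision, instruction selection: each IR addition is lowered to an immediate form when one operand is a constant and to a register form otherwise; the IR record and the add/addi mnemonics are assumed for illustration and do not come from the slides.

    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        const char *dst, *a;   /* register operands */
        bool b_is_const;       /* is the second operand a constant? */
        const char *b_reg;
        int b_imm;
    } IrAdd;

    int main(void) {
        IrAdd code[] = {
            { "r1", "r1", true,  NULL, 1 },    /* n := n + 1 */
            { "r3", "r1", false, "r2", 0 },    /* t := n + m */
        };
        int n = sizeof code / sizeof code[0];

        for (int i = 0; i < n; i++) {
            if (code[i].b_is_const)            /* pick the immediate form */
                printf("addi %s, %s, %d\n", code[i].dst, code[i].a, code[i].b_imm);
            else                               /* pick the register form */
                printf("add  %s, %s, %s\n", code[i].dst, code[i].a, code[i].b_reg);
        }
        return 0;
    }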

Page 14: Compiler presentation

One classification of compilers is by the platform on which their generated code executes. This is known as the target platform.

The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same platform as the compiler that produced it. For this reason such compilers are not usually classified as native or cross compilers.

Page 15: Compiler presentation

EQN, a preprocessor for typesetting mathematics

Compilers for Pascal

The C compilers

The Fortran H compilers.

The Bliss/11 compiler.

The Modula-2 optimizing compiler.

Page 16: Compiler presentation

Compiler Passes

Multi Pass / Single Pass

Page 17: Compiler presentation

A single-pass compiler makes a single pass over the source text, parsing, analyzing, and generating code all at once.

Page 18: Compiler presentation

Source program:

    let var n: integer;
        var c: char
    in begin
        c := '&';
        n := n+1
    end

Generated code:

    PUSH 2
    LOADL 38
    STORE 1[SB]
    LOAD 0[SB]
    LOADL 1
    CALL add
    STORE 0[SB]
    POP 2
    HALT

Symbol table:

    Ident   Type    Address
    n       int     0[SB]
    c       char    1[SB]

Page 19: Compiler presentation

A multi-pass compiler makes several passes over the program. The output of a preceding phase is stored in a data structure and used by subsequent phases.

Page 20: Compiler presentation
Page 21: Compiler presentation

Automatic parallelization:

Refers to converting sequential code into multi-threaded or vectorized (or both) code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine.

Page 22: Compiler presentation

The compiler usually conducts two passes of analysis before actual parallelization in order to determine the following:

Is it safe to parallelize the loop? Answering this question requires accurate dependence analysis and alias analysis; a toy illustration follows after this list.

Is it worthwhile to parallelize it? This answer requires a reliable estimation (modeling) of the program workload and the capacity of the parallel system.
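
As a toy illustration of the safety question, the sketch below boils dependence analysis down to a single offset check for one loop shape; real dependence and alias analysis are far more general. The two cases mirror the Fortran loops on the following slides.

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy "dependence test": for a loop whose body writes z(i) and may read
     * z(i + offset), a non-zero offset means one iteration consumes a value
     * produced by another, so the loop carries a dependence and cannot be
     * naively parallelized. */
    static bool safe_to_parallelize(int read_offset)
    {
        return read_offset == 0;
    }

    int main(void)
    {
        /* do i = 1, n : z(i) = x(i) + y(i)  -- no cross-iteration read of z */
        printf("z(i) = x(i) + y(i):  %s\n",
               safe_to_parallelize(0) ? "parallelizable" : "not parallelizable");

        /* do i = 2, n : z(i) = z(i-1) * 2   -- reads z(i-1), offset -1 */
        printf("z(i) = z(i-1) * 2:   %s\n",
               safe_to_parallelize(-1) ? "parallelizable" : "not parallelizable");
        return 0;
    }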

Page 23: Compiler presentation

The Fortran code below can be auto-parallelized by a compiler because each iteration is independent of the others, and the final result of array z will be correct regardless of the execution order of the other iterations.

do i=1, n

z(i) = x(i) + y(i)

enddo

Page 24: Compiler presentation

On the other hand, the following code cannot be auto-parallelized, because the value of z(i) depends on the result of the previous iteration, z(i-1).

do i=2, n

z(i) = z(i-1)*2

enddo

Page 25: Compiler presentation

This does not mean that the code cannot be parallelized. Indeed, it is equivalent to

do i=2, n

z(i) = z(1)*2**(i-1)

enddo

Page 26: Compiler presentation

Automatic parallelization by compilers or tools is very difficult for the following reasons:

Dependence analysis is hard for code using indirect addressing, pointers, recursion, and indirect function calls.

Loops may have an unknown number of iterations.

Accesses to global resources are difficult to coordinate in terms of memory allocation, I/O, and shared variables.

Page 27: Compiler presentation

Due to the inherent difficulties of full automatic parallelization, several easier approaches exist for producing a higher-quality parallel program. They are:

Allow programmers to add "hints" to their programs to guide compiler parallelization (see the sketch after this list).

Build an interactive system between programmers and parallelizing tools/compilers.

Hardware-supported speculative multithreading.
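
As one concrete example of the first approach, the C fragment below uses an OpenMP directive as a programmer-supplied hint that the loop's iterations are independent; OpenMP is not mentioned in the slides, so treat this as just one possible realization of the idea.

    #include <stdio.h>

    #define N 1000

    int main(void) {
        double x[N], y[N], z[N];

        for (int i = 0; i < N; i++) {      /* set up some input data */
            x[i] = i;
            y[i] = 2.0 * i;
        }

        /* The pragma is the programmer's hint: it asserts that the loop's
         * iterations are independent, so the compiler and runtime may
         * distribute them across threads. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            z[i] = x[i] + y[i];

        printf("z[10] = %f\n", z[10]);
        return 0;
    }

Compiled with OpenMP enabled (for example -fopenmp in GCC or Clang) the loop is parallelized; without that flag the pragma is simply ignored and the program still runs sequentially.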