8/2/2019 SRS_CODEOPTIMISER
1/21
A
PROJECT REPORT
ON
CODE OPTIMIZATION
Submitted By:
Akshat Mishra
Anuj M Khasgiwala
Arpit Jain
Narendra Shakya

Guided By:
Mr. Ranjeet Jaiswal
Lecturer, CSE
Team Number- A10
Oriental Institute of Science & Technology, Bhopal
Department of Computer Science & Engineering
Table of Contents

1. Introduction
   1.1 Overview of the Project
   1.2 Objectives
2. Classification of Optimization
3. Problem Definition
   3.1 Existing System
   3.2 Proposed System
4. System Study
   4.1 System Development Life Cycle
   4.2 Software Model
5. Requirement Specification
   5.1 Feasibility Study
   5.2 Hardware Requirement
   5.3 Software Requirement
6. Design
   6.1 Input Requirement
   6.2 Data Flow Diagram
   6.3 Control Flow Diagram
   6.4 System Flow Diagram
7. Conclusion
8. Reference
1. INTRODUCTION
Compiler optimization is the process of tuning the output of a compiler to minimize or
maximize some attributes of an executable computer program. The most common requirement
is to minimize the time taken to execute a program; a less common one is to minimize the
amount of memory occupied. In computing, optimization is the process of modifying a system
to make some aspect of it work more efficiently or use fewer resources. Broadly, there are two
types of optimization:
1. Memory optimization
2. Speed optimization
Memory:
The program created needs to be compact in order to use less memory space. Code Optimizer
aims at minimizing the code length and removing unnecessary code and variables.
Speed:
There are two kinds of programs: those that need to be fast and those that do not. With the
ever-increasing speed of processors and peripherals, more applications fall into the latter
category. Where speed is critical, there are several possible bottlenecks:
Hard disk and file system
Network
Operating system kernel
Language's standard library
Users
Memory
Processor
Code optimization can also be broadly categorized as
1. Platform-dependent techniques
2. Platform-independent techniques
While the latter are effective on most or all platforms, platform-dependent techniques exploit
specific properties of one platform, or rely on parameters that depend on a single platform or
even a single processor; it may therefore be necessary to write or produce different versions of
the same code for different processors.
1.1 OVERVIEW OF THE PROJECT
Code Optimizer is software which optimizes a program. In computing, optimization is the
process of modifying a system to make some aspect of it work more efficiently or use fewer
resources. For instance, a computer program may be optimized so that it executes more rapidly,
is capable of operating with less memory or other resources, or draws less power. The system
may be a single computer program, a collection of computers, or even an entire network such
as the Internet. See also algorithmic efficiency for further discussion of factors relating to
improving the efficiency of an algorithm.
1.2 OBJECTIVES
The main objective of this project is to optimize code so that it becomes more efficient with
respect to space usage and time complexity, producing the best output as early as possible.
Code Optimizer aims to optimize:
Execution time
   Not important if the program's execution time is very short.
   Important in High Performance Computing, where execution times may be very long
   (days, weeks or months).
   Also very important in many embedded systems, where there may be strict requirements
   on the execution time.
Memory usage
   This is part of normal algorithm design.
   The memory requirements of an algorithm are fairly easy to understand: they correspond
   directly to the data structures allocated in the program.
Power consumption
   Very important in mobile systems and embedded systems.
2. CLASSIFICATION OF OPTIMIZATION
There are so many optimization techniques that it is worthwhile to reduce the complexity of
their study. Two useful classifications are:
The time during the compilation process at which an optimization can be applied.
The area of the program over which the optimization applies.
Optimization can be performed at practically every stage of compilation. For example, constant
folding can be performed as early as parsing, while some optimizations can be delayed until
after the target code has been generated: the target code is examined and rewritten to reflect
the optimization.
The classification scheme we consider here is by the area of the program over which the
optimization applies. It is divided into the following categories:
Local optimization.
Global optimization.
Inter-Procedural optimization.
Local Optimization
Major local optimization strategies are:
1. Dead Code Elimination: A variable is live at a point in a program if its value can be used
subsequently. Statements that are never executed, or whose computed values are never used,
are called dead code.
Example:

void main()
{
    int a;
    a = 5;        // dead: a is overwritten before its value is ever read
    a = 10;
    cout << a;
}

Here the first assignment to a is dead code: a is reassigned before the value 5 is ever used,
so the optimizer can eliminate that statement.
3. Strength Reduction: A typical example of this is the replacement of an expensive
arithmetic operation by a cheaper one. For instance, multiplication by 2 can be implemented
as a shift operation, and a small integer power such as x^3 can be implemented as a
multiplication, x*x*x. This optimization is called reduction in strength.
4. Constant Folding:
In compiler theory, constant folding and constant propagation are related
optimization techniques used by many modern compilers. A more advanced form of
constant propagation known as sparse conditional constant propagation may be utilized to
simultaneously remove dead code and more accurately propagate constants. Constant
folding is the process of simplifying constant expressions at compile time. Terms in
constant expressions are typically simple literals, such as the integer 2, but can also be
variables whose values are never modified, or variables explicitly marked as constant.
Consider the statement:
i = 320 * 200 * 32;
Most modern compilers would not actually generate two multiply instructions and a
store for this statement. Instead, they identify constructs such as these, and substitute the
computed values at compile time (in this case, 2,048,000), usually in the intermediate
representation (IR) tree. In some compilers, constant folding is done early so that
statements such as C's array initializers can accept simple arithmetic expressions. However,
it is also common to include further constant folding rounds in later stages in the compiler,
as well. Constant folding can be done in a compiler's front end on the IR tree that represents
the high-level source language, before it is translated into three-address code, or in the back
end, as an adjunct to constant propagation.
Global Optimization
Optimizations that extend beyond basic blocks but are confined to an individual procedure.
Global optimization is an extended form of local optimization, and it is more difficult to
perform.
It generally requires a technique called data flow analysis, which attempts to collect
information across jump boundaries.
Global optimization uses the same techniques that are used in local optimization, but instead
of considering a single basic block it treats the complete procedure as the block being
processed.
Global Sub-Expression Elimination: An occurrence of an expression E is called a global
common sub-expression if E was previously computed and the values of the variables in E
have not changed since the previous computation. We can then avoid the recomputation by
reusing the previously computed value.
Inter-Procedural Optimization
Optimizations that extend beyond the boundaries to the entire program are called Inter-
procedural Optimization.
It is even more difficult, since it involves possibly several different parameter-passing
mechanisms, the possibility of non-local variable access, and the need to compute
simultaneous information on all procedures that might call each other.
An additional complication is that many procedures may be compiled separately and only
linked together at a later point. The compiler then cannot perform any inter-procedural
optimization at all without the involvement of a specialized form of linker that carries out
optimization based on information that the compiler has gathered.
3. PROBLEM DEFINITION
During the execution of a program, the user may face several problems. Our project
eliminates these problems, as follows:
Dead Code Elimination - Dead code elimination is an optimization technique that
eliminates variables that are not used at all. Variables which are useless in a program
will be detected and eliminated by the optimizer.
Common Sub-Expression Elimination - Common sub expression elimination is a
compiler optimization that searches for instances of identical expressions (i.e., they all
evaluate to the same value), and analyses whether it is worthwhile replacing them with
a single variable holding the computed value.
Loop Optimization - Loop optimization plays an important role in improving cache
performance, making effective use of parallel processing capabilities, and reducing the
overheads associated with executing loops. Most of the execution time of a scientific program
is spent in loops, so many compiler analysis and optimization techniques have been developed
to make the execution of loops faster.
Strength Reduction - Strength reduction is a compiler optimization where expensive
operations are replaced with equivalent but less expensive operations. The classic
example of strength reduction converts "strong" multiplications inside a loop into
"weaker" additions, something that frequently occurs in array addressing.
Constant Folding- Constant folding and constant propagation are related compiler
optimizations used by many modern compilers. An advanced form of constant
propagation known as sparse conditional constant propagation can more accurately
propagate constants and simultaneously remove dead code.
Elimination of Useless Instructions - Some instructions that do not modify any stored
value can be detected and removed by the optimizer.
3.1 Existing System:
There are some existing compilers in the market, as follows:
Microsoft C, QuickC - Microsoft
Open Watcom - Sybase
XL C - IBM
Turbo C - Embarcadero
Open64 - Google, HP, Intel, Nvidia, PathScale and others
3.2 Proposed System:
The proposed system is developed with the aim of overcoming the drawbacks of the existing
system, and it has several advantages. People from different parts of the world can run their
programs very easily. It is personalized and made in such a manner that any user can
understand all of its options very easily, and it provides the optimized code in a quick and
easily referenced manner. It can be run easily at the time of optimization. Some of the
advantages of the proposed system are as follows:
A simple but effective technique for locally improving the target code is peephole
optimization: a method for trying to improve the performance of the target program by
examining a short sequence of target instructions and replacing it with a shorter or faster
sequence whenever possible.
4. SYSTEM STUDY
The system study phase involves the initial investigation of the structure of the System,
which is currently in use, with the objective of identifying the problem and difficulties with the
existing system. The major steps involved in this phase included defining the user requirements
and studying the present system to verify the problem. The performance expected by the new
system was also defined in this phase in order to meet the user requirements. The information
gathered from various documents was analyzed and evaluated, and the findings were reviewed
in order to establish specific system objectives.
4.1 System development life cycle (SDLC)
The Systems Development Life Cycle (SDLC) or Software Development Life Cycle in
systems engineering and software engineering is the process of creating or altering systems,
and the models and methodologies that people use to develop these systems. The concept
generally refers to computer or information systems.
Systems Development Life Cycle (SDLC) is any logical process used by a systems analyst
to develop an information system, including requirements, validation, training, and user
ownership. An SDLC should result in a high-quality system that meets or exceeds customer
expectations, reaches completion within time and cost estimates, works effectively and
efficiently in the current and planned Information Technology infrastructure, and is
inexpensive to maintain and cost-effective to enhance.
Below we have shown the software development life cycle of our project Code Optimizer.
4.2 SOFTWARE MODEL
INCREMENTAL MODEL:
Incremental development is a cyclic software development process developed in
response to the weaknesses of the waterfall model. It starts with an initial planning and ends
with deployment with the cyclic interaction in between.
The basic idea behind iterative enhancement is to develop a software system incrementally,
allowing the developer to take advantage of what was learned during the development of
earlier, incremental, deliverable versions of the system. Learning comes from both the
development and the use of the system, where possible. The key steps in the process are to
start with a simple implementation of a subset of the software requirements and to iteratively
enhance the evolving sequence of versions until the full system is implemented. At each
iteration, design modifications are made and new functional capabilities are added.

The procedure itself consists of the initialization step, the iteration step, and the project
control list. The initialization step creates a base version of the system. The goal of this
initial implementation is to create a product to which the user can react. It should offer a
sampling of the key aspects of the problem and provide a solution that is simple enough to
understand and implement easily. To guide the iteration process, a project control list is
created that contains a record of all tasks that need to be performed. It includes such items as
new features to be implemented and areas of redesign of the existing solution. The control
list is constantly revised as a result of the analysis phase.

The iteration involves the redesign and implementation of a task from the project control
list, and the analysis of the current version of the system. The goal of the design and
implementation of any iteration is to be simple, straightforward, and modular, supporting
redesign at that stage or as a task added to the project control list. The level of design detail
is not dictated by the iterative approach. In a light-weight iterative project the code may
represent the major source of documentation of the system; however, in a mission-critical
iterative project a formal Software Design Document may be used. The analysis of an
iteration is based upon user feedback and the program analysis facilities available. It involves
analysis of the structure, modularity, usability, reliability, efficiency, and achievement of
goals. The project control list is modified in light of the analysis results.
5. REQUIREMENT SPECIFICATION
The primary goal of the system analyst is to improve the efficiency of the existing system. For
that, the study and specification of the requirements is essential. For the development of the
new system, a preliminary survey of the existing system is conducted, and an investigation is
made into whether upgrading the system into an application program could solve the problems
and eradicate the inefficiency of the existing system.
5.1 FEASIBILITY STUDY
The initial investigation addresses the question of whether the project is feasible. A
feasibility study is conducted to identify the best system that meets all the requirements. This
includes an identifying description, an evaluation of the proposed systems, and the selection
of the best system for the job.
The requirements of the system are specified with a set of constraints, such as the system
objectives and the description of the outputs. It is then the duty of the analyst to evaluate the
feasibility of the proposed system to generate the above results. Three key factors are to be
considered during the feasibility study.
Operational Feasibility:
An estimate should be made of how much effort and care will go into developing the
system, including the training to be given to the users. Usually, people are reluctant to accept
changes to their routine. Computerization will certainly affect turnover, transfers and
employee job status. Hence an additional effort is to be made to train and educate the users in
the new way of working with the system.
Technical Feasibility:
The main consideration is the study of the available resources of the organization where
the software is to be implemented. Here the system analyst evaluates the technical merits of
the system, giving emphasis to performance, reliability and maintainability. Before developing
the proposed system, the availability of resources in the organization was studied. The
organization has immense computer facilities equipped with sophisticated machines and
software; hence the project is technically feasible.
Economic Feasibility:
Economic feasibility is the most important and most frequently used method for evaluating
the effectiveness of the proposed system. It is essential because the main goal of the proposed
system is to achieve economically better results along with increased efficiency. Cost-benefit
analysis is usually performed for this purpose: a comparative study of the cost versus the
benefits and savings expected from the proposed system. Since the organization is well
equipped with the required hardware, the project was found to be economically feasible.
5.2 HARDWARE REQUIREMENT
Pentium II/III processor
128 MB RAM
1 GB hard disk
5.3 SOFTWARE REQUIREMENT
Development environment: TextPad, NetBeans, Java 1.6
Operating system: Windows XP/98
6. DESIGN
6.1 INPUT REQUIREMENT
In our project, the input is a program written in the C programming language. This can be
provided in two ways:
1. The user can specify the name of a file containing C source code.
2. The user can write a C program directly in the Code Window.
6.2 DATA FLOW DIAGRAM
LEVEL 0:
LEVEL 1:

[Level 1 data flow diagram: the user's input program flows through dead variable elimination,
dead code elimination, common sub-expression elimination, constant folding, code motion and
strength reduction, producing the optimized output.]

6.3 CONTROL FLOW DIAGRAM
[Control flow diagram: starting from Start, the user provides the input code; the elimination
process handles dead variables and dead code and performs loop optimization, producing the
optimized code before Exit.]
6.4 SYSTEM FLOW DIAGRAM
[System flow diagram showing the interfaces and subsystems: the input interface accepts the
input code; a diagnostics subsystem reports errors in the code; subsystems for code checking,
loop checking, induction variables and language selection carry out the operations; and the
output interface delivers the optimized code.]
7. CONCLUSION
Code Optimizer is developed using Java and fully meets the objectives of the system for
which it has been developed. The system has reached a steady state where all known errors
have been eliminated. The system operates at a high level of efficiency, and all the users
associated with the system understand its advantages. The system solves the problem it was
intended to solve, as per the requirement specification.
8. REFERENCE
Compiler in C - Ullman and Sethi
Google
Wikipedia