ABSTRACT
The AES encryption/decryption algorithm is widely used in modern consumer
electronic products for security. To shorten the time needed to encrypt or decrypt large
amounts of data, a hardware implementation of the algorithm is necessary; a purely software
implementation, on the other hand, meets the requirement for low cost. In this paper, we
implemented the AES encryption algorithm in hardware combined with software, using
the custom instruction mechanism provided by the ARM7 with the Keil platform.
The main functional block of this algorithm is the AES-128 key expansion block: the initial
128-bit cipher key has to be expanded into eleven round keys of the same length. The first round
key is the cipher key itself (RoundKey0), and each subsequent round key is produced by applying a
function to the previously generated round key. A data block of 128 bits, called the
plaintext, is provided as input to the AES encryption algorithm. AES applies a number of
transformations to the plaintext, and a 128-bit output block (called the ciphertext) is produced as
a result.
CHAPTER-1
INTRODUCTION TO EMBEDDED SYSTEMS
1.1 Introduction
An embedded system is any computer system hidden inside a product other than a
computer. Embedded systems were first found in expensive industrial control applications.
As technology brought down the cost of dedicated processors, they began to appear in
moderately expensive applications such as automobiles, communications and office
equipment, and televisions. Today's embedded systems are so inexpensive that they are used
in almost every electronic product in our lives.
Performance goals will force us to learn and apply new techniques such as
multitasking and scheduling. The need to communicate directly with sensors, actuators,
keypads, displays, etc., will require programmers to have a better understanding of how
alternative methods for performing input and output provide opportunities to trade off speed,
complexity, and cost.
1.a How powerful are embedded processors?
The embedded system found in most consumer products employs a single-chip
controller that includes the microprocessor, a limited amount of memory, and simple
input/output devices. By far the vast majority of the embedded systems in production today
are based on 4-bit, 8-bit, or 16-bit processors. Although 32-bit processors account for a
relatively small percentage of the current market, their use in embedded systems is growing
at the fastest rate. In 1998, almost 250 million 32-bit embedded processors were shipped,
compared to 100 million desktop computers.
1.b What programming languages are used?
Although it is occasionally necessary to code some small parts of an embedded
application program in assembly language, the rest of the code in even the simplest
application is written in a high-level language. Traditionally the choice of language has been
'C'. Programs written in 'C' are very portable from one compiler and/or target processor to
another. C compilers are available for a number of different target processors, and they
generate very efficient code.
Despite the popularity of C++ and Java for desktop application programming, they are rarely
used in embedded systems because of the large run-time overhead required to support some
of their features. For example, even a relatively simple C++ program will produce about
twice as much code as the same program written in C, and the situation is much worse for
large programs that make extensive use of the run-time library.
1.c How is building an embedded application unique?
You should already be familiar with the tools and software components used to build
a desktop application program and load it into memory for execution. The desktop
application build process is shown in Fig. 1.1.
Fig.1.1 Desktop application programs.
Fig.1.2 Embedded Application Program
A compiler and/or an assembler is used to build one or more object files that are
linked together with a run-time library to form an executable image, which is stored as a file
on disk. When we want to run a desktop application program, its executable image is loaded
from disk into memory by a part of the operating system known as the loader. The
operating system itself is already in memory, put there during the boot process.
The desktop system is intended to run a number of different application programs.
Thus, read-write main memory is used so that an entirely different application program can
be quickly and easily loaded into memory, replacing the previous application whenever
necessary.
Unlike general desktop systems, embedded systems are designed to serve a single
purpose. Once the embedded software is in memory, there is usually no reason to change it.
1.2 Embedded Software Development Tools
Application programmers typically do their work on the same kind of computer on
which the application will run. A programmer edits the program, compiles it, links it, tries it
out, and debugs it, all on the same machine. This tactic has to change for embedded systems.
In the first place, most embedded systems have specialized hardware to attach to special
sensors or to drive special controls, and the only way to try out the software is on that
specialized hardware.
In the second place, embedded systems often use microprocessors that have never
been used as the basis of workstations. Obviously, programs do not get magically compiled
into the instruction set of whatever microprocessor you happen to have chosen for your
system, and programs do not magically jump into the memory of your embedded system for
execution.
1.2.1 Host And Target Machines
Most programming work for embedded systems is done on a host, a computer system
on which all the programming tools run. Only after the program has been written, compiled,
assembled, and linked is it moved to the target, the system that is shipped to customers. Some
people use the word workstation instead of host; the word target is almost universal (see the
figure below).
1.2.2 Cross compilers
Most desktop systems used as hosts come with compilers, assemblers, linkers, and
so on for building programs that will run on the host. These tools are called native tools.
What is needed is a compiler that runs on the host system but produces binary instructions
that will be understood by the target microprocessor. Such a program is called a cross
compiler.
The fact that a program works on your host machine and compiles cleanly with your
cross compiler is no assurance that it will work on your target system. The same problems
that haunt every other effort to port C programs from one machine to another apply:
variables declared as int may be one size on the host and a different size on the target;
structures may be packed differently on the two machines; the ability to access 16-bit and
32-bit entities that reside at odd-numbered addresses may be different.
Fig. 1.3 Cross Compilers
1.2.3 Cross Assemblers
Another tool that we will need, if we must write any of the program in assembly
language, is a cross assembler. As we might imagine from the name, a cross assembler is an
assembler that runs on the host but produces binary instructions appropriate for the target.
The input to the cross assembler must be assembly language appropriate for the target (since
that is the only assembly language that can be translated into binary instructions for the
target).
1.2.4 Linker/Locators for Embedded Software
The first difference between a native linker and locator is the nature of the output files
that they create. The native linker creates a file on the disk drive of the host system that is
read by a part of the O.S called the loader. The locator creates file that will be used by some
program that copies the output to the target system. Later, the output from the locator will
have to run it’s own.
In an embedded system, there is no separate O.S. Linkers for embedded system is often called
as locators.
1.3 Embedded Design Methodology
The fast-growing complexity and short time to market of today's real-time embedded
systems necessitate new design methods and tools to face the problems of specification,
design, analysis, scheduling, simulation, integration, and validation of complex systems. In
this project, a system-level method for embedded real-time system design is developed,
exploiting SystemC's strengths for system-level and co-design modeling. In order to support
the methodology, some extensions to SystemC are proposed, starting from RTOS modeling
and a framework for scheduling simulation.
CHAPTER-2
INTRODUCTION
2.1 Information Security:
The concept of information will be taken to be an understood quantity. To introduce
cryptography, an understanding of issues related to information security in general is
necessary.
Information security manifests itself in many ways according to the situation and
requirement.
Regardless of who is involved, to one degree or another, all parties to a transaction
must have confidence that certain objectives associated with information security have been
met. Over the centuries, an elaborate set of protocols and mechanisms has been created to
deal with information security issues when the information is conveyed by physical
documents. Often the objectives of information security cannot be achieved through
mathematical algorithms and protocols alone, but require procedural techniques and
abidance of laws to achieve the desired result.
For example, privacy of letters is provided by sealed envelopes delivered by an
accepted mail service. The physical security of the envelope is, for practical necessity, limited
and so laws are enacted which make it a criminal offense to open mail for which one is not
authorized. It is sometimes the case that security is achieved not through the information
itself but through the physical document recording it.
For example, paper currency requires special inks and material to prevent
counterfeiting. Conceptually, the way information is recorded has not changed dramatically
over time. Whereas information was typically stored and transmitted on paper, much of it
now resides on magnetic media and is transmitted via telecommunications systems, some
wireless. What has changed dramatically is the ability to copy and alter information. One can
make thousands of identical copies of a piece of information stored electronically and each is
indistinguishable from the original. With information on paper, this is much more difficult.
What is needed then for a society where information is mostly stored and transmitted
in electronic form is a means to ensure information security which is independent of the
physical medium recording or conveying it and such that the objectives of information
security rely solely on digital information itself. One of the fundamental tools used in
information security is the signature. It is a building block for many other services such as
non-repudiation, data origin authentication, identification, and witnessing, to mention a few.
Having learned the basics in writing, an individual is taught how to produce a handwritten
signature for the purpose of identification. At contract age the signature evolves to take on a
very integral part of the person’s identity. This signature is intended to be unique to the
individual and serve as a means to identify, authorize, and validate.
With electronic information the concept of a signature needs to be redressed; it cannot
simply be something unique to the signer and independent of the information signed.
Electronic replication of it is so simple that appending a signature to a document not signed
by the originator of the signature is almost a triviality. Analogues of the “paper protocols”
currently in use are required. Hopefully these new electronic based protocols are at least as
good as those they replace. There is a unique opportunity for society to introduce new and
more efficient ways of ensuring information security. Much can be learned from the
evolution of the paper based system, mimicking those aspects which have served us well and
removing the inefficiencies. Achieving information security in an electronic society requires
a vast array of technical and legal skills. There is, however, no guarantee that all of the
information security objectives deemed necessary can be adequately met. The technical
means is provided through cryptography.
2.2 Cryptography:
Cryptography has a long and fascinating history. The most complete non-technical
account of the subject is Kahn’s The Codebreakers. This book traces cryptography from its
initial and limited use by the Egyptians some 4000 years ago, to the twentieth century where
it played a crucial role in the outcome of both world wars.
The most striking development in the history of cryptography came in 1976 when
Diffie and Hellman published New Directions in Cryptography. This paper introduced the
revolutionary concept of public-key cryptography and also provided a new and ingenious
method for key exchange, the security of which is based on the intractability of the discrete
logarithm problem. Although the authors had no practical realization of a public-key
encryption scheme at the time, the idea was clear and it generated extensive interest and
activity in the cryptographic community.
In 1978 Rivest, Shamir, and Adleman discovered the first practical public-key
encryption and signature scheme, now referred to as RSA. The RSA scheme is based on
another hard mathematical problem, the intractability of factoring large integers. This
application of a hard mathematical problem to cryptography revitalized efforts to find more
efficient methods to factor. The 1980s saw major advances in this area but none which
rendered the RSA system insecure. Another class of powerful and practical public-key
schemes was found by ElGamal in 1985. These are also based on the discrete logarithm
problem. One of the most significant contributions provided by public-key cryptography is
the digital signature. In 1991 the first international standard for digital signatures (ISO/IEC
9796) was adopted. It is based on the RSA public-key scheme. In 1994 the U.S. Government
adopted the Digital Signature Standard, a mechanism based on the ElGamal public-key
scheme.
“Cryptography is the study of mathematical techniques related to aspects of information
security such as confidentiality, data integrity, entity authentication, and data origin
authentication. Cryptography is not the only means of providing information security, but
rather one set of techniques.”
2.2.1 Cryptography Terminology
Until modern times cryptography referred almost exclusively to encryption, which is
the process of converting ordinary information (called plaintext) into unintelligible gibberish
(called ciphertext). Decryption is the reverse: moving from the unintelligible ciphertext back
to plaintext. A cipher (or cypher) is a pair of algorithms that create the
encryption and the reversing decryption. The detailed operation of a cipher is controlled both
by the algorithm and in each instance by a "key". This is a secret parameter (ideally known
only to the communicants) for a specific message exchange context. A "cryptosystem" is the
ordered list of elements: finite possible plaintexts, finite possible ciphertexts, finite
possible keys, and the encryption and decryption algorithms which correspond to each key.
Keys are important, as ciphers without variable keys can be trivially broken with only the
knowledge of the cipher used and are therefore useless (or even counter-productive) for most
purposes. Historically, ciphers were often used directly for encryption or decryption without
additional procedures such as authentication or integrity checks.
2.2.2 Cryptographic goals
(1) Privacy or confidentiality
(2) Data integrity
(3) Authentication
(4) Non-repudiation
1. Confidentiality is a service used to keep the content of information from all but those
authorized to have it. Secrecy is a term synonymous with confidentiality and privacy. There
are numerous approaches to providing confidentiality, ranging from physical protection to
mathematical algorithms which render data unintelligible.
2. Data integrity is a service which addresses the unauthorized alteration of data. To assure
data integrity, one must have the ability to detect data manipulation by unauthorized parties.
Data manipulation includes such things as insertion, deletion, and substitution.
3. Authentication is a service related to identification. This function applies to both entities
and information itself. Two parties entering into a communication should identify each other.
Information delivered over a channel should be authenticated as to origin, date of origin, data
content, time sent, etc. For these reasons this aspect of cryptography is usually subdivided
into two major classes: entity authentication and data origin authentication. Data origin
authentication implicitly provides data integrity.
4. Non-repudiation is a service which prevents an entity from denying previous
commitments or actions. When disputes arise due to an entity denying that certain actions
were taken, a means to resolve the situation is necessary. For example, one entity may
authorize the purchase of property by another entity and later deny such authorization was
granted. A procedure involving a trusted third party is needed to resolve the dispute. A
fundamental goal of cryptography is to adequately address these four areas in both theory and
practice. Cryptography is about the prevention and detection of cheating and other malicious
activities.
There are a number of basic cryptographic tools (primitives) used to provide
information security. Examples of primitives include encryption schemes, hash functions,
and digital signature schemes; Figure 2.1 provides a schematic listing of the primitives
considered and how they relate. These primitives should be evaluated with respect to various
criteria, such as level of security, functionality, performance, and ease of implementation,
considered as a whole. Cryptography, over the ages, has been an art practised by many who
have devised ad hoc techniques to meet some of the information security requirements. The
last twenty years have been a period of transition as the discipline moved from an art to a
science.
There are now several international scientific conferences devoted exclusively to
cryptography and also an international scientific organization, the International Association
for Cryptologic Research (IACR), aimed at fostering research in the area.
Figure 2.1 A taxonomy of cryptographic primitives
2.2.3 Cryptography Types
i) Classic Cryptography:
The earliest forms of secret writing required little more than local pen and paper
analogs, as most people could not read. More literacy, or literate opponents, required actual
cryptography. The main classical cipher types are transposition ciphers, which rearrange the
order of letters in a message (e.g., 'hello world' becomes 'ehlolowrdl' in a trivially simple
rearrangement scheme), and substitution ciphers, which systematically replace letters or
groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmzbupodf'
by replacing each letter with the one following it in the Latin alphabet). Simple versions of
either have never offered much confidentiality from enterprising opponents. An early
substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced
by a letter some fixed number of positions further down the alphabet. Suetonius reports that
Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an
example of an early Hebrew cipher. The earliest known use of cryptography is some carved
ciphertext on stone in Egypt (ca 1900 BCE), but this may have been done for the amusement
of literate observers rather than as a way of concealing information. Cryptography is
recommended in the Kama Sutra (ca 400 BCE) as a way for lovers to communicate without
inconvenient discovery.
The Greeks of Classical times are said to have known of ciphers (e.g., the scytale
transposition cipher claimed to have been used by the Spartan military). Steganography (i.e.,
hiding even the existence of a message so as to keep it confidential) was also first developed
in ancient times. An early example, from Herodotus, concealed a message—a tattoo on a
slave's shaved head—under the regrown hair. Another Greek method was developed by
Polybius (now called the "Polybius Square"). More modern examples of steganography
include the use of invisible ink, microdots, and digital watermarks to conceal information.
Ciphertexts produced by a classical cipher (and some modern ciphers) always reveal
statistical information about the plaintext, which can often be used to break them. After the
discovery of frequency analysis perhaps by the Arab mathematician and polymath, Al-Kindi
(also known as Alkindus), in the 9th century, nearly all such ciphers became more or less
readily breakable by any informed attacker. Such classical ciphers still enjoy popularity
today, though mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography
entitled Risalah fi Istikhraj al-Mu'amma (Manuscript on Deciphering Cryptographic
Messages), in which he described the first cryptanalysis techniques.
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis
technique until the development of the polyalphabetic cipher, most clearly by Leon Battista
Alberti around the year 1467, though there is some indication that it was already known to
Al-Kindi. Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for
various parts of a message (perhaps for each successive plaintext letter at the limit). He also
invented what was probably the first automatic cipher device, a wheel which implemented a
partial realization of his invention. In the polyalphabetic Vigenère cipher, encryption uses a
key word, which controls letter substitution depending on which letter of the key word is
used. In the mid-19th century Charles Babbage showed that the Vigenère cipher was
vulnerable to Kasiski examination, but this was first published about ten years later by
Friedrich Kasiski.
ii) Symmetric-Key Cryptography:
Symmetric-key cryptography refers to encryption methods in which both the sender and
receiver share the same key (or, less commonly, in which their keys are different, but related
in an easily computable way). This was the only kind of encryption publicly known until June
1976.
Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A
block cipher enciphers input in blocks of plaintext as opposed to individual characters, the
input form used by a stream cipher.
The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES)
are block cipher designs which have been designated cryptography standards by the US
government (though DES's designation was finally withdrawn after the AES was adopted).
Despite its deprecation as an official standard, DES (especially its still-approved and much
more secure triple-DES variant) remains quite popular; it is used across a wide range of
applications, from ATM encryption to e-mail privacy and secure remote access. Many other
block ciphers have been designed and released, with considerable variation in quality. Many
have been thoroughly broken, such as FEAL.
Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key
material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat
like the one-time pad. In a stream cipher, the output stream is created based on a hidden
internal state which changes as the cipher operates. That internal state is initially set up using
the secret key material. RC4 is a widely used stream cipher. Block ciphers can be used as
stream ciphers.
Cryptographic hash functions are a third type of cryptographic algorithm. They take a
message of any length as input, and output a short, fixed length hash which can be used in
(for example) a digital signature. For good hash functions, an attacker cannot find two
messages that produce the same hash. MD4 is a long-used hash function which is now
broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice.
The U.S. National Security Agency developed the Secure Hash Algorithm series of MD5-like
hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely
deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the
SHA-2 family improves on SHA-1, but it isn't yet widely deployed, and the U.S. standards
authority thought it "prudent" from a security perspective to develop a new standard to
"significantly improve the robustness of NIST's overall hash algorithm toolkit." [25] Thus, a
hash function design competition is underway and meant to select a new U.S. national
standard, to be called SHA-3, by 2012.
Message authentication codes (MACs) are much like cryptographic hash functions,
except that a secret key can be used to authenticate the hash value upon receipt.
Public-key algorithms are typically based on operations such as modular multiplication
and exponentiation, which are much more computationally expensive than the techniques
used in most block ciphers, especially with typical key sizes.
As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast
high-quality symmetric-key encryption algorithm is used for the message itself, while the
relevant symmetric key is sent with the message, but encrypted using a public-key algorithm.
Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is
computed, and only the resulting hash is digitally signed.
iii) Public-Key Cryptography :
Fig. 2.2 Public Key Cryptography
In an asymmetric key encryption scheme, anyone can encrypt messages using the public key,
but only the holder of the paired private key can decrypt. Security depends on the secrecy of
that private key.
In some related signature schemes, the private key is used to sign a message; anyone
can check the signature using the public key. Validity depends on security of the private key.
In the Diffie–Hellman key exchange scheme, each party generates a public/private
key pair and distributes the public key. After obtaining an authentic copy of each other's
public keys, Alice and Bob can compute a shared secret offline. The shared secret can be
used, for instance, as the key for a symmetric cipher.
This cryptographic approach uses asymmetric key algorithms such as RSA, hence the
more general name of "asymmetric key cryptography". Some of these algorithms have the
public key/private key property; that is, neither key is derivable from knowledge of the other;
not all asymmetric key algorithms do. Those with this property are particularly useful and
have been widely deployed, and are the source of the commonly used name.
Although different, the two keys in the pair are mathematically linked. The public key is
used to transform a message into an unreadable form, decryptable only by using the
(different but matching) private key. By publishing the public key, the key producer
empowers anyone who gets a copy of it to produce messages that only the key producer can
read, because only the key producer has a copy of the private key (required for decryption).
When someone wants to send a secure message to the creator of those keys, the sender
encrypts it (i.e., transforms it into an unreadable form) using the intended recipient's public
key; to decrypt the message, the recipient uses the private key. No one else, including the
sender, can do so.
The use of these algorithms also allows authenticity of a message to be checked by
creating a digital signature of a message using the private key, which can be verified using
the public key.
Public key cryptography is a fundamental and widely used technology. It is an
approach used by many cryptographic algorithms and cryptosystems. It underpins such
Internet standards as Transport Layer Security (TLS) (successor to SSL), PGP, and GPG.
How It Works
The distinguishing technique used in public key cryptography is the use of
asymmetric key algorithms, where the key used to encrypt a message is not the same as the
key used to decrypt it. Each user has a pair of cryptographic keys — a public encryption key
and a private decryption key. The publicly available encrypting-key is widely distributed,
while the private decrypting-key is known only to the recipient. Messages are encrypted with
the recipient's public key and can be decrypted only with the corresponding private key. The
keys are related mathematically, but parameters are chosen so that determining the private
key from the public key is prohibitively expensive. The discovery of algorithms that could
produce public/private key pairs revolutionized the practice of cryptography beginning in the
mid-1970s.
In contrast, symmetric-key algorithms, variations of which have been used for
thousands of years, use a single secret key which must be shared and kept private by both
sender and receiver for both encryption and decryption. To use a symmetric encryption
scheme, the sender and receiver must securely share a key in advance.
Because symmetric key algorithms are nearly always much less computationally
intensive, it is common to exchange a key using a key-exchange algorithm and transmit data
using that key and a symmetric key algorithm. PGP and the SSL/TLS family of schemes do
this, for instance, and are thus called hybrid cryptosystems.
Description
The two main branches of public key cryptography are:
Public key encryption: a message encrypted with a recipient's public key cannot be
decrypted by anyone except a possessor of the matching private key; it is presumed
that this will be the owner of that key and the person associated with the public key
used. This is used for confidentiality.
Digital signatures: a message signed with a sender's private key can be verified by
anyone who has access to the sender's public key, thereby proving that the sender had
access to the private key (and is therefore likely to be the person associated with the
public key used) and that the message has not been tampered with. On the
question of authenticity, see also message digest.
An analogy to public-key encryption is that of a locked mail box with a mail slot. The
mail slot is exposed and accessible to the public; its location (the street address) is in essence
the public key. Anyone knowing the street address can go to the door and drop a written
message through the slot; however, only the person who possesses the key can open the
mailbox and read the message.
An analogy for digital signatures is the sealing of an envelope with a personal wax
seal. The message can be opened by anyone, but the presence of the seal authenticates the
sender.
iv) Private Key (secret key) Cryptography
In cryptography, a private or secret key is an encryption/decryption key known only
to the party or parties that exchange secret messages. In traditional secret key cryptography, a
key would be shared by the communicators so that each could encrypt and decrypt messages.
The risk in this system is that if either party loses the key or it is stolen, the system is broken.
A more recent alternative is to use a combination of public and private keys. In this system, a
public key is used together with a private key. See public key infrastructure (PKI) for more
information.
2.3 Advanced Encryption Standard
The Advanced Encryption Standard, referenced in the following as AES, is the winner
of the contest held by the US Government starting in 1997, after the Data Encryption
Standard was found to be too weak because of its small key size and advances in
processor power. Fifteen candidates were accepted in 1998, and based on public comments
the pool was reduced to five finalists in 1999. In October 2000, one of these five algorithms
was selected as the forthcoming standard: a slightly modified version of Rijndael.
Rijndael, whose name is derived from the names of its two Belgian inventors,
Joan Daemen and Vincent Rijmen, is a block cipher, which means that it works on fixed-
length groups of bits, called blocks. It takes an input block of a certain size, usually
128 bits, and produces a corresponding output block of the same size. The transformation requires
a second input, which is the secret key. It is important to know that the secret key can be of
any size (depending on the cipher used) and that AES uses three different key sizes: 128, 192
and 256 bits.
While AES supports only a block size of 128 bits and key sizes of 128, 192 and 256
bits, the original Rijndael supports key and block sizes in any multiple of 32 bits, with a
minimum of 128 and a maximum of 256 bits.
2.4 Objectives
To generate 11 keys, each 128 bits long.
To encrypt the plain text into cipher text using the generated keys.
To study the synthesis results of the AES key expander.
2.5 Hardware and Software
Hardware : LPC2148
Software : EMBEDDED C
Software tools : KEIL µVISION
CHAPTER-3
DATA ENCRYPTION STANDARD
3.1 Introduction
The Data Encryption Standard (DES) is a previously predominant algorithm for the
encryption of electronic data. It was highly influential in the advancement of modern
cryptography in the academic world. Developed in the early 1970s at IBM and based on an
earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of
Standards (NBS) following the agency's invitation to propose a candidate for the protection
of sensitive, unclassified electronic government data. In 1976, after consultation with the
National Security Agency (NSA), the NBS eventually selected a slightly modified version,
which was published as an official Federal Information Processing Standard (FIPS) for the
United States in 1977. The publication of an NSA-approved encryption standard
simultaneously resulted in its quick international adoption and widespread academic scrutiny.
Controversies arose out of classified design elements, a relatively short key length of the
symmetric-key block cipher design, and the involvement of the NSA, nourishing suspicions
about a backdoor. While these suspicions eventually turned out to be unfounded, the
intense academic scrutiny the algorithm received over time led to the modern understanding
of block ciphers and their cryptanalysis.
DES is now considered to be insecure for many applications. This is chiefly due to the
56-bit key size being too small; in January, 1999, distributed.net and the Electronic Frontier
Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are
also some analytical results which demonstrate theoretical weaknesses in the cipher, although
they are infeasible to mount in practice. The algorithm is believed to be practically secure in
the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has
been superseded by the Advanced Encryption Standard (AES). Furthermore, DES has been
withdrawn as a standard by the National Institute of Standards and Technology (formerly the
National Bureau of Standards).
3.2 Description
DES is the archetypal block cipher — an algorithm that takes a fixed-length string of
plaintext bits and transforms it through a series of complicated operations into another
ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also
uses a key to customize the transformation, so that decryption can supposedly only be
performed by those who know the particular key used to encrypt. The key ostensibly consists
of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used
solely for checking parity, and are thereafter discarded. Hence the effective key length is 56
bits, and it is usually quoted as such. Every 8th bit of the selected key is discarded; that is,
positions 8, 16, 24, 32, 40, 48, 56 and 64 are removed from the 64-bit key, leaving behind
only the 56-bit key.
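The parity-bit removal described above can be sketched in C. This is a simplified illustration of the idea only, not the actual PC-1 permutation, which also reorders the remaining 56 bits:

```c
#include <stdint.h>

/* Drop every 8th bit (positions 8, 16, ..., 64, numbering the MSB as
   bit 1, as the DES specification does) from a 64-bit key, packing the
   remaining 56 bits toward the low end of the result. */
uint64_t strip_parity_bits(uint64_t key64)
{
    uint64_t key56 = 0;
    int pos;
    for (pos = 1; pos <= 64; pos++) {
        if (pos % 8 == 0)
            continue;                          /* parity bit: discard */
        key56 = (key56 << 1) | ((key64 >> (64 - pos)) & 1);
    }
    return key56;                              /* 56 meaningful bits */
}
```

With this convention, a key whose only set bits are the parity positions (0x0101010101010101) strips down to zero.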
Figure 3.1 DES Algorithm Structure
The algorithm's overall structure is shown in Figure 3.1: there are 16 identical stages of
processing, termed rounds. There is also an initial and a final permutation, termed IP and FP,
which are inverses (FP "undoes" the action of IP, and vice versa). IP and FP have almost no
cryptographic significance, but were apparently included in order to facilitate loading blocks
in and out of mid-1970s hardware.
Before the main rounds, the block is divided into two 32-bit halves and processed
alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures
that decryption and encryption are very similar processes; the only difference is that the
subkeys are applied in the reverse order when decrypting. The rest of the algorithm is
identical. This greatly simplifies implementation, particularly in hardware, as there is no need
for separate encryption and decryption algorithms. The ⊕ symbol denotes the exclusive-OR
(XOR) operation. The F-function scrambles half a block together with some of the key. The
output from the F-function is then combined with the other half of the block, and the halves
are swapped before the next round. After the final round, the halves are not swapped; this is a
feature of the Feistel structure which makes encryption and decryption similar processes.
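The round structure described above can be sketched as follows. `toy_f` is a made-up stand-in for the real DES F-function (which this sketch does not reproduce); the point is that decryption reuses the same F with subkeys in reverse order, never its inverse:

```c
#include <stdint.h>

/* Placeholder F-function: any fixed scrambling of half-block and subkey
   works for illustrating the Feistel structure. NOT the DES F-function. */
static uint32_t toy_f(uint32_t half, uint32_t subkey)
{
    return (half ^ subkey) * 2654435761u;
}

/* One Feistel round: XOR F(right, subkey) into the left half, then swap. */
void feistel_round(uint32_t *left, uint32_t *right, uint32_t subkey)
{
    uint32_t new_right = *left ^ toy_f(*right, subkey);
    *left  = *right;
    *right = new_right;
}

/* Undoing a round needs only the same F, applied to the unmodified half. */
void feistel_round_inverse(uint32_t *left, uint32_t *right, uint32_t subkey)
{
    uint32_t old_left = *right ^ toy_f(*left, subkey);
    *right = *left;
    *left  = old_left;
}
```

Running several rounds forward and then the inverse rounds with the subkeys reversed recovers the original halves exactly, which is the property that lets one circuit serve for both encryption and decryption.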
3.3 Modes of Operation
ECB (Electronic Code Book)
o This is the regular DES algorithm.
o Data is divided into 64-bit blocks and each block is encrypted one at a time.
o Separate encryptions with different blocks are totally independent of each
other.
o This means that if data is transmitted over a network or phone line,
transmission errors will only affect the block containing the error.
o It also means, however, that the blocks can be rearranged, thus scrambling a
file beyond recognition, and this action would go undetected.
o ECB is the weakest of the various modes because no additional security
measures are implemented besides the basic DES algorithm.
o However, ECB is the fastest and easiest to implement, making it the most
common mode of DES.
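The per-block independence of ECB can be sketched as below. `toy_block_encrypt` is a placeholder (a plain XOR, not DES), used only to show the mode's structure; note that identical plaintext blocks produce identical ciphertext blocks, which is exactly the weakness described above:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy block cipher for illustration only: NOT DES. */
static uint64_t toy_block_encrypt(uint64_t block, uint64_t key)
{
    return block ^ key;
}

/* ECB mode: each 64-bit block is encrypted independently with the same
   key; no block influences any other. */
void ecb_encrypt(uint64_t *blocks, size_t nblocks, uint64_t key)
{
    size_t i;
    for (i = 0; i < nblocks; i++)
        blocks[i] = toy_block_encrypt(blocks[i], key);
}
```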
3.4 Disadvantage of DES
DES performs many bit manipulations in substitution and permutation boxes in each
of its 16 rounds; for example, switching bit 30 with bit 16 is much simpler in hardware than in
software. DES encrypts data in a 64-bit block size and effectively uses a 56-bit key. A 56-bit key
space amounts to approximately 72 quadrillion possibilities. Even though this seems large,
with today's computing power it is not sufficient, and DES is vulnerable to brute force
attack. Therefore, DES could not keep up with advances in technology and is no longer
appropriate for security.
Because DES was widely used at that time, the quick solution was to introduce 3DES,
which is secure enough for most purposes today. 3DES is a construction that applies DES
three times in sequence. 3DES with three different keys (K1, K2 and K3) has an effective key
length of 168 bits (the use of three distinct keys is recommended for 3DES). Another
variation, called two-key 3DES (K1 and K3 are the same), reduces the effective key size to 112
bits, which is less secure. Two-key 3DES is widely used in the electronic payments industry.
3DES takes three times as much CPU power as its predecessor, which is a
significant performance hit. AES outperforms 3DES both in software and in hardware.
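The triple application can be sketched as the standard encrypt-decrypt-encrypt (EDE) construction. The two helpers below are toy stand-ins for real DES encryption and decryption, used only to show the structure; with them one can check the well-known property that setting K1 = K2 collapses EDE to single encryption under K3, which is how 3DES hardware stays backward compatible with single DES:

```c
#include <stdint.h>

/* Toy stand-ins for DES encrypt/decrypt (a self-inverse XOR cipher),
   for structural illustration only. */
static uint64_t toy_encrypt(uint64_t x, uint64_t k) { return x ^ k; }
static uint64_t toy_decrypt(uint64_t x, uint64_t k) { return x ^ k; }

/* 3DES encryption structure: E_K3( D_K2( E_K1(plain) ) ). */
uint64_t ede_encrypt(uint64_t plain, uint64_t k1, uint64_t k2, uint64_t k3)
{
    return toy_encrypt(toy_decrypt(toy_encrypt(plain, k1), k2), k3);
}
```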
The Rijndael algorithm has been selected as the Advanced Encryption Standard (AES)
to replace 3DES; AES is a slightly modified version of the Rijndael algorithm.
Rijndael was submitted by Joan Daemen and Vincent Rijmen. Taken
together, Rijndael's combination of security, performance, efficiency, implementability and
flexibility made it an appropriate selection for the AES.
By design AES is fast in software and works efficiently in hardware. It works fast
even on small devices such as smart phones, smart cards etc. AES provides more security
due to its larger block size and longer keys. AES uses a fixed 128-bit block size and works with
128, 192 and 256-bit keys. The Rijndael algorithm in general is flexible enough to work with key
and block sizes of any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits.
CHAPTER-4
ADVANCED ENCRYPTION STANDARD
4.1 Introduction
What is AES?
AES is short for Advanced Encryption Standard and is a United States encryption
standard defined in Federal Information Processing Standard (FIPS) 197, published in
November 2001. It was ratified as a federal standard in May 2002. AES is the most recent of
the four current algorithms approved for federal use in the United States. One should not
compare AES with RSA, another standard algorithm, as RSA is a different category of
algorithm. Bulk encryption of information itself is seldom performed with RSA; RSA is used
to transfer other encryption keys for use by AES, for example, and for digital signatures.
AES is a symmetric encryption algorithm processing data in blocks of 128 bits. A bit
can take the values zero and one, in effect a binary digit with two possible values as opposed
to decimal digits, which can take one of 10 values. Under the influence of a key, a 128-bit
block is encrypted by transforming it in a unique way into a new block of the same size. AES
is symmetric since the same key is used for encryption and the reverse transformation,
decryption. The only secret that must be kept for security is the key. AES may be configured to
use different key lengths; the standard defines three, and the resulting algorithms are
named AES-128, AES-192 and AES-256 respectively, to indicate the length in bits of the key.
Each additional bit in the key effectively doubles the strength of the algorithm, when defined
as the time necessary for an attacker to stage a brute force attack, i.e. an exhaustive search of
all possible key combinations in order to find the right one.
Some Background on AES
In 1997 the US National Institute of Standards and Technology put out a call for
candidates for a replacement for the ageing Data Encryption Standard, DES. 15 candidates
were accepted for further consideration, and after a fully public process and three open
international conferences, the number of candidates was reduced to five. In February 2001,
the final candidate was announced and comments were solicited. 21 organizations and
individuals submitted comments. None had any reservations about the suggested
algorithm. AES is founded on solid
and well-published mathematical ground, and appears to resist all known attacks well.
There is a strong indication that no back-door or known weakness in fact exists: the
algorithm has been published for a long time, has been the subject of intense scrutiny by
researchers all over the world, and enormous amounts of economic value and information are
already successfully protected by AES. There are no unknown factors in its design, and it was
developed by Belgian researchers in Belgium, thereby voiding the conspiracy theories
sometimes voiced concerning an encryption standard developed by a United States
government agency.
A strong encryption algorithm need only meet a single main criterion:
There must be no way to find the unencrypted clear text if the key is unknown, except
brute force, i.e. trying all possible keys until the right one is found.
A secondary criterion must also be met:
The number of possible keys must be so large that it is computationally infeasible to
actually stage a successful brute force attack in a short enough time.
The older standard, DES or Data Encryption Standard, meets the first criterion, but no
longer the second: computer speeds have caught up with it, or soon will.
AES meets both criteria in all of its variants: AES-128, AES-192 and AES-256.
The AES algorithm is a round-based, symmetric block cipher. It processes data blocks
of fixed size (128 bits) using cipher keys of length 128, 192 or 256 bits. Depending on the
key used, it is usually abbreviated as AES-128, AES-192 or AES-256 respectively. In this
project only AES-128 is considered, as it is the most popular variant of the algorithm. The
functional blocks of the algorithm are key expansion and encryption. In this project we
concentrate on the key generation algorithm. The initial 128-bit cipher key has to be
expanded into eleven round keys of the same length. In order to produce a new round key, two
transformations have to be performed, RotWord and SubWord. The first simply
shifts the bytes of the first 32-bit word of the previous key cyclically by one position to the
left. SubWord, on the other hand, applies the SubBytes transformation to each byte of the
rotated word. Simple bitwise XORs are then needed in order to produce the final round key.
The SubWord (SubBytes) transformation is implemented with a ROM (LUT).
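A minimal C sketch of the AES-128 key expansion just described. To keep the listing self-contained, the S-box is computed here from its algebraic definition (multiplicative inverse plus affine transformation) rather than stored as the ROM/LUT the project uses:

```c
#include <stdint.h>

static uint8_t xtime(uint8_t a)              /* multiply by x in GF(2^8) */
{
    return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0));
}

static uint8_t gmul(uint8_t a, uint8_t b)    /* GF(2^8) multiplication */
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        a = xtime(a);
        b >>= 1;
    }
    return p;
}

static uint8_t sbox(uint8_t a)               /* SubBytes of one byte */
{
    uint8_t inv = 0, b, s;
    int i;
    if (a) for (b = 1; ; b++) { if (gmul(a, b) == 1) { inv = b; break; } }
    s = inv;
    for (i = 1; i <= 4; i++)                 /* affine transformation */
        s ^= (uint8_t)((inv << i) | (inv >> (8 - i)));
    return s ^ 0x63;
}

static uint32_t sub_word(uint32_t w)         /* SubBytes on each byte */
{
    return ((uint32_t)sbox((uint8_t)(w >> 24)) << 24)
         | ((uint32_t)sbox((uint8_t)(w >> 16)) << 16)
         | ((uint32_t)sbox((uint8_t)(w >> 8)) << 8)
         |  (uint32_t)sbox((uint8_t)w);
}

static uint32_t rot_word(uint32_t w)         /* cyclic left shift by 1 byte */
{
    return (w << 8) | (w >> 24);
}

/* Expand a 16-byte cipher key into 44 words = 11 round keys. */
void key_expansion(const uint8_t key[16], uint32_t w[44])
{
    uint8_t rcon = 0x01;
    int i;
    for (i = 0; i < 4; i++)                  /* RoundKey0 is the cipher key */
        w[i] = ((uint32_t)key[4*i] << 24) | ((uint32_t)key[4*i+1] << 16)
             | ((uint32_t)key[4*i+2] << 8) | key[4*i+3];
    for (i = 4; i < 44; i++) {
        uint32_t temp = w[i - 1];
        if (i % 4 == 0) {                    /* g: RotWord, SubWord, Rcon */
            temp = sub_word(rot_word(temp)) ^ ((uint32_t)rcon << 24);
            rcon = xtime(rcon);
        }
        w[i] = w[i - 4] ^ temp;              /* simple bitwise XOR */
    }
}
```

For the all-zero cipher key, the first expanded word is the well-known value 0x62636363, since SubWord(RotWord(0)) is four copies of S(0) = 0x63 XORed with Rcon(1) = 0x01.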
Figure 4.1 AES Algorithm Structure
Fig 4.2 Architecture of AES key expander
4.2 Cipher
A cipher (pronounced SAI-fuhr) is any method of encrypting text (concealing its
readability and meaning). It is also sometimes used to refer to the encrypted text message itself,
although here the term ciphertext is preferred. Its origin is the Arabic sifr, meaning empty or zero. In
addition to the cryptographic meaning, cipher also means someone insignificant, and a combination of
symbolic letters, as in an entwined weaving of letters for a monogram.
Some ciphers work by simply realigning the alphabet (for example, A is represented by F, B
is represented by G, and so forth) or otherwise manipulating the text in some consistent pattern.
However, almost all serious ciphers use both a key (a variable that is combined in some way with the
unencrypted text) and an algorithm (a formula for combining the key with the text). A block cipher is
one that breaks a message up into chunks and combines a key with each chunk (for example, 64 bits
of text). A stream cipher is one that applies a key to each bit, one at a time. Most modern ciphers are
block ciphers.
4.3 Description of the cipher
AES is based on a design principle known as a substitution-permutation network. It is
fast in both software and hardware. Unlike its predecessor, DES, AES does not use a Feistel
network.
AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas
Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum
of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical
maximum.
AES operates on a 4×4 column-major order matrix of bytes, termed the state (versions
of Rijndael with a larger block size have additional columns in the state). Most AES
calculations are done in a special finite field.
The AES cipher is specified as a number of repetitions of transformation rounds that
convert the input plaintext into the final output of ciphertext. Each round consists of several
processing steps, including one that depends on the encryption key. A set of reverse rounds
are applied to transform ciphertext back into the original plaintext using the same encryption
key.
4.3.1 Block cipher
A block cipher is a method of encrypting text (to produce ciphertext) in which a
cryptographic key and algorithm are applied to a block of data (for example, 64 contiguous
bits) at once as a group rather than to one bit at a time. The main alternative method, used
much less frequently, is called the stream cipher.
So that identical blocks of text do not get encrypted the same way in a message
(which might make it easier to decipher the ciphertext), it is common to apply the ciphertext
from the previous encrypted block to the next block in a sequence. So that identical messages
encrypted on the same day do not produce identical ciphertext, an initialization vector derived
from a random number generator is combined with the text in the first block and the key. This
ensures that all subsequent blocks result in ciphertext that doesn't match that of the first
encrypting.
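The chaining just described, commonly known as cipher block chaining (CBC), can be sketched as below. `toy_block_encrypt` is a placeholder for a real block cipher such as AES; the structure, not the cipher, is the point:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy block cipher for illustration only: NOT AES or DES. */
static uint64_t toy_block_encrypt(uint64_t block, uint64_t key)
{
    return block ^ key;
}

/* CBC: each plaintext block is XORed with the previous ciphertext block
   (the initialization vector for the first block) before encryption, so
   identical plaintext blocks no longer produce identical ciphertext. */
void cbc_encrypt(uint64_t *blocks, size_t n, uint64_t key, uint64_t iv)
{
    uint64_t prev = iv;
    size_t i;
    for (i = 0; i < n; i++) {
        blocks[i] = toy_block_encrypt(blocks[i] ^ prev, key);
        prev = blocks[i];
    }
}
```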
4.3.2 Stream Cipher
Stream ciphers can encrypt plaintext messages of variable length. The one-time pad
can be thought of as an example: each message uses a portion of the key with length equal
to the length of the plaintext message. (That portion of the key is never re-used.)
The ideas that resulted in modern stream ciphers originated with an AT&T Bell
Labs engineer, Gilbert Vernam (1890 – 1960). In 1917, Vernam developed a scheme to
encrypt teletype transmissions. Unlike Morse code, which uses symbols of different lengths
to substitute for letters of the alphabet, teletype transmission used what we would today call a
5-bit code for the letters of the alphabet and certain keyboard commands. Its use was similar
to the way that the 8-bit ASCII code is used today in computing.
4.3.3 Cipher text
Ciphertext is encrypted text. Plaintext is what you have before encryption, and
ciphertext is the encrypted result. The term cipher is sometimes used as a synonym for
ciphertext, but it more properly means the method of encryption rather than the result.
4.4 Key size
In cryptography, key size or key length is the size measured in bits of the key used in
a cryptographic algorithm (such as a cipher). An algorithm's key length is distinct from its
cryptographic security, which is a logarithmic measure of the fastest known computational
attack on the algorithm, also measured in bits. The security of an algorithm cannot exceed its
key length (since any algorithm can be cracked by brute force), but it can be smaller. For
example, Triple DES has a key size of 168 bits but provides at most 112 bits of security,
since an attack of complexity 2^112 is known. This property of Triple DES is not a weakness
provided 112 bits of security is sufficient for an application. Most symmetric-key algorithms
in common use are designed to have security equal to their key length. No asymmetric-key
algorithms with this property are known; elliptic curve cryptography comes the closest with
an effective security of roughly half its key length.
Significance
Keys are used to control the operation of a cipher so that only the correct key can
convert encrypted text (ciphertext) to plaintext. Many ciphers are based on publicly known
algorithms or are open source, and so it is only the difficulty of obtaining the key that
determines security of the system, provided that there is no analytic attack (i.e., a 'structural
weakness' in the algorithms or protocols used), and assuming that the key is not otherwise
available (such as via theft, extortion, or compromise of computer systems). The widely
accepted notion that the security of the system should depend on the key alone has been
explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the
1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim
respectively.
A key should therefore be large enough that a brute force attack (possible against any
encryption algorithm) is infeasible – i.e., would take too long to execute. Shannon's work on
information theory showed that to achieve so called perfect secrecy, it is necessary for the
key length to be at least as large as the message to be transmitted and only used once (this
algorithm is called the One-time pad). In light of this, and the practical difficulty of managing
such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as
a requirement for encryption, and instead focuses on computational security, under which the
computational requirements of breaking an encrypted text must be infeasible for an attacker.
The preferred numbers commonly used as key sizes (in bits) are powers of two,
potentially multiplied with a small odd integer.
4.4.1 Symmetric Algorithm Key Lengths
US Government export policy has long restricted the 'strength' of cryptography which
can be sent out of the country. For many years the limit was 40 bits. Today, a key length of
40 bits offers little protection against even a casual attacker with a single PC, a predictable
and inevitable consequence of governmental restrictions limiting key length. In response, by
the year 2000, most of the major US restrictions on the use of strong encryption were relaxed.
However, not all regulations have been removed, and encryption registration with the U.S.
Bureau of Industry and Security is still required to export "mass market encryption
commodities, software and components with encryption exceeding 64 bits" (75 F.R. 36494).
When the Data Encryption Standard cipher was released in 1977, a key length of 56
bits was thought to be sufficient. There was speculation at the time, however, that the NSA
had deliberately reduced the key size from the original value of 112 bits (in IBM's Lucifer
cipher) or 64 bits (in one of the versions of what was adopted as DES) so as to limit the
strength of encryption available to non-US users. The NSA has major computing resources
and a large budget; some thought that 56 bits was NSA-breakable in the late '70s. However,
by the late 90s, it became clear that DES could be cracked in a few days' time-frame with
custom-built hardware such as could be purchased by a large corporation.[5] The book
Cracking DES (O'Reilly and Associates) tells of the successful attempt to break 56-bit DES
by a brute force attack mounted by a cyber civil rights group with limited resources; see EFF
DES cracker. 56 bits is now considered insufficient length for symmetric algorithm keys, and
may have been for some time. More technically and financially capable organizations were
surely able to do the same long before the effort described in the book. Distributed.net and its
volunteers broke a 64-bit RC5 key in several years, using about seventy thousand (mostly
home) computers.
The Advanced Encryption Standard published in 2001 uses a key size of (at
minimum) 128 bits. It also can use keys up to 256 bits (a specification requirement for
submissions to the AES contest). 128 bits is currently thought, by many observers, to be
sufficient for the foreseeable future for symmetric algorithms of AES's quality. The U.S.
Government requires 192 or 256-bit AES keys for highly sensitive data.
4.4.2 Asymmetric Algorithm Key Lengths
The effectiveness of public key cryptosystems depends on the intractability
(computational and theoretical) of certain mathematical problems such as integer
factorization. These problems are time consuming to solve, but usually faster than trying all
possible keys by brute force. Thus, asymmetric algorithm keys must be longer for equivalent
resistance to attack than symmetric algorithm keys. As of 2002, a key length of 1024 bits was
generally considered the minimum necessary for the RSA encryption algorithm.
As of 2003, RSA Security claims that 1024-bit RSA keys are equivalent in strength to
80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys
to 128-bit symmetric keys. RSA claims that 1024-bit keys are likely to become crackable
sometime between 2006 and 2010 and that 2048-bit keys are sufficient until 2030. An RSA
key length of 3072 bits should be used if security is required beyond 2030. NIST key
management guidelines further suggest that 15360-bit RSA keys are equivalent in strength to
256-bit symmetric keys.
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA
for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete
logarithm problem, which is related to the integer factorization problem on which RSA's
strength is based. Thus, a 3072-bit Diffie-Hellman key has about the same strength as a 3072-
bit RSA key.
One of the asymmetric algorithm types, elliptic curve cryptography, or ECC, appears
to be secure with shorter keys than those needed by other asymmetric key algorithms. NIST
guidelines state that ECC keys should be twice the length of equivalent strength symmetric
key algorithms. So, for example, a 224-bit ECC key would have roughly the same strength as
a 112-bit symmetric key. These estimates assume no major breakthroughs in solving the
underlying mathematical problems that ECC is based on. A message encrypted with an
elliptic key algorithm using a 109-bit long key has been broken by brute force.
The NSA specifies that "Elliptic Curve Public Key Cryptography using the 256-bit
prime modulus elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for
protecting classified information up to the SECRET level. Use of the 384-bit prime modulus
elliptic curve and SHA-384 are necessary for the protection of TOP SECRET information."
4.5 Effect Of Quantum Computing Attacks On Key Strength
The two best known quantum computing attacks are based on Shor's algorithm and
Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all
mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve
cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The
time needed to factor an RSA integer is the same order as the time needed to use that same
integer as modulus for a single RSA encryption. In other words, it takes no more time to
break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately
on a classical computer." The general consensus is that these public key algorithms are
insecure at any key size if sufficiently large quantum computers capable of running Shor's
algorithm become available. The implication of this attack is that all data encrypted using
current standards based security systems such as the ubiquitous SSL used to protect e-
commerce and Internet banking and SSH used to protect access to sensitive computing
systems is at risk. Encrypted data protected using public-key algorithms can be archived and
may be decrypted at a later date, once sufficiently large quantum computers become available.
4.6 Key Generation
Key generation is the process of generating keys for cryptography. A key is used to
encrypt and decrypt whatever data is being encrypted/decrypted.
Modern cryptographic systems include symmetric-key algorithms (such as DES and
AES) and public-key algorithms (such as RSA). Symmetric-key algorithms use a single
shared key; keeping data secret requires keeping this key secret. Public-key algorithms use a
public key and a private key. The public key is made available to anyone (often by means of
a digital certificate). A sender encrypts data with the public key; only the holder of the private
key can decrypt this data.
Since public-key algorithms tend to be much slower than symmetric-key algorithms,
modern systems such as TLS and SSH use a combination of the two: one party receives the
other's public key, and encrypts a small piece of data (either a symmetric key or some data
used to generate it). The remainder of the conversation uses a (typically faster) symmetric-
key algorithm for encryption.
4.6.1 Key Generation Steps
The Rotate Word Step
Figure 4.3 Rotate Word
4.6.2 The SubBytes Step
Figure 4.4 The SubBytes Step
In the SubBytes step, each byte in the state is replaced with its entry in a fixed 8-bit
lookup table, S; b_ij = S(a_ij).
In the SubBytes step, each byte in the matrix is updated using an 8-bit substitution box,
the Rijndael S-box. This operation provides the non-linearity in the cipher. The S-box used is
derived from the multiplicative inverse over GF(2^8), known to have good non-linearity
properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by
combining the inverse function with an invertible affine transformation. The S-box is also
chosen to avoid any fixed points (and so is a derangement), and also any opposite fixed
points.
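The construction just described, the multiplicative inverse followed by an invertible affine transformation, can be sketched in C. This builds the full 256-entry table from first principles (a real implementation would store the precomputed table), and makes it easy to check the no-fixed-point properties mentioned above:

```c
#include <stdint.h>

/* GF(2^8) multiplication, reducing by the Rijndael polynomial 0x11b. */
static uint8_t gmul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0));
        b >>= 1;
    }
    return p;
}

/* Build the Rijndael S-box: multiplicative inverse in GF(2^8) (with 0
   mapped to 0), then the affine transformation plus the constant 0x63. */
void build_sbox(uint8_t sbox[256])
{
    int a, i;
    for (a = 0; a < 256; a++) {
        uint8_t inv = 0, s;
        int b;
        if (a)
            for (b = 1; b < 256; b++)        /* brute-force the inverse */
                if (gmul((uint8_t)a, (uint8_t)b) == 1) { inv = (uint8_t)b; break; }
        s = inv;
        for (i = 1; i <= 4; i++)             /* affine: XOR of 5 rotations */
            s ^= (uint8_t)((inv << i) | (inv >> (8 - i)));
        sbox[a] = s ^ 0x63;
    }
}
```

The FIPS-197 worked example S(0x53) = 0xED falls out of this construction, and scanning the table confirms it has neither fixed points nor opposite fixed points.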
4.7 RCON
Rcon is what the Rijndael documentation calls the exponentiation of 2 to a user-specified
value. Note that this operation is not performed with regular integers, but in Rijndael's finite
field.
In polynomial form, 2 is x (i.e. 00000010), and we compute

rcon(i) = x^(i-1)

in the finite field GF(2)[x]/(x^8 + x^4 + x^3 + x + 1), or equivalently,

rcon(i) = x^(i-1) mod (x^8 + x^4 + x^3 + x + 1) in GF(2)[x].
Figure 4.5 RCON
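The Rcon sequence can be generated by repeated doubling in the finite field (the `xtime` operation); this is a sketch, assuming the standard Rijndael reduction polynomial 0x11b:

```c
#include <stdint.h>

/* rcon(i) = x^(i-1) in GF(2^8): start from 1 and double i-1 times,
   reducing by 0x1b whenever the high bit overflows. This produces the
   sequence 01, 02, 04, 08, 10, 20, 40, 80, 1b, 36 used in key expansion. */
uint8_t rcon(int i)
{
    uint8_t r = 0x01;
    while (--i > 0)
        r = (uint8_t)((r << 1) ^ ((r & 0x80) ? 0x1b : 0));
    return r;
}
```

Note the wrap-around at rcon(9): doubling 0x80 overflows GF(2^8), so the result is reduced to 0x1b rather than 0x100.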
4.8 S-BOX
In cryptography, an S-Box (Substitution-box) is a basic component of symmetric key
algorithms which performs substitution. In block ciphers, they are typically used to obscure
the relationship between the key and the ciphertext — Shannon's property of confusion. In
many cases, the S-Boxes are carefully chosen to resist cryptanalysis.
In general, an S-Box takes some number of input bits, m, and transforms them into
some number of output bits, n: an m×n S-Box can be implemented as a lookup table with 2^m
words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard
(DES), but in some ciphers the tables are generated dynamically from the key; e.g. the
Blowfish and the Twofish encryption algorithms. Bruce Schneier describes IDEA's modular
multiplication step as a key-dependent S-Box.
Figure 4.6 S-Box
4.9 Lookup Table
In computer science, a lookup table is a data structure, usually an array or associative
array, often used to replace a runtime computation with a simpler array indexing operation.
The savings in terms of processing time can be significant, since retrieving a value from
memory is often faster than undergoing an 'expensive' computation or input/output operation.[1]
The tables may be precalculated and stored in static program storage, or calculated (or "pre-
fetched") as part of a program's initialization phase (memoization). Lookup tables are also
used extensively to validate input values by matching against a list of valid (or invalid) items
in an array and, in some programming languages, may include pointer functions (or offsets to
labels) to process the matching input.
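A minimal illustration of the idea, using a hypothetical `popcount8` helper as the example: the bit count of every possible byte value is computed once at start-up, after which each query is a single array read instead of a loop:

```c
#include <stdint.h>

static uint8_t popcount_table[256];

/* Precompute the number of set bits for every byte value (memoization). */
void init_popcount_table(void)
{
    int v, b;
    for (v = 0; v < 256; v++) {
        uint8_t n = 0;
        for (b = 0; b < 8; b++)
            n += (uint8_t)((v >> b) & 1);
        popcount_table[v] = n;
    }
}

/* Afterwards, each lookup is one memory read: no per-call loop. */
uint8_t popcount8(uint8_t v)
{
    return popcount_table[v];
}
```

This is the same trade the AES SubBytes LUT makes: spend a small amount of static storage once to avoid recomputing a fixed function on every call.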
History
Part of a 20th century table of common logarithms in the reference book Abramowitz
and Stegun.
Before the advent of computers, lookup tables of values were used by people to speed
up hand calculations of complex functions, such as in trigonometry, logarithms, and
statistical density functions.
In ancient India, Aryabhata created one of the first sine tables, which he encoded in a
Sanskrit-letter-based number system. In 493 A.D., Victorius of Aquitaine wrote a 98-column
multiplication table which gave (in Roman numerals) the product of every number from 2 to
50 times the entries of each row, the rows being "a list of numbers starting with one thousand,
descending by hundreds to one hundred, then descending by tens to ten, then by ones to one,
and then the fractions down to 1/144". Modern school children are often taught to memorize
"times tables" to avoid calculations of the most commonly used numbers (up to 9 × 9 or 12 × 12).
Early in the history of computers, input/output operations were particularly slow,
even in comparison to processor speeds of the time. It made sense to reduce expensive read
operations with a form of manual caching: creating either static lookup tables (embedded in
the program) or dynamic pre-fetched arrays containing only the most commonly occurring data
items. Despite the introduction of system-wide caching that now automates this process,
application-level lookup tables can still improve performance for data items that rarely, if
ever, change.
4.10 Encryption
Encryption is the conversion of data into a form, called ciphertext, that cannot be
easily understood by unauthorized people. Decryption is the process of converting encrypted
data back into its original form so that it can be understood.
The use of encryption/decryption is as old as the art of communication. In wartime, a
cipher, often incorrectly called a code, can be employed to keep the enemy from obtaining
the contents of transmissions. (Technically, a code is a means of representing a signal without
the intent of keeping it secret; examples are Morse code and ASCII.) Simple ciphers include
the substitution of letters for numbers, the rotation of letters in the alphabet, and the
"scrambling" of voice signals by inverting the sideband frequencies. More complex ciphers
work according to sophisticated computer algorithms that rearrange the data bits in digital
signals.
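One of the "simple ciphers" mentioned above — rotating letters in the alphabet, a Caesar cipher — can be sketched in a few lines. This is purely illustrative and is unrelated to the AES algorithm used in this project:

```c
/* Rotate a single letter forward through the alphabet by 'shift' places,
   wrapping around; non-letters pass through unchanged.
   Decryption is the same operation with a shift of (26 - shift). */
char rotate_letter(char c, int shift)
{
    if (c >= 'A' && c <= 'Z')
        return (char)('A' + (c - 'A' + shift) % 26);
    if (c >= 'a' && c <= 'z')
        return (char)('a' + (c - 'a' + shift) % 26);
    return c;
}
```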
In order to easily recover the contents of an encrypted signal, the correct decryption
key is required. The key is the secret value that the decryption algorithm uses to undo the
work of the encryption algorithm.
Alternatively, a computer can be used in an attempt to break the cipher. The more complex
the encryption algorithm, the more difficult it becomes to eavesdrop on the communications
without access to the key.
Encryption/decryption is especially important in wireless communications. This is
because wireless circuits are easier to tap than their hard-wired counterparts. Nevertheless,
encryption/decryption is a good idea when carrying out any kind of sensitive transaction,
such as a credit-card purchase online, or the discussion of a company secret between different
departments in the organization. The stronger the cipher -- that is, the harder it is for
unauthorized people to break it -- the better, in general. However, as the strength of
encryption/decryption increases, so does the cost.
In recent years, a controversy has arisen over so-called strong encryption. This refers
to ciphers that are essentially unbreakable without the decryption keys. While most
companies and their customers view it as a means of keeping secrets and minimizing fraud,
some governments view strong encryption as a potential vehicle by which terrorists might
evade authorities. These governments, including that of the United States, want to set up a
key-escrow arrangement. This means everyone who uses a cipher would be required to
provide the government with a copy of the key. Decryption keys would be stored in a
supposedly secure place, used only by authorities, and used only if backed up by a court
order. Opponents of this scheme argue that criminals could hack into the key-escrow database
and illegally obtain, steal, or alter the keys. Supporters claim that while this is a possibility,
implementing the key escrow scheme would be better than doing nothing to prevent criminals
from freely using encryption/decryption.
i) The ShiftRows Step
Figure 4.7 The ShiftRows Step
In the ShiftRows step, bytes in each row of the state are shifted cyclically to the left.
The number of places each byte is shifted differs for each row.
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in
each row by a certain offset. For AES, the first row is left unchanged. Each byte of the
second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets
of two and three respectively. For blocks of sizes 128 bits and 192 bits, the shifting pattern is
the same. In this way, each column of the output state of the ShiftRows step is composed of
bytes from each column of the input state. (Rijndael variants with a larger block size have
slightly different offsets). For a 256-bit block, the first row is unchanged and the shifting for
the second, third and fourth row is 1 byte, 3 bytes and 4 bytes respectively—this change only
applies for the Rijndael cipher when used with a 256-bit block, as AES does not use 256-bit
blocks.
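The offsets described above can be sketched as follows, with the state held as a 4×4 array state[row][col]: row r is rotated left by r positions, and row 0 is untouched. This is a generic illustration, not the project's own implementation.

```c
/* AES-128 ShiftRows on a 4x4 state, state[row][col]:
   row 0 unchanged, row 1 rotated left by 1, row 2 by 2, row 3 by 3. */
void shift_rows(unsigned char state[4][4])
{
    int r, c;
    unsigned char tmp[4];
    for (r = 1; r < 4; r++) {            /* row 0 is left unchanged */
        for (c = 0; c < 4; c++)
            tmp[c] = state[r][(c + r) % 4];   /* rotate left by r */
        for (c = 0; c < 4; c++)
            state[r][c] = tmp[c];
    }
}
```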
ii) The MixColumns Step
Figure 4.8 MixColumns
In the MixColumns step, each column of the state is multiplied with a fixed
polynomial c(x).
In the MixColumns step, the four bytes of each column of the state are combined
using an invertible linear transformation. The MixColumns function takes four bytes as input
and outputs four bytes, where each input byte affects all four output bytes. Together with
ShiftRows, MixColumns provides diffusion in the cipher.
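The transformation of one column can be sketched with the standard xtime() helper, which multiplies by 2 in GF(2^8) (reduction polynomial 0x1B); each output byte mixes all four input bytes with the fixed coefficients {02, 03, 01, 01}. This is a textbook sketch, not the project's own code; the test vector used below is the well-known column {db, 13, 53, 45} → {8e, 4d, a1, bc}.

```c
/* Multiply by 2 in GF(2^8) with the AES reduction polynomial 0x11B. */
static unsigned char xtime(unsigned char b)
{
    return (unsigned char)((b << 1) ^ ((b & 0x80) ? 0x1B : 0x00));
}

/* MixColumns applied to a single 4-byte column of the state.
   3*a is computed as xtime(a) ^ a; addition in GF(2^8) is XOR. */
void mix_one_column(unsigned char col[4])
{
    unsigned char a0 = col[0], a1 = col[1], a2 = col[2], a3 = col[3];
    col[0] = xtime(a0) ^ (xtime(a1) ^ a1) ^ a2 ^ a3;
    col[1] = a0 ^ xtime(a1) ^ (xtime(a2) ^ a2) ^ a3;
    col[2] = a0 ^ a1 ^ xtime(a2) ^ (xtime(a3) ^ a3);
    col[3] = (xtime(a0) ^ a0) ^ a1 ^ a2 ^ xtime(a3);
}
```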
iii) The AddRoundKey Step
Figure 4.9 The AddRoundKey Step
In the AddRoundKey step, the subkey is combined with the state. For each round, a
subkey is derived from the main key using Rijndael's key schedule; each subkey is the same
size as the state. The subkey is added by combining each byte of the state with the
corresponding byte of the subkey using bitwise XOR.
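The step above amounts to a byte-wise XOR of the 16-byte state with the 16-byte round key. Because XOR is its own inverse, the identical operation is used during decryption; the sketch below is generic, not the project's own listing.

```c
/* AddRoundKey: XOR each byte of the state with the corresponding
   byte of the round key. Applying it twice restores the state. */
void add_round_key(unsigned char state[16], const unsigned char roundkey[16])
{
    int i;
    for (i = 0; i < 16; i++)
        state[i] ^= roundkey[i];
}
```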
Figure 4.10 Encryption
4.11 Read-Only Memory
Read-only memory (ROM) is a class of storage medium used in computers and other
electronic devices. Data stored in ROM cannot be modified, or can be modified only slowly
or with difficulty, so it is mainly used to distribute firmware (software that is very closely tied
to specific hardware, and unlikely to need frequent updates).
In its strictest sense, ROM refers only to mask ROM (the oldest type of solid-state
ROM), which is fabricated with the desired data permanently stored in it, and thus can never
be modified. Despite the simplicity, speed and economies of scale of mask ROM, field-
programmability often makes reprogrammable memories more flexible and inexpensive. As of
2007, actual ROM circuitry is therefore mainly used for applications such as microcode, and
similar structures, on various kinds of digital processors (i.e. not only CPUs).
Other types of non-volatile memory such as erasable programmable read only
memory (EPROM) and electrically erasable programmable read-only memory (EEPROM or
Flash ROM) are sometimes referred to, in an abbreviated way, as "read-only memory"
(ROM), but this is actually a misnomer because these types of memory can be erased and re-
programmed multiple times. When used in this less precise way, "ROM" indicates a non-volatile
memory which serves functions typically provided by mask ROM, such as storage of
program code and nonvolatile data.
4.12 Register
In digital electronics, especially computing, a register stores bits of information, in a
way that all the bits can be written to or read out simultaneously. The hardware registers
inside a central processing unit (CPU) are called processor registers. Signals from a state
machine to the register control when registers transmit to or accept information from other
registers. Sometimes the state machine routes information from one register through a
functional transform, such as an adder unit, and then to another register that stores the results.
Typical uses of hardware registers include configuration and start-up of certain
features (especially during initialization), buffer storage (e.g. video memory for graphics
cards), input/output (I/O) of different kinds, and status reporting, such as whether a certain
event has occurred in the hardware unit.
Reading a hardware register in "peripheral units" — computer hardware outside the
CPU — involves accessing its memory-mapped I/O address or port-mapped I/O address with a
"load" or "store" instruction issued by the processor. Hardware registers are addressed in
words, but sometimes only use a few bits of the word read into, or written out to, the register.
Strobe registers have the same interface as normal hardware registers, but instead of
storing data, they trigger an action each time they are written to (or, in rare cases, read from).
They are a means of signaling.
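In C, the memory-mapped access described above is usually written as a volatile pointer to a fixed address, e.g. #define STATUS_REG (*(volatile unsigned int *)0xE0028000). In the sketch below a plain variable stands in for the hardware word so that the code is runnable anywhere; the address above and the bit layout are purely hypothetical.

```c
/* A variable standing in for a memory-mapped status word; on real
   hardware this would be a fixed, device-specific address. */
volatile unsigned int fake_status_word = 0;
#define STATUS_REG fake_status_word

/* A "load" from the register's address reads the hardware status;
   often only a few bits of the word are meaningful. */
unsigned int event_occurred(void)
{
    return STATUS_REG & 0x1u;   /* test a single (hypothetical) status bit */
}
```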
Commercial design tools, such as Socrates Bitwise by Duolog Technologies, simplify
and automate memory-mapped register specification and code generation for hardware,
firmware, hardware verification, testing and documentation.
Using IP-XACT IEEE 1685, commercial design tools such as Socrates Bitwise by
Duolog Technologies and MRV Magillem Register View by MAGILLEM provide real
synchronization between the register description and the RTL hardware platform description,
so that collaborative work in the design flow can be addressed.
CHAPTER-5
INTRODUCTION TO KEIL SOFTWARE & EMBEDDED C
5.1 KEIL µVision IDE Overview
The µVision IDE from Keil combines project management, make facilities, source
code editing, program debugging, and complete simulation in one powerful environment. The
µVision development platform is easy-to-use and it helps you quickly create embedded
programs that work. The µVision editor and debugger are integrated in a single application
that provides a seamless embedded project development environment.
µVision is the Keil Integrated Development and Debugging Environment that helps you
quickly create and test embedded applications for ARM7, ARM9, Cortex-M3, C16x, ST10,
XC16x, C251, and C51 embedded micro controllers. It combines all aspects of embedded
project development including source code editing, project organization and management,
revision control, make facility, target debugging, simulation, and Flash programming.
µVision offers a significant advantage to new users and to developers who must get projects
working quickly.
A51 Macro Assembler
The A51 Assembler is a macro assembler for the 8051 family of microcontrollers. It
supports all 8051 derivatives. It translates symbolic assembly language mnemonics into
relocatable object code where the utmost speed, small code size, and hardware control are
critical. The macro facility speeds development and conserves maintenance time since
common sequences need only be developed once. The A51 assembler supports symbolic
access to all features of the 8051 architecture.
The A51 assembler translates assembler source files into relocatable object modules.
The DEBUG directive adds full symbolic information to the object module and supports
debugging with the µVision Debugger or an in-circuit emulator.
In addition to object files, the A51 assembler generates list files which may optionally
include symbol table and cross reference information.
C51 C Compiler
Compiler Details
Compiler Directives
Code Optimizer
Memory Models
Memory Types
Pointers
Interrupt Functions
Library Reference
The Keil C51 C Compiler for the 8051 microcontroller is the most popular 8051 C
compiler in the world. It provides more features than any other 8051 C compiler available
today.
The C51 Compiler allows you to write 8051 microcontroller applications in C that, once
compiled, have the efficiency and speed of assembly language. Language extensions in the
C51 Compiler give you full access to all resources of the 8051.
The C51 Compiler translates C source files into relocatable object modules which contain
full symbolic information for debugging with the µVision Debugger or an in-circuit emulator.
In addition to the object file, the compiler generates a listing file which may optionally
include symbol table and cross reference information.
Features
Nine basic data types, including 32-bit IEEE floating-point,
Flexible variable allocation with bit, data, bdata, idata, xdata, and pdata memory
types,
Interrupt functions may be written in C,
Full use of the 8051 register banks, Complete symbol and type information for
source-level debugging,
Built-in interface for the RTX51 Real-Time Kernel,
Support for dual data pointers on Atmel, AMD, Cypress, Dallas Semiconductor,
Infineon, Philips, and Triscend microcontrollers,
Support for the Philips 8xC750, 8xC751, and 8xC752 limited instruction sets,
Support for the Infineon 80C517 arithmetic unit.
BL51 Code Banking Linker/Locator
The BL51 code banking linker/locator combines OMF51 object modules and creates
executable 8051 programs. The linker resolves external and public references and assigns
absolute or fixed addresses to relocatable program segments.
The BL51 Linker processes object files created by the Keil C51 Compiler and A51
Assembler and the Intel PL/M-51 Compiler and ASM-51 Assembler. These object modules
must adhere to the OMF51 object module specification. BL51 outputs an absolute OMF51
object module that may be loaded into practically any emulator, the Keil µVision Debugger,
or the OH51 Object-HEX converter (to create an Intel HEX file).
OH51 Object-HEX Converter
The OH51 Object-HEX converter creates Intel HEX files from absolute OMF51 object
modules. Absolute object files may be created by the following:
The BL51 code banking linker.
The A51 assembler.
The OC51 banked object converter.
Intel HEX files are ASCII files that contain a hexadecimal representation of your program.
They may be easily loaded into a device programmer for writing EPROMs or other memory
devices.
Several utilities are available that may help you with your HEX files:
HEX2BIN converts an Intel HEX file into a flat BINARY file.
BIN2HEX converts a flat BINARY file into an Intel HEX file.
The following documents provide additional information about the different output file
formats.
Description of the Intel OMF51 Object Module Format.
Description of the Intel HEX File Format.
OC51 Banked Object Converter
The OC51 banked object file converter creates an absolute object module for each
code bank in a banked object module. You do not need this utility unless you have created a
code banking program using the BL51 code banking linker.
When you create a code banking application, all symbolic and source-level information is
maintained in the banked object module and is transferred by OC51 to each individual
absolute object module for each code bank.
Once you have used OC51 to create the absolute object modules, you may use OH51 to
create an Intel HEX file for each code bank.
Why You Need A Simulator
We agree that you can probably create, test, and debug your embedded applications
without a simulator. However, there are several reasons why a simulator (like the µVision
Debugger) can make your engineering tasks easier and save you lots of development time.
Customers with the simulator spend less time debugging simple program errors. The
simulator lets them learn about things like on-chip peripherals and addressing modes without
designing real hardware.
It is our experience that customers who have a simulator require LESS technical
support and are able to get up-to-speed with the tools faster. The simulator makes it
easy to write and test code and learn about programming your microcontroller.
The µVision Debugger provides complete simulation support for on-chip peripherals
like PWM, Power saving modes, A/D, Serial I/O, and so on.
It is easier for our support engineers to explain complex problems if you have a
simulator.
It is easier to discover whether a problem is in the hardware or the software when you use a
simulator. For example, if the application works in the simulator and in the emulator but not
on the target board, there is most likely a problem with the target hardware.
The simulator requires no setup time. An emulator may require configuration and a
target board before you can debug.
The simulator is not a replacement for an emulator. A simulator is a different tool entirely.
While an emulator allows you to debug software running on your target hardware, a
simulator allows you to debug your software as well as your understanding of the
microcontroller and the programming language. There are no real-time debugging effects of
a simulator.
For debugging embedded applications, we have a general list of favorite tools that we use in-
house.
Logic Probe
Digital Multi-Meter
High-speed Analog Oscilloscope
High-speed Digital Storage Oscilloscope
Logic Analyzer (with a disassembly pod)
Emulator
Software Simulator
5.2 Embedded C Features:
Memory Areas
Memory Types
Memory Models
Memory Type Specifiers
Variable Data Type Specifiers
Bit Variables and Bit-addressable Data
Special Function Registers
5.3 Creating a new project:
To create a new project, simply start MicroVision and select “Project”=>”New
Project” from the pull–down menus. In the file dialog that appears, choose a name and
directory for the project. It is recommended that a new directory be created for each
project, as several files will be generated. Once the project has been named, the dialog
shown in the figure below will appear, prompting the user to select a target device. In this
lab, the chip being used is the “AT89C52,” which is listed under the heading “Atmel”.
Fig 5.1: Window for choosing target device.
Next, Micro Vision must be instructed to generate a HEX file upon program
compilation. A HEX file is a standard file format for storing executable code that is to be
loaded onto the microcontroller. In the “Project Workspace” pane at the left, right–click on
“Target 1” and select “Options for ‘Target 1’ ”.Under the “Output” tab of the resulting
options dialog, ensure that both the “Create Executable” and “Create HEX File” options are
checked. Then click “OK”.
Fig 5.2: Project Options Dialog
Next, a file must be added to the project that will contain the project code. To do this,
expand the “Target 1” heading, right–click on the “Source Group 1” folder, and select “Add
files…” Create a new blank file (the file name should end in “.asm”), select it, and click
“Add.” The new file should now appear in the “Project Workspace” pane under the “Source
Group 1” folder. Double-click on the newly created file to open it in the editor. All code for
this lab will go in this file. To compile the program, first save all source files by clicking on
the “Save All” button, and then click on the “Rebuild All Target Files” to compile the
program as shown in the figure below. If any errors or warnings occur during compilation,
they will be displayed in the output window at the bottom of the screen. All errors and
warnings will reference the line and column number in which they occur along with a
description of the problem so that they can be easily located. Note that only errors indicate
that the compilation failed; warnings do not (though it is generally a good idea to look into
them anyway).
Fig 5.3: Project Workspace Pane
Fig 5.4: “Save All” and “Build All Target Files” Buttons
When the program has been successfully compiled, it can be simulated using the
integrated debugger in Keil MicroVision. To start the debugger, select “Debug”=>”Start/Stop
Debug Session” from the pull–down menus.
At the left side of the debugger window, a table is displayed containing several key
parameters about the simulated microcontroller, most notably the elapsed time (circled in the
figure below). Just above that, there are several buttons that control code execution. The
“Run” button will cause the program to run continuously until a breakpoint is reached,
whereas the “Step Into” button will execute the next line of code and then pause (the current
position in the program is indicated by a yellow arrow to the left of the code).
Fig 5.5: μVision3 Debugger window
5.4 PROGRAMMER:
The programmer used is a powerful programmer for the Atmel 89 series of
microcontrollers that includes 89C51/52/55, 89S51/52/55 and many more.
It is a simple-to-use, low-cost, yet powerful flash microcontroller programmer for the
Atmel 89 series. It will Program, Read and Verify Code Data, Write Lock Bits, Erase and
Blank Check. All fuse and lock bits are programmable. This programmer has intelligent
onboard firmware and connects to the serial port. It can be used with any type of computer
and requires no special hardware. All that is needed is a serial communication port, which all
computers have.
All devices also have a number of lock bits to provide various levels of software and
programming protection. These lock bits are fully programmable using this programmer.
Lock bits are useful to protect the program from being read back from the microcontroller,
allowing only an erase before the microcontroller can be reprogrammed.
FLOWCHART
The flowchart given below represents the working of the system.
5.5 Source Code:
/************ System Header Files **********/
#include <E:\ARM2010\MyFiles\lpc2148.h>
#include <E:\ARM2010\MyFiles\lcd0801.h>
#include <E:\ARM2010\MyFiles\lcdSys.h>
#include <E:\ARM2010\MyFiles\Myfunctions.h>
/************ Functions Prototype **********/
/************ UART0 Functions Prototype ****/
void initialize_uart(void);
void initialize_clock(void);
void feed(void);
/************ AES Functions Prototype ******/
unsigned long int word(unsigned char, unsigned char, unsigned char, unsigned char);
unsigned char byte(unsigned long int, unsigned char);
unsigned long int rotword(unsigned long int);
unsigned long int rotword1(unsigned long int);
unsigned long int rotword2(unsigned long int);
unsigned long int inv_rotword1(unsigned long int);
unsigned long int inv_rotword2(unsigned long int);
unsigned long int inv_rotword3(unsigned long int);
unsigned long int subbytes(unsigned char, unsigned char, unsigned char, unsigned char);
unsigned long int inv_subbytes(unsigned char, unsigned char, unsigned char, unsigned char);
unsigned long int RCON(unsigned long int, unsigned long int);
unsigned long int xor_wi(unsigned long int, unsigned long int);
/* Macro Definitions */
#define TEMT          (1<<6)
#define RDR           0x01
#define LINE_FEED     0x0A
#define CARRIAGE_RET  0x0D
#define PLOCK         0x400   // clock initialisation related
/************ Globel Variables *************/
int main(void){
/************ Local variables **************/
//unsigned long int s,v,y,z,u,p,r,x,t,a,b,c,d;
//unsigned char D[16];
//unsigned char rcon[10]={0x01,0x02,0x04,0x08,0x10,0x20,0x40,0x80,0x1b,0x36};
//unsigned char key[11][16]={0x2b,0x7e,0x15,0x16,0x28,0xae,0xd2,0xa6,0xab,0xf7,0x15,0x88,0x09,0xcf,0x4f,0x3c};
//unsigned char text[16]={0x32,0x43,0xf6,0xa8,0x88,0x5a,0x30,0x8d,0x31,0x31,0x98,0xa2,0xe0,0x37,0x07,0x34};
/*************** UART0 variable Declarations ***************/
unsigned char text[64];
unsigned char Data[16];
unsigned char Enc[16];
unsigned char Edata[64];
unsigned char Flag = 0;
unsigned char C;
unsigned int J;
//unsigned char A[4][4]={0x02,0x03,0x01,0x01,0x01,0x02,0x03,0x01,0x01,0x01,0x02,0x03,0x03,0x01,0x01,0x02};
//unsigned char B[4][4]={0x0e,0x0b,0x0d,0x09,0x09,0x0e,0x0b,0x0d,0x0d,0x09,0x0e,0x0b,0x0b,0x0d,0x09,0x0e};
/************** Decryption variable Declarations************/
//unsigned char loop,i,k,j,q[4],e[4],f[4],g[4],m[15],r_key[16],h[4][4],mix_col[4][4],l,n,o,inmix_col[4][4];
unsigned char i;
unsigned char Key[16]={0x11,0x15,0x12,0x14,0x61,0x21,0x66,0x88,0xAA,0xBB,0x22,0x55,0x77,0x88,0x99,0xBB};
unsigned int Index, Index1;
/*************** ARM Initialisation ************************/
PINSEL0 = 0x00000000;
PINSEL1 = 0x00000000;
IO0DIR  = 0xFFFFFFFF;
IO1DIR  = 0xFFFF0000;
/*********** Device initialisation ********/
lcd_init();           // LCD initialisation
initialize_clock();   // System clock initialisation
initialize_uart();    // UART0 device initialisation
Flag = 0;
/************ System Application code ******/
lcd_clear;
lcd_print(" AES Algorithm ", L1);
lcd_print("on ARM LPC-2148", L2);
wait_sec;
wait_sec;
lcd_clear;
lcd_print(" Enter U'r ", L1);
lcd_print("* Message #", L2);
wait_sec;
wait_sec;
while (1) {
while (U0LSR & RDR)   // Checking for UART0 RX data
{ //3
    C = U0RBR;
    if (Flag == 0)
    { //2
        switch (C)
        { //1
            case '*' : lcd_clear;
                       lcd_print(" RECEIVING ", L1);
                       i = 0; Flag = 0;
                       wait_sec;
                       lcd_clear;
                       lcd_putchar(0x80, 0);
                       break;
            case '#' : i = 0;
                       lcd_clear;
                       Flag = 1;
                       break;
            default  : text[i] = C;                 // Storing message
                       if (i == 16) lcd_putchar(0xC0, 0);
                       if (i == 32) { lcd_clear; lcd_putchar(0x80, 0); }
                       if (i == 48) lcd_putchar(0xC0, 0);
                       lcd_putchar(text[i], 1);
                       i++;
                       Flag = 0;
                       for (J = 0; J < 1000000; J++);   // crude delay
                       break;
        } //1
    } //2
} //3
/******************************/
if (Flag == 1)
{ //3
    for (Index1 = 0; Index1 <= 3; Index1++)
    { //2
        switch (Index1)
        { //1
            case 0x00 : lcd_clear;
                        lcd_print(" I BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(text[i], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x01 : lcd_clear;
                        lcd_print(" II BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(text[i+16], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x02 : lcd_clear;
                        lcd_print(" III BLOCK (16)", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(text[i+32], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x03 : lcd_clear;
                        lcd_print(" IV BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(text[i+48], 1);
                        wait_sec; wait_sec;
                        break;
            default  : break;
        } //1
    } //2
    for (Index = 0; Index <= 3; Index++)   // Encryption loop for four blocks
    { //2
        switch (Index)
        { //1
            case 0x00 : for (i = 0; i <= 15; i++) Data[i] = text[i];     break;
            case 0x01 : for (i = 0; i <= 15; i++) Data[i] = text[i+16];  break;
            case 0x02 : for (i = 0; i <= 15; i++) Data[i] = text[i+32];  break;
            case 0x03 : for (i = 0; i <= 15; i++) Data[i] = text[i+48];  break;
            default  : break;
        } //1

        lcd_clear;
        wait_sec;

        /***************************************************************/
        for (i = 0; i <= 15; i++) Enc[i] = Key[i] ^ Data[i];
        /***************************************************************/

        switch (Index)
        { //1
            case 0x00 : for (i = 0; i <= 15; i++) Edata[i]    = Enc[i]; break;
            case 0x01 : for (i = 0; i <= 15; i++) Edata[i+16] = Enc[i]; break;
            case 0x02 : for (i = 0; i <= 15; i++) Edata[i+32] = Enc[i]; break;
            case 0x03 : for (i = 0; i <= 15; i++) Edata[i+48] = Enc[i]; break;
        } //1
    } //2
    lcd_clear;
    lcd_print(" Encryption ", L1);
    lcd_print(" Over ", L2);
    wait_sec;

    for (Index1 = 0; Index1 <= 3; Index1++)
    { //2
        switch (Index1)
        { //1
            case 0x00 : lcd_clear;
                        lcd_print(" I BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x01 : lcd_clear;
                        lcd_print(" II BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+16], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x02 : lcd_clear;
                        lcd_print(" III BLOCK (16)", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+32], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x03 : lcd_clear;
                        lcd_print(" IV BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+48], 1);
                        wait_sec; wait_sec;
                        break;
            default  : break;
        } //1
    } //2
    //*************************** Decryption Started ****************
    for (Index = 0; Index <= 3; Index++)   // Decryption loop for four blocks
    { //2
        switch (Index)
        { //1
            case 0x00 : for (i = 0; i <= 15; i++) Data[i] = Edata[i];     break;
            case 0x01 : for (i = 0; i <= 15; i++) Data[i] = Edata[i+16];  break;
            case 0x02 : for (i = 0; i <= 15; i++) Data[i] = Edata[i+32];  break;
            case 0x03 : for (i = 0; i <= 15; i++) Data[i] = Edata[i+48];  break;
            default  : break;
        } //1

        /***************************************************************/
        for (i = 0; i <= 15; i++) Data[i] = Key[i] ^ Data[i];
        /***************************************************************/

        switch (Index)
        { //1
            case 0x00 : for (i = 0; i <= 15; i++) Edata[i]    = Data[i]; break;
            case 0x01 : for (i = 0; i <= 15; i++) Edata[i+16] = Data[i]; break;
            case 0x02 : for (i = 0; i <= 15; i++) Edata[i+32] = Data[i]; break;
            case 0x03 : for (i = 0; i <= 15; i++) Edata[i+48] = Data[i]; break;
        } //1
    } //2
    lcd_clear;
    lcd_print(" Decryption ", L1);
    lcd_print(" Over ", L2);
    wait_sec;
    Flag = 0;

    for (Index1 = 0; Index1 <= 3; Index1++)
    { //2
        switch (Index1)
        { //1
            case 0x00 : lcd_clear;
                        lcd_print(" I BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x01 : lcd_clear;
                        lcd_print(" II BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+16], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x02 : lcd_clear;
                        lcd_print(" III BLOCK (16)", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+32], 1);
                        wait_sec; wait_sec;
                        break;
            case 0x03 : lcd_clear;
                        lcd_print(" IV BLOCK (16) ", L1);
                        lcd_putchar(0xC0, 0);
                        for (i = 0; i <= 15; i++) lcd_putchar(Edata[i+48], 1);
                        wait_sec; wait_sec;
                        break;
            default  : break;
        } //1
    } //2
    lcd_clear;
    lcd_print(" Enter U'r ", L1);
    lcd_print("* Next Message #", L2);
    wait_sec;
    wait_sec;
    Flag = 0;
} //3
} //4
} //5
/****************** UART Functions ***************/
/*************** System Initialization ***************/
void initialize_uart()
{
    /* Initialize Pin Select Block for Tx and Rx */
    PINSEL0 = 0x5;
    /* Enable FIFOs and reset them */
    U0FCR = 0x07;
    /* Set DLAB and word length set to 8 bits */
    U0LCR = 0x83;
    /* Baud rate set to 9600 */
    U0DLL = 0xC2;
    U0DLM = 0x0;
    /* Clear DLAB */
    U0LCR = 0x3;
    U0TER = 0x80;   // U0TER[7] --> 1 Tx enable, 0 --> disable
}
void initialize_clock()
{
    /* Initialize PLL (configured for a 12 MHz crystal) to boost processor clock to 60 MHz */
    /* Setting multiplier and divider values */
    PLLCFG = 0x24;
    feed();
    /* Enabling the PLL */
    PLLCON = 0x1;
    feed();
    /* Wait for the PLL to lock to the set frequency */
    while (!(PLLSTAT & PLOCK)) {}
    /* Connect the PLL as the clock source */
    PLLCON = 0x3;
    feed();
    /* Setting peripheral clock (pclk) to half the system clock (cclk/2) */
    VPBDIV = 0x02;
}

/**********************************************************
 Feed sequence for PLL
 **********************************************************/
void feed()
{
    PLLFEED = 0xAA;
    PLLFEED = 0x55;
}
CHAPTER-7
PROJECT DIAGRAMS
7.1 Block Diagram of AES Encryption Algorithm for text data Networks
based on ARM7 processor
7.2 SCHEMATIC DIAGRAM
[Schematic blocks: PC, MAX-232 RS-232 interface, ARM7 MCU, RPS, Enter KEY, LCD 16×2]
CHAPTER-8
HARDWARE DESCRIPTION
Hardware Modules:
MAX232
ARM7 processor
RF Tx & Rx pairs
Voltage regulator, RPS
LCD
8.1 REGULATED POWER SUPPLY
A variable regulated power supply, also called a variable bench power supply, is
one where you can continuously adjust the output voltage to your requirements. Varying the
output of the power supply is the recommended way to test a project after having double
checked parts placement against circuit drawings and the parts placement guide. This type of
regulation is ideal for having a simple variable bench power supply. Actually this is quite
important because one of the first projects a hobbyist should undertake is the construction of
a variable regulated power supply. While a dedicated supply is quite handy e.g. 5V or 12V,
it's much handier to have a variable supply on hand, especially for testing. Most digital logic
circuits and processors need a 5 volt power supply. To use these parts we need to build a
regulated 5 volt source. Usually you start with an unregulated power supply ranging from 9
volts to 24 volts DC (A 12 volt power supply is included with the Beginner Kit and the
Microcontroller Beginner Kit.).
To make a 5 volt power supply, we use an LM7805 voltage regulator IC.
The LM7805 is simple to use. You simply connect the positive lead of your
unregulated DC power supply (anything from 9VDC to 24VDC) to the Input pin, connect the
negative lead to the Common pin and then when you turn on the power, you get a 5 volt
supply from the Output pin.
A. CIRCUIT FEATURES
Brief description of operation: Gives out well regulated +5V output, output current
capability of 100 mA
Circuit protection: Built-in overheating protection shuts down output when regulator
IC gets too hot
Circuit complexity: Very simple and easy to build
Circuit performance: Very stable +5V output voltage, reliable operation
Availability of components: Easy to get, uses only very common basic components
Design testing: Based on datasheet example circuit, I have used this circuit
successfully as part of many electronics projects
Applications: Part of electronics devices, small laboratory power supply
Power supply voltage: Unregulated DC 8-18V power supply
Power supply current: Needed output current + 5 mA
Component costs: Few dollars for the electronics components + the input transformer
cost
B. BLOCK DIAGRAM
Fig 8.1 Power Supply Block Diagram
C. CIRCUIT DIAGRAM OF POWER SUPPLY
BASIC POWER SUPPLY CIRCUIT
Above is the circuit of a basic unregulated dc power supply. A bridge
rectifier D1 to D4 rectifies the ac from the transformer secondary, which may also be a block
rectifier such as WO4 or even four individual diodes such as 1N4004 types. (See later re
rectifier ratings).
The principal advantage of a bridge rectifier is you do not need a centre
tap on the secondary of the transformer. A further but significant advantage is that the ripple
frequency at the output is twice the line frequency (i.e. 50 Hz or 60 Hz) and makes filtering
somewhat easier.
As a design example consider we wanted a small unregulated bench
supply for our projects. Here we will go for a voltage of about 12 to 13 V at a maximum output
current (IL) of 500 mA (0.5 A). Maximum ripple will be 2.5% and load regulation is 5%.
Now the RMS secondary voltage (primary is whatever is consistent with
your area) for our power transformer T1 must be our desired output Vo PLUS the voltage
drops across D2 and D4 ( 2 * 0.7V) divided by 1.414. This means that Vsec = [13V + 1.4V] /
1.414 which equals about 10.2V. Depending on the VA rating of your transformer, the
secondary voltage will vary considerably in accordance with the applied load. The secondary
voltage on a transformer advertised as say 20VA will be much greater if the secondary is only
lightly loaded.
If we accept the 2.5% ripple as adequate for our purposes then at
13V this becomes 13 * 0.025 = 0.325 Vrms. The peak to peak value is 2.828 times this value.
Vrip = 0.325V X 2.828 = 0.92 V and this value is required to calculate the value of C1.
Also required for this calculation is the time interval for charging pulses. If you are on
a 60Hz system it is 1/ (2 * 60 ) = 0.008333 which is 8.33 milliseconds. For a 50Hz system it
is 0.01 sec or 10 milliseconds.
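The reservoir capacitor C1 follows from these two numbers. As a sketch of the arithmetic, the standard full-wave approximation C = I × t / Vripple can be coded directly (the formula is the usual textbook one, not quoted from this report):

```c
/* Standard full-wave reservoir-capacitor approximation: C = I * t / Vripple.
   i_load   - load current in amperes
   t_charge - interval between charging pulses in seconds
   v_ripple - acceptable peak-to-peak ripple in volts
   Returns the minimum capacitance in farads. */
double reservoir_cap(double i_load, double t_charge, double v_ripple)
{
    return i_load * t_charge / v_ripple;
}
```

For the example values (0.5 A, 10 ms on a 50 Hz supply, 0.92 V ripple) this gives roughly 5400 µF, so a nearby standard value such as 4700 µF or 6800 µF would be chosen depending on how much ripple is tolerable.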
Remember the tolerance of the type of capacitor used here is very loose. The
important thing to be aware of is that the voltage rating should be at least 13 V × 1.414, or about
18.4 V. Here you would use at least the standard 25 V or higher (absolutely not 16 V). With our
rectifier diodes or bridge, they should have a PIV rating of 2.828 times Vsec, or at least
29 V. Don't search for this exact rating because it doesn't exist; use the next highest standard or
even higher. The current rating should be at least twice the maximum load current, i.e. 2 ×
0.5 A or 1 A. Good types to use would be the 1N4004, 1N4005 or 1N4007 types.
These are rated 1 A at 400 V, 600 V and 1000 V PIV respectively. Always be
on the lookout for the higher voltage ones when they are on special.
TRANSFORMER RATING:
In our example above we were taking 0.5A out of the Vsec of 10V. The VA
required is 10 X 0.5A = 5VA. This is a small PCB mount transformer available in Australia
and probably elsewhere.
This would be an absolute minimum and if you anticipated drawing the maximum
current all the time then go to a higher VA rating.
The two capacitors in the primary side are small value types and if you don't know
precisely and I mean precisely what you are doing then OMIT them. Their loss won't cause
you heartache or terrible problems.
THEY MUST BE HIGH VOLTAGE TYPES RATED FOR A.C USE
The fuse F1 must be able to carry the primary current but blow under excessive current, in
this case we use the formula from the diagram. Here N = 240V / 10V or perhaps 120V / 10V.
The fuse calculates in the first instance to [ 2 X 0.5A ] / [240 / 10] or 0.04 A (40 mA). In the
second case, 0.08 A (80 mA). The difficulty here is finding suitable fuses of that low a current
and voltage rating. In practice you use the closest you can get (often 100 mA). Don't take that
too literally and fit 1 A or 5 A fuses.
CONSTRUCTION
The whole project MUST be enclosed in a suitable box. The main switch
(preferably double pole) must be rated at 240V or 120V at the current rating. All exposed
parts within the box MUST be fully insulated, preferably with heat shrink tubing.
8.2 LCD (LIQUID CRYSTAL DISPLAY)
A microcontroller program must interact with the outside world using input
and output devices that communicate directly with a human being. One of the most common
devices attached to a microcontroller is an LCD display. Some of the most common LCDs used
are 16x2 and 20x2 displays. This means 16 characters per line by 2 lines and 20
characters per line by 2 lines, respectively.
Fortunately, a very popular standard exists which allows us to communicate with the
vast majority of LCDs regardless of their manufacturer. The standard is referred to as
HD44780U, which refers to the controller chip that receives data from an external source
(in this case, the microcontroller) and communicates directly with the LCD.
The 44780 standard requires 3 control lines as well as either 4 or 8 I/O lines for the
data bus. The user may select whether the LCD is to operate with a 4-bit data bus or an 8-bit
data bus. If a 4-bit data bus is used the LCD will require a total of 7 data lines (3 control lines
plus the 4 lines for the data bus). If an 8-bit data bus is used the LCD will require a total of 11
data lines (3 control lines plus the 8 lines for the data bus).
The three control lines are referred to as EN, RS, and RW.
The EN line is called "Enable." This control line is used to tell the
LCD that you are sending it data. To send data to the LCD, your program should first make sure
this line is low (0), then set the other two control lines and/or put data on the data bus; once the
other lines are ready, bringing EN high and then low again latches the byte into the LCD.
Finally, the data bus consists of 4 or 8 lines (depending on the mode of operation selected by the user).
Technical Specifications:
Power Requirements: 5 VDC
Communication: 4-bit or 8-bit Parallel Interface
Dimensions: ~3.25L x ~1.75W x 0.25H in (~85L x ~45W x ~6H mm)
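The EN/RS/RW handshake described above can be sketched in C. This is a hypothetical simulation: the lcd_* variables stand in for port pins (on real hardware they would be GPIO bits), and the short delay required after raising EN is omitted:

```c
/* Simulated pin state; on real hardware these map to GPIO bits. */
unsigned char lcd_data;               /* 8-bit data bus (D0-D7)         */
unsigned char lcd_rs, lcd_rw, lcd_en;

/* Latch one byte into the LCD following the HD44780 handshake. */
static void lcd_write(unsigned char byte, unsigned char is_data)
{
    lcd_en = 0;          /* EN low while the bus is set up             */
    lcd_rs = is_data;    /* RS: 0 = command register, 1 = data         */
    lcd_rw = 0;          /* RW: 0 = write                              */
    lcd_data = byte;     /* put the byte on the data bus               */
    lcd_en = 1;          /* raise EN ...                               */
    lcd_en = 0;          /* ... the falling edge latches the byte      */
}

void lcd_cmd(unsigned char c)  { lcd_write(c, 0); }   /* command byte  */
void lcd_char(unsigned char c) { lcd_write(c, 1); }   /* character byte*/
```

For example, lcd_cmd(0x38) selects 8-bit mode with two display lines, and lcd_char('A') writes a character at the current cursor position.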
8.3 Light-emitting diode (LED)
A light-emitting diode (LED) is a semiconductor device that emits incoherent
narrow-spectrum light when electrically biased in the forward direction of the p-n junction. This
effect is a form of electroluminescence.
An LED is usually a small area source, often with extra optics added to the chip to
shape its radiation pattern [10]. The color of the emitted light depends on the composition and
condition of the semiconducting material used, and can be infrared, visible, or near-ultraviolet.
8.4 MAX232
DESCRIPTION:
The MAX232 is a dual driver/receiver that includes a capacitive voltage generator to
supply TIA/EIA-232-F Voltage levels from a single 5-V supply. Each receiver converts
TIA/EIA-232-F inputs to 5-V TTL/CMOS levels. These receivers have a typical threshold of
1.3 V, a typical hysteresis of 0.5 V, and can accept ±30-V inputs. Each driver converts
TTL/CMOS input levels into TIA/EIA-232-F levels.
Applications:
Portable Computers
Low-Power Modems
Interface Translation
Battery-Powered RS-232 Systems
Multidrop RS-232 Networks
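On the microcontroller side of the MAX232, the UART must be programmed to the same RS-232 baud rate. A minimal sketch of LPC214x-style UART0 setup, assuming PCLK = 15 MHz; U0LCR/U0DLL/U0DLM are the LPC214x SFR names (normally from <lpc214x.h>), declared here as plain variables so the fragment stands alone:

```c
/* Stand-ins for the LPC214x UART0 SFRs (normally from <lpc214x.h>). */
unsigned int U0LCR, U0DLL, U0DLM;

/* Program UART0 for 8N1 at the requested baud rate.
   Divisor = PCLK / (16 * baud), split across DLM:DLL. */
void uart0_init(unsigned long pclk, unsigned long baud)
{
    unsigned long divisor = pclk / (16UL * baud);

    U0LCR = 0x83;                    /* 8 data bits, no parity, 1 stop, DLAB = 1 */
    U0DLL = divisor & 0xFF;          /* divisor latch, low byte                  */
    U0DLM = (divisor >> 8) & 0xFF;   /* divisor latch, high byte                 */
    U0LCR = 0x03;                    /* clear DLAB, keep 8N1                     */
}
```

With PCLK = 15 MHz and 9600 baud the divisor works out to 97 (15 000 000 / 153 600 ≈ 97.6, truncated), giving an actual rate within about 1 % of nominal.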
CHAPTER-9
ARM 7 PROCESSOR
9.1 General description
The LPC2148 microcontroller is based on a 16-bit/32-bit ARM7TDMI-S CPU with
real-time emulation and embedded trace support, and combines the microcontroller core with
512 kB of embedded high-speed flash memory. A 128-bit wide memory interface and unique accelerator
architecture enable 32-bit code execution at the maximum clock rate. For critical code size
applications, the alternative 16-bit Thumb mode reduces code by more than 30 % with minimal
performance penalty. Due to their tiny size and low power consumption, LPC2148 are ideal for
applications where miniaturization is a key requirement, such as access control and point-of-sale.
Serial communications interfaces ranging from a USB 2.0 Full-speed device, multiple UARTs,
SPI, SSP to I2C-bus and on-chip SRAM of 32 kB, make these devices very well suited for
communication gateways and protocol converters, soft modems, voice recognition and low end
imaging, providing both large buffer size and high processing power. Various 32-bit timers,
single or dual 10-bit ADC(s), 10-bit DAC, PWM channels and 45 fast GPIO lines with up to
nine edge or level sensitive external interrupt pins make these microcontrollers suitable for
industrial control and medical systems.
Features
ARM7TDMI-S based high-performance 32-bit RISC microcontroller with Thumb
extensions, 16-bit/32-bit ARM7TDMI-S microcontroller in a tiny LQFP64 package
512 kB on-chip flash ROM with In-System Programming (ISP) and In-Application
Programming (IAP), 32 kB RAM
Vectored Interrupt Controller
Two 10-bit ADCs with 14 channels
USB 2.0 Full-Speed Device Controller
Two UARTs, one with a full modem interface
Two I2C serial interfaces
Two SPI serial interfaces
Two 32-bit timers
Watchdog timer
PWM unit
Real-time clock with optional battery backup
Brown-out detect circuit
General-purpose I/O pins
CPU clock up to 60 MHz
On-chip crystal oscillator and on-chip PLL
9.2 Architecture
Fig 9.1 Architecture
9.3 Pin Diagram
Architectural overview:
The ARM7TDMI-S is a general purpose 32-bit microprocessor, which offers high
performance and very low power consumption. The ARM architecture is based on Reduced
Instruction Set Computer (RISC) principles, and the instruction set and related decode
mechanism are much simpler than those of microprogrammed Complex Instruction Set
Computers (CISC). This simplicity results in a high instruction throughput and impressive real-
time interrupt response from a small and cost-effective processor core. Pipeline techniques are
employed so that all parts of the processing and memory systems can operate continuously.
Typically, while one instruction is being executed, its successor is being decoded, and a third
instruction is being fetched from memory. The ARM7TDMI-S processor also employs a unique
architectural strategy known as Thumb, which makes it ideally suited to high-volume
applications with memory restrictions, or applications where code density is an issue. The key
idea behind Thumb is that of a super-reduced instruction set. Essentially, the ARM7TDMI-S
processor has two instruction sets:
• The standard 32-bit ARM set.
• A 16-bit Thumb set.
The Thumb set’s 16-bit instruction length allows it to approach twice the density of
standard ARM code while retaining most of the ARM’s performance advantage over a
traditional 16-bit processor using 16-bit registers. This is possible because Thumb code operates
on the same 32-bit register set as ARM code. Thumb code is able to provide up to 65 % of the
code size of ARM, and 160 % of the performance of an equivalent ARM processor connected to
a 16-bit memory system. The particular flash implementation in the LPC2141/42/44/46/48
allows for full speed execution also in ARM mode. It is recommended to program performance
critical and short code sections (such as interrupt service routines and DSP algorithms) in ARM
mode. The impact on the overall code size will be minimal but the speed can be increased by
30% over Thumb mode.
9.4 On-chip flash program memory
The LPC2148 incorporates a 512 kB flash memory system. This memory
may be used for both code and data storage. Programming of the flash memory may be
accomplished in several ways. It may be programmed In-System via the serial port. The
application program may also erase and/or program the flash while the application is running,
allowing a great degree of flexibility for data storage, field firmware upgrades, etc. Due to the
architectural solution chosen for the on-chip boot loader, flash memory available for user's code
on the LPC2148 is 500 kB. The LPC2148 flash memory provides a minimum of
100,000 erase/write cycles and 20 years of data retention.
On-chip static RAM
On-chip static RAM may be used for code and/or data storage. The SRAM may be
accessed as 8-bit, 16-bit, and 32-bit. The LPC2148 provides 32 kB of static RAM. In the
LPC2146/48 only, an 8 kB SRAM block intended to be utilized mainly by the USB can
also be used as general-purpose RAM for data storage and code storage and execution.
Memory map
The LPC2148 memory map incorporates several distinct regions. In addition, the CPU interrupt
vectors may be remapped to allow them to reside in either flash memory (the default) or on-chip
static RAM (see "System control").
Interrupt controller
The Vectored Interrupt Controller (VIC) accepts all of the interrupt request inputs and
categorizes them as Fast Interrupt Request (FIQ), vectored Interrupt Request (IRQ), and non-
vectored IRQ as defined by programmable settings. The programmable assignment scheme
means that priorities of interrupts from the various peripherals can be dynamically assigned and
adjusted. Fast interrupt request (FIQ) has the highest priority. If more than one request is
assigned to FIQ, the VIC combines the requests to produce the FIQ signal to the ARM processor.
The fastest possible FIQ latency is achieved when only one request is classified as FIQ, because
then the FIQ service routine does not need to branch into the interrupt service routine but can run
from the interrupt vector location. If more than one request is assigned to the FIQ class, the
FIQ service routine will read a word from the VIC that identifies which FIQ source(s) is (are)
requesting an interrupt. Vectored IRQs have the middle priority. Sixteen of the interrupt requests
can be assigned to this category. Any of the interrupt requests can be assigned to any of the 16
vectored IRQ slots, among which slot 0 has the highest priority and slot 15 has the lowest. Non-
vectored IRQs have the lowest priority. The VIC combines the requests from all the vectored and
non-vectored IRQs to produce the IRQ signal to the ARM processor. The IRQ service routine
can start by reading a register from the VIC and jumping there. If any of the vectored IRQs are
pending, the VIC provides the address of the highest-priority requesting IRQs service routine,
otherwise it provides the address of a default routine that is shared by all the non-vectored IRQs.
The default routine can read another VIC register to see what IRQs are active.
Interrupt sources
Each peripheral device has one interrupt line connected to the Vectored Interrupt
Controller, but may have several internal interrupt flags. Individual interrupt flags may also
represent more than one interrupt source.
Pin connect block
The pin connect block allows selected pins of the microcontroller to have more than one
function. Configuration registers control the multiplexers to allow connection between the pin
and the on chip peripherals. Peripherals should be connected to the appropriate pins prior to
being activated, and prior to any related interrupt(s) being enabled. Activity of any enabled
peripheral function that is not mapped to a related pin should be considered undefined. The Pin
Control Module with its pin select registers defines the functionality of the microcontroller in a
given hardware environment. After reset all pins of Port 0 and 1 are configured as input with the
following exceptions: If debug is enabled, the JTAG pins will assume their JTAG functionality;
if trace is enabled, the Trace pins will assume their trace functionality. The pins associated with
the I2C0 and I2C1 interface are open drain.
Fast general purpose parallel I/O (GPIO)
Device pins that are not connected to a specific peripheral function are controlled by the
GPIO registers. Pins may be dynamically configured as inputs or outputs. Separate registers
allow setting or clearing any number of outputs simultaneously. The value of the output register
may be read back, as well as the current state of the port pins. LPC2148 introduce accelerated
GPIO functions over prior LPC2000 devices:
GPIO registers are relocated to the ARM local bus for the fastest possible I/O timing.
Mask registers allow treating sets of port bits as a group, leaving other bits unchanged.
All GPIO registers are byte addressable.
Entire port value can be written in one instruction.
Features
Bit-level set and clear registers allow a single instruction to set or clear any number of
bits in one port.
Direction control of individual bits.
Separate control of output set and clear.
All I/O default to inputs after reset.
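The set/clear register style described above can be sketched as follows. IODIR0, IOPIN0 and the set/clear helpers stand in for the LPC214x GPIO registers (normally from <lpc214x.h>); writing only 1 bits to IOSET/IOCLR is what leaves the other outputs on the port untouched:

```c
/* Stand-ins for the LPC214x GPIO registers (normally from <lpc214x.h>). */
unsigned long IODIR0;            /* 1 = output, 0 = input     */
unsigned long IOPIN0;            /* current state of the pins */

#define LED_PIN (1UL << 10)      /* hypothetical LED on P0.10 */

/* IOSET/IOCLR semantics: only the 1 bits written take effect. */
void ioset0(unsigned long mask) { IOPIN0 |=  mask; }
void ioclr0(unsigned long mask) { IOPIN0 &= ~mask; }

void led_init(void) { IODIR0 |= LED_PIN; }   /* make P0.10 an output */
void led_on(void)   { ioset0(LED_PIN); }
void led_off(void)  { ioclr0(LED_PIN); }
```

On hardware the same pattern works with the fast FIO registers; the mask-register feature then lets a group of bits be updated in one write.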
10-bit ADC
The LPC2141/42 contain one and the LPC2144/46/48 contain two analog to digital
converters. These converters are single 10-bit successive approximation analog to digital
converters. While ADC0 has six channels, ADC1 has eight channels. Therefore, total number of
available ADC inputs for LPC2141/42 is 6 and for LPC2144/46/48 is 14.
Features
10-bit successive approximation analog to digital converter.
Measurement range of 0 V to VREF (2.0 V ≤ VREF ≤ VDDA).
Each converter capable of performing more than 400,000 10-bit samples per second.
Every analog input has a dedicated result register to reduce interrupt overhead.
Burst conversion mode for single or multiple inputs.
Optional conversion on transition on input pin or timer match signal.
Global Start command for both converters.
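Reading one of these converters follows a fixed pattern: program AD0CR, start a conversion, poll the DONE flag, then extract the 10-bit result from bits 15:6 of the data register. The field layout below matches the LPC214x ADCR (SEL in bits 7:0, CLKDIV in 15:8, PDN in bit 21, START in 26:24); the registers are plain variables here so the sketch stands alone:

```c
/* Stand-ins for the LPC214x ADC registers (normally from <lpc214x.h>). */
unsigned long AD0CR, AD0GDR;

/* Build the AD0CR value for one channel: SEL (bits 7:0),
   CLKDIV (bits 15:8), PDN (bit 21), START = 001 (bit 24). */
unsigned long adc0_ctrl(unsigned int channel, unsigned int clkdiv)
{
    return (1UL << channel)               /* SEL: one channel       */
         | ((unsigned long)clkdiv << 8)   /* CLKDIV                 */
         | (1UL << 21)                    /* PDN: ADC operational   */
         | (1UL << 24);                   /* START: start now       */
}

/* Extract the 10-bit conversion result from bits 15:6 of the data
   register (bit 31 is the DONE flag, polled on real hardware). */
unsigned int adc0_result(unsigned long gdr)
{
    return (unsigned int)((gdr >> 6) & 0x3FF);
}
```

On hardware you would write adc0_ctrl(1, 2) to AD0CR (CLKDIV = 2 keeps the ADC clock at PCLK/3, under the converter's clock limit at PCLK = 12 MHz), spin while bit 31 of AD0GDR is clear, then call adc0_result(AD0GDR).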
9.5 10-bit DAC
The DAC enables the LPC2141/42/44/46/48 to generate a variable analog output. The
maximum DAC output voltage is the VREF voltage.
Features
10-bit DAC.
Buffered output.
Power-down mode available.
9.6 General purpose timers/external event counters
The Timer/Counter is designed to count cycles of the peripheral clock (PCLK) or an
externally supplied clock and optionally generate interrupts or perform other actions at specified
timer values, based on four match registers. It also includes four capture inputs to trap the timer
value when an input signal transitions, optionally generating an interrupt. Multiple pins can be
selected to perform a single capture or match function, providing an application with ‘or’ and
‘and’, as well as ‘broadcast’ functions among them.
The LPC2141/42/44/46/48 can count external events on one of the capture inputs if the minimum
external pulse is equal or longer than a period of the PCLK. In this configuration, unused capture
lines can be selected as regular timer capture inputs, or used as external interrupts.
Features
A 32-bit timer/counter with a programmable 32-bit prescaler.
External event counter or timer operation.
Four 32-bit capture channels per timer/counter that can take a snapshot of the timer
value when an input signal transitions. A capture event may also optionally generate
an interrupt.
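A match-register delay is just arithmetic on PCLK. A hypothetical helper (not taken from this report) computing the MR0 value for a given delay, with the prescaler extending the range:

```c
/* Ticks the timer must count for a delay of `ms` milliseconds.
   The timer advances at PCLK / (prescale + 1) ticks per second, so:
   MR0 = pclk / (prescale + 1) / 1000 * ms */
unsigned long timer_match_ms(unsigned long pclk,
                             unsigned long prescale,
                             unsigned long ms)
{
    return pclk / (prescale + 1UL) / 1000UL * ms;
}
```

With PCLK = 15 MHz, prescale 0 and a 1 ms delay this gives MR0 = 15 000. On hardware you would load MR0, set the match-control bits for interrupt-and-reset on match, and enable the timer in TCR.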
9.7 Watchdog timer
The purpose of the watchdog is to reset the microcontroller within a reasonable amount
of time if it enters an erroneous state. When enabled, the watchdog will generate a system reset if
the user program fails to ‘feed’ (or reload) the watchdog within a predetermined amount of time.
Features
Internally resets chip if not periodically reloaded.
Debug mode.
Enabled by software but requires a hardware reset or a watchdog reset/interrupt to be
disabled.
Incorrect/Incomplete feed sequence causes reset/interrupt if enabled.
Flag to indicate watchdog reset.
Programmable 32-bit timer with internal pre-scaler.
Selectable time period from (TPCLK × 256 × 4) to (TPCLK × 2^32 × 4) in multiples of
TPCLK × 4.
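The feed itself is the same 0xAA/0x55 pair used for the PLL, written to WDFEED. A sketch of the enable-and-feed flow; WDMOD and WDTC stand in for the LPC214x SFRs, and the feed log simply records the sequence so the fragment can run anywhere:

```c
/* Stand-ins for the LPC214x watchdog registers (normally <lpc214x.h>). */
unsigned long WDMOD, WDTC;
unsigned char wd_feed_log[2];    /* records the last feed pair */
int wd_feed_count;

static void wdfeed_write(unsigned char v)   /* stands in for WDFEED = v */
{
    wd_feed_log[wd_feed_count % 2] = v;
    wd_feed_count++;
}

void wdt_init(unsigned long timeout_ticks)
{
    WDTC  = timeout_ticks;   /* reload value; period = TC * 4 / PCLK        */
    WDMOD = 0x03;            /* WDEN | WDRESET: enable, reset on underflow  */
}

/* The 0xAA, 0x55 sequence must not be interrupted on real hardware,
   so interrupts are normally disabled around it. */
void wdt_feed(void)
{
    wdfeed_write(0xAA);
    wdfeed_write(0x55);
}
```

An incorrect or incomplete sequence, as the feature list notes, immediately triggers the reset/interrupt rather than reloading the counter.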
9.8 Real-time clock
The RTC is designed to provide a set of counters to measure time when normal or idle
operating mode is selected. The RTC has been designed to use little power, making it suitable for
battery powered systems where the CPU is not running continuously (Idle mode).
Features
Measures the passage of time to maintain a calendar and clock.
Ultra-low power design to support battery powered systems.
Provides Seconds, Minutes, Hours, Day of Month, Month, Year, Day of Week, and
Day of Year.
Can use either the RTC dedicated 32 kHz oscillator input or clock derived from the
external crystal/oscillator input at XTAL1. Programmable reference clock divider
allows fine adjustment of the RTC.
Dedicated power supply pin can be connected to a battery or the main 3.3 V.
9.9 Pulse width modulator
The PWM is based on the standard timer block and inherits all of its features, although
only the PWM function is pinned out on the LPC2141/42/44/46/48. The timer is designed to
count cycles of the peripheral clock (PCLK) and optionally generate interrupts or perform other
actions when specified timer values occur, based on seven match registers. The PWM function is
also based on match register events. The ability to separately control rising and falling edge
locations allows the PWM to be used for more applications. For instance, multi-phase motor
control typically requires three non-overlapping PWM outputs with individual control of all three
pulse widths and positions. Two match registers can be used to provide a single edge controlled
PWM output. One match register (MR0) controls the PWM cycle rate, by resetting the count
upon match. The other match register controls the PWM edge position. Additional single edge
controlled PWM outputs require only one match register each, since the repetition rate is the
same for all PWM outputs. Multiple single edge controlled PWM outputs will all have a rising
edge at the beginning of each PWM cycle, when an MR0 match occurs. Three match registers
can be used to provide a PWM output with both edges controlled. Again, the MR0 match register
controls the PWM cycle rate. The other match registers control the two PWM edge positions.
Additional double edge controlled PWM outputs require only two match registers each, since the
repetition rate is the same for all PWM outputs. With double edge controlled PWM outputs,
specific match registers control the rising and falling edge of the output. This allows both
positive going PWM pulses (when the rising edge occurs prior to the falling edge), and negative
going PWM pulses (when the falling edge occurs prior to the rising edge).
Features
Seven match registers allow up to six single edge controlled or three double edge
controlled PWM outputs, or a mix of both types.
The match registers also allow:
– Continuous operation with optional interrupt generation on match.
– Stop timer on match with optional interrupt generation.
– Reset timer on match with optional interrupt generation.
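For a single edge controlled output the arithmetic reduces to two match values: MR0 fixes the period and the channel's match register fixes where the output falls. A hypothetical helper (register layout simplified; on hardware the values go to PWMMR0/PWMMR2 and are latched via PWMLER):

```c
/* Single edge PWM: MR0 sets the cycle length in timer ticks and the
   channel match register sets where the output's falling edge lands. */
unsigned long pwm_period_mr0;    /* stands in for PWMMR0 */
unsigned long pwm_duty_mr2;      /* stands in for PWMMR2 */

void pwm_set(unsigned long period_ticks, unsigned int duty_percent)
{
    pwm_period_mr0 = period_ticks;                       /* cycle rate  */
    pwm_duty_mr2 = period_ticks * duty_percent / 100UL;  /* edge point  */
}
```

pwm_set(1000, 25) places the falling edge at count 250, i.e. a 25 % duty cycle; on hardware the PWMLER latch bits must then be set so both values take effect at the start of the next cycle.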
9.10 System control
Crystal oscillator
On-chip integrated oscillator operates with external crystal in range of 1 MHz to 25 MHz.
The oscillator output frequency is called fosc and the ARM processor clock frequency is referred
to as CCLK for purposes of rate equations, etc. fosc and CCLK are the same value unless the
PLL is running and connected.
PLL
The PLL accepts an input clock frequency in the range of 10 MHz to 25 MHz. The input
frequency is multiplied up into the range of 10 MHz to 60 MHz with a Current Controlled
Oscillator (CCO). The multiplier can be an integer value from 1 to 32 (in practice, the multiplier
value cannot be higher than 6 on this family of microcontrollers due to the upper frequency limit
of the CPU). The CCO operates in the range of 156 MHz to 320 MHz, so there is an additional
divider in the loop to keep the CCO within its frequency range while the PLL is providing the
desired output frequency. The output divider may be set to divide by 2, 4, 8, or 16 to produce the
output clock. Since the minimum output divider value is 2, it is ensured that the PLL output has a
50 % duty cycle. The PLL is turned off and bypassed following a chip reset and may be enabled
by software. The program must configure and activate the PLL, wait for the PLL to Lock, then
connect to the PLL as a clock source. The PLL settling time is 100 μs.
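The configure-lock-connect sequence pairs with a PLLCFG value computed from the crystal and target frequencies. A sketch of that arithmetic, assuming the LPC214x PLLCFG encoding (multiplier minus one in bits 4:0, divider code in bits 6:5):

```c
/* Compute the LPC214x PLLCFG byte for a target CCLK.
   M = cclk / fosc (the MSEL field holds M - 1).
   P is chosen so Fcco = cclk * 2 * P lands in 156..320 MHz
   (PSEL encodes P = 1, 2, 4, 8 as 0..3). Returns -1 if impossible. */
int pll_cfg(unsigned long fosc, unsigned long cclk)
{
    unsigned long m = cclk / fosc;
    unsigned int psel, p;

    if (m < 1 || m > 32 || m * fosc != cclk)
        return -1;                       /* CCLK must be an exact multiple */
    for (psel = 0, p = 1; psel < 4; psel++, p <<= 1) {
        unsigned long fcco = cclk * 2UL * p;
        if (fcco >= 156000000UL && fcco <= 320000000UL)
            return (int)((psel << 5) | (m - 1));
    }
    return -1;                           /* no divider keeps the CCO in range */
}
```

For a 12 MHz crystal and CCLK = 60 MHz this returns 0x24 (M = 5, P = 2, Fcco = 240 MHz), the value commonly written to PLLCFG before the feed sequence.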
Brownout detector
The LPC2141/42/44/46/48 include 2-stage monitoring of the voltage on the VDD pins. If
this voltage falls below 2.9 V, the BOD asserts an interrupt signal to the VIC. This signal can be
enabled for interrupt; if not, software can monitor the signal by reading dedicated register. The
second stage of low voltage detection asserts reset to inactivate the LPC2141/42/44/46/48 when
the voltage on the VDD pins falls below 2.6 V. This reset prevents alteration of the flash as
operation of the various elements of the chip would otherwise become unreliable due to low
voltage. The BOD circuit maintains this reset down below 1 V, at which point the POR circuitry
maintains the overall reset. Both the 2.9 V and 2.6 V thresholds include some hysteresis. In
normal operation, this hysteresis allows the 2.9 V detection to reliably interrupt, or a regularly-
executed event loop to sense the condition.
Code security
This feature of the LPC2141/42/44/46/48 allow an application to control whether it can
be debugged or protected from observation. If after reset on-chip boot loader detects a valid
checksum in flash and reads 0x8765 4321 from address 0x1FC in flash, debugging will be
disabled and thus the code in flash will be
protected from observation. Once debugging is disabled, it can be enabled only by performing a
full chip erase using the ISP.
External interrupt inputs
The LPC2141/42/44/46/48 include up to nine edge or level sensitive External Interrupt
Inputs as selectable pin functions. When the pins are combined, external events can be processed
as four independent interrupt signals. The External Interrupt Inputs can optionally be used to
wake-up the processor from Power-down mode. Additionally capture input pins can also be used
as external interrupts without the option to wake the device up from Power-down mode.
Memory mapping control
The Memory Mapping Control alters the mapping of the interrupt vectors that appear
beginning at address 0x0000 0000. Vectors may be mapped to the bottom of the on-chip flash
memory, or to the on-chip static RAM. This allows code running in different memory spaces to
have control of the interrupts.
9.11 Power control
The LPC2141/42/44/46/48 supports two reduced power modes: Idle mode and Power-
down mode. In Idle mode, execution of instructions is suspended until either a reset or interrupt
occurs. Peripheral functions continue operation during Idle mode and may generate interrupts to
cause the processor to resume execution. Idle mode eliminates power used by the processor
itself, memory systems and related controllers, and internal buses. In Power-down mode, the
oscillator is shut down and the chip receives no internal clocks. The processor state and registers,
peripheral registers, and internal SRAM values are preserved throughout Power-down mode and
the logic levels of chip output pins remain static. The Power-down mode can be terminated and
normal operation resumed by either a reset or certain specific interrupts that are able to function
without clocks. Since all dynamic operation of the chip is suspended, Power-down mode reduces
chip power consumption to nearly zero. Selecting an external 32 kHz clock instead of the PCLK
as a clock-source for the on-chip RTC will enable the microcontroller to have the RTC active
during Power-down mode. Power-down current is increased with RTC active. However, it is
significantly lower than in Idle mode. A Power Control for Peripherals feature allows individual
peripherals to be turned off if they are not needed in the application, resulting in additional
power savings during active and idle mode.
9.12 Emulation and debugging
The LPC2141/42/44/46/48 support emulation and debugging via a JTAG serial port. A
trace port allows tracing program execution. Debugging and trace functions are multiplexed only
with GPIOs on Port 1. This means that all communication, timer and interface peripherals
residing on Port 0 are available during the development and debugging phase as they are when
the application is run in the embedded system itself.
Embedded ICE
Standard ARM Embedded ICE logic provides on-chip debug support. The debugging of
the target system requires a host computer running the debugger software and an EmbeddedICE
protocol converter. The EmbeddedICE protocol converter converts the remote debug protocol
commands into the JTAG data needed to access the ARM core. The ARM core has a Debug
Communication Channel (DCC) function built-in. The DCC allows a program running on the
target to communicate with the host debugger or another separate host without stopping the
program flow or even entering the debug state. The
DCC is accessed as a co-processor 14 by the program running on the ARM7TDMI-S core. The
DCC allows the JTAG port to be used for sending and receiving data without affecting the
normal program flow. The DCC data and control registers are mapped into addresses in the
Embedded ICE logic.
Embedded trace
Since the LPC2141/42/44/46/48 have significant amounts of on-chip memory, it is not
possible to determine how the processor core is operating simply by observing the external pins.
The Embedded Trace Microcell (ETM) provides real-time trace capability for deeply embedded
processor cores. It outputs information about processor execution to the trace port. The ETM is
connected directly to the ARM core and not to the main AMBA system bus. It compresses the
trace information and exports it through a narrow trace port. An external trace port analyzer must
capture the trace information under software debugger control. Instruction trace (or PC trace)
shows the flow of execution of the processor and provides a list of all the instructions that were
executed. Instruction trace is significantly compressed
by only broadcasting branch addresses as well as a set of status signals that indicate the pipeline
status on a cycle by cycle basis. Trace information generation can be controlled by selecting the
trigger resource. Trigger resources include address comparators, counters and sequencers. Since
trace information is compressed the software debugger requires a static image of the code being
executed. Self-modifying code cannot be traced because of this restriction.
Real Monitor
RealMonitor is a configurable software module, developed by ARM Inc., which enables
real-time debug. It is a lightweight debug monitor that runs in the background while users debug
their foreground application. It communicates with the host using the DCC, which is present in
the EmbeddedICE logic. The LPC2141/42/44/46/48 contain a specific configuration of
RealMonitor software programmed into the on-chip flash memory.
CHAPTER-10
APPLICATIONS
1. Provides security for images sent through wireless technology.
2. Security for text data.
3. Widely used in modern consumer electronic products for security.
CHAPTER-11
ADVANTAGES & DISADVANTAGES
11.1 ADVANTAGES
1. Key Bytes are used for providing high security for any kind of data.
2. AES has been widely used in many applications such as :
• Internet routers
• Mobile phone applications
• Electronic financial transactions
3. This security system is used for National Army Selections (NAS).
11.2 DISADVANTAGES
1. If the key size (number of key bytes) is increased, the complexity increases.
CHAPTER-12
CONCLUSION
The embedded systems found in most consumer products employ a single-chip controller
that includes the microprocessor, a limited amount of memory and simple input/output devices.
By far the vast majority of the embedded systems in production today are based on 4-bit, 8-bit,
or 16-bit processors. Although 32-bit processors account for a relatively small percentage of the
current market, their use in embedded systems is growing at the fastest rate.
In this project, we have implemented the AES encryption and decryption algorithm with
hardware in combination with part of software using the custom instruction mechanism provided
by the ARM7. Using embedded C on the Keil platform, we explored various
combinations of hardware and software to realize the AES algorithm and discussed possible best
solutions for different needs.
CHAPTER-13
BIBLIOGRAPHY
Reference books:
1. ARM Processor and Embedded Systems, Janice Gillispie Mazidi
2. Electronic Components, Ramesh S. Gaonkar
3. An Embedded Software Primer, David E. Simon
Reference Websites:
1. www.mitel.databook.com
2. www.atmel.databook.com
3. www.franklin.com
4. www.keil.com
5. http://www.ikalogic.com/cat_microcontrollers.php
6. http://www.electronicsforu.com/Electronicsforu/articles/subcategory.asp?cid=23&id=14
7. http://electrosofts.com/dtmf/