
NEHRU ARTS AND SCIENCE COLLEGE

T.M.PALAYAM,COIMBATORE

NAME OF THE PAPER: NETWORK SECURITY AND CRYPTOGRAPHY

STAFF NAME: A.SENTHIL KUMAR

ACADEMIC YEAR : 2011 - 2012

UNIT-I:

Service mechanism and attacks – The OSI security architecture – A model for network security – Symmetric cipher model – Substitution techniques – Transposition techniques – Simplified DES – Block cipher principles – The strength of DES – Block cipher design principles and modes of operation.

Service mechanism and attacks

Having identified the relevant security threats to a system, the system operator can apply various

security services and mechanisms to confront these threats and implement a desired security

policy. In this section we provide a general description of such services and techniques. The

science behind these methods is researched and developed as part of the broad discipline of

Cryptography. Cryptography embodies the mathematical principles, means, and methods for the

transformation of data in order to hide its information content, prevent its undetected

modification, and/or prevent its unauthorized use. Cryptographic functions may be used as part

of encipherment, decipherment, data integrity, authentication exchanges, password storage and

checking, etc. to help achieve confidentiality, integrity, and/or authentication.

The following subsections summarize some key security services and mechanisms.

Encipherment and Data Confidentiality

Encipherment is a security mechanism that involves the transformation of data into some

unreadable form. Its purpose is to ensure privacy by keeping the information hidden from anyone

for whom it is not intended, even those who can see enciphered data. Decipherment is the reverse


of encipherment. That is, it is the transformation of encrypted data back into some intelligible

form. Encipherment is performed on cleartext (intelligible data) to produce ciphertext

(encrypted data whose semantic content is not available). The result of decipherment is either

cleartext, or ciphertext under some cover.

Encipherment can provide confidentiality of either data or traffic flow information and can play a

part in, or complement other security mechanisms.

Encipherment and Decipherment require the use of some secret information, usually referred to

as a key, which directs specific transformations. This is one of two cryptovariables used: The

other is the initialization variable, which is sometimes required to preserve the apparent

randomness of ciphertext.

Encipherment techniques can be symmetric (secret key), where knowledge of the encipherment key implies knowledge of the decipherment key and vice versa, or asymmetric. In

asymmetric algorithms, generally one key is called public (because it is publicly available),

while the other is called private (because it is kept secret). Once a private key has been

compromised, the system (or at least the use of that private key) is no longer secure. Both

encipherment techniques are used to provide the data confidentiality service.

Modern cryptographic systems also provide mechanisms for authentication, for instance

through digital signatures that bind a document to the possessor of a specific key, or digital

timestamps which bind a document to its creation at a given time. In general the existence of an

encipherment mechanism implies the use of a key management mechanism.

Public Key Cryptography

Figure 6.1 illustrates a simple public key cryptographic system that provides data confidentiality.

When Alice wishes to send a secret message to Bob, she looks up Bob's public key in a directory,

uses it to encrypt the message, and sends it off. Bob then uses his private key to decrypt the

message and read it. No one listening in can decrypt the message. Anyone can send Bob an

encrypted message but only Bob can read it. Clearly one requirement is that no one can figure

out the private key from the corresponding public key.
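As a concrete sketch of this flow, the toy example below uses tiny textbook RSA parameters (p = 61, q = 53); these values are illustrative only and are not taken from the text, and real systems use keys of thousands of bits:

```python
# Toy illustration of public-key confidentiality (in the style of Figure 6.1).
# The numbers are tiny textbook RSA parameters chosen for readability.

p, q = 61, 53
n = p * q                  # modulus, part of both keys
e = 17                     # public exponent: Bob's public key is (e, n)
d = 2753                   # private exponent: Bob keeps (d, n) secret

def encrypt(m, e, n):
    """Alice encrypts with Bob's public key."""
    return pow(m, e, n)

def decrypt(c, d, n):
    """Only Bob, holding d, can recover the message."""
    return pow(c, d, n)

message = 42
ciphertext = encrypt(message, e, n)
assert ciphertext != message           # the message is hidden in transit
assert decrypt(ciphertext, d, n) == message
```

Anyone who knows (e, n) can produce a ciphertext, but recovering the message requires d, matching the requirement that the private key cannot be deduced from the public key.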


   

Figure 6.1: A Public Key Cryptographic System (PKCS)

Digital Signatures

Digital signature is the process of binding some information (e.g., a document) to its originator

(e.g., the signer).

The essential characteristic of a digital signature is that the signed data unit cannot be created

without using the private key. This means that

1. The signed data unit cannot be created by any individual except the holder of the private key.

2. The recipient cannot create the signed data unit.

3. The sender cannot deny sending the signed data unit.

Therefore, using only publicly available information (the public key), it is possible to identify the

signer of a data unit as the possessor of the private key. It is also possible to prove the identity of

the signer of the data unit to a reliable third party in case of later conflict.


Thus, a digital signature attests to the contents of a message, as well as to the identity of the

signer. As long as a secure hash function (a function that is easy to compute in one direction but hard to invert) is used, one cannot take away a person's signature from one document and

transpose it on another one, or alter a signed message in any way. The slightest change in a

digitally signed document will cause the digital signature verification process to fail. However, if

a signature verification fails, it is in general difficult to determine whether there was an

attempted forgery or simply a transmission error.

In short, a digital signature mechanism involves the two procedures of signing a data unit, and

verifying the signed data unit. The former process uses information which is private (i.e. unique

and confidential) to the signer. The second process uses procedures and information which are

publicly available but from which the signer's private information cannot be deduced.

   

Figure 6.2: A Digital Signature Mechanism

Figure 6.2 illustrates a digital signature mechanism. To sign a message, Alice appends the

information she wishes to send to an enciphered summary of the information. The summary is

produced by means of a one-way hash function (h), while the enciphering is carried out using

Alice's secret key (E). Thus the message sent to Bob is of the form:


X{info} = info + Xs[h(info)]

The encipherment using the secret key ensures that the signature cannot be forged. The one-way

nature of the hash function ensures that false information, generated so as to have the same hash

result (and thus signature), cannot be substituted.

In turn, upon receipt of Alice's message, Bob verifies the signature by applying the one-way

hash function to the information, and comparing the result with that obtained by deciphering the

signature using the public key of Alice. If these two are the same, it is verified that Alice is the

"true" sender of the message. It should be clear and imperative that for the authentication to be

performed correctly, both Alice and Bob must be using the same hash function.
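The sign-and-verify flow of Figure 6.2 can be sketched as follows. The RSA-style numbers are toy values, and reducing the hash modulo the tiny modulus is done purely so the example runs; neither is part of a real signature scheme:

```python
import hashlib

# Sketch of sign/verify as in Figure 6.2: Alice appends an enciphered hash,
# Bob deciphers it with her public key and compares hashes.
# Toy parameters: (e, n) is Alice's public key, d her private key.

p, q = 61, 53
n, e, d = p * q, 17, 2753

def h(info: bytes) -> int:
    """One-way hash, reduced mod n only for this toy example."""
    return int.from_bytes(hashlib.sha256(info).digest(), "big") % n

def sign(info, d, n):
    # The message sent is of the form info + Xs[h(info)]
    return info, pow(h(info), d, n)

def verify(info, sig, e, n):
    # Decipher the signature with the public key; compare with the recomputed hash
    return pow(sig, e, n) == h(info)

info, sig = sign(b"transfer $100 to Bob", d, n)
assert verify(info, sig, e, n)
# Any change to info makes the recomputed hash disagree with the deciphered
# signature (with overwhelming probability), so verification fails.
```

Note that both sides compute the same hash function h, exactly as the text requires.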

Authentication

Authentication is defined by [KAUF95] as "the process of reliably verifying the identity of

someone (or something)".

Authentication can be "One-Way" or "Two-Way." Each of these is described below.

• One-way Authentication: Involves a single transfer of information from one user (A) intended for another (B), and establishes the following:

• the identity of A and that the authentication token was generated by A;

• the identity of B and that the authentication token was intended to be sent to B;

• the integrity and originality (the property of not having been sent two or more times) of the authentication token being transferred.

• Two-way Authentication: Involves, in addition, a reply from B to A and establishes, in addition, the following:

• that the authentication token generated in the reply was actually generated by B and was intended to be sent to A;

• the integrity and originality of the authentication token sent in the reply;

• (optionally) the mutual secrecy of part of the tokens.

Corroboration of identity is often established by demonstrating the possession of a secret key.

Authentication may be accomplished by applying symmetric or asymmetric cryptographic

techniques.

When using private keys (symmetric) corroboration of identity is often based on a "shared

secret."

When using public keys (asymmetric), authentication is accomplished based on digital signatures

and digital timestamps. Since the digital signature binds the possessor of the private key with a

document and the timestamp can be verified to protect against replays, corroboration of identity

can be established by combining a digital signature and a timestamp.

Traffic Flow Confidentiality

Cryptographic protocols are designed to resist attacks and also, sometimes, traffic analysis. A

specific traffic analysis countermeasure, traffic flow confidentiality, aims to conceal the presence

or absence of data and its characteristics. This is important because knowledge of the activity can

be as useful to the bad guys as the content of the activity itself.

If ciphertext is relayed, the address must be in the clear at the relays and gateways. If the data

are enciphered only on each link, and are deciphered (and are thus made vulnerable) in the relay

or gateway, the architecture is said to use link-by-link confidentiality (or encipherment). If only

the address (and similar control data) are in the clear in the relay or gateway, the architecture is

said to use end-to-end data confidentiality (or encipherment). End-to-end encryption is more

desirable from a security point of view, but considerably more complex architecturally.

Furthermore, traffic padding can be used to provide various levels of protection against traffic

analysis. This mechanism can be effective only if the traffic is protected by a confidentiality

service.


Data Integrity

Data integrity is the property that data has not been altered or destroyed in an unauthorized

manner. It is achieved via a calculated cryptographic checkvalue. The checkvalue may be

derived in one or more steps and is a mathematical function of the cryptovariables and the data.

These checkvalues are associated with the data to be guarded. If the checkvalue is matched by

the value calculated by the data recipient, data integrity is assumed.

Two aspects of data integrity are: the integrity of a single data unit or field, and the integrity of a

stream of data units or fields. Determining the integrity of a single data unit involves two

processes, one at the sender, and the other at the receiver. The sender appends to the data unit a

quantity which is a function of the data itself. This quantity may be supplementary information

such as a block code or a cryptographic check value and may itself be enciphered. The receiver

generates a corresponding quantity and compares it with the received quantity to determine

whether the data has been modified in transit.

Protecting the integrity of a sequence of data units (against misordering, losing, replaying, and

inserting or modifying the data) requires additionally some form of explicit ordering such as

sequence numbering, time stamping, or cryptographic chaining.
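A cryptographic checkvalue of this kind can be sketched with Python's standard hmac module. HMAC-SHA256 is one possible choice of keyed check function, assumed here for illustration rather than prescribed by the text:

```python
import hmac

# A checkvalue over a data unit: a function of the shared key
# (the cryptovariable) and the data itself. The sender appends it;
# the receiver recomputes and compares.

key = b"shared-secret-key"
data = b"payment order #1234"

checkvalue = hmac.new(key, data, "sha256").digest()   # appended by the sender

def receiver_check(key, data, received_checkvalue):
    expected = hmac.new(key, data, "sha256").digest()
    # compare_digest does a constant-time comparison
    return hmac.compare_digest(expected, received_checkvalue)

assert receiver_check(key, data, checkvalue)                    # intact
assert not receiver_check(key, b"payment order #9999", checkvalue)  # modified
```

If the data is modified in transit, the recomputed checkvalue no longer matches and integrity is rejected; protecting a stream of such units would additionally need sequence numbers or timestamps, as the text notes.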

Key Management

Key management encompasses the generation, distribution, and control of cryptographic keys. It

is implied by the use of cryptographic algorithms. Important points to be considered are:

1. The use of a lifetime, based on time, use, or other criteria, for each key, defined implicitly or explicitly. The longer a key's lifetime, the greater the probability that the key will be

compromised by the bad guys.

2. The proper identification of keys according to their functions so that they are used only for

their intended function. The greater the key's exposure (to multiple applications) the greater the

probability that the key will be compromised.


3. Physical distribution and archiving of keys. This is both a logistics and security issue,

especially in distributed systems such as WANs.

Points to be considered concerning key management for symmetric key algorithms include:

1. The use of a confidentiality service in the key management protocol.

2. The use of a key hierarchy ("flat" hierarchies using only data-enciphering keys, multilayer key

hierarchies, etc.)

3. The division of responsibilities so that no one person has a complete copy of an important key.

For asymmetric key management, confidentiality services are used to convey the secret keys.

Additionally an integrity service (or a service with proof of origin) is needed to convey the

public keys.

Access Control

Access control mechanisms are used to enforce a policy of limiting access to a resource to only

those users who are authorized. These techniques include the use of access control lists or

matrices, passwords, capabilities, and labels, the possession of which may be used to indicate

access rights.

Network Layer Security Considerations

Network Layer Security Protocol (NLSP)

NLSP is an international standard that specifies a protocol to be used by end systems and

intermediate systems in order to provide security services in the network layer. It is defined by

ISO 11577. Much of the material appearing here is from the American National Standards

Institute (ANSI) which is the official U.S. representative to ISO.

NLSP specifies a series of services and functional requirements for implementation. The

services, as defined in ISO 7498-2 are:


• peer entity authentication.

• data origin authentication.

• access control.

• connection confidentiality.

• connectionless confidentiality.

• traffic flow confidentiality.

• connection integrity without recovery (including data unit integrity, in which individual SDUs on a connection are integrity protected).

• connectionless integrity.

The procedures of this protocol are defined in terms of:

• requirements on the cryptographic techniques that can be used in an instance of this protocol.

• requirements on the information carried in the security association used in an instance of communication.

Although the degree of protection afforded by some security mechanisms depends on the use of

some specific cryptographic techniques, correct operation of this protocol is not dependent on the

choice of any particular encipherment or decipherment algorithm, which is left as a local matter for

the communicating systems.

Furthermore, neither the choice nor the implementation of a specific security policy is within

the scope of this international standard. The choice of a specific security policy, and hence the

degree of protection that will be achieved, is left as a local matter among the systems that are

using a single instance of secure communications. NLSP does not require that multiple instances

of secure communications involving a single open system must use the same security protocol.


NLSP supports cryptographic protection either between End Systems (and in this case resembles

the Transport Layer Security Protocol - TLSP) or between Intermediate Systems that are located

at the borders of security domains. This latter aspect makes NLSP quite appealing to those who

would like to provide security services not by securing each and every system in a domain but by

forcing all external communications to transit through a small set of secure systems (assuming

that communications within the domain need no security services). In this sense, one can see

NLSP as supporting (at the domain level) administrative policies (mandatory security) while

TLSP is more tuned towards discretionary communication policies.

The OSI security architecture

The security architecture for OSI defines such a systematic approach. The OSI security architecture is useful to managers as a way of organizing the task of providing security. It was developed as an international standard.

The OSI security architecture focuses on security attacks, mechanisms, and services. These can be defined briefly as follows:

Security Attack: Any action that compromises the security of information owned by an organization.

Security Mechanism: A process that is designed to detect, prevent, or recover from a security attack; a method used to protect a message from an unauthorized entity.

Security Services: Services that implement security policies and are themselves implemented by security mechanisms.

A model for network security

Introduction to the Network Security Model (NSM)

The Open Systems Interconnection (OSI) model, developed in 1983 by the International Organization for Standardization (ISO), has been used as a framework to teach networking basics and troubleshoot networking issues for the last 25 years. It has been so influential in network development and architecture that even most of the network communication protocols in use today have a structure that is based on it. But while the OSI model has served us well, we are lacking a standard that all network security professionals can adhere to: a Network Security Model (NSM). Today's sophisticated and complex networks create the fundamental need for the NSM.

The proposed Network Security Model (NSM) is a seven layer model that divides the daunting

task of securing a network infrastructure into seven manageable sections. The model is generic

and can apply to all security implementations and devices. The development of the NSM is

important because unity is needed in securing networks, just as unity was needed in the

architecture of networks with the development of the OSI model. When an attack on a network

has succeeded it is much easier to locate the underlying issue and fix it with the use of the NSM.

The NSM will provide a way to teach and implement basic network security measures and

devices as well as locate underlying issues that may have allowed an attack to succeed.

Traditionally we work from the bottom up to determine which layer has failed on the OSI

model, but on the NSM we will work from the top down to determine which layer has failed. See

the NSM (Figure 1.1). Once the layer of failure is found, we can determine that all of the layers

above this layer have also failed. A network security professional will be able to quickly

determine if other possible hosts have been compromised with the breach of the layer and how to

secure it against the same attack in the future.

Throughout the paper we will be working from the top down describing what each layer is and

how the layers of the NSM work together to accomplish complete network security.

Figure 1.1 – The Network Security Model (seven layers, bottom to top: Physical, VLAN, ACL, Software, User, Administrative, IT Department)


1.2 Why do we need a Network Security Model?

A well structured NSM will give the security community a way to study, implement, and maintain network security that can be applied to any network. In study, it can be used as a tool to break network security down into seven simple layers with a logical process.

Traditional books have always presented network security in an unorganized fashion, where some books cover issues that other books may completely neglect. In implementation, it can be used by network architects to ensure that they are not missing any important security details while designing a network. In maintaining existing networks it can be used to develop maintenance schedules and lifecycles for the security of the existing network. It can also be used to detect where breaches have occurred so that an attack can be mitigated.

The NSM is beneficial to all types of professionals. Let us not forget professionals who are

transitioning into positions previously held by other network security professionals. Currently,

learning what security techniques are implemented on a network and which ones have not can be

a daunting task when the basic security structure of the network is unclear. The NSM provides

that basic structure. It provides the new professional with the knowledge to discover what

has been implemented and what has not been implemented from a security standpoint. Without

an NSM, the network security community faces potential chaos as professionals continue to

implement their own versions of secure networks without adequate structure.

symmetric Cipher model

The symmetric cipher model (Figure 2.1) consists of five ingredients:

Plaintext

Encryption algorithm which performs substitutions/transformations on plaintext

Secret key which controls exact substitutions/transformations used in encryption

algorithm

Ciphertext

Decryption algorithm which is the inverse of encryption algorithm

 


Symmetric cipher mathematical model (figure 2.2) can be described as follows:

X : plaintext, X = [X1, X2, ..., XM]

K : key, K = [K1, K2, ..., Kj]

Y : ciphertext, Y = [Y1, Y2, ..., YN]

Y = EK(X)    EK : encryption algorithm

X = DK(Y)    DK : decryption algorithm

 

A source produces a message in plaintext X = [X1, X2, ..., XM]. The M elements of X are letters in some finite alphabet; in classical encryption schemes the alphabet consists of 26 capital letters, while in modern encryption schemes the binary alphabet {0,1} is used.


A key of the form K = [K1, K2, ..., Kj] is generated to be used for encryption. If the key is generated at the message source, then it must be provided to the destination by means of some secure channel. But if the key is generated by a trusted third party, the key must be securely delivered to both source and destination.

The encryption algorithm E takes the plaintext X and the key K as input and transforms X into ciphertext Y = [Y1, Y2, ..., YN], so we can write this as Y = EK(X). This means that Y is produced using encryption algorithm E as a function of the plaintext X and the key K.

At the destination end the ciphertext Y is transformed back into the original plaintext X using the decryption algorithm D and the shared key K; we can write this as X = DK(Y).

An opponent (cryptanalyst) can obtain Y but cannot access K or X; he may try to recover X or K. It is assumed that the opponent knows the encryption and decryption algorithms.

We generally assume that the algorithm is known; this allows easy distribution of software and hardware implementations. Hence keeping the key secret is assumed to be sufficient to secure encrypted messages. We thus have plaintext X, ciphertext Y, key K, encryption algorithm EK, and decryption algorithm DK.
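A minimal sketch of this model, using a toy XOR transformation over the binary alphabet. XOR with a short repeating key is not a secure cipher; it merely makes Y = EK(X) and X = DK(Y) concrete:

```python
# Toy symmetric-cipher model: E_K and D_K are the same XOR transformation,
# controlled by a shared secret key. For illustration only.

def E(K, X):
    """Y = E_K(X): XOR each plaintext byte with the repeating key."""
    return bytes(x ^ K[i % len(K)] for i, x in enumerate(X))

def D(K, Y):
    """X = D_K(Y): XOR is self-inverse, so decryption reuses E."""
    return E(K, Y)

K = b"\x5a\xc3\x1f"          # shared secret key [K1, ..., Kj]
X = b"ATTACK AT DAWN"        # plaintext [X1, ..., XM]
Y = E(K, X)                  # ciphertext [Y1, ..., YN]

assert Y != X                # the semantic content is hidden
assert D(K, Y) == X          # the destination recovers the plaintext
```

An opponent who intercepts Y but lacks K sees only the transformed bytes, illustrating why the secrecy of the key alone is assumed to protect the message.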

Substitution techniques

In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's

code or Caesar shift, is one of the simplest and most widely known encryption techniques. It is

a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed

number of positions down the alphabet. For example, with a shift of 3, A would be replaced by

D, B would become E, and so on. The method is named after Julius Caesar, who used it to

communicate with his generals.

The encryption step performed by a Caesar cipher is often incorporated as part of more complex

schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As

with all single alphabet substitution ciphers, the Caesar cipher is easily broken and in practice

offers essentially no communication security.
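A shift of 3, as in the example above (A becomes D, B becomes E, and so on), can be sketched as:

```python
# Caesar cipher over the 26 uppercase letters with a fixed shift.
# Decryption is simply the negative shift.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isupper():
            # shift within A..Z, wrapping around the alphabet
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)          # leave non-letters unchanged
    return ''.join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)
assert ciphertext == "DWWDFN DW GDZQ"
assert caesar(ciphertext, -3) == "ATTACK AT DAWN"   # shift back to decrypt
```

With only 25 non-trivial shifts, every key can be tried by hand, which is why the text says the cipher offers essentially no security.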


S-DES Key Generation

S-DES depends on the use of a 10-bit key shared between sender and receiver. From this key,

two 8-bit subkeys are produced for use in particular stages of the encryption and decryption

algorithm. Figure C.2 depicts the stages followed to produce the subkeys.

First, permute the key in the following fashion. Let the 10-bit key be designated as (k1, k2,

k3, k4, k5, k6, k7, k8, k9, k10). Then the permutation P10 is defined as:

P10(k1, k2, k3, k4, k5, k6, k7, k8, k9, k10) = (k3, k5, k2, k7, k4, k10, k1, k9, k8, k6)

P10 can be concisely defined by the display:

P10

3 5 2 7 4 10 1 9 8 6

This table is read from left to right; each position in the table gives the identity of the input

bit that produces the output bit in that position. So the first output bit is bit 3 of the input; the

second output bit is bit 5 of the input, and so on. For example, the key (1010000010) is permuted

to (1000001100). Next, perform a circular left shift (LS-1), or rotation, separately on the first

five bits and the second five bits. In our example, the result is (00001 11000).

Next we apply P8, which picks out and permutes 8 of the 10 bits according to the following

rule:

P8

6 3 7 4 8 5 10 9

The result is subkey 1 (K1). In our example, this yields (10100100).

We then go back to the pair of 5-bit strings produced by the two LS-1 functions and

perform a circular left shift of 2 bit positions on each string. In our example, the value (00001

11000) becomes (00100 00011). Finally, P8 is applied again to produce K2. In our example, the

result is (01000011).
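The subkey derivation just described can be expressed compactly. The tables are exactly the P10 and P8 displays above, and the worked example for the key (1010000010) is reproduced:

```python
# S-DES subkey generation: P10, LS-1 on each 5-bit half, P8 to get K1,
# then LS-2 on each half and P8 again to get K2.
# Bits are lists of 0/1; tables use the 1-based positions from the text.

P10 = [3, 5, 2, 7, 4, 10, 1, 9, 8, 6]
P8  = [6, 3, 7, 4, 8, 5, 10, 9]

def permute(bits, table):
    return [bits[i - 1] for i in table]

def left_shift(half, n):
    """Circular left shift of a 5-bit half by n positions."""
    return half[n:] + half[:n]

def sdes_subkeys(key10):
    k = permute(key10, P10)
    l1, r1 = left_shift(k[:5], 1), left_shift(k[5:], 1)   # LS-1 on each half
    K1 = permute(l1 + r1, P8)
    l2, r2 = left_shift(l1, 2), left_shift(r1, 2)         # LS-2 on each half
    K2 = permute(l2 + r2, P8)
    return K1, K2

# Worked example from the text: key (1010000010)
K1, K2 = sdes_subkeys([1,0,1,0,0,0,0,0,1,0])
assert K1 == [1,0,1,0,0,1,0,0]   # (10100100)
assert K2 == [0,1,0,0,0,0,1,1]   # (01000011)
```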

C.3 S-DES Encryption


Figure C.3 shows the S-DES encryption algorithm in greater detail. As was mentioned,

encryption involves the sequential application of five functions. We examine each of these.

Initial and Final Permutations

The input to the algorithm is an 8-bit block of plaintext, which we first permute using the IP

function:

IP

2 6 3 1 4 8 5 7

This retains all 8 bits of the plaintext but mixes them up. At the end of the algorithm, the inverse

permutation is used:

IP–1

4 1 3 5 7 2 8 6

It is easy to show by example that the second permutation is indeed the reverse of the first;

that is, IP–1(IP(X)) = X.

The Function fK

The most complex component of S-DES is the function fK, which consists of a combination of

permutation and substitution functions. The functions can be expressed as follows. Let L and R

be the leftmost 4 bits and rightmost 4 bits of the 8-bit input to fK, and let F be a mapping (not

necessarily one to one) from 4-bit strings to 4-bit strings. Then we let

fK(L, R) = (L ⊕ F(R, SK), R)

where SK is a subkey and ⊕ is the bit-by-bit exclusive-OR function. For example, suppose the output of the IP stage in Figure C.3 is (10111101) and F(1101, SK) = (1110) for some key SK. Then fK(10111101) = (01011101) because (1011) ⊕ (1110) = (0101).

We now describe the mapping F. The input is a 4-bit number (n1n2n3n4). The first operation

is an expansion/permutation operation:

E/P

4 1 2 3 2 3 4 1

For what follows, it is clearer to depict the result in this fashion:

n4 n1 n2 n3

n2 n3 n4 n1


The 8-bit subkey K1 = (k11, k12, k13, k14, k15, k16, k17, k18) is added to this value using exclusive-OR:

n4⊕k11  n1⊕k12  n2⊕k13  n3⊕k14
n2⊕k15  n3⊕k16  n4⊕k17  n1⊕k18

Let us rename these 8 bits:

p0,0 p0,1 p0,2 p0,3

p1,0 p1,1 p1,2 p1,3

The first 4 bits (first row of the preceding matrix) are fed into the S-box S0 to produce a 2-

bit output, and the remaining 4 bits (second row) are fed into S1 to produce another 2-bit output.

The S-boxes operate as follows. The first and fourth input bits are treated as a 2-bit number that specifies a row of the S-box, and the second and third input bits specify a column of the S-box.

The entry in that row and column, in base 2, is the 2-bit output. For example, if (p0,0p0,3) =

(00) and (p0,1p0,2) = (10), then the output is from row 0, column 2 of S0, which is 3, or (11) in

binary. Similarly, (p1,0p1,3) and (p1,1p1,2) are used to index into a row and column of S1 to

produce an additional 2 bits.

Next, the 4 bits produced by S0 and S1 undergo a further permutation as follows:

P4

2 4 3 1

The output of P4 is the output of the function F.

The Switch Function

The function fK only alters the leftmost 4 bits of the input. The switch function (SW)

interchanges the left and right 4 bits so that the second instance of fK operates on a different 4

bits. In this second instance, the E/P, S0, S1, and P4 functions are the same. The key input is K2.
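The pieces of fK described above (E/P, subkey XOR, S-box lookup, P4, and the switch function) can be sketched as follows. Note that the S0 and S1 tables themselves are not printed in the text above, so the standard S-DES tables (as given in Stallings) are assumed here:

```python
# Sketch of the S-DES round function fK and the switch function SW.
# The E/P and P4 tables and the S-box indexing rule follow the text;
# the S0/S1 table contents are assumed (standard S-DES values).

EP = [4, 1, 2, 3, 2, 3, 4, 1]
P4 = [2, 4, 3, 1]
S0 = [[1,0,3,2],[3,2,1,0],[0,2,1,3],[3,1,3,2]]
S1 = [[0,1,2,3],[2,0,1,3],[3,0,1,0],[2,1,0,3]]

def permute(bits, table):
    return [bits[i - 1] for i in table]

def F(n, sk):
    """Map a 4-bit value n to a 4-bit value under an 8-bit subkey sk."""
    t = [b ^ k for b, k in zip(permute(n, EP), sk)]    # E/P, then XOR subkey
    row0, col0 = 2*t[0] + t[3], 2*t[1] + t[2]          # first/fourth bits: row
    row1, col1 = 2*t[4] + t[7], 2*t[5] + t[6]          # second/third: column
    s = S0[row0][col0], S1[row1][col1]                 # each entry is 2 bits
    out = [s[0] >> 1, s[0] & 1, s[1] >> 1, s[1] & 1]
    return permute(out, P4)

def fK(bits8, sk):
    """fK(L, R) = (L XOR F(R, SK), R): only the left half changes."""
    L, R = bits8[:4], bits8[4:]
    return [l ^ f for l, f in zip(L, F(R, sk))] + R

def SW(bits8):
    """The switch function interchanges the left and right 4 bits."""
    return bits8[4:] + bits8[:4]

# Indexing example from the text: row 0, column 2 of S0 is 3 (binary 11)
assert S0[0][2] == 3
assert SW([1,0,1,1, 1,1,0,1]) == [1,1,0,1, 1,0,1,1]
```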

C.4 Analysis of Simplified DES

A brute-force attack on simplified DES is certainly feasible. With a 10-bit key, there are only

2^10 = 1024 possibilities. Given a ciphertext, an attacker can try each possibility and analyze the

result to determine if it is reasonable plaintext.

What about cryptanalysis? Let us consider a known plaintext attack in which a single


plaintext (p1, p2, p3, p4, p5, p6, p7, p8) and its ciphertext output (c1, c2, c3, c4, c5, c6, c7, c8)

are

known and the key (k1, k2, k3, k4, k5, k6, k7, k8, k9, k10) is unknown. Then each ci is a

polynomial

function gi of the pj's and kj's. We can therefore express the encryption algorithm as 8 nonlinear

equations in 10 unknowns. There are a number of possible solutions, but each of these could be

calculated and then analyzed. Each of the permutations and additions in the algorithm is a linear

mapping. The nonlinearity comes from the S-boxes. It is useful to write down the equations for

these boxes. For clarity, rename (p0,0, p0,1, p0,2, p0,3) = (a, b, c, d) and (p1,0, p1,1, p1,2, p1,3) = (w, x, y, z), and let the 4-bit output be (q, r, s, t). Then the operation of S0 is defined by the following equations:

q = abcd + ab + ac + b + d
r = abcd + abd + ab + ac + ad + a + c + 1

where all additions are modulo 2. Similar equations define S1. Alternating linear maps with these

nonlinear maps results in very complex polynomial expressions for the ciphertext bits, making

cryptanalysis difficult. To visualize the scale of the problem, note that a polynomial equation in

10 unknowns in binary arithmetic can have 2^10 possible terms. On average, we might therefore expect each of the 8 equations to have 2^9 terms. The interested reader might try to find these

equations with a symbolic processor. Either the reader or the software will give up before much

progress is made.
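The equations for S0 can be checked exhaustively against the table form of the S-box; again the standard S-DES S0 table is assumed, since it is not reproduced in the text above:

```python
# Verify that the polynomial equations for q and r reproduce every entry
# of the S0 table. All arithmetic is modulo 2, as stated in the text.
# The S0 table contents are assumed (standard S-DES values).

S0 = [[1,0,3,2],[3,2,1,0],[0,2,1,3],[3,1,3,2]]

ok = True
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            for d in (0, 1):
                q = (a*b*c*d + a*b + a*c + b + d) % 2
                r = (a*b*c*d + a*b*d + a*b + a*c + a*d + a + c + 1) % 2
                row, col = 2*a + d, 2*b + c        # (a,d) row, (b,c) column
                entry = S0[row][col]               # 2-bit table entry
                ok &= (q, r) == (entry >> 1, entry & 1)

assert ok   # the equations match all 16 entries of S0
```

Enumerating all 16 inputs confirms that the algebraic and tabular descriptions of S0 agree.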

C.5 Relationship to DES

DES operates on 64-bit blocks of input. The encryption scheme can be defined as:

IP–1 ∘ fK16 ∘ SW ∘ fK15 ∘ SW ∘ ... ∘ SW ∘ fK1 ∘ IP


A 56-bit key is used, from which sixteen 48-bit subkeys are calculated. There is an initial permutation of 56 bits followed by a sequence of shifts and permutations of 48 bits.

Within the encryption algorithm, instead of F acting on 4 bits (n1n2n3n4), it acts on 32 bits

(n1…n32). After the initial expansion/permutation, the output of 48 bits can be diagrammed as:

n32 n1 n2 n3 n4 n5
n4 n5 n6 n7 n8 n9
...
n28 n29 n30 n31 n32 n1

This matrix is added (exclusive-OR) to a 48-bit subkey. There are 8 rows, corresponding to

8 S-boxes. Each S-box has 4 rows and 16 columns. The first and last bits of a row of the

preceding matrix pick out a row of the corresponding S-box, and the middle 4 bits pick out a

column.
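The row/column selection rule can be sketched in a few lines; the 6-bit input string below is just an illustrative value:

```python
# Selecting an S-box entry in DES from one 6-bit group: the outer two
# bits give the row, the middle four bits give the column.
def sbox_row_col(six_bits: str):
    row = int(six_bits[0] + six_bits[5], 2)   # first and last bit -> row 0..3
    col = int(six_bits[1:5], 2)               # middle 4 bits -> column 0..15
    return row, col

print(sbox_row_col("011011"))  # -> (1, 13): row 0b01, column 0b1101
```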

Block Cipher Principle

I. SYMMETRIC ENCRYPTION PRINCIPLES

This lecture discusses the principles of all known contemporary symmetric key cryptosystems.

All these

systems have evolved from early classical ciphers discussed in the previous lectures. As we have

seen,

these classical ciphers may operate in the following two ways.

• Stream cipher, such as the Vigenère cipher, encrypts one letter at a time.

• Block cipher, such as the Hill cipher, treats an n-letter block of plaintext as a whole and

produces a ciphertext block of equal length.

A. Block Cipher Principles


As block ciphers have different modes of operation (we will discuss this topic later in this

lecture) and apply to a broader range of applications than stream ciphers, we will focus on their

design principles in this lecture.

A block cipher transforms a plaintext block of n letters into an encrypted block. For the alphabet

with 26 letters, there are 26^n possible different plaintext blocks. The most general way of

encrypting an n-letter block is to take each of the plaintext blocks and map it to a cipher block

(an arbitrary n-letter substitution cipher). For decryption to be possible, such a mapping needs

to be one-to-one (i.e., each plaintext block must be mapped to a unique ciphertext block). The

number of different one-to-one mappings among n-letter blocks is (26^n)!.

The block length n cannot be too short if the cryptographic scheme is to be secure. For example,

n = 1 gives a monoalphabetic cipher. Such schemes, as we have seen, are vulnerable to frequency

analysis and brute-force attacks. However, an arbitrary reversible substitution cipher for a

large block size n is not practical. Let's consider the problem of specifying a mapping of all

possible n-letter blocks. In a cipher, each key specifies such a mapping. Let's assume the key

consists of a block of k letters. Then the number of all possible keys is 26^k. For an n-letter

arbitrary substitution block cipher, the key size therefore needs to satisfy 26^k ≥ (26^n)!,

i.e., k ≥ log_26((26^n)!) ≈ n × 26^n.

So the major challenge in designing a symmetric key cryptographic scheme is to provide enough

security (e.g., using a reasonably large block size) with a reasonably small key (note 1).

1. It is fairly obvious that the key length cannot be too short either. Otherwise the

cryptographic scheme would also be vulnerable to a brute-force attack, where the attacker may

search through all possible keys.
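The bound k ≈ n × 26^n can be checked numerically even for small n. A sketch for n = 2, using the log-gamma function to compute log_26((26^n)!) without evaluating the enormous factorial (the specific numbers are illustrative):

```python
import math

# Key length needed for an arbitrary 2-letter substitution block cipher:
# the key must select one of (26^2)! possible mappings, so it needs at
# least log_26((26^2)!) letters. lgamma(N + 1) = ln(N!) keeps the
# computation in floating point instead of evaluating 676!.
n = 2
blocks = 26 ** n                                     # 676 possible 2-letter blocks
key_letters = math.lgamma(blocks + 1) / math.log(26) # exact bound, about 1146
estimate = n * blocks                                # the n * 26^n estimate: 1352

print(f"log_26((26^{n})!) ≈ {key_letters:.0f} key letters")
print(f"n * 26^n estimate  = {estimate} key letters")
```

Even for two-letter blocks the key would need over a thousand letters, which is why arbitrary substitution ciphers are impractical and structured designs like Feistel networks are used instead.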

But how do we know that a cryptographic system is secure enough? To answer this question, Claude

Shannon theoretically deduced the following principles that should be followed to design secure

cryptographic systems. These principles aim at thwarting cryptanalysis based on known statistical

properties of the plaintext.

• Confusion. In Shannon’s original definitions, confusion makes the relation between the key and

the ciphertext as complex as possible. Ideally, every letter in the key influences every letter of

the ciphertext block. Replacing every letter with the one next to it on the typewriter keyboard is a

simple example of confusion by substitution. However, good confusion can only be achieved

when each character of the ciphertext depends on several parts of the key, and this dependence


appears to be random to the observer. Ciphers that do not offer much confusion (such as the

Vigenère cipher) are vulnerable to frequency analysis.

• Diffusion. Diffusion refers to the property that the statistical structure of the plaintext is

dissipated into long-range statistics of the ciphertext. In contrast to confusion, diffusion

spreads the influence of a single plaintext letter over many ciphertext letters. In terms of the

frequency statistics of letters, digrams, etc., in the plaintext, diffusion spreads them across

many characters in the ciphertext. This means that much more ciphertext is needed to mount a

meaningful statistical attack on the cipher.

B. The Feistel Network

Product ciphers apply the two classical encryption operations, substitution and transposition,

alternately in multiple rounds to achieve both confusion and diffusion. Shannon was the first to

investigate the product cryptosystem (the so-called substitution-permutation network) and to show

that some sophisticated heuristic ciphers were nothing other than products of simpler ciphers.

Most importantly, Shannon identified the conditions under which the strength of a cipher

increases as a result of cascading simple ciphers.

One possible way to build a secret key algorithm using substitution-permutation-network is to

break the input into manageable-sized chunks, do a substitution on each small chunk, and then

take the outputs of all the substitutions and run them through a permuter that is as big as the

input, which shuffles the letters around. Then the process is repeated, so that each letter winds up

as an input to each of the substitutions.

Since modern cryptosystems are all computer-based, from now on we will assume that both plaintext

and ciphertext are strings of bits ({0, 1}) instead of strings of letters ({a, b, c, ..., z}).

The Feistel network shown in Fig. 1 is a particular form of the substitution-permutation network.

The input to a Feistel network is a plaintext block of n bits, and a key K. The plaintext block is

divided into two halves, L0 and R0. The two halves of the data pass through r rounds of

processing and then combine to produce the ciphertext block. Each round i has as input Li−1 and

Ri−1, derived from the previous round, as well as a subkey Ki, derived from the overall key K.

In general, the subkeys Ki are different from K and from each other. In this structure, a

substitution is performed via the round function F, and permutation is performed that

interchanges the two halves of the data.


The exact realization of a Feistel network depends on the choices of the following parameters

and design features.

• Block size: Larger block size means greater security, but reduces encryption/decryption speed.

• Key size: Larger key size means greater security but may decrease encryption/decryption

speed.

• Number of rounds: Multiple rounds offer increasing security.

• Subkey generation algorithm: Greater complexity in subkey generation leads to greater

security.

• Round function: Greater complexity in the round function means greater difficulty of

cryptanalysis.

It is worth noting that decryption with a Feistel network is essentially the same process as

encryption: the ciphertext is used as input to the network, but the subkeys Ki are applied in

reverse order, as shown in Fig. 2. The reason is explained as follows. Consider the last step in

encryption, which gives

LE16 = RE15 (1)

RE16 = LE15 ⊕ F(RE15, K16) (2)

On the decryption side,

LD1 = RD0 = LE16 = RE15 (3)

RD1 = LD0 ⊕ F(RD0, K16) (4)

= RE16 ⊕ F(RE15, K16) (5)

= [LE15 ⊕ F(RE15, K16)] ⊕ F(RE15, K16) (6)

= LE15 (7)

The same argument can be applied round by round. Finally, we see that the output of the

decryption is the same as the input to the encryption (i.e., the original plaintext).
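The reverse-subkey property is easy to demonstrate on a toy Feistel network. Everything below (the 16-bit block size, the round function F, and the subkey values) is an illustrative stand-in, not DES:

```python
# A toy 4-round Feistel network on 16-bit blocks. Decryption is the same
# routine run with the subkeys in reverse order; any round function F
# works, because the XOR cancellation shown in Eqs. (4)-(7) never needs
# F to be invertible.
MASK = 0xFF                     # each half of the block is 8 bits

def F(half, subkey):
    # arbitrary deterministic mixing, purely for the demonstration
    return ((half * 7 + subkey) ^ (half >> 3)) & MASK

def feistel(block, subkeys):
    left, right = (block >> 8) & MASK, block & MASK
    for k in subkeys:
        left, right = right, left ^ F(right, k)   # one Feistel round
    return (right << 8) | left                    # final swap of the halves

subkeys = [0x3A, 0x91, 0x5C, 0xE7]
plaintext = 0xBEEF
ciphertext = feistel(plaintext, subkeys)
recovered = feistel(ciphertext, list(reversed(subkeys)))
assert recovered == plaintext                     # decryption round-trips
print(hex(ciphertext), "->", hex(recovered))
```

The final swap of the two halves is what lets the very same routine serve for both directions; only the subkey order changes.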

Modes of operation

Electronic codebook (ECB)

The simplest of the encryption modes is the electronic codebook (ECB) mode, in which the

message is split into blocks and each is encrypted separately. The disadvantage of this method is

that identical plaintext blocks are encrypted to identical ciphertext blocks; it does not hide data


patterns. Thus, in some senses it doesn't provide message confidentiality at all, and is not

recommended for cryptographic protocols.

Here's a striking example of the degree to which ECB can reveal patterns in the plaintext. A

pixel-map version of the image on the left was encrypted with ECB mode to create the center

image:


[Figure: three images side by side – the original, the image encrypted using ECB mode (the

outline is still clearly visible), and the image encrypted with a secure mode.]

The image on the right is how the image might look encrypted with CBC, CTR or any of the

other more secure modes -- indistinguishable from random noise. Note that the random

appearance of the image on the right tells us very little about whether the image has been

securely encrypted; many kinds of insecure encryption have been developed which would

produce output just as random-looking.

ECB mode can also make protocols without integrity protection even more susceptible to replay

attacks, since each block gets decrypted in exactly the same way. For example, the Phantasy Star

Online: Blue Burst online video game uses Blowfish in ECB mode. Before the key exchange

system was cracked leading to even easier methods, cheaters repeated encrypted "monster killed"

message packets, each an encrypted Blowfish block, to illegitimately gain experience points

quickly.

Cipher-block chaining (CBC)

In the cipher-block chaining (CBC) mode, each block of plaintext is XORed with the previous

ciphertext block before being encrypted. This way, each ciphertext block is dependent on all

plaintext blocks up to that point.
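The difference between ECB and CBC shows up immediately on a message with repeated blocks. The 8-bit affine "block cipher" below is hopelessly weak and purely illustrative; only the chaining structure matters here:

```python
# ECB vs CBC on a message with repeated blocks, using a toy invertible
# 8-bit "block cipher" (an affine map -- far too weak for real use).
KEY = 0x5B

def enc_block(b):
    return (5 * b + KEY) % 256        # toy invertible block cipher

def ecb_encrypt(blocks):
    return [enc_block(b) for b in blocks]

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for b in blocks:
        prev = enc_block(b ^ prev)    # chain: XOR with previous ciphertext
        out.append(prev)
    return out

msg = [0x41, 0x42, 0x41, 0x42]        # repeated plaintext blocks
ecb = ecb_encrypt(msg)
cbc = cbc_encrypt(msg, iv=0x7F)
print("ECB:", ecb)                    # blocks 0 and 2 come out identical
print("CBC:", cbc)                    # chaining breaks the repetition
assert ecb[0] == ecb[2] and ecb[1] == ecb[3]
assert cbc[0] != cbc[2]
```

Under ECB the repeated plaintext blocks produce repeated ciphertext blocks, which is exactly the pattern leakage in the image example above; under CBC every ciphertext block depends on everything before it.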


Cipher feedback (CFB) and output feedback (OFB)

The cipher feedback (CFB) and output feedback (OFB) modes make the block cipher into a

stream cipher: they generate keystream blocks, which are then XORed with the plaintext blocks

to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a

flipped bit in the plaintext at the same location.

With cipher feedback a keystream block is computed by encrypting the previous ciphertext

block.

Output feedback generates the next keystream block by encrypting the last one.
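The two feedback rules can be sketched side by side. As before, the 8-bit affine "block cipher" is a purely illustrative stand-in; note that only the cipher's encryption direction is ever used, which is why these modes turn a block cipher into a stream cipher:

```python
# CFB vs OFB keystream generation: CFB feeds the previous *ciphertext*
# block back into the cipher, OFB re-encrypts its own previous
# *keystream* block. Toy cipher and values are illustrative only.
KEY = 0x5B

def enc_block(b):
    return (5 * b + KEY) % 256        # toy block cipher

def cfb_encrypt(blocks, iv):
    out, feedback = [], iv
    for p in blocks:
        c = p ^ enc_block(feedback)   # keystream = E(previous ciphertext)
        feedback = c                  # ciphertext feeds back
        out.append(c)
    return out

def ofb_encrypt(blocks, iv):
    out, stream = [], iv
    for p in blocks:
        stream = enc_block(stream)    # keystream = E(previous keystream)
        out.append(p ^ stream)        # plaintext never enters the cipher
    return out

msg = [0x48, 0x69, 0x21]              # arbitrary plaintext bytes
print("CFB:", cfb_encrypt(msg, iv=0x24))
print("OFB:", ofb_encrypt(msg, iv=0x24))
```

Since OFB's keystream is independent of the data, running `ofb_encrypt` on the ciphertext with the same IV decrypts it; CFB decryption likewise regenerates the keystream from the IV and the received ciphertext blocks.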

Counter (CTR)


Like OFB, counter mode turns a block cipher into a stream cipher. It generates the next

keystream block by encrypting successive values of a "counter". The counter can be any simple

function which produces a sequence which is guaranteed not to repeat for a long time, although

an actual counter is the simplest and most popular. CTR mode has very similar characteristics to

OFB, but also allows a random access property for decryption.
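The random-access property follows directly from the structure: keystream block i is E(counter_i), with no dependence on other blocks. A sketch with the same toy cipher (nonce and values illustrative):

```python
# CTR mode: keystream block i = E(nonce + i), so any block can be
# encrypted or decrypted independently of the others. Toy cipher.
KEY = 0x5B

def enc_block(b):
    return (5 * b + KEY) % 256

def ctr_block(p_or_c, nonce, i):
    # encryption and decryption are the same XOR with E(counter)
    return p_or_c ^ enc_block((nonce + i) % 256)

msg = [0x10, 0x20, 0x30, 0x40]
ct = [ctr_block(p, nonce=0x90, i=i) for i, p in enumerate(msg)]

# decrypt only block 2, without touching blocks 0, 1, or 3
block2 = ctr_block(ct[2], nonce=0x90, i=2)
assert block2 == msg[2]
print("block 2 recovered independently:", hex(block2))
```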

Integrity protection and error propagation

The block cipher modes of operation presented above provide no integrity protection. This

means that an attacker who does not know the key may still be able to modify the data stream in

ways useful to them. It is now generally well understood that wherever data is encrypted, it is

nearly always essential to provide integrity protection for security. For secure operation, the IV


and ciphertext generated by these modes should be authenticated with a secure MAC, which is

checked before decryption.

Before these issues were well understood, it was common to discuss the "error propagation"

properties of a mode of operation as a means of evaluating it. It would be observed, for example,

that a one-block error in the transmitted ciphertext would result in a one-block error in the

reconstructed plaintext for ECB mode encryption, while in CBC mode such an error would affect

two blocks.

Some felt that such resilience was desirable in the face of random errors, while others argued that

it increased the scope for attackers to modify the message to their own ends.

However, when proper integrity protection is used such an error will result (with high

probability) in the entire message being rejected - if resistance to random error is desirable, error

correcting codes should be applied after encryption.

AEAD block cipher modes of operation such as IACBC, IAPM, OCB, EAX, and CWC mode

directly provide both encryption and authentication.

Initialization vector (IV)


All modes (except ECB) require an initialization vector, or IV – a sort of dummy block used to

kick off the process for the first real block and to provide some randomisation for the process.

There is no need for the IV to be secret, but it is important that it is never reused with the same

key. For CBC and CFB, reusing an IV leaks some information. For OFB and CTR, reusing an IV

completely destroys security. In addition, the IV used in CFB mode must be randomly generated

and kept secret until the first block of plaintext is made available for encryption.
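Why IV reuse completely destroys security for CTR (and OFB) can be shown in a few lines: two messages encrypted under the same key and counter share the same keystream, so XORing the two ciphertexts cancels the keystream and exposes the XOR of the plaintexts. Toy cipher and values are illustrative:

```python
# Two CTR encryptions with the same key and nonce: the keystreams are
# identical, so c1 XOR c2 == p1 XOR p2 -- no key needed by the attacker.
KEY = 0x5B

def enc_block(b):
    return (5 * b + KEY) % 256

def ctr_encrypt(blocks, nonce):
    return [p ^ enc_block((nonce + i) % 256) for i, p in enumerate(blocks)]

p1 = [0x11, 0x22, 0x33]
p2 = [0xAA, 0xBB, 0xCC]
c1 = ctr_encrypt(p1, nonce=0x42)      # same nonce reused ...
c2 = ctr_encrypt(p2, nonce=0x42)      # ... for a second message
leak = [a ^ b for a, b in zip(c1, c2)]
assert leak == [a ^ b for a, b in zip(p1, p2)]   # keystream cancelled out
print("c1 XOR c2 equals p1 XOR p2:", [hex(x) for x in leak])
```

If either plaintext is partly known or guessable, the other is recovered directly from the leaked XOR.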

Padding

Because a block cipher works on units of a fixed size, but messages come in a variety of lengths,

some modes (mainly CBC) require that the final block be padded before encryption. Several

padding schemes exist. The simplest is to add null bytes to the plaintext to bring its length

up to a multiple of the block size, but care must be taken that the original length of the plaintext

can be recovered; this is so, for example, if the plaintext is a C style string which contains no null

bytes except at the end. Slightly more complex is the original DES method, which is to add a

single one bit, followed by enough zero bits to fill out the block; if the message ends on a block

boundary, a whole padding block will be added. Most sophisticated are CBC-specific schemes

such as ciphertext stealing or residual block termination, which do not cause any extra ciphertext

expansion, but these schemes are relatively complex.
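The "single one bit, then zeros" padding described above (the original DES method) can be sketched byte-wise as a 0x80 marker followed by 0x00 fill; the block size and test strings below are illustrative:

```python
# Bit padding, byte-oriented: append 0x80 then as many 0x00 bytes as
# needed. Unpadding strips the zero fill and the 0x80 marker; the marker
# keeps the scheme unambiguous even if the message itself ends in zeros.
BLOCK = 8  # block size in bytes

def pad(data: bytes) -> bytes:
    # at least one padding byte is always added; a message that already
    # ends on a block boundary gets a whole extra padding block
    n = BLOCK - (len(data) % BLOCK)
    return data + b"\x80" + b"\x00" * (n - 1)

def unpad(data: bytes) -> bytes:
    stripped = data.rstrip(b"\x00")           # drop the zero fill ...
    assert stripped.endswith(b"\x80"), "invalid padding"
    return stripped[:-1]                      # ... and the 0x80 marker

msg = b"YELLOW"                               # 6 bytes -> 2 bytes of padding
assert len(pad(msg)) % BLOCK == 0
assert unpad(pad(msg)) == msg
assert len(pad(b"8bytes!!")) == 2 * BLOCK     # full extra padding block
print(pad(msg))
```

Unlike null-byte padding, this scheme never needs any assumption about the plaintext's contents to recover the original length.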

CFB, OFB and CTR modes do not require any special measures to handle messages whose lengths are

not multiples of the block size, since they all work by XORing the plaintext with the output of

the block cipher; the final keystream block is simply truncated to the length of the last partial

plaintext block.