Page 1: Information theory

Information theory

Cryptography and Authentication

A.J. Han Vinck, Essen, 2003

Page 2: Information theory

Cryptographic model

(Diagram) Three parties: sender, receiver, attacker.

Secrecy: the sender encrypts M, the receiver decrypts M; the attacker tries to read M or to find the key.

Authentication: the sender signs, the receiver tests validity; the attacker tries to modify messages or to generate new ones.

Page 3: Information theory

General (classical) Communication Model

(Diagram) source → M → encrypter → C → decrypter → M → destination

The analyst observes C and produces an estimate M′; a secure key channel delivers the key K to both the encrypter and the decrypter.

Page 4: Information theory

Ciphers providing no information

Shannon (1949): perfect secrecy condition

Prob. distribution (M) = Prob. distribution (M|C) for every cipher C

and thus: H(M|C) = H(M)

(no gain if we want to guess the message given the cipher)
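As a concrete check, here is a minimal sketch of this condition for a one-time pad over 2-bit blocks (my own illustration; the slides do not prescribe this cipher): the key is uniform and independent of the message, and the posterior P(M|C) comes out equal to the prior P(M) for every cipher value.

    from itertools import product

    # One-time pad on 2-bit blocks: C = M XOR K, K uniform and independent of M.
    messages = [0, 1, 2, 3]
    keys = [0, 1, 2, 3]
    p_m = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}   # arbitrary non-uniform source

    # Joint distribution P(M, C).
    joint = {}
    for m, k in product(messages, keys):
        c = m ^ k
        joint[(m, c)] = joint.get((m, c), 0) + p_m[m] / len(keys)

    # Verify P(M|C) = P(M) for every cipher value, hence H(M|C) = H(M).
    for c in messages:
        p_c = sum(joint[(m, c)] for m in messages)
        for m in messages:
            assert abs(joint[(m, c)] / p_c - p_m[m]) < 1e-12
    print("P(M|C) = P(M) for all C: perfect secrecy")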

Page 5: Information theory

Perfect secrecy condition

Furthermore: perfect secrecy requires H(M) ≤ H(K):

H(M|C) ≤ H(M,K|C) = H(K|C) + H(M|C,K)

= H(K|C)     (C and K together determine M, so H(M|C,K) = 0)

≤ H(K)

With perfect secrecy: H(M) = H(M|C) ≤ H(K), so the key must contain at least as much entropy as the message.

Page 6: Information theory

Imperfect secrecy

How much ciphertext do we need before the cipher determines a unique key-message pair?

The minimum such length is called the unicity distance.

Page 7: Information theory

Imperfect secrecy

Suppose we observe a piece of ciphertext C^L of length L.

Key K with entropy H(K); source M with entropy H(M); cipher C.

K and M^L determine C^L.

Key equivocation: H(K|C^L) = H(K, C^L) − H(C^L)

H(C^L) ≤ L·log2|C|

Page 8: Information theory

Question: when is H(K|C^L) = 0 ?

H(K|C^L) = H(K) + H(C^L|K) − H(C^L)

= H(K) + H(M^L) − H(C^L)     (K and M^L determine C^L, and given K the cipher and the message determine each other)

Let H_S(M) be the normalized entropy per output symbol, and let U be the least value of L such that H(K|C^L) = 0.

Using H(C^L) ≤ L·log2|C| and H(M^L) ≈ L·H_S(M):

U ≥ H(K) / [ log2|C| − H_S(M) ]

Page 9: Information theory

conclusion

Make H_S(M) as large as possible:

USE DATA REDUCTION !!

(Plot: the key equivocation H(K|C^L) versus L starts at H(K) and decreases to 0 at L = U ≈ H(K) / [ log2|C| − H_S(M) ]. U is called the unicity point.)

Page 10: Information theory

Examples: U ≥ H(K) / [ log2|C| − H_S(M) ]

Substitution cipher: H(K) = log2 26! ≈ 88.4 bits

English: H_S(M) ≈ 2; |M| = |C| = 26; U ≈ 32 symbols

DES: U ≈ 56 / [ 8 − 2 ] ≈ 9 ASCII symbols
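These two numbers are easy to reproduce (a quick sketch; the value H_S(M) ≈ 2 bits per letter for English is the one used on the slide):

    import math

    def unicity(H_K, log2_C, H_S):
        """Unicity-distance bound: U >= H(K) / (log2|C| - H_S(M))."""
        return H_K / (log2_C - H_S)

    # Substitution cipher over 26 letters, English source.
    H_K_sub = math.log2(math.factorial(26))        # ~88.4 bits of key
    print(unicity(H_K_sub, math.log2(26), 2.0))    # ~32.7, about 32 symbols

    # DES: 56-bit key, 8 bits per ASCII ciphertext symbol.
    print(unicity(56, 8, 2.0))                     # ~9.3, about 9 ASCII symbols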

Page 11: Information theory

Examples: U ≥ H(K) / [ log2|C| − H_S(M) ]

Permutation cipher with period 26: H(K) = log2 26!

English: H_S(M) ≈ 2; |M| = |C| = 26; U ≈ 32 symbols

Vigenère: key length 80, U ≈ 140 !
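The Vigenère figure follows from the same bound if each of the 80 key letters is taken uniform over the alphabet, so H(K) = 80·log2 26 (my reading of the slide's parameters):

    import math

    # Vigenere with key length 80: H(K) = 80 * log2(26) ~ 376 bits.
    H_K = 80 * math.log2(26)
    print(H_K / (math.log2(26) - 2.0))   # ~139, about 140 symbols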

Page 12: Information theory

Plaintext-ciphertext attack

H(K, M^L, C^L) = H(K|M^L, C^L) + H(C^L|M^L) + H(M^L)

= H(C^L|K, M^L) + H(K|M^L) + H(M^L)

Since K and M^L determine C^L, H(C^L|K, M^L) = 0; and since K is independent of M^L, H(K|M^L) = H(K). Hence:

H(K|M^L, C^L) = H(K) − H(C^L|M^L)

With H(C^L|M^L) ≤ L·log2|C|:

thus: U ≥ H(K) / log2|C|
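For instance, plugging in the DES numbers from page 10 (56-bit key, 8-bit ASCII cipher symbols): U ≥ 56/8 = 7 known-plaintext symbols, compared with about 9 for the ciphertext-only attack, since known plaintext removes the H_S(M) term from the denominator.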

Page 13: Information theory

Wiretapping model

(Diagram) The sender reaches the receiver over a noiseless channel carrying X^n; the wiretapper observes X^n through a binary symmetric channel with crossover probability p and receives Z^n.

Send one bit S using n binary digits:

S = 0: transmit an X^n of even weight

S = 1: transmit an X^n of odd weight

The wiretapper decides S from the parity of Z^n, and errs exactly when the noise flips an odd number of positions:

Pe = P(Z^n has an odd number of errors) = 1 − ½(1 + (1 − 2p)^n)
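A small check of this parity-leak formula against simulation (a sketch; n and p are arbitrary choices):

    import random

    def pe_formula(n, p):
        # P(odd number of flips among n independent BSC(p) uses)
        return 1 - 0.5 * (1 + (1 - 2 * p) ** n)

    def pe_simulated(n, p, trials=200_000):
        odd = sum(
            sum(random.random() < p for _ in range(n)) % 2
            for _ in range(trials)
        )
        return odd / trials

    n, p = 8, 0.05
    print(pe_formula(n, p))     # ~0.285
    print(pe_simulated(n, p))   # close to the formula value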

Page 14: Information theory

Wiretapping

Result (with Pe = 1 − ½(1 + (1 − 2p)^n)):

for p → ½: Pe → ½ and H(S|Z^n) → 1

for p → 0: Pe ≈ np and H(S|Z^n) ≈ h(np)

where h(·) is the binary entropy function.

Page 15: Information theory

Wiretapping general strategy

Encoder: use a rate R = k/n error-correcting code C

carrier c ∈ { 2^k codewords }

message m ∈ { 2^{nh(p)} vectors usable as correctable noise }

select c at random and transmit c ⊕ m

Note: 2^k · 2^{nh(p)} ≤ 2^n, so k/n ≤ 1 − h(p)

Page 16: Information theory

Communication sender-receiver

transmit = receive: c ⊕ m

first decode: c (the message m acts as correctable noise)

then calculate (c ⊕ m) ⊕ c = m
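A minimal sketch of this coset trick using the Hamming(7,4) code (my choice of code; the slides only require some rate-k/n code): the 3-bit message picks a coset leader, a random codeword is the carrier, and the receiver recovers m as the syndrome of what it receives.

    import random

    # Parity-check matrix H of the Hamming(7,4) code: column i is the 3-bit
    # binary expansion of i+1, so a single-bit error in position i has
    # syndrome i+1.
    H = [[int(b) for b in format(col, "03b")] for col in range(1, 8)]

    def syndrome(x):
        """3-bit syndrome of the length-7 word x (mod-2 arithmetic)."""
        s = [0, 0, 0]
        for i, xi in enumerate(x):
            if xi:
                s = [a ^ b for a, b in zip(s, H[i])]
        return s

    def random_codeword():
        """Uniform codeword by rejection sampling; fine at n = 7."""
        while True:
            c = [random.randint(0, 1) for _ in range(7)]
            if syndrome(c) == [0, 0, 0]:
                return c

    def encode(m3):
        """Transmit a 3-bit message m3 as (random codeword) XOR (coset leader)."""
        c = random_codeword()
        pos = int("".join(map(str, m3)), 2)     # syndrome value 0..7
        if pos:
            c[pos - 1] ^= 1                     # coset leader: single-bit flip
        return c

    m = [1, 0, 1]
    print(syndrome(encode(m)) == m)   # True: receiver reads m off the syndrome

Here the message occupies the n − k = 3 syndrome bits, matching the slide's picture of messages as correctable noise around a random codeword.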

Page 17: Information theory

Wiretapper:

receive z = c ⊕ m ⊕ n′, where n′ is the wiretap channel noise

- first decode: c

possible when m ⊕ n′ is decodable noise

- then calculate: (c ⊕ m ⊕ n′) ⊕ c = m ⊕ n′

m′ = m ⊕ n′ is one of 2^{nh(p′)} messages

the number of noise sequences n′ is |n′| ≈ 2^{nh(p′)}

Page 18: Information theory

Wiretapping general strategy

Result: information rate R = h(p)

p′ small: c decodable and H(S^k|Z^n) ≈ nh(p′)

p′ → p: H(S^k|Z^n) → nh(p)

(Plot: H(S^k|Z^n) versus p′, rising to nh(p) as p′ approaches p.)

Page 19: Information theory

Wiretapping general strategy picture

(Picture: the space of 2^n vectors contains 2^k codewords; each codeword is surrounded by a sphere of volume 2^{nh(p)} of correctable noise patterns carrying the message, and the wiretapper's uncertainty region has volume 2^{nh(p′)}.)

Page 20: Information theory

authentication

Encryption table: message X and key K determine a unique cipher Y:

(X, K) → Y

Page 21: Information theory

Authentication: impersonation

Encryption table (rows: key; columns: message):

               message 0   message 1
    key 00:       00          10
    key 01:       01          00
    key 10:       11          01
    key 11:       10          11

Impersonation: select a cipher y at random and inject it.

P(key = i) = 1/4; P(message = i) = 1/2

Pi is the probability that an injected cipher is valid; here Pi(y is correct) = ½ for every y.
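A quick enumeration over this table (a sketch; an injected cipher is valid exactly when it appears in the true key's row):

    table = {              # key -> (cipher for message 0, cipher for message 1)
        "00": ("00", "10"),
        "01": ("01", "00"),
        "10": ("11", "01"),
        "11": ("10", "11"),
    }

    ciphers = {y for row in table.values() for y in row}
    for y in sorted(ciphers):
        p_valid = sum(y in row for row in table.values()) / len(table)
        print(y, p_valid)  # every cipher is valid under exactly 2 of the 4 keys: 0.5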

Page 22: Information theory

Authentication: bound

Let: |X| = # messages, |K| = # keys, |Y| = # ciphers

Pi = prob(random cipher is valid) ≥ |X| / |Y|

= the probability of choosing one of the |X| ciphers in a specific row of the |K| × |X| table, the row specified by the key.

Page 23: Information theory

Cont'd

Since (Y, K) determines X, and X is independent of K: H(X) = H(Y|K)

An improved bound (Simmons) gives:

Pi ≥ 2^{H(X)} / 2^{H(Y)} = 2^{H(Y|K) − H(Y)} = 2^{−I(Y;K)}

Page 24: Information theory

Cont'd

Pi ≥ 2^{−I(Y;K)} = 2^{H(K|Y) − H(K)}

For a low probability of success we need H(K|Y) = 0.

For perfect secrecy we need H(K|Y) = H(K).

Contradiction!

Page 25: Information theory

Cont'd

Two encryption tables (rows: key; columns: message):

              message 0   message 1
    key 0:       00          01
    key 1:       10          11

prob success = ½; H(K|Y) = 0; no secrecy

              message 0   message 1
    key 0:       00          01
    key 1:       01          00

prob success = 1; H(K|Y) = 1; perfect secrecy

Prob(key = 0) = Prob(key = 1) = ½; Prob(X = 0) = Prob(X = 1) = ½

Page 26: Information theory

Authentication: impersonation

Messages X = 0, 1 with P(X=0) = P(X=1) = ½; keys K = 0, 1 with P(K=0) = P(K=1) = ½.

Encryption table: key 0 → ciphers (0, 1); key 1 → ciphers (1, 2).

H(K) = 1; H(K|Y) = ½; P(Y = 0, 1, 2) = (¼, ½, ¼), so H(Y) = 1.5.

Injecting y with probability P(y): Pi = ½·1 + 2·(¼·½) = 0.75

Random choice among the three ciphers: Pi = 2/3

Bound: Pi ≥ 2^{H(Y|K) − H(Y)} = 2^{1 − 1.5} = 2^{−0.5} ≈ 0.7
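These numbers check out by enumeration (a sketch of this example):

    from math import log2

    table = {0: (0, 1), 1: (1, 2)}   # key -> (cipher for X=0, cipher for X=1)

    # P(Y) with K and X uniform and independent.
    p_y = {}
    for row in table.values():
        for y in row:
            p_y[y] = p_y.get(y, 0) + 0.25
    H_Y = -sum(p * log2(p) for p in p_y.values())
    print(p_y, H_Y)                  # {0: 0.25, 1: 0.5, 2: 0.25} 1.5

    def p_valid(y):                  # injected y accepted iff valid under true key
        return sum(0.5 for k in table if y in table[k])

    print(sum(p_y[y] * p_valid(y) for y in p_y))   # 0.75: inject y with prob P(y)
    print(sum(p_valid(y) for y in p_y) / 3)        # 0.667: uniform random choice
    print(2 ** (1 - H_Y))                          # ~0.707: Simmons bound, H(Y|K) = 1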

Page 27: Information theory

Authentication: substitution

Encryption table (rows: key; columns: message):

             message 0   message 1
    key 0:      0           2
    key 1:      1           3
    key 2:      0           3
    key 3:      1           2

Active wiretapping: replace an observed cipher by another cipher.

Example: observe 0 and replace it by 3. Observing 0 means the key is 0 or 2, and the injected 3 is accepted only if the key is 2, so the probability of success is ½.
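The same enumeration can be automated (a sketch; substitution_success is a hypothetical helper that tries every observed/replacement pair, with keys and messages uniform):

    from fractions import Fraction

    def substitution_success(table):
        """Best probability that a substituted cipher is accepted."""
        best = Fraction(0)
        ciphers = {y for row in table.values() for y in row}
        for observed in ciphers:
            # Keys consistent with the observed cipher are equally likely here.
            consistent = [k for k, row in table.items() if observed in row]
            for replacement in ciphers - {observed}:
                ok = sum(1 for k in consistent if replacement in table[k])
                best = max(best, Fraction(ok, len(consistent)))
        return best

    table = {0: (0, 2), 1: (1, 3), 2: (0, 3), 3: (1, 2)}
    print(substitution_success(table))   # 1/2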

Page 28: Information theory

Authentication: substitution examples

Three encryption tables (rows: key; columns: message 0 and 1):

    Table 1:          Table 2:          Table 3:
    key 0:  0  2      key 0:  0  3      key 0:  0  2
    key 1:  1  3      key 1:  1  2      key 1:  1  0
    key 2:  0  3      key 2:  2  1      key 2:  3  1
    key 3:  1  2      key 3:  3  0      key 3:  2  3

Table 1: Ps = ½, H(X|Y) = 0

Table 2: Ps = 1, H(X|Y) = 1

Table 3: Ps = ½, H(X|Y) = 1

Ps = probability that the substitution is successful

In all three cases: H(K) = 2; H(K|Y) = 1; Pi ≥ ½
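H(X|Y) for the three tables can be checked by enumeration (a sketch; the Ps values can be reproduced with the substitution_success helper sketched on page 27):

    from math import log2

    tables = {
        1: {0: (0, 2), 1: (1, 3), 2: (0, 3), 3: (1, 2)},
        2: {0: (0, 3), 1: (1, 2), 2: (2, 1), 3: (3, 0)},
        3: {0: (0, 2), 1: (1, 0), 2: (3, 1), 3: (2, 3)},
    }

    def h_x_given_y(table):
        """H(X|Y) = sum P(x,y) log2(P(y)/P(x,y)), key and message uniform."""
        joint, p_y = {}, {}
        for row in table.values():
            for x, y in enumerate(row):
                joint[(y, x)] = joint.get((y, x), 0) + 1 / 8
                p_y[y] = p_y.get(y, 0) + 1 / 8
        return sum(p * log2(p_y[y] / p) for (y, x), p in joint.items())

    for name, t in tables.items():
        print(name, h_x_given_y(t))   # 1 -> 0.0, 2 -> 1.0, 3 -> 1.0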