Page 1: 06 Arithmetic 1

9/20/2005 J. Liang: SFU ENSC 424

ENSC 424 - Multimedia Communications Engineering
Topic 6: Arithmetic Coding 1

Jie Liang
Engineering Science
Simon Fraser University
[email protected]

Page 2: 06 Arithmetic 1


Outline

- Introduction
- Basic Encoding and Decoding
- Scaling and Incremental Coding
- Integer Implementation
- Adaptive Arithmetic Coding
- Binary Arithmetic Coding
- Applications
  - JBIG, H.264, JPEG 2000

Page 3: 06 Arithmetic 1


Huffman Coding: The Retired Champion
- Replaces each input symbol with a codeword
- Needs a probability distribution
- Hard to adapt to changing statistics
- Needs to store the codeword table
- Minimum codeword length is 1 bit

Arithmetic Coding: The Rising Star
- Replaces the entire input with a single floating-point number
- Does not need the probability distribution in advance
- Adaptive coding is very easy
- No need to keep and send a codeword table
- Fractional codeword length

Page 4: 06 Arithmetic 1


History of Arithmetic Coding
- Claude Shannon: 1916-2001
  - A distant relative of Thomas Edison
  - 1932: Went to the University of Michigan
  - 1937: Master's thesis at MIT became the foundation of digital circuit design: "the most important, and also the most famous, master's thesis of the century"
  - 1940: PhD, MIT
  - 1940-1956: Bell Labs (back to MIT after that)
  - 1948: The birth of information theory: "A Mathematical Theory of Communication," Bell System Technical Journal; contains the earliest idea of arithmetic coding
- Robert Fano: 1917-
  - Shannon-Fano code: proved to be sub-optimal by Huffman
  - 1952: Taught the first information theory class. Students included:
    - David Huffman: Huffman coding
    - Peter Elias: recursive implementation of arithmetic coding
- Frederick Jelinek
  - Also Fano's student: PhD, MIT, 1962 (now at Johns Hopkins)
  - 1968: Further development of arithmetic coding
- 1976: Rediscovered by Pasco and Rissanen
- Practical implementations: since the 1980s

Bell Lab for Sale: http://www.spectrum.ieee.org/sep05/1683

Page 5: 06 Arithmetic 1


Introduction

- Recall table look-up decoding of Huffman codes:
  - N: alphabet size, L: max codeword length
  - Divide [0, 2^L] into N intervals, one interval per symbol
  - Interval size is roughly proportional to symbol probability
- Arithmetic coding applies this idea recursively:
  - Normalizes the range [0, 2^L] to [0, 1)
  - Maps an input sequence to a unique tag in [0, 1)

[Figure: look-up intervals for codewords on [0, 2^L], and the mapping of input sequences to tags in [0, 1)]

Page 6: 06 Arithmetic 1


Arithmetic Coding
- Disjoint and complete partition of the range [0, 1), e.g., [0, 0.8), [0.8, 0.82), [0.82, 1)
  - Each interval corresponds to one symbol
  - Interval size is proportional to symbol probability
- Observation: once the tag falls into an interval, it never gets out of it
  - The first symbol restricts the tag position to one of the intervals
  - The reduced interval is partitioned recursively as more symbols are processed.

[Figure: the tag interval shrinking within [0, 1) as symbols a, b, c are processed]
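The nesting observation can be checked numerically. A minimal Python sketch, assuming the three-symbol distribution used later in the lecture (P(1)=0.8, P(2)=0.02, P(3)=0.18):

```python
# Cumulative boundaries of the three-symbol example (an assumption here):
# 1 -> [0, 0.8), 2 -> [0.8, 0.82), 3 -> [0.82, 1)
CDF = [0.0, 0.8, 0.82, 1.0]

def tag_intervals(symbols):
    """Return the successive [low, high) intervals as each symbol is processed."""
    low, high = 0.0, 1.0
    history = []
    for s in symbols:
        rng = high - low
        # subdivide the current interval; both updates use the old `low`
        low, high = low + rng * CDF[s - 1], low + rng * CDF[s]
        history.append((low, high))
    return history

ivals = tag_intervals([1, 3, 2, 1])
```

Each interval in `ivals` is contained in the previous one, which is exactly why a single number inside the final interval identifies the whole sequence.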

Page 7: 06 Arithmetic 1


Some Questions to Think About:
- Why is compression achieved this way?
- How to implement it efficiently?
- How to decode the sequence?
- Why is it better than Huffman coding?

Page 8: 06 Arithmetic 1


Possible Ways to Terminate Encoding

1. Define an end-of-file (EOF) symbol in the alphabet and assign a probability to it.
2. Encode the lower end of the final range.
3. If the number of symbols is known to the decoder, encode any convenient number in the final range.

[Figure: [0, 1) partitioned into intervals for a, b, c, and EOF]

Page 9: 06 Arithmetic 1


Example:

Symbol   Prob.
1        0.8
2        0.02
3        0.18

- Map to the real line range [0, 1)
- Order does not matter, but the decoder must use the same order
- Disjoint but complete partition:
  - 1: [0, 0.8): 0, 0.799999...9
  - 2: [0.8, 0.82): 0.8, 0.819999...9
  - 3: [0.82, 1): 0.82, 0.999999...9

Page 10: 06 Arithmetic 1


Encoding
- Input sequence: "1321"
- Initial, range 1: [0, 1.0), subdivision 0 / 0.8 / 0.82 / 1.0
- After 1, range 0.8: [0, 0.8), subdivision 0 / 0.64 / 0.656 / 0.8
- After 3, range 0.144: [0.656, 0.8), subdivision 0.656 / 0.7712 / 0.77408 / 0.8
- After 2, range 0.00288: [0.7712, 0.77408), subdivision 0.7712 / 0.773504 / 0.7735616 / 0.77408
- After 1: [0.7712, 0.773504)

Termination: encode the lower end (0.7712) to signal the end.

Difficulties:
1. The shrinking interval requires very high precision for long sequences.
2. No output is generated until the entire sequence has been processed.

Page 11: 06 Arithmetic 1


Encoder Pseudo Code

- Cumulative Distribution Function (CDF)
- For a continuous distribution:

  F_X(x) = P(X <= x) = ∫_{-∞}^{x} p_X(t) dt

- For a discrete distribution:

  F_X(i) = P(X <= i) = Σ_{k=-∞}^{i} P(X = k)

- Example probability mass function: P(X=1) = 0.2, P(X=2) = 0.2, P(X=3) = 0.4, P(X=4) = 0.2
  Corresponding CDF: F_X(1) = 0.2, F_X(2) = 0.4, F_X(3) = 0.8, F_X(4) = 1.0
- Properties:
  - Non-decreasing
  - Piecewise constant (discrete case)
  - Each segment is closed at the lower end.
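For a finite alphabet, the discrete CDF is just a running sum of the probability mass function. A one-line sketch in Python using the slide's example PMF:

```python
from itertools import accumulate

pmf = [0.2, 0.2, 0.4, 0.2]     # P(X=1), ..., P(X=4) from the slide
cdf = list(accumulate(pmf))    # F_X(1), ..., F_X(4): running sum of the PMF
```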

Page 12: 06 Arithmetic 1


Encoder Pseudo Code

low = 0.0; high = 1.0;
while (not EOF) {
    n = ReadSymbol();
    range = high - low;
    high = low + range * CDF(n);
    low  = low + range * CDF(n-1);
}
output low;
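The pseudocode maps directly to Python. A minimal floating-point sketch (still subject to the precision problem noted on page 10); the CDF table for the running three-symbol example is an assumption carried over from page 9:

```python
CDF = [0.0, 0.8, 0.82, 1.0]   # cumulative probabilities of symbols 1, 2, 3

def encode(symbols, cdf=CDF):
    """Return the lower end of the final interval (termination method 2)."""
    low, high = 0.0, 1.0
    for n in symbols:
        rng = high - low
        high = low + rng * cdf[n]       # CDF(n)
        low  = low + rng * cdf[n - 1]   # CDF(n-1); uses the old `low`
    return low

tag = encode([1, 3, 2, 1])    # the "1321" example
```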

Trace for input "1321":

Input    LOW                              HIGH                               RANGE
Initial  0.0                              1.0                                1.0
1        0.0 + 1.0*0      = 0.0           0.0 + 1.0*0.8        = 0.8         0.8
3        0.0 + 0.8*0.82   = 0.656         0.0 + 0.8*1          = 0.8         0.144
2        0.656 + 0.144*0.8 = 0.7712       0.656 + 0.144*0.82   = 0.77408     0.00288
1        0.7712 + 0.00288*0 = 0.7712      0.7712 + 0.00288*0.8 = 0.773504    0.002304

- Keep track of LOW, HIGH, RANGE
- Any two are sufficient, e.g., LOW and RANGE.

Page 13: 06 Arithmetic 1


Decoding
Receive 0.7712
- Decode 1: 0.7712 ∈ [0, 0.8) (thresholds 0, 0.8, 0.82, 1.0)
- Decode 3: 0.7712 ∈ [0.656, 0.8) (thresholds 0, 0.64, 0.656, 0.8)
- Decode 2: 0.7712 ∈ [0.7712, 0.77408) (thresholds 0.656, 0.7712, 0.77408, 0.8)
- Decode 1: 0.7712 ∈ [0.7712, 0.773504) (thresholds 0.7712, 0.773504, 0.7735616, 0.77408)

Drawback: need to recalculate all thresholds each time.

Page 14: 06 Arithmetic 1


Simplified Decoding
- Normalize RANGE to [0, 1) each time: x ← (x - low) / range
- No need to recalculate the thresholds.

Receive 0.7712. Decode 1 (0.7712 ∈ [0, 0.8)).
x = (0.7712 - 0) / 0.8 = 0.964. Decode 3 (0.964 ∈ [0.82, 1)).
x = (0.964 - 0.82) / 0.18 = 0.8. Decode 2 (0.8 ∈ [0.8, 0.82)).
x = (0.8 - 0.8) / 0.02 = 0. Decode 1. Stop.

Page 15: 06 Arithmetic 1


Decoder Pseudo Code

low = 0; high = 1;
x = GetEncodedNumber();
while (x != low) {
    n = DecodeOneSymbol(x);
    output symbol n;
    x = (x - CDF(n-1)) / (CDF(n) - CDF(n-1));
}

Page 16: 06 Arithmetic 1


Outline

- Introduction
- Basic Encoding and Decoding
- Scaling and Incremental Coding
- Integer Implementation
- Adaptive Arithmetic Coding
- Binary Arithmetic Coding
- Applications
  - JBIG, H.264, JPEG 2000

Page 17: 06 Arithmetic 1


Scaling and Incremental Coding
- Problems of the previous examples:
  - Need high precision
  - No output is generated until the entire sequence is encoded
- Key observation: as the RANGE shrinks, many MSBs of LOW and HIGH become identical
  - Example: binary forms of 0.7712 and 0.773504: 0.1100010..., 0.1100011...
- We can output the identical MSBs and re-scale the rest: incremental encoding
  - This also allows us to achieve infinite precision with finite-precision integers.
- Three kinds of scaling: E1, E2, E3

Page 18: 06 Arithmetic 1


E1 and E2 Scaling
- E1: [LOW, HIGH) in [0, 0.5)
  - LOW: 0.0xxxxxxx (binary), HIGH: 0.0xxxxxxx
  - Output 0, then shift left by 1 bit
  - [0, 0.5) → [0, 1): E1(x) = 2x
- E2: [LOW, HIGH) in [0.5, 1)
  - LOW: 0.1xxxxxxx, HIGH: 0.1xxxxxxx
  - Output 1, subtract 0.5, then shift left by 1 bit
  - [0.5, 1) → [0, 1): E2(x) = 2(x - 0.5)
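E1/E2 renormalization turns the encoder into an incremental, bit-emitting one. A minimal Python sketch under the slide's assumptions (no E3 scaling, three-symbol CDF from page 9); the final appended `1` encodes the value 0.5 (binary 0.1), which always lies in the fully rescaled final interval:

```python
CDF = [0.0, 0.8, 0.82, 1.0]   # cumulative probabilities of symbols 1, 2, 3

def encode_bits(symbols, cdf=CDF):
    low, high = 0.0, 1.0
    bits = []
    for n in symbols:
        rng = high - low
        high = low + rng * cdf[n]
        low  = low + rng * cdf[n - 1]
        while True:                      # complete all scaling before next symbol
            if high <= 0.5:              # E1: [low, high) in [0, 0.5)
                bits.append(0)
                low, high = 2 * low, 2 * high
            elif low >= 0.5:             # E2: [low, high) in [0.5, 1)
                bits.append(1)
                low, high = 2 * (low - 0.5), 2 * (high - 0.5)
            else:
                break
    bits.append(1)   # terminate: encode 0.5, which lies inside [low, high)
    return bits
```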

Page 19: 06 Arithmetic 1


Encoding with E1 and E2

Symbol   Prob.
1        0.8
2        0.02
3        0.18

- Input 1: [0, 0.8)
- Input 3: [0.656, 0.8)
  - E2: output 1, 2(x - 0.5): [0.312, 0.6), subdivision 0.312 / 0.5424 / 0.54816 / 0.6
- Input 2: [0.5424, 0.54816)
  - E2: output 1: [0.0848, 0.09632)
  - E1: 2x, output 0: [0.1696, 0.19264)
  - E1: output 0: [0.3392, 0.38528)
  - E1: output 0: [0.6784, 0.77056)
  - E2: output 1: [0.3568, 0.54112)
- Input 1: [0.3568, 0.504256)
- Encode any value in the tag interval, e.g., 0.5: output 1
- All outputs: 1100011

Page 20: 06 Arithmetic 1


To Verify
- LOW = 0.5424 (0.10001010... in binary), HIGH = 0.54816 (0.10001100... in binary)
- So we can send out 10001 (0.53125)
- Equivalent to E2, E1, E1, E1, E2
- After a left shift by 5 bits:
  - LOW = (0.5424 - 0.53125) × 32 = 0.3568
  - HIGH = (0.54816 - 0.53125) × 32 = 0.54112
  - Same as the result on the previous page.
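The binary-prefix argument can be checked mechanically. A small sketch that extracts the leading bits of LOW and HIGH and applies the 5-bit left shift:

```python
def first_bits(x, k):
    """First k bits of the binary expansion of x in [0, 1)."""
    bits = []
    for _ in range(k):
        x *= 2
        bits.append(int(x >= 1.0))   # next bit is 1 iff doubling crosses 1
        x -= int(x)
    return bits

low_bits  = first_bits(0.5424, 8)         # expect prefix 10001...
high_bits = first_bits(0.54816, 8)        # expect prefix 10001...
shifted_low  = (0.5424  - 0.53125) * 32   # drop the common prefix 10001 (0.53125)
shifted_high = (0.54816 - 0.53125) * 32
```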

Page 21: 06 Arithmetic 1


Comparison with Huffman

Symbol   Prob.
1        0.8
2        0.02
3        0.18

- Input symbol 1 does not cause any output
- Input symbol 3 generates 1 bit
- Input symbol 2 generates 5 bits
- Symbols with larger probabilities generate fewer bits; sometimes no bit is generated at all
  - Advantage over Huffman coding
- Large probabilities are desired in arithmetic coding
  - Can use context-adaptive methods to create larger probabilities and improve the compression ratio
- Note: complete all possible scaling before encoding the next symbol

Page 22: 06 Arithmetic 1


Incremental Decoding
Input: 1100011
- Decode 1: need ≥ 5 bits (verify). Read 6 bits. Tag: 110001 (0.765625 ∈ [0, 0.8))
- Decode 3 (tag ∈ [0.656, 0.8)); E2 scaling: tag 100011 (0.546875)
- Decode 2 (tag ∈ [0.5424, 0.54816)); E2 scaling: tag 000110 (0.09375)
  - E1: tag 001100 (0.1875)
  - E1: tag 011000 (0.375)
  - E1: tag 110000 (0.75)
  - E2: tag 100000 (0.5); interval now [0.3568, 0.54112)
- Decode 1 (tag ∈ [0.3568, 0.504256))

Summary: complete all possible scaling before further decoding; adjust LOW, HIGH, and the tag together.
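The steps above can be sketched in Python: keep a 6-bit tag window, decode from the normalized tag position, and mirror every E1/E2 rescale on low, high, and the tag, shifting a fresh bit into the tag each time (same assumptions as before: three-symbol CDF, symbol count known, no E3 scaling):

```python
from bisect import bisect_right

CDF = [0.0, 0.8, 0.82, 1.0]   # cumulative probabilities of symbols 1, 2, 3
WINDOW = 6                    # tag precision in bits

def decode_bits(bits, n_symbols, cdf=CDF):
    tag = sum(b / 2 ** (i + 1) for i, b in enumerate(bits[:WINDOW]))
    pos = WINDOW
    low, high, out = 0.0, 1.0, []
    for _ in range(n_symbols):
        rng = high - low
        x = (tag - low) / rng                       # normalized tag position
        n = min(bisect_right(cdf, x), len(cdf) - 1)
        out.append(n)
        high = low + rng * cdf[n]
        low  = low + rng * cdf[n - 1]
        while True:                                 # complete all scaling first
            if high <= 0.5:                         # E1
                low, high, tag = 2 * low, 2 * high, 2 * tag
            elif low >= 0.5:                        # E2
                low, high, tag = 2 * (low - 0.5), 2 * (high - 0.5), 2 * (tag - 0.5)
            else:
                break
            if pos < len(bits):                     # shift the next bit into the tag
                tag += bits[pos] / 2 ** WINDOW
                pos += 1
    return out
```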

Page 23: 06 Arithmetic 1


Summary
- Introduction
- Encoding and Decoding
- Scaling and Incremental Coding
  - E1, E2

Next:
- Integer Implementation
  - E3 scaling
- Adaptive Arithmetic Coding
- Binary Arithmetic Coding
- Applications
  - JBIG, H.264, JPEG 2000