2011 School of Information Theory
27–30 May 2011, UT Austin

Stationary Codes

Shannon meets Ornstein

Robert M. Gray
Stanford University

[email protected]

Research partially supported by

Stationary Codes 1


Part I
Flipping coins, stationary codes, information sources, modeling, entropy, process distance, optimal fakes

Stationary Codes 2


Introduction: Flipping coins

Arguably the simplest nontrivial random process is a sequence Z = {Z_n; n ∈ Z} of independent tosses of a fair coin

· · · 01001100010100000101100111 · · ·

Process plays a basic role in the theory, practice, interpretation, and teaching of random processes and information theory

— moreover coin flips provide a building block for modeling more general processes and the process arises naturally inside optimal source codes

Stationary Codes 3


Modeling example: stationary coding of coin flips

Z_n → [ g ] → X_n = g(. . . , Z_{n−1}, Z_n, Z_{n+1}, . . .)

stationary code = time-invariant (or shift-invariant) possibly nonlinear filter

⇔ shift input sequence ⇒ shift output sequence

Nice property of stationary codes: preserve nice statistical properties of input: stationarity, ergodicity, mixing, K, B

(will define later)

How general a class of stationary processes has Z at its ♥?

Call this class B(1): B = Bernoulli, 1 = log_2(input alphabet size)

Stationary Codes 4


An example in B(1)

[Block diagram: {Z_n} feeds a shift register holding (Z_n, Z_{n−1}, Z_{n−2}); a function/table g produces X_n = g(Z_n, Z_{n−1}, Z_{n−2})]

Z_n Z_{n−1} Z_{n−2}    X_n
000                    0.7683
001                   −0.4233
010                   −0.1362
011                    1.3286
100                    0.4233
101                    0.1362
110                   −1.3286
111                   −0.7683

Output marginal distribution resembles N(0, 1)
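A minimal simulation sketch (my addition, not from the slides) of this length-3 sliding-block code:

```python
import numpy as np

# Table g from the slide: each 3-bit window (Z_n, Z_{n-1}, Z_{n-2})
# maps to a real output level.
g = {0b000: 0.7683, 0b001: -0.4233, 0b010: -0.1362, 0b011: 1.3286,
     0b100: 0.4233, 0b101: 0.1362, 0b110: -1.3286, 0b111: -0.7683}

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=100_000)                  # fair coin flips
x = np.array([g[(z[n] << 2) | (z[n - 1] << 1) | z[n - 2]]
              for n in range(2, len(z))])             # slide the window
print(x.mean(), x.var())   # compare a histogram of x against N(0, 1)
```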

Stationary Codes 5

[Figure: histogram of the output marginal compared with N(0, 1)]

Stationary Codes 6


Output process has constrained structure, sequences lie on a directed graph called a trellis (a tree if the shift register has infinite length)

Stationary Codes 7


Trellis of a stationary code

Nodes denote shift-register states, lines denote transitions or branches depending on state and input

[Trellis diagram: states 00, 01, 10, 11 at each time; from each state, branches labeled g(000), g(100), g(001), g(101), g(010), g(110), g(011), g(111) lead to the next states, repeated over successive stages]

If the values g(z_0 z_1 z_2) are all distinct, the input sequence can be recovered from the output sequence. This stationary code is invertible

Stationary Codes 8


Another example: Binary autoregressive process

A linear (mod 2, GF(2)) time-invariant (LTI) filter:

[Diagram: Z_n → ⊕ → X_n = X_{n−1} ⊕ Z_n, with X_n fed back through a unit delay]

binary in, binary out — symmetric binary Markov/autoregressive process

Again invertible with stationary code: Zn = Xn ⊕ Xn−1
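A minimal sketch (my addition) checking the code and its stationary inverse:

```python
import numpy as np

# X_n = X_{n-1} XOR Z_n via a running XOR; the sliding-block inverse
# Z_n = X_n XOR X_{n-1} recovers the coin flips exactly.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=10_000)
x = np.bitwise_xor.accumulate(z)        # mod-2 LTI filter with feedback
z_rec = np.bitwise_xor(x[1:], x[:-1])   # inverse stationary code
assert np.array_equal(z_rec, z[1:])     # no "information" is lost
```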

Stationary Codes 9


Another example: LTI with real arithmetic

More generally, convolutional code with real arithmetic ⇒ linear time-invariant (LTI) filter, e.g.

Z_n → [LTI] → U_n = (1/2) ∑_{k=0}^∞ 2^{−k} Z_{n−k} ∈ [0, 1]

— binary expansion of a real number in the unit interval

Discrete input alphabet, continuous output alphabet!

Fair coin flips in, output Un ∼ U([0, 1]),

uniform marginal distributions

Unlike block codes, infinite-length stationary codes make sense!

Stationary Codes 10


Like previous examples, this stationary code is invertible by another stationary code (again an LTI filter)

2U_n − U_{n−1} = ∑_{k=0}^∞ 2^{−k} Z_{n−k} − (1/2) ∑_{k=0}^∞ 2^{−k} Z_{n−1−k} = Z_n
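A numerical sketch (my addition) of the filter and its inverse; truncating the infinite sum at K taps is an approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
K, n_samp = 53, 10_000                 # 53 taps ~ double precision
z = rng.integers(0, 2, size=n_samp + K)
h = 2.0 ** -(np.arange(K) + 1)         # unit pulse response h_k = 2^{-k-1}
# U_n = 0.Z_n Z_{n-1} Z_{n-2}... in binary
u = np.array([h @ z[n:n - K:-1] for n in range(K, n_samp + K)])
print(u.min(), u.max(), u.mean())      # roughly uniform on [0, 1]
z_rec = np.rint(2 * u[1:] - u[:-1])    # inverse stationary code 2U_n - U_{n-1}
assert np.array_equal(z_rec, z[K + 1:n_samp + K])
```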

Coin flips have binary alphabet {0, 1}, but the output of a stationary code might have the same or a larger alphabet A_X such as {1, 2, 3, 4, 5, 6} (to resemble a fair die), possibly even a continuous alphabet such as [0, 1] or R if the shift register has infinite length!

Stationary Codes 11


Detour: block vs. stationary (sliding-block) codes

Quick discussion, aimed primarily at those with minimal information theory background

Code a process X with alphabet A_X into a process Y with alphabet A_Y:

Block coding: map each nonoverlapping block of source symbols into an index or block of encoded symbols (e.g., bits)

(standard for information theory)

Stationary coding: map overlapping blocks of source symbols into a single encoded symbol (e.g., bit) (standard for ergodic theory)

Stationary Codes 12


Block Coding: E : A_X^N → A_Y^N (or other index set), N = block length

· · · , (X_{−N}, X_{−N+1}, . . . , X_{−1}), (X_0, X_1, . . . , X_{N−1}), (X_N, X_{N+1}, . . . , X_{2N−1}), · · ·
               ↓ E                          ↓ E                        ↓ E
· · · , (Y_{−N}, Y_{−N+1}, . . . , Y_{−1}), (Y_0, Y_1, . . . , Y_{N−1}), (Y_N, Y_{N+1}, . . . , Y_{2N−1}), · · ·

Sliding-block Coding: N = window length = N_1 + N_2 + 1, f : A_X^N → A_Y

Y_n = f(X_{n−N_1}, . . . , X_n, . . . , X_{n+N_2})

slide window → Y_{n+1} = f(X_{n−N_1+1}, . . . , X_{n+1}, . . . , X_{n+N_2+1})

Both structures induce mappings of sequences into sequences

Stationary Codes 13


Back to coding coin flips

Z_n → [LTI] → U_n = (1/2) ∑_{k=0}^∞ 2^{−k} Z_{n−k} ∼ U([0, 1]) ⇒

can get an arbitrary output marginal distribution via an elementary probability trick:

Given cdf F on R, e.g., cdf for N(0, 1)

Define the (generalized) inverse cdf F^{−1}(u) = inf{r : F(r) ≥ u} ⇒ Y_n = F^{−1}(U_n) ∼ F, e.g., Gaussian

Here stationary code = LTI filter + memoryless nonlinearity

Aside: Example of a Hammerstein nonlinear system = LTI filter + memoryless nonlinearity + LTI filter
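A sketch (my addition) of the cascade, with scipy.stats.norm.ppf playing F^{−1} for F = N(0, 1):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
K = 53
z = rng.integers(0, 2, size=10_000 + K)
h = 2.0 ** -(np.arange(K) + 1)             # dyadic LTI filter
u = np.array([h @ z[n:n - K:-1] for n in range(K, len(z))])
y = norm.ppf(u)                            # memoryless nonlinearity F^{-1}
print(y.mean(), y.std())                   # ~0 and ~1: Gaussian marginal
print(np.corrcoef(y[:-1], y[1:])[0, 1])    # but consecutive samples are
                                           # strongly dependent
```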

Stationary Codes 14


Have generated a process with Gaussian marginals from coin flips:

LTI with unit pulse response h = {hk = 2−k−1; k = 0, 1, . . .}

Z_n → [h_k] → U_n → [F^{−1}] → Y_n = F^{−1}(∑_{k=0}^∞ 2^{−k−1} Z_{n−k})

Is Yn Gaussian?

Stationary Codes 15


No. The conditional probability distributions of Y_n given past values are discrete. A scatter plot of consecutive samples shows the dependence: [scatter plot of pairs (Y_n, Y_{n+1})]

Stationary Codes 16


Fake white Gaussian process

Can tweak again and decorrelate:

{Z_n} coin flips, U_n = (1/2) ∑_{k=0}^∞ 2^{−k} Z_{n−k}, F = cdf of N(0, 1)

Add: φ : [0, 1) → [0, 1) satisfies φ(u) + φ(u + 1/2) = 1, u ∈ [0, 1/2)

Z_n → [h_k] → U_n → [φ] → [F^{−1}] → X_n

Stationary Codes 17


X_n = F^{−1}(φ(∑_{k=0}^∞ 2^{−k−1} Z_{n−k})), where both the inner sum and φ of it are ∼ Unif[0, 1)

⇒ Gaussian marginals & uncorrelated!
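A sketch (my addition) of the decorrelated fake. The tent map φ(u) = 2u on [0, 1/2), 2 − 2u on [1/2, 1) is one choice satisfying φ(u) + φ(u + 1/2) = 1 (my assumption; the slide does not pick a particular φ), and it preserves the uniform law:

```python
import numpy as np
from scipy.stats import norm

def phi(u):                                # tent map preserves Unif[0, 1)
    return np.where(u < 0.5, 2 * u, 2 - 2 * u)

rng = np.random.default_rng(4)
K = 53
z = rng.integers(0, 2, size=100_000 + K)
h = 2.0 ** -(np.arange(K) + 1)
u = np.array([h @ z[n:n - K:-1] for n in range(K, len(z))])
x = norm.ppf(np.clip(phi(u), 1e-12, 1 - 1e-12))   # clip guards ppf(0), ppf(1)
print(np.corrcoef(x[:-1], x[1:])[0, 1])           # Gaussian marginals, and
                                                  # near-zero correlation
```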

Is {Xn} a Gaussian process?

Stationary Codes 18


No, can’t be (Why??)

but how close to a Gaussian process can it be??

and what has all this to do with information theory??

Stationary Codes 19


Questions raise issues in information theory and ergodic theory (especially Shannon and Ornstein):

• Taxonomy of information sources/random processes

• Entropy and entropy rate

• Stationary codes

• Distortion and distance between processes

• Modeling vs. compression (Simulation vs. source coding)

Sketch several familiar and perhaps less familiar relevant ideas at the border of information theory and ergodic theory, with a common thread of stationary codes.

Tools and intuition differ from ubiquitous block coding treatments.

Stationary Codes 20


Information sources

Discrete-time information source = discrete-time random process X = {X_n; n ∈ Z} described by a process distribution µ_X

i.e., Kolmogorov (directly-given) random process model = distribution µ_X on sequence space A_X^∞ + suitable sigma-field (event space)

Xn ∈ AX = alphabet: discrete or maybe not

Stationary Codes 21


The Shift

Ergodic theory focuses on the shift transformation on sequence space:

Shift T : A_X^∞ → A_X^∞: shift sequence left one time unit

T x = T(· · · , x_{n−1}, x_n, x_{n+1}, x_{n+2}, · · · ) = (· · · , x_n, x_{n+1}, x_{n+2}, x_{n+3}, · · · )

A dynamical system in ergodic theory: [A_X^∞, µ_X, T, X_0]

⇒ process X_n(x) = X_0(T^n x)

Generalization of random process (T might not be the shift)

Stationary Codes 22


Stationarity

An information source X is stationary (shift-invariant) if

µ_X(T^{−1}F) = µ_X(F) for all events F

where T^{−1}F = {x : T x ∈ F}

shifting an event does not change its probability

Ergodic theory language: T is measure preserving

Ergodic theory = theory of measure-preserving transformations (and other related transformations)

i.e., of stationary random processes

and generalizations with similar behavior

Stationary Codes 23


Ergodicity

Information source is ergodic if invariant events T^{−1}F = F must have probability 0 or 1

Emphasis in the literature is on stationary/measure-preserving and ergodic, but much remains true more generally

Stationary Codes 24


Random Vectors

Random process distribution µ_X ⇒ random vectors

X^N = (X_0, X_1, · · · , X_{N−1}) ∼ µ_{X^N}

a consistent family of distributions on A_X^N

Kolmogorov: µ_X ⇔ consistent family of distributions µ_{X_n, X_{n+1}, ..., X_{n+N−1}}

Stationary Codes 25


IID Sources

X IID ⇔ µ_{X^N} = µ_{X_0}^N = product distribution, µ_{X_n} = µ_{X_0} all n

E.g., fair coin flips, biased coin flips, dice throws, IID uniform, IID Gaussian

IID process = most random possible process — no predictability, no sparse representation

Stationary Codes 26


Bernoulli Processes and Shifts

Beware of the name Bernoulli —

Information theory: Bernoulli process = IID binary process with parameter p (coin bias), p = 1/2 for fair coin flips emphasized here

Ergodic theory: Bernoulli shift = IID process, discrete or non-discrete alphabet

Warning: A minority of the ergodic theory literature uses "Bernoulli shift" differently:
(1) more narrowly — restricting the name to finite alphabets (our definition becomes "generalized Bernoulli shift"),
(2) more generally — including any process isomorphic to an IID process

Stationary Codes 27


isomorphic??

Stationary Codes 28


Isomorphism and stationary codes

Two processes X ∼ µ_X and Y ∼ µ_Y are isomorphic if there is an invertible (with probability 1) stationary coding of µ_X with distribution equal to µ_Y

[Diagram: X_n → f → Y_n and back, Y_n → g → X_n]

Can code from one source into the other in an invertible way; as in most earlier examples, no "information" is lost!

Stationary Codes 29


Isomorphism = process/stationary coding analogue of Shannon lossless coding

Unlike Shannon, well-defined for non-discrete alphabet sources

Stationary Codes 30


E.g., {W_n} Gaussian IID process and (stationary) Gauss autoregressive process {X_n} are isomorphic, stationary code = invertible LTI filter!

h_k = r^k, k ≥ 0, |r| < 1;   g_k = δ_k − r δ_{k−1}

W_n = X_n − r X_{n−1} → [h] → X_n = ∑_{k=0}^∞ r^k W_{n−k} → [g] → W_n
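A sketch (my addition) of this isomorphism using scipy.signal.lfilter:

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
r, w = 0.9, rng.standard_normal(10_000)   # IID Gaussian W_n
x = lfilter([1.0], [1.0, -r], w)          # X_n = r X_{n-1} + W_n  (filter h)
w_rec = lfilter([1.0, -r], [1.0], x)      # W_n = X_n - r X_{n-1}  (filter g)
assert np.allclose(w_rec, w)              # invertible: no information lost
```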

Stationary Codes 31


B-processes

Class of stationary codings of coin flips ⊂ the class of stationary codings of an IID source such as Z, dice, IID Gaussian

IID W_n → [ g ] → X_n

Ornstein’s class of B-processes

(aka “Bernoulli processes”)

Where do B-processes fit in taxonomy of random processes?

Stationary Codes 32


A taxonomy of random processes

IID ⊂ B ⊂ K (Kolmogorov zero-one law) ⊂ strongly mixing ⊂ weakly mixing ⊂ stationary and ergodic ⊂ stationary ⊂ block stationary ⊂ asymptotically stationary ⊂ asymptotically mean stationary ⇔ sample averages converge

Mixing & ergodicity are forms of asymptotic independence:

lim_{n→∞} [µ(T^{−n}F ∩ G) − µ(F)µ(G)] = 0, ∀F, G : strong mixing

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} |µ(T^{−k}F ∩ G) − µ(F)µ(G)| = 0, ∀F, G : weak mixing

lim_{n→∞} (1/n) ∑_{k=0}^{n−1} (µ(T^{−k}F ∩ G) − µ(F)µ(G)) = 0, ∀F, G : ergodic

Stationary Codes 33


Reminder: Special case of B-processes: positive integer R, B(R) = {all stationary codings of equiprobable IID processes with alphabet of size 2^R} ⊂ B, e.g., B(1)

B-processes arguably are the most fundamental for ergodic theory and there are many equivalent characterizations.

IMHO they are also basic to information theory

Stationary Codes 34


To sketch these results need two important tools used in both ergodic theory and information theory:

• Shannon entropy + Kolmogorov generalization (Kolmogorov-Sinai invariant, extension of Shannon entropy rate to general alphabets, dynamical systems, flows)

• d-bar distance between random processes (and, implicitly, Shannon fidelity criterion)

Stationary Codes 35


Entropy: Finite alphabet (Shannon)

Usual information theory treatment

Stationary source X, distribution µ_X ⇒ distributions µ_{X^N} for random vectors X^N. If alphabet A_X is finite, also denote the pmf by µ_{X^N}

H(X^N) = H(µ_{X^N}) = −∑_{x^N} µ_{X^N}(x^N) log µ_{X^N}(x^N)

H(X) = H(µ_X) = inf_N N^{−1} H(X^N) = lim_{N→∞} N^{−1} H(X^N)

e.g., for coin flips N−1H(ZN) = H(Z) = 1 bit/symbol
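A sketch (my addition) estimating N^{−1}H(X^N) from empirical block pmfs, here for the binary Markov example of an earlier slide built from biased coin flips, so the estimates visibly decrease toward the entropy rate:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(6)
z = (rng.random(400_000) < 0.3).astype(int)  # biased coin, P(Z_n = 1) = 0.3
x = np.bitwise_xor.accumulate(z)             # symmetric binary Markov source
# Marginal is uniform, so H(X^1) = 1 bit, but the entropy rate is
# h_b(0.3) ~ 0.881 bits; N^{-1} H(X^N) decreases toward it.
for N in (1, 2, 4, 8):
    counts = Counter(tuple(x[i:i + N]) for i in range(0, len(x) - N, N))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    print(N, -(p * np.log2(p)).sum() / N)    # empirical N^{-1} H(X^N), bits
```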

Stationary Codes 36


Entropy: General alphabet (Kolmogorov)

Alphabet discrete, continuous, or mixed:

vector entropy — H(X^N) = sup_q H(q(X^N)) (finite-alphabet definition), supremum over all quantizers (finite output alphabet) q of A_X^N

process entropy rate — H(X) = sup_g H(g(X)) (finite-alphabet definition), supremum over all stationary codes g with finite output alphabet

Example: {X_n} IID, X_n ∼ N(0, 1): N^{−1}H(X^N) = H(X) = ∞

well defined, but infinite!!

Stationary Codes 37


Warning: Shannon differential entropy for continuous distributions is something different and lacks many of the important properties, intuition, and theorems of entropy

Stationary Codes 38


Entropy in Ergodic Theory

Entropy plays a fundamental role in ergodic theory (Shannon's idea adopted by Kolmogorov)

Two key results:

Sinai-Ornstein Theorem If µ_X and µ_Y are stationary and ergodic random processes and H(µ_X) ≥ H(µ_Y), then there is a stationary coding of X with process distribution equal to µ_Y

Ornstein Isomorphism Theorem A necessary condition for two stationary random processes µ_X and µ_Y to be isomorphic is that H(µ_X) = H(µ_Y) (Kolmogorov, Sinai). The condition is sufficient if both processes are B-processes.

Stationary Codes 39


The class of B-processes is the most general class known for which equal entropy rate ensures isomorphism.

There exist K-processes (the next most general class of stationary and ergodic processes) having equal entropy which are not isomorphic

Isomorphism theorem includes discrete and non-discrete alphabet processes and extends to continuous-time processes

Two B-processes can be coded into each other invertibly iff they have equal entropy

If two stationary and ergodic processes have equal entropy rate, then each can be constructed as a stationary coding of the other, but there will not be an invertible coding unless both processes are B

Stationary Codes 40


Warning

In general, H(X) ≤ lim_{N→∞} N^{−1} H(X^N)

Not always equal! (equality if alphabets discrete)

E.g., X0 ∼ N(0, 1), Xn ≡ X0 all n

⇒ H(X^N) = ∞ all N, but H(X) = 0

Quantization and limit do not always interchange.

Short-term behavior might be misleading regarding long termbehavior.

Might hope this is an extreme example, e.g., stationary, but notergodic — no such luck

Stationary Codes 41


Fake white Gaussian revisited

A stationary coding of fair coin flips in an earlier example yielded a stationary, ergodic, uncorrelated process with Gaussian marginals

From the definition of entropy rate, H(X) ≤ H(Z) = 1 bit per symbol, so X cannot be a stationary uncorrelated Gaussian process (= IID Gaussian process) since IID Gaussian has H = ∞

Note: X_n has continuous alphabet, is stationary and ergodic, but has finite nonzero entropy rate!

⇒ entropy (rate) distinguishes the fake Gaussian (less than 1 bit) with the correct spectrum and marginals from the real item

Stationary Codes 42


Process Distance

Such finite entropy rate processes masquerading as infinite entropy rate processes play a role in Shannon source coding (as we will see)

How good a fake of µ_X (say IID N(0, 1)) is possible using coin flips?

Suppose we have a notion of "distance" d(µ_X, µ_Y) between random processes (there are many)

Require at least

• d(µX, µX) = 0, and

• d(µ_X, µ_Y) > 0 if µ_X ≠ µ_Y

Might also want triangle inequality (or something similar)

Stationary Codes 43


Given a class of random processes G (e.g., B(R)) & a stationary and ergodic target source µ_X to be faked by a µ_Y ∈ G, the "best" fake is closest to the target in the d sense:

d(µ_X, µ_Y) ≥ d(µ_X, G) ≡ inf_{µ_Y ∈ G} d(µ_X, µ_Y)

If µ_Y ∈ B(R), then H(µ_Y) ≤ R, so

d(µ_X, B(R)) = inf_{g: Z → g → Y} d(µ_X, µ_Y)
  ≥ d(µ_X, {B-processes µ_Y : H(µ_Y) ≤ R})
  ≥ d(µ_X, {stationary ergodic µ_Y : H(µ_Y) ≤ R}) ≡ d(µ_X, S_e(R))

are the inequalities equalities?

Stationary Codes 44


Suppose that µ_Y approximately achieves d(µ_X, S_e(R)):

H(µ_Y) ≤ R and d(µ_X, µ_Y) ≤ d(µ_X, S_e(R)) + ε

Then by the Sinai-Ornstein theorem there is a stationary coding of an IID equiprobable source of entropy rate H(µ_Y) ≤ R with output distribution µ_Y, thus µ_Y ∈ B(R).

Thus for all ε > 0, d(µ_X, S_e(R)) + ε ≥ d(µ_X, µ_Y) ≥ d(µ_X, B(R))

d(µ_X, B(R)) = d(µ_X, {B-processes µ_Y : H(µ_Y) ≤ R})
             = d(µ_X, {stationary ergodic µ_Y : H(µ_Y) ≤ R})   (?)

If H(µ_X) ≤ R, then Sinai-Ornstein ⇒ d(µ_X, B(R)) = 0!

but what if H(µX) > R?

Stationary Codes 45


Further questions on best fake

• What is a useful distance measure on random processes for information theory and ergodic theory?

• Can d(µX, B(R)) be evaluated for the case where H(µX) > R?

• Can d(µX, B(R)) be achieved? I.e., do optimal codes exist?

Is infimum a minimum?

Already seen the answer is “yes” if H(µX) ≤ R.

• What are the properties of nearly optimal codes?

• Connections with Shannon rate-distortion/source coding theory? Lossy source code design?

Stationary Codes 46


Process Distance

∃ many distances/metrics on probability distributions

One family particularly useful for ergodic theory and information theory: Monge/Kantorovich/transportation/Vassershtein/Ornstein etc.

First need some notation

Stationary Codes 47


Detour: Pair Processes

Pair random process (X, Y) = {X_n, Y_n} described by joint process distribution π_{XY} ⇒ π_X, π_Y marginal distributions

[Diagram: (X, Y) ∼ π_{X,Y} → X ∼ π_X and Y ∼ π_Y]

⇒ random vectors (X^N, Y^N) with distributions π_{X^N,Y^N} ⇒ π_{X^N}, π_{Y^N} marginal distributions

[Diagram: (X^N, Y^N) ∼ π_{X^N,Y^N} → X^N ∼ π_{X^N} and Y^N ∼ π_{Y^N}]

Stationary Codes 48


Example of pair process = input/output of a noisy channel, code, communication system

X_n → [ν_{Y|X}] → Y_n

Pair process described by input distribution µ_X and conditional distribution ν_{Y|X} (deterministic if a code)

Stationary Codes 49


Detour: Distortion measures and fidelity criteria

Suppose have two alphabets AX, AY.

For simplicity assume AX, AY ⊂ R

A fidelity criterion is a family of distortion measures d_N(x^N, y^N) ≥ 0 on (A_X^N, A_Y^N), N = 1, 2, . . .

(as always, assume sets and functions are measurable wrt suitable sigma-fields)

Assume the fidelity criterion is additive or single-letter with per-letter distortion d_0 = d:

d_N(x^N, y^N) = ∑_{i=0}^{N−1} d(x_i, y_i)

Stationary Codes 50


By far the most important examples are Hamming distortion, d(a, b) = 0 if a = b and 1 otherwise, and squared-error distortion, d(a, b) = (a − b)²

(most everything generalizes to nonnegative powers r of a metric, with r = 0 indicating the Hamming distance)

Stationary Codes 51


Average Distortion

Given pair process πX,Y + fidelity criterion dN, N = 1, 2, . . .

Average distortion: D_N(π_{X^N,Y^N}) = E_{π_{X^N,Y^N}}[d_N(X^N, Y^N)]

Limiting average distortion: D(π_{X,Y}) = lim_{N→∞} (1/N) D_N(π_{X^N,Y^N}) (if the limit exists)

If π_{X,Y} stationary and the fidelity criterion additive,

D(π_{X,Y}) = (1/N) D_N(π_{X^N,Y^N}) = D_1(π_{X_0,Y_0}) = E_{π_{X_0,Y_0}}[d(X_0, Y_0)]

a single-letter characterization

Stationary Codes 52


Transportation (Kantorovich) Distance

For vectors: µ_{X^N}, µ_{Y^N} fixed. A coupling π_{X^N,Y^N} is a joint distribution with marginals π_{X^N} = µ_{X^N} and π_{Y^N} = µ_{Y^N}

What is the best coupling? T(µ_{X^N}, µ_{Y^N}) ≡ inf_{π_{X^N,Y^N} ⇒ µ_{X^N}, µ_{Y^N}} D_N(π_{X^N,Y^N})

π_{X^N,Y^N} ⇒ µ_{X^N}, µ_{Y^N} is shorthand for π_{X^N} = µ_{X^N} and π_{Y^N} = µ_{Y^N}

⇒ transportation distance

Is transportation “distance” really a distance (metric)?

Stationary Codes 53


Transportation distance for powers of a metric

Suppose we have an underlying metric m on A_X^N × A_Y^N and d_N(x^N, y^N) = m(x^N, y^N)^r, r ≥ 0

(If r = 0, consider as Hamming distance)

T_r is the transportation "distance"

• If r ∈ [0, 1], then T_r(µ_{X^N}, µ_{Y^N}) is a metric

• If r ≥ 1, then T_r(µ_{X^N}, µ_{Y^N})^{1/r} is a metric

Summarize: For r ≥ 0, T_r^{min(1, 1/r)} is a metric

Stationary Codes 54


Monge (1781)/Kantorovich (1942), Vasershtein/Wasserstein (1969), Mallows (1972), "earth mover's" (1998), Rachev and Ruschendorf (1998), Villani (2003, 2009). Villani has > 500 references!

Processes: d(µ_X, µ_Y) ≡ sup_N N^{−1} T(µ_{X^N}, µ_{Y^N})

As with transportation distance on random vectors, if d(a, b) = m(a, b)^r, then d^{min(1, 1/r)} is a distance on random processes

Ornstein d-bar distance (1970) for average Hamming distance: d_0 = d-distance based on T_0

Metric space alphabets and squared error (1975): d_2 = d-distance based on T_2

Stationary Codes 55


A few process distance properties

• If processes µ_X, µ_Y stationary, then

d(µ_X, µ_Y) = inf_{π_{X,Y} ⇒ µ_X, µ_Y} E_{π_{X,Y}} d(X_0, Y_0)

where the infimum is over stationary pair processes. If µ_X, µ_Y are also ergodic, then the infimum can be restricted to stationary and ergodic pair processes.

• d(µ_X, µ_Y) = amount by which a µ_X-frequency-typical sequence must be changed in the time-average d sense in order to confuse it with a µ_Y-frequency-typical sequence

• Class of finite-alphabet B-processes = class of all mixing finite-order Markov processes + d_0-limits

Stationary Codes 56


• H(µ) is continuous in µ with respect to d0

• If µ_X and µ_Y are IID, then d(µ_X, µ_Y) = T(µ_{X_0}, µ_{Y_0})

• If squared-error distortion, IID processes:

d(µ_X, µ_Y) = T_2(µ_{X_0}, µ_{Y_0}) = ∫_0^1 |F_{X_0}^{−1}(u) − F_{Y_0}^{−1}(u)|² du

• If Hamming distance, IID discrete-alphabet processes:

d(µ_X, µ_Y) = T_0(µ_{X_0}, µ_{Y_0}) = (1/2) ∑_{x∈A} |µ_{X_0}(x) − µ_{Y_0}(x)|
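A numerical sketch (my addition) of the two closed forms:

```python
import numpy as np
from statistics import NormalDist

u = (np.arange(100_000) + 0.5) / 100_000   # grid on (0, 1)

# T2 between IID N(0,1) and IID Uniform(0,1): quantile (inverse-cdf) formula
Finv_gauss = np.array([NormalDist().inv_cdf(p) for p in u])
print(np.mean((Finv_gauss - u) ** 2))      # approximates the integral above

# T0 between a fair coin and a biased coin with P(1) = 0.3
p, q = np.array([0.5, 0.5]), np.array([0.7, 0.3])
print(0.5 * np.abs(p - q).sum())           # = 0.2
```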

Stationary Codes 57


• If µ_X, µ_Y are zero-mean stationary with power spectral density

S_X(f) = ∑_{k=−∞}^∞ R_X(k) e^{−j2πkf}, R_X(k) = E(X_n X_{n−k})

then

d(µ_X, µ_Y) ≥ ∫_{−1/2}^{1/2} |√S_X(f) − √S_Y(f)|² df

with equality if the processes are Gaussian

Stationary Codes 58


Part II
Mutual information, Shannon's distortion-rate function, source coding with a fidelity criterion, good fakes and source coding, optimality properties of stationary codes, trellis encoding IID sources

Stationary Codes 59


Re-enter Shannon — Mutual Information

Pair process (X,Y) = {Xn,Yn}, process distribution πX,Y

If alphabets discrete:

I(X^N; Y^N) = I(π_{X^N,Y^N}) = H(X^N) + H(Y^N) − H(X^N, Y^N) ≤ H(Y^N)

In general: I(X^N; Y^N) = sup_{quantizers q, r} I(q(X)^N; r(Y)^N) ≤ H(Y^N)

Information rate:

— If discrete alphabet: I(X; Y) = I(π_{X,Y}) = lim_{N→∞} (1/N) I(X^N; Y^N) ≤ H(Y)

— In general (Kolmogorov, Dobrushin, Pinsker): I(X; Y) = sup_{quantizers q, r} I(q(X); r(Y)) ≤ H(Y) ⇒ I(X; Y) ≤ H(Y)

Stationary Codes 60


Distortion-rate function lower bound

Apply to earlier inequality chain:

d(µ_X, B(R)) = d(µ_X, {B-processes µ_Y : H(µ_Y) ≤ R})
  = inf_{stationary ergodic µ_Y : H(µ_Y) ≤ R} d(µ_X, µ_Y)
  = inf_{µ_Y : H(µ_Y) ≤ R} [ inf_{π_{X,Y} ⇒ µ_X, µ_Y} E_{π_{X,Y}} d(X_0, Y_0) ]
  = inf_{π_{X,Y} ⇒ µ_X, H(π_Y) ≤ R} E_{π_{X,Y}} d(X_0, Y_0)
  ≥ inf_{π_{X,Y} ⇒ µ_X, I(π_{X,Y}) ≤ R} E_{π_{X,Y}} d(X_0, Y_0) ≡ D_X(R)   (??)

the process definition of the Shannon distortion-rate function!

Stationary Codes 61


More questions

• Not the traditional Shannon DRF definition. Equivalent? What about the dual/inverse Shannon rate-distortion function?

• Shannon DRF/RDF familiar to information theorists as characterization of optimal source coding with a fidelity criterion = Shannon theory of data compression. How does it relate to the current problem?

• Is the inequality achievable?

Stationary Codes 62


Process vs. traditional Shannon DRF, RDF

D_X(R) = inf_{π_{X,Y} ⇒ µ_X, I(π_{X,Y}) ≤ R} E_{π_{X,Y}} d(X_0, Y_0)

R_X(D) = inf_{π_{X,Y} ⇒ µ_X, E_{π_{X,Y}} d(X_0,Y_0) ≤ D} I(π_{X,Y})

D(R) = inf_N N^{−1} D_N(R) = lim_{N→∞} N^{−1} D_N(R)

where D_N(R) = inf_{π^N ⇒ µ_{X^N}, N^{−1} I(π^N) ≤ R} E d_N(X^N, Y^N)

R(D) = inf_N N^{−1} R_N(D) = lim_{N→∞} N^{−1} R_N(D)

where R_N(D) = inf_{π^N ⇒ µ_{X^N}, N^{−1} E d_N(X^N,Y^N) ≤ D} I(π^N)

Stationary Codes 63


Theorem Given a stationary and ergodic µ_X, under suitable technical assumptions D(R) = D_X(R), R(D) = R_X(D)

Ideas behind proof

D_X(R) ≥ D(R): Suppose that π is a process distribution that (approximately) yields the process DRF, i.e., I(π) ≤ R and D_X(R) ≈ D(π)

Then for large N, the induced vector distributions π^N yield I(π^N) ≤ NR (almost) and D(π^N) = N D(π), which with a continuity argument ⇒

D_X(R) ≥ N^{−1} D_N(R) ≥ D(R)

Stationary Codes 64


D(R) ≥ D_X(R): Choose N large enough so that N^{−1} D_N(R) ≈ D(R) and suppose that π^N approximately yields D_N(R), that is, N^{−1} I(π^N) ≤ R and D(π^N) ≈ D_N(R)

A stationary (and ergodic) pair process π can be constructed using π^N in such a way that the information rate is ≤ R + ε and the per-symbol distortion is close to N^{−1} D_N(R), which again with continuity arguments ⇒ N^{−1} D_N(R) ≥ D_X(R) ⇒ D(R) ≥ D_X(R)

The construction of the process distribution from the block distributions involves using the conditional block distributions most of the time in a conditionally independent manner, but occasionally inserting some random spacing between blocks, which "stationarizes" the pair process

Stationary Codes 65


Advantage of traditional definitions

RN(D) is a convex optimization problem.

∃ tools for analytical optimization of R_N(D) (Gallager, Csiszar) and numerical optimization (Blahut, Rose)

Shannon and other lower bounds sometimes hold with equality
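A sketch (my addition) of the Blahut algorithm mentioned above, computing N = 1 rate-distortion points for a discrete source by sweeping a Lagrange parameter beta:

```python
import numpy as np

def blahut(p, d, beta, iters=500):
    """One (D, R) point for source pmf p and distortion matrix d[x, y]."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])  # output pmf, start uniform
    A = np.exp(-beta * d)
    for _ in range(iters):
        Z = A @ q                              # normalizers Z(x)
        q = q * ((p / Z) @ A)                  # Blahut update of q(y)
    Z = A @ q
    Q = A * q / Z[:, None]                     # test channel Q(y|x)
    D = np.sum(p[:, None] * Q * d)
    R = np.sum(p[:, None] * Q * np.log2(Q / q))
    return D, R

# Fair binary source, Hamming distortion: should trace R(D) = 1 - h_b(D)
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for beta in (0.5, 1.0, 2.0, 4.0):
    print(blahut(p, d, beta))
```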

Stationary Codes 66


Useful fact: Given a stationary and ergodic source µ_X, the infimum in the process DRF

D_X(R) = inf_{π_{X,Y} ⇒ µ_X, I(π_{X,Y}) ≤ R} E_{π_{X,Y}} d(X_0, Y_0)

is the same whether taken over all stationary and ergodic processes or over all stationary processes

This greatly simplifies the proof of the block coding theorem for sources with memory,

which brings us at last to Shannon source coding, the Shannon theory of data compression — the information theoretic approach to coding continuous or high entropy rate sources into relatively low entropy rate sources while minimizing distortion.

The classic work is Shannon's 1959 paper

Stationary Codes 67

[Image: Shannon's 1959 source coding paper]

Stationary Codes 68


Source Coding

Given source X, fidelity criterion:

[Diagram: source X_n → encoder f → bits U_n ∈ A_U → decoder g → reproduction X̂_n]

π_{f,g} = induced distribution of (X, X̂)

Average distortion D_{µ_X}(f, g) ≡ D(π_{f,g}) = E_{π_{f,g}} d(X_0, X̂_0)

Optimize: For a given class of codes C, what is the best possible code performance?

δ_X(R) ≡ inf_{f,g ∈ C: log|A_U| ≤ R} D_{µ_X}(f, g)

the operational DRF

Shannon and most everybody since considers block codes

Emphasis here is stationary codes — compare the 2 structures

Stationary Codes 69


Block vs. Sliding-block

Block coding

• Far more known about design: e.g., transform codes, vector quantization, optimality properties, clustering (+)

• Does not preserve key properties (stationarity, ergodicity, mixing, 0-1 law, B) (−) In general the output is neither stationary nor ergodic (it is N-stationary and can have a periodic structure, not necessarily N-ergodic).

Can "stationarize" with a uniform random start, but this retains possible periodicities. Not equivalent to stationary coding of the input.

• Not defined for infinite block length, no limiting codes as blocklength grows. (−)

Stationary Codes 70


Stationary coding

• preserves key properties of the input process: stationarity, ergodicity, mixing, B, 0-1 law (+)

• well-defined for N = ∞. Infinite codes can be approximated by finite codes. Sequences of finite codes can converge (+)

• models many communication and signal processing techniques: time-invariant convolutional codes, predictive quantization, ∆-modulation, Σ∆-modulation, nonlinear and linear time-invariant filtering, wavelet coefficient evaluation by LTI filters

• used to prove key results in ergodic theory (Ornstein isomorphism theorem, Sinai-Ornstein theorem)

There are constructions in ergodic theory and information theory to get stationary codes from block codes & vice-versa

Stationary Codes 71


Source Coding Theorem

X stationary and ergodic, additive (or subadditive) fidelity criterion with reference letter, i.e., ∃ a* ∈ A_X for which E[d(X_0, a*)] < ∞; then for block codes and for stationary codes

δX(R) = DX(R)

Positive coding theorem is hard — block coding theorem uses the traditional Shannon random coding argument

Stationary coding uses the positive block coding theorem to get good blocks, embedded in a sliding-block code structure using "stationarization" — code long sequences of blocks with occasional spacing based on past source information

No shortcuts using stationary codes

Stationary Codes 72


Converse coding theorem is simple for stationary codes

Similar to Shannon DRF lower bound for d(µX, B(R))

• Cascade of stationary encoder and decoder is a stationary code

• Channel process between encoder and decoder is a 2^R-ary alphabet process with entropy ≤ R ⇒

δ_X(R) = inf_{f,g ∈ C: log|A_U| ≤ R} D_{µ_X}(f, g)
  ≥ inf_{π_{X,Y} ⇒ µ_X, H(µ_Y) ≤ R} D(π_{X,Y}) = inf_{µ_Y : H(µ_Y) ≤ R} d(µ_X, µ_Y) = d(µ_X, B(R)) from (??)
  ≥ inf_{π_{X,Y} ⇒ µ_X, I(π_{X,Y}) ≤ R} D(π_{X,Y}) = D_X(R)

Stationary Codes 73


In particular, δ_X(R) ≥ d(µ_X, B(R)) ≥ D_X(R)

So the positive coding theorem ⇒

δ_X(R) = d(µ_X, B(R)) = inf_{µ_Y ∈ B(R)} d(µ_X, µ_Y) = D_X(R)

d(µX, B(R)) “sandwiched” between equal quantities

⇒ Shannon DRF solves the d(µ_X, B(R)) evaluation problem and provides a geometric interpretation of source coding

• Is there an intuition behind this?

Yes: can use a good stationary fake of a process X to design a good source code for X

Stationary Codes 74


Fakes and good source codes

Suppose we have a good finite-length stationary code g of coin flips, i.e., it yields a B-process Y_n with d(µ_X, µ_Y) ≈ D_X(R)

⇒ possible reproduction sequences can be depicted on a trellis diagram:

[Block diagram: {Z_n} → shift register (Z_n, Z_{n−1}, Z_{n−2}) of length L = 3 → function/table g → Y_n = g(Z_n, state = (Z_{n−1}, Z_{n−2})), time increasing to the left]

Stationary Codes 75


[Trellis diagram: states 00, 01, 10, 11; from each state, branches labeled g(000), g(100), g(001), g(101), g(010), g(110), g(011), g(111), repeated over successive time stages]

Given a long input sequence, an encoder can find the minimum distortion path through the trellis using the Viterbi algorithm (a dynamic programming search for the minimum distortion path through a directed graph)
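A sketch (my addition) of such a trellis encoder, reusing the length-3 table g from the earlier example; assumptions: squared-error distortion and rate R = 1 bit per source symbol:

```python
import numpy as np

def viterbi_encode(x, g, L=3):
    """Minimum squared-error path: g maps L-bit windows to output levels."""
    n_states = 2 ** (L - 1)           # state = (z_{n-1}, ..., z_{n-L+1})
    cost = np.zeros(n_states)
    back = []                         # per-step backpointers (prev_state, bit)
    for xn in x:
        new_cost = np.full(n_states, np.inf)
        choice = [None] * n_states
        for s in range(n_states):                  # previous state
            for b in (0, 1):                       # candidate input bit z_n
                ns = (b << (L - 2)) | (s >> 1)     # next state
                c = cost[s] + (xn - g[(b << (L - 1)) | s]) ** 2
                if c < new_cost[ns]:
                    new_cost[ns], choice[ns] = c, (s, b)
        cost = new_cost
        back.append(choice)
    s, bits = int(np.argmin(cost)), []             # trace back the best path
    for choice in reversed(back):
        s, b = choice[s]
        bits.append(b)
    return bits[::-1]

g = {0b000: 0.7683, 0b001: -0.4233, 0b010: -0.1362, 0b011: 1.3286,
     0b100: 0.4233, 0b101: 0.1362, 0b110: -1.3286, 0b111: -0.7683}
x = np.random.default_rng(7).standard_normal(1_000)
bits = viterbi_encode(x, g)                        # U = f(X), one bit/symbol
```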

Stationary Codes 76


Trellis encoder

[Diagram: source X → Viterbi algorithm → bits U = f(X) → g → reproduction X̂ = g(U), searched over the code trellis as time n advances]

Stationary Codes 77


If f = Viterbi algorithm encoder (or a sliding-block approximation), then

DµX( f , g) ≈ d(µX, B(R)) = DX(R)

Most natural use is as a hybrid code — block Viterbi algorithm matched to a stationary source decoder = nearly optimal simulator, but can stationarize the encoder to match theory

Underlying theory assumes R < H(µ_X) ⇒ not enough bits to get D_{µ_X}(f, g) = 0 and isomorphism.

Source coding can be viewed as a "sloppy" isomorphism — do not have an invertible code, doomed to distortion ≥ D_X(R) if encoding into R bits per symbol

Good news is can do nearly this well!

but will see we cannot actually achieve D_X(R)

Stationary Codes 78


How to find a good source decoder/rate-constrained simulator g?

Look at properties of good codes

Stationary Codes 79


Shannon Optimal Reproduction Distribution

• For any random vector X^N, the Shannon RDF is an information theoretic (convex) optimization

• For an IID source, R_X(D) = R_{X_0}(D) = infimum of I(π) over joint distributions π on R² with input marginal µ_X and d(π) ≤ D. If an optimizing π exists, the resulting µ_{Y_0} is a Shannon optimal reproduction distribution

For an IID source ⇒ N-dimensional Shannon optimal reproduction distribution = µ_{Y^N} = µ_{Y_0}^N, a product distribution

For an IID source, the optimal process distribution is IID with marginal distribution µ_{Y_0}, the distribution yielding the Shannon RDF

Stationary Codes 80


• Csiszar (1974): For random vectors X (abbreviate XN)

◦ There exists a distribution π achieving the finite-order R(D) (under more general conditions than those considered here)

◦ If a sequence of joint distributions π^{(n)}, n = 1, 2, . . . with marginals µ_X and µ_Y^{(n)} satisfies

I(π^{(n)}) = I(X, Y^{(n)}) ≤ R, n = 1, 2, . . .
lim_{n→∞} E_{π^{(n)}}[d(X, Y^{(n)})] = D_X(R)

then µ_Y^{(n)} has a subsequence that converges to a Shannon optimal reproduction distribution weakly and in squared-error transportation distance.

◦ If the Shannon optimal distribution is unique (e.g., it is Gaussian), then µ_Y^{(n)} converges to it.

Stationary Codes 81


Finding a Shannon optimal reproduction distribution

• For squared error: the Shannon optimal reproduction alphabet is finite unless the Shannon lower bound holds with equality, e.g., Gaussian [Fix (1977), Rose (1994)]

• Shannon optimal reproduction distribution for Gaussian N(0, 1), squared error ⇒ another Gaussian, N(0, 0.75) (R = 1, SLB holds)

• Rose's algorithm: mapping approach ⇒ alternative to the Blahut algorithm, gives better results for continuous alphabets when the SLB does not hold

• Shannon optimal reproduction distribution for Uniform(0, 1), R = 1, is a discrete distribution with alphabet size 3, pmf:

y        0.2    0.5    0.8
p_Y(y)   0.368  0.264  0.368

Stationary Codes 82


Asymptotically Optimal Codes

• Codes fn, gn, n = 1, 2, . . . are asymptotically optimal (a.o.) if

Source coding: lim_{n→∞} D_{µ_X}(f_n, g_n) = δ_X(R) = D_X(R)

Simulation/fake process: lim_{n→∞} d(µ_X, µ_{g_n(Z)}) = d(µ_X, B(R))

• An optimal code (f, g) (if it exists) is trivially asymptotically optimal — set (f_n, g_n) = (f, g) all n. If (f, g) optimal, then necessarily

Source coding: D_{µ_X}(f, g) = δ_X(R) = D_X(R)
Simulation: d(µ_X, µ_{g(Z)}) = d(µ_X, B(R))

Focus on source coding, similar properties for fake process problem

Stationary Codes 83


A design approach

• Code design idea: Find necessary conditions for a.o. codes, use them as guidelines for code design. Has much in common with historical methods for block codes such as Lloyd clustering

Detour: Review Lloyd optimality conditions for block source codes (vector quantizers). Here fix blocklength N. "Optimum" here means equals the operational DRF for blocklength N.

Note: This block code optimality stuff will NOT be covered if I am running behind.

Stationary Codes 84


Lloyd Optimality Conditions for Block Codes

Steinhaus (1956) for squared error, vectors; Lloyd (1957) for random variables, general distortion (easily generalized to vectors)

Rediscovered many times, e.g., k-means (1967), principal points (1990), alternating optimization (2002)

Abbreviate notation by dropping the superscript N: X is the N-D random vector and E and D denote a blocklength-N encoder and decoder, respectively.

Lloyd Quantizer Optimality Properties An optimal quantizer must satisfy the following two conditions

Stationary Codes 85


Optimum encoder for a given decoder Given a decoder with reproduction codebook C = {x̂_i; i = 1, 2, . . . , 2^{NR}}, the optimal encoder satisfies

E(x) = argmin_{i∈I} ρ(x, x̂_i)

the minimum distortion encoder is optimal

Optimum decoder for a given encoder Given an encoder E, the optimal decoder is the generalized centroid

D(i) = argmin_{y∈A} E[d(X, y) | X ∈ {x : E(x) = i}]

centroid decoder is optimal

Application to an empirical distribution (a training or learning set) yields an iterative codebook improvement algorithm, an early clustering/learning algorithm! The Lloyd algorithm
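A sketch (my addition) of the Lloyd algorithm on a training set, for a scalar quantizer with squared error (where the generalized centroid is the conditional mean):

```python
import numpy as np

def lloyd(train, rate, iters=100):
    # initial codebook: spread levels over the empirical quantiles
    levels = np.quantile(train, (np.arange(2 ** rate) + 0.5) / 2 ** rate)
    for _ in range(iters):
        # condition 1: minimum-distortion (nearest-level) encoder
        idx = np.argmin(np.abs(train[:, None] - levels), axis=1)
        # condition 2: centroid decoder (mean of each encoding cell)
        for i in range(len(levels)):
            if np.any(idx == i):
                levels[i] = train[idx == i].mean()
    return levels

train = np.random.default_rng(8).standard_normal(50_000)
print(lloyd(train, rate=2))   # close to the 4-level Lloyd-Max quantizer
                              # for N(0, 1): roughly +/-0.45 and +/-1.51
```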

Stationary Codes 86


Back to stationary codes, where "optimal" means close to the Shannon DRF —

Stationary Codes 87


Necessary condition 1: Process Approximation

• f_n, g_n, n = 1, 2, . . .: asymptotically optimal sequence of stationary source codes of a stationary ergodic source {X_n}

• U^{(n)}: encoder output/decoder input process with alphabet of size 2^R for integer rate R

• X̂^{(n)}: the resulting reproduction processes.

Then

lim_{n→∞} d(µ_X, µ_{X̂^{(n)}}) = D_X(R)

lim_{n→∞} H(X̂^{(n)}) = lim_{n→∞} H(U^{(n)}) = R

lim_{n→∞} d_0(U^{(n)}, Z) = 0

If R = 1, the encoded process U^{(n)} → fair coin flips!

Stationary Codes 88


Necessary condition 2: Moment Conditions

Resemble the block code moment conditions, with ε_0^{(n)} = X̂_0^{(n)} − X_0:

lim_{n→∞} E(X̂_0^{(n)}) = E(X_0)

lim_{n→∞} COV(X_0, X̂_0^{(n)}) / σ²_{X̂_0^{(n)}} = 1

lim_{n→∞} σ²_{X̂_0^{(n)}} = σ²_{X_0} − D_X(R)

or, equivalently,

lim_{n→∞} E(ε_0^{(n)}) = 0

lim_{n→∞} E(ε_0^{(n)} X̂_0^{(n)}) = 0

lim_{n→∞} σ²_{ε_0^{(n)}} = D_X(R)

Stationary Codes 89


Necessary Condition 3: Marginal distribution (Shannon condition for IID processes)

If X is IID, then

• A subsequence of the marginal distributions µ_{X̂_0^{(n)}} of the reproduction processes converges weakly and in squared-error transportation distance to a Shannon optimal reproduction distribution.

• If the Shannon optimal reproduction distribution is unique, then µ_{X̂_0^{(n)}} converges to it.

• If a code is optimal, then µ_{X̂_0} = the Shannon optimal distribution

Stationary Codes 90


Necessary Condition 4: Finite-dimensional distributions (Shannon condition)

If X is IID,

• then a subsequence of the N-dimensional reproduction distributions µ_{(X̂^{(n)})^N} converges weakly and in T_2 to the N-fold product of a Shannon optimal marginal distribution

• If the one-dimensional Shannon optimal distribution is unique, then µ_{(X̂^{(n)})^N} converges weakly and in T_2 to its N-fold Shannon optimal product distribution

• If a code (f, g) is optimal, then µ_{X̂^N} = the N-fold product of a Shannon optimal marginal distribution

Stationary Codes 91


If the code is optimal, then Condition 4 ⇒ X̂ is also IID with the Shannon optimal reproduction marginal. This yields a contradiction since H(X̂) ≤ R < H(Y) = ∞

optimal codes do not exist for the IID Gaussian source with squared error!!

Stationary Codes 92


Asymptotically Uncorrelated Condition

Covariance function of X̂(n): K_{X̂(n)}(k) = COV(X̂(n)_i, X̂(n)_{i−k}) for all integers k.

Given an IID process X with distribution µ_X and an a.o. sequence f_n, g_n of stationary source encoder/decoder pairs with common alphabet of size 2^R: for all k ≠ 0,

lim_{n→∞} K_{X̂(n)}(k) = 0

and hence the reproduction processes are asymptotically uncorrelated.
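A quick empirical check of this condition (a sketch; x can be any simulated reproduction sample path, e.g., one produced by the decoders constructed later in these slides):

# Sketch: estimate the covariance function K(k) from a sample path and
# verify that it is small for every lag k != 0. Purely illustrative.
import numpy as np

def autocov(x, k):
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(x * x) if k == 0 else np.mean(x[k:] * x[:-k])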

Stationary Codes 93


Do optimal codes exist?

We have already seen that the answer is no for Gaussian IID, R = 1

On the other hand, suppose (f, g) is an optimal source code for a source µ_X, where µ_X is a B-process with H(X) ≤ R = 1. Then there exists an invertible stationary mapping into an IID binary process ⇒ the code has zero distortion, and the binary channel process and the reproduction process both have entropy rate H(X) ≤ 1.

If µ_X is an IID process and ∞ > H(X) = H(X_0) > R = 1, then from the optimality properties an optimal code must have µ_{X̂^N} = π_{Y^N} = π^N_{Y_0}, the N-fold product of the Shannon optimal marginal, for all N.

Linder has shown, using Csiszár's results, that still H(Y_0) > 1 = H(X̂_0), a contradiction, and hence no optimal code exists, as in the Gaussian IID case

Stationary Codes 94


There is a discontinuity between the zero-distortion result (Ornstein isomorphism theorem) and the nonzero-distortion result (Shannon rate-distortion theorem) — optimal stationary codes exist for the former if the source is B, but not for the latter if the source is IID

You can get as close as you like to D_X(R), but you can never achieve it, for source coding or for faking

Stationary Codes 95


A Code Design Algorithm

Recall the question: How do we find a good simulator/decoder g?

One approach: Find a g which at least satisfies the necessary conditions for approximate optimality (provably or numerically).

Consider a sliding-block code g_L of length L of an equiprobable binary IID process Z which produces an output process X defined by

X_n = g_L(Z_n, Z_{n−1}, . . . , Z_{n−L+1})

where the argument is a binary L-tuple. In the trellis setting, (Z_{n−1}, . . . , Z_{n−L+1}) is the state and Z_n is the next input bit.

Stationary Codes 96


Suppose that the ideal distribution for X_n is given by the CDF F_{Y_0} of the Shannon optimal marginal reproduction distribution.

Given a binary L-tuple u^L = (u_0, u_1, . . . , u_{L−1}), define

b(u^L) = \sum_{i=0}^{L−1} u_i 2^{−i−1} + 2^{−L−1} ∈ (0, 1).   (1)

Let

X_n = g_L(Z_n, Z_{n−1}, . . . , Z_{n−L+1}) = F^{−1}_{Y_0}(b(Z_n, Z_{n−1}, . . . , Z_{n−L+1})).

As L → ∞, b(Z_n, Z_{n−1}, . . . , Z_{n−L+1}) converges weakly to the uniform distribution on (0, 1), and g_L(Z_n, Z_{n−1}, . . . , Z_{n−L+1}) converges weakly to the (1D) Shannon optimal marginal distribution.
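A minimal numerical sketch of this decoder, assuming the IID N(0, 1) source at R = 1, for which the Shannon optimal reproduction marginal is N(0, 1 − D_X(1)) = N(0, 0.75); the names b, g_L, and F_{Y_0} follow the slides, everything else is illustrative:

# Sketch of the sliding-block decoder X_n = F^{-1}_{Y0}(b(Z_n,...,Z_{n-L+1})).
# Assumption: IID N(0,1) source at rate R = 1, so F_{Y0} is the N(0, 0.75) CDF.
import numpy as np
from scipy.stats import norm

L = 8
rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=10_000)          # fair coin flips

def b(uL):
    # b(u^L) = sum_i u_i 2^{-i-1} + 2^{-L-1}: the midpoint of a dyadic cell
    return sum(u * 2.0 ** (-i - 1) for i, u in enumerate(uL)) + 2.0 ** (-len(uL) - 1)

def F_inv(p):
    return norm.ppf(p, loc=0.0, scale=np.sqrt(0.75))

def g_L(window):
    # window = (Z_n, Z_{n-1}, ..., Z_{n-L+1})
    return F_inv(b(window))

X = np.array([g_L(Z[n - L + 1:n + 1][::-1]) for n in range(L - 1, len(Z))])

As on the slide, the empirical CDF of X approaches F_{Y_0} as L grows, but successive samples remain strongly dependent.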

Stationary Codes 97


Works for both continuous and discrete Shannon optimal marginal distributions.

g satisfies the 1D moment conditions and the 1D Shannon marginaldistribution condition.

Problem: This only matches the Shannon marginal distribution; successive outputs on the trellis are highly correlated (reminder plot on the next page), a poor fake of an IID process

Stationary Codes 98


[Figure: scatter plot of successive samples]

Stationary Codes 99


Random permutation

A possible solution: Randomly generate a permutation of the binary L-tuples, P : {0, 1}^L → {0, 1}^L, then set

g(u^L) = F^{−1}_{Y_0}(b(P(u^L)))

Permutation is then fixed for all time and used repeatedly.

Now

g(Z_n, Z_{n−1}, . . . , Z_{n−L+1}) = F^{−1}_{Y_0}(b(P(Z_n, Z_{n−1}, . . . , Z_{n−L+1})))

Simulation coder/source decoder becomes

Stationary Codes 100


[Diagram: the IID bits Z_n enter a length-L binary shift register holding (Z_n, Z_{n−1}, . . . , Z_{n−L+1}); the register contents pass through the permutation P, then the map b, then F^{−1}_{Y_0}, producing the output X_n]

Coupled with the Viterbi algorithm, this yields a source code. Driven by fair coin flips, it yields a white Gaussian fake.
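Continuing the sketch above (same assumptions), the permutation P can be drawn once as a random permutation of the 2^L register contents, indexed as integers:

# Sketch: fix a random permutation P of {0,1}^L (as integers) once and for
# all, then decode via F^{-1}_{Y0}(b(P(window))). Reuses rng, Z, L, b, F_inv.
perm = rng.permutation(2 ** L)

def g_perm(window):
    idx = int("".join(str(u) for u in window), 2)            # L-tuple -> integer
    v = [(perm[idx] >> (L - 1 - i)) & 1 for i in range(L)]   # P(u^L) as bits
    return F_inv(b(v))

X_perm = np.array([g_perm(Z[n - L + 1:n + 1][::-1])
                   for n in range(L - 1, len(Z))])

A scatter plot of the pairs (X_perm[:-1], X_perm[1:]) now looks like independent samples, in contrast to the un-permuted code.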

Stationary Codes 101


[Figure: scatter plot of successive samples, with permutation]

Stationary Codes 102


[Figure: CDFs of the 1-D empirical reproduction distributions produced by the trellis code for L = 4 and L = 8, each overlaid on the CDF of the 1-D Shannon optimal distribution]

Empirically: the spectrum is ≈ flat and H(reproduction process) ≈ H(binary sequence) ≈ 1 ⇒ close to fair coin flips in d̄_0 (using Marton's inequality (1996) relating d̄_0 to the limiting relative entropy/Kullback–Leibler rate)

Performance in source coding/compression:

Stationary Codes 103


Numeric Results: IID Gaussian

Coder          Rate (bits)   MSE      SNR (dB)
RP 8           1             0.2989   5.24
RP 9           1             0.2913   5.36
RP 10          1             0.2835   5.47
RP 12          1             0.2740   5.62
RP 16          1             0.2638   5.79
RP 20          1             0.2582   5.88
RP 24          1             0.2557   5.92
RP 28          1             0.2542   5.95
D_X(R)         1             0.25     6.02
TCQ 9          1             0.3105   5.08
TCQ(opt) 9     1             0.2780   5.56
Pearlman 10    1             0.292    5.35
Stewart 10     1             0.293    5.33
Linde/Gray 9   1             0.31     5.09
LC 10          1             0.2698   5.69
LC(opt) 10     1             0.2673   5.73

Stationary Codes 104


[Figure: 1 bit fake Gaussian]

Stationary Codes 105


Numeric Results: Uniform [0, 1)

Coder        Rate (bits)   MSE      SNR (dB)
RP 8         1             0.0203   6.13
RP 9         1             0.0195   6.30
RP 10        1             0.0190   6.42
RP 12        1             0.0184   6.55
RP 16        1             0.0179   6.69
RP 20        1             0.0176   6.75
RP 24        1             0.0175   6.78
RP 28        1             0.0174   6.79
D_X(R)       1             0.0173   6.84
TCQ 9        1             0.0194   6.33
TCQ(opt) 9   1             0.0183   6.58
LC 10        1             0.0191   6.40
LC(opt) 10   1             0.0179   6.67

Stationary Codes 106


[Figure: 1 bit fake Uniform]

Stationary Codes 107


More general sources?

Finite-order distribution convergence proofs exist only for IID sources. Conjecture: more generally, the convergence is to the corresponding finite-order distribution of the Shannon optimal reproduction process distribution from the process definition of the DRF.

Mao (2011) used this idea to design a trellis encoding system for a Gauss–Markov source X_n = 0.9 X_{n−1} + W_n, where W_n is IID N(0, 1).

A fake Gauss–Markov process was designed by using a fake IID Gaussian to drive an autoregressive filter chosen so that the output reproduction process had a covariance close to that of the Shannon optimal reproduction process (a sketch follows).
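A minimal sketch of that construction, assuming w_fake is a fake IID N(0, 1) sequence from a sliding-block code as above; the first-order coefficient 0.9 here simply mirrors the source model, whereas Mao's filter was matched to the covariance of the Shannon optimal reproduction process:

# Sketch: drive an AR(1) filter x_n = a x_{n-1} + w_n with a fake IID
# Gaussian input to obtain a fake Gauss-Markov process. Illustrative only.
from scipy.signal import lfilter

def ar1(w_fake, a=0.9):
    return lfilter([1.0], [1.0, -a], w_fake)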

Stationary Codes 108


[Figure: rate R versus distortion D for X_n = 0.9 X_{n−1} + W_n, comparing the rate-distortion function, R = (1/2) log(1/D), the Random Permutation Trellis Coder (L = 20), Stewart, and Gray/Linde VQ]

Stationary Codes 109


Random Closing Observations

• The 1 bit per symbol fake Gaussian passes the Kolmogorov–Smirnov goodness-of-fit test (with significance level α = 0.05) for the marginals and for the conditional distributions given pasts of length ≤ 4 as being Gauss IID (see the sketch after this list).

• Weissman and Ordentlich (2005) developed results in the spirit of the asymptotically optimal reproduction distributions for block codes by considering empirical reproduction distributions for IID sources and sources satisfying the Shannon lower bound.

• The Lloyd centroid property can be used to fine-tune a trellis encoder to a training set. It helps a little when the shift-register length is short, but the improvement is negligible otherwise.
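A sketch of the marginal goodness-of-fit check from the first bullet, assuming X_perm is the 1-bit fake Gaussian from the earlier sketches and testing against its design marginal N(0, 0.75); the conditional-distribution tests would partition the data by quantized pasts in the same spirit:

# Sketch: two-sided Kolmogorov-Smirnov test of the fake's marginal
# against N(0, 0.75) at significance level alpha = 0.05. Illustrative.
from scipy.stats import kstest

stat, pvalue = kstest(X_perm, "norm", args=(0.0, 0.75 ** 0.5))
passes = pvalue > 0.05   # True means: fail to reject the Gaussian marginal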

Stationary Codes 110


• Many of the fundamental underlying results remain very hard to prove; e.g., Ornstein wrote a book proving his general isomorphism theorem. Ornstein theory is harder than Shannon theory because one needs to construct a sequence of codes that gets better and converges.

• Mismatch: The d̄-distance yields several mismatch results of the following form: suppose that you design a nearly optimal source code for a source X, but you then apply the code to another source Y. The resulting difference in performance is bounded above by d̄(µ_X, µ_Y). Similarly, both the operational and Shannon DRFs are continuous with respect to d̄.

Stationary Codes 111


Acknowledgements

These slides include material from over four decades of work with students and colleagues. Particularly influential collaborators on these specific topics include Dave Neuhoff, Paul Shields, Don Ornstein, Tamás Linder, and Mark Mao, none of whom bear any blame for any errors in these slides.

Stationary Codes 112


Suggested Reading

I. Csiszár. On an extremum problem of information theory. Studia Scientiarum Mathematicarum Hungarica, pages 57–70, 1974.

S. L. Fix. Rate distortion functions for continuous alphabet memoryless sources. PhD thesis, University of Michigan, Ann Arbor, Michigan, 1977.

A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.

R. M. Gray. Sliding-block source coding. IEEE Trans. Inform. Theory, IT-21(4):357–368, July 1975.

R. M. Gray. Time-invariant trellis encoding of ergodic discrete-time sources with a fidelity criterion. IEEE Trans. Inform. Theory, IT-23:71–83, Jan. 1977.

R. M. Gray. Probability, Random Processes, and Ergodic Properties. Springer-Verlag, New York, 1988. Second edition, Springer, 2009.

R. M. Gray. Entropy and Information Theory. Springer-Verlag, 1990. Second edition, Springer, 2011.

R. M. Gray, D. L. Neuhoff, and P. C. Shields. A generalization of Ornstein's d̄ distance with applications to information theory. Ann. Probab., 3:315–328, April 1975.

A. J. Khinchine. The entropy concept in probability theory. Uspekhi Matematicheskikh Nauk, 8:3–20, 1953. Translated in Mathematical Foundations of Information Theory, Dover, New York (1957).

Stationary Codes 113


A. N. Kolmogorov. On the Shannon theory of information in the case of continuous signals. IRE Transactions on Information Theory, IT-2:102–108, 1956.

A. N. Kolmogorov. A new metric invariant of transitive dynamic systems and automorphisms in Lebesgue spaces. Dokl. Akad. Nauk SSSR, 119:861–864, 1958. (In Russian.)

S. P. Lloyd. Least squares quantization in PCM. Unpublished Bell Laboratories Technical Note, 1957; portions presented at the Institute of Mathematical Statistics Meeting, Atlantic City, New Jersey, September 1957. Published in the March 1982 special issue on quantization of the IEEE Transactions on Information Theory.

Mark Z. Mao, Robert M. Gray, and Tamás Linder. Rate-constrained simulation and source coding of IID sources. IEEE Transactions on Information Theory, to appear.

Mark Z. Mao. On Asymptotically Optimal Source Coding and Simulation of Stationary Sources. PhD dissertation, Department of Electrical Engineering, Stanford University, June 2011.

K. Marton. On the rate distortion function of stationary sources. Problems of Control and Information Theory,4:289–297, 1975.

K. Marton. Bounding d̄-distance by informational divergence: a method to prove measure concentration. Annals of Probability, 24(2):857–866, 1996.

D. Ornstein. Bernoulli shifts with the same entropy are isomorphic. Advances in Math., 4:337–352, 1970.

D. Ornstein. An application of ergodic theory to probability theory. Ann. Probab., 1:43–58, 1973.

D. Ornstein. Ergodic Theory, Randomness, and Dynamical Systems. Yale University Press, New Haven, 1975.

S.T. Rachev. Probability Metrics and the Stability of Stochastic Models. John Wiley & Sons Ltd, Chichester, 1991.

Stationary Codes 114


S. T. Rachev and L. Rüschendorf. Mass Transportation Problems. Vol. I: Theory, Vol. II: Applications. Probability and its Applications. Springer-Verlag, New York, 1998.

K. Rose. A mapping approach to rate-distortion computation and analysis. IEEE Trans. Inform. Theory, 40(6):1939–1952, Nov. 1994.

P. C. Shields. The Theory of Bernoulli Shifts. The University of Chicago Press, Chicago, Ill., 1973.

P. C. Shields. The interactions between ergodic theory and information theory. IEEE Trans. Inform. Theory, 44:2079–2093, 1998.

P. C. Shields and D. L. Neuhoff. Block and sliding-block source coding. IEEE Trans. Inform. Theory, IT-23:211–215,1977.

M. Smorodinsky. A partition on a Bernoulli shift which is not weakly Bernoulli. Theory of Computing Systems, 5(3):201–203, 1971.

H. Steinhaus. Sur la division des corps matériels en parties. Bull. Acad. Polon. Sci., IV(Cl. III):801–804, 1956.

L. C. Stewart, R. M. Gray, and Y. Linde. The design of trellis waveform coders. IEEE Trans. Comm.,COM-30:702–710, April 1982.

S. S. Vallender. Computing the Wasserstein distance between probability distributions on the line. Theory Probab. Appl., 18:824–827, 1973.

L. N. Vasershtein. Markov processes on countable product space describing large systems of automata. Problemy Peredachi Informatsii, 5:64–73, 1969.

C. Villani. Topics in Optimal Transportation, volume 58 of Graduate Studies in Mathematics. American MathematicalSociety, Providence, RI, 2003.

Stationary Codes 115


C. Villani. Optimal Transport, Old and New, volume 338 of Grundlehren der Mathematischen Wissenschaften. Springer, 2009.

T. Weissman and E. Ordentlich. The empirical distribution of rate-constrained source codes. IEEE Transactions on Information Theory, Vol. 51, No. 11, Nov. 2005, pp. 3718–3733.

Stationary Codes 116