Distributed Source Coding: Theory and Practice
EUSIPCO'08, Lausanne, Switzerland, August 25, 2008
Speakers: Dr Vladimir Stankovic, Dr Lina Stankovic, Dr Samuel Cheng

Contact Details
• Vladimir Stanković, Department of Electronic and Electrical Engineering, University of Strathclyde
Email: [email protected] Web: http://personal.strath.ac.uk/vladimir.stankovic
• Lina Stanković, Department of Electronic and Electrical Engineering, University of Strathclyde
Email: [email protected] Web: http://personal.strath.ac.uk/lina.stankovic
• Samuel Cheng, Department of Electrical and Computer Engineering, University of Oklahoma
Email: [email protected] Web: http://faculty-staff.ou.edu/C/Szeming.Cheng-1/

Distributed Source Coding (DSC)
• Compression of two or more physically separated sources
– The sources do not communicate with each other (hence distributed coding)
– Noiseless transmission to the decoder
– Decoding is performed jointly
• A compression (source coding) problem of network information theory
[Diagram: Encoder X and Encoder Y feed a joint decoder that outputs X^ and Y^]

Motivation
• Increased interest in DSC due to many potential applications:
– Data gathering in wireless sensor networks
– Distributed (or Wyner-Ziv) video coding
– Multiple description coding
– Compressing encrypted data
– Streaming from multiple servers
– Hyperspectral imaging
– Multiview and 3D video
– Cooperative wireless communications

Talk Roadmap
• Theory: Slepian-Wolf (SW) problem, Wyner-Ziv (WZ) problem, multiterminal (MT) problem, DSC and network coding (NC)
• Code designs: SW coding, WZ coding, MT coding, DSC-NC
• Practice: wireless sensor networks, distributed video coding, cooperative diversity, multimedia streaming

DSC: Problem Setup and Theoretical Bounds
• X and Y – discrete, correlated, memoryless sources
• V – set of nodes in the network; E – set of edges
• Edge e ∈ E – noiseless channel with bit-rate constraint c(e); c – vector of rate constraints c(e), ∀e ∈ E
• Each destination b_i wants to reconstruct perfectly both X and Y
• How do we determine c?
Max-flow = min-cut (graph theory): e.g., mincut(a1, b2) = 2.0 in the example network.
[Figure: network of noiseless channels with rate limits, G(V,E,c); source nodes a1 (observing {Xi}) and a2 (observing {Yi}) connected through the network to destinations b1, b2, …, br]
• Cuts determine the bit-rate constraint of the links between any two nodes by disconnecting the edges in the graph network
Theoretical Limits (Han '80, Song & Yeung '01)
• A rate vector c is achievable if and only if:
– each cut separating a1 from any b has capacity at least H(X|Y)
– each cut separating a2 from any b has capacity at least H(Y|X)
– each cut separating a1 and a2 from any b has capacity at least H(X,Y)
Asymmetric SW: Source Coding with Decoder Side Information
[Diagram: lossless encoder compresses source X; decoder reconstructs X^ using side information Y]
• Extending the above (source coding with decoder side information) to lossy coding gives Wyner-Ziv coding
Wyner-Ziv (WZ) Problem
[Diagram: lossy encoding of source X into index V; decoder forms X̂ using side information Y]
• Lossy source coding of X with decoder side information Y
• Extension of the asymmetric SW setup to rate-distortion theory
• Distortion constraint at the decoder: E[d(X, X̂)] ≤ D
• Wyner-Ziv rate-distortion function R_WZ(D)
• Conditional rate-distortion function R_X|Y(D): side information available at both encoder and decoder
[Diagram: lossy encoding of X into V; decoder forms X̂; Y available at both sides]

WZ: Lossy Source Coding with Decoder SI
[Diagram: lossy encoder for source X; decoder outputs X^ using side information Y]
• In general, R_WZ(D) ≥ R_X|Y(D), i.e., there is a rate loss in WZ coding compared to joint encoding
• Rate loss (Zamir '96):
– less than 0.22 bit for binary sources and Hamming measure
– less than 0.5 bit for continuous sources and mean-squared error (MSE) measure
WZ: Jointly Gaussian Sources with MSE Distortion Measure
[Diagram: lossy encoding of X into V; decoder forms X̂ using side information Y]
• Correlation model: X = αY + Z, where Y ~ N(0, σ_Y²) and Z ~ N(0, σ_Z²) are independent, Gaussian
• No rate loss in this case compared to joint encoding:

R_WZ(D) = R_X|Y(D) = (1/2) log₂(σ_Z²/D),  for 0 < D ≤ σ_Z²

• For MSE distortion and jointly Gaussian X and Y, the rate-distortion function is the same as for joint encoding and joint decoding
Multiterminal (MT) Source Coding
• Extension of the SW setup to rate-distortion theory
• Two types: direct and indirect/remote MT source coding
Quadratic Gaussian MT Source Coding with MSE Distortion
• Two setups: direct MT coding (Wagner '05) and indirect MT coding (Oohama '05)

Quadratic Gaussian Direct MT Source Coding with MSE Distortion (Wagner '05)
• Y1 and Y2 – jointly Gaussian sources with variances σy1² and σy2²
• ρ – correlation coefficient
• D1, D2 – distortion constraints
• The slide gives the rate region for the case D1 = D2 and σy1² = σy2²
Quadratic Gaussian Indirect MT Source Coding with MSE Distortion (Oohama '05)
• X, N1, and N2 are zero-mean, mutually independent Gaussian random variables with variances σx², σn1², and σn2²
• Two noisy observations: Y1 = X + N1 and Y2 = X + N2
• Distortion constraint D on the reconstruction of X
Historical Overview
[Timeline slide: lossless results on one side, lossy results (direct and indirect MT branches) on the other]
• Lossless: Slepian & Wolf '73; Wyner & Gray '74, '75 (simple network); Wolf '74 (multiple sources); Cover '75 (ergodic processes); Ahlswede & Körner '75; Sgarro '77 (two-help-one); Körner & Marton (zig-zag network); Gel'fand & Pinsker '80 (lossless CEO problem); Csiszár & Körner '80; Han & Kobayashi '80 (lossless MT network); Han '80, Song & Yeung '01 (sources over the network)
• Lossy, direct MT: Wyner & Ziv '76, '78; Berger & Tung '77; Omura & Housewright '77; Yamamoto & Itoh '80; Oohama '97, '05 (Gaussian case); Wagner '05 (two Gaussian sources)
• Lossy, indirect MT: Flynn & Gray '87; Viswanathan & Berger '97 (CEO); Oohama '98 (Gaussian CEO); Viswanath '02; Chen, Zhao, Berger & Wicker '03; Oohama '05 (jointly Gaussian case)
DSC: Code Design Guidelines and Coding Solutions
Source Coding with Decoder Side Information
[Figure: Slepian-Wolf rate region in the (R_X, R_Y) plane, with corner points A and B and the bounds R_X ≥ H(X|Y), R_Y ≥ H(Y|X), R_X + R_Y ≥ H(X,Y)]
• Asymmetric setup (corner point): R_X ≥ H(X|Y)
[Diagram: lossless encoder for source X; decoder outputs X^ using Y]
• Y – decoder side information (SI)
Compression of Correlated Sources
• Y = 11 is sent to the central station at full precision (6 bits); X = 12 is correlated with Y
• Correlation: X ∈ {Y−1, Y, Y+1, Y+2}
• With Y given as side information, X can clearly be described with 2 bits
• Slepian-Wolf theorem: still only two bits are needed for compressing X, even though the encoder of X does not see Y!
• Idea: partition the values of X into four disjoint sets (bins) labelled 00, 01, 10, 11, so that values within one bin are at least 4 apart (e.g., …, 6, 10, …, 28, and so on); send only the 2-bit bin index
[Diagram: encoder maps source X to bin index S; decoder recovers X^ from S and the side information Y]
Channel Codes for Compression: Algebraic Binning (Wyner '74, Zamir et al. '02)
• Distribute all possible realizations of X (of length n) into bins
• Each bin is a coset of an (n,k) linear channel code C with parity-check matrix H, indexed by a syndrome s
[Figure: the 2^(n−k) cosets (bins) s1 = 0, s2, s3, …, s_{2^(n−k)}; the bin s1 is the code C itself]

Encoding
• For an input x, form the syndrome s = xH^T
• Send the resulting syndrome (s2 in the figure) to the decoder
[Diagram: encoder maps source X to syndrome S; decoder outputs X̂ using Y]
Decoding
• Interpret y as a noisy version of x, i.e., the output of a virtual communication channel (the correlation channel) with input x
• Find the codeword of the coset indexed by s that is closest to y, by performing conventional channel decoding
[Diagram: encoder sends syndrome S; decoder recovers X̂ from S and Y, where Y is the correlation-channel output]
• Correlation model:
– X and Y are discrete binary sources of length n = 3 bits
– X and Y differ in at most one position (dH(X,Y) ≤ 1)
• If Y were also given to the encoder (joint encoding), we could obviously compress X with 2 bits
• The channel code adds redundancy to protect; binning removes redundancy to compress
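The 3-bit correlation model above can be worked end-to-end with a toy syndrome code. In this sketch (our illustration; the (3,1) repetition code and the matrix H are a natural choice for dH ≤ 1, not dictated by the slides), the encoder sends only the 2-bit syndrome and the decoder picks the coset member closest to Y:

```python
import itertools
import numpy as np

# Parity-check matrix of the (3,1) binary repetition code: each of the
# 2^2 = 4 syndromes indexes a coset (bin) of two words at Hamming
# distance 3, so any single X/Y disagreement is correctable.
H = np.array([[1, 1, 0],
              [1, 0, 1]])

def sw_encode(x):
    """Compress a 3-bit word to its 2-bit syndrome s = H x^T (mod 2)."""
    return tuple(int(b) for b in H @ x % 2)

def sw_decode(s, y):
    """Return the word in the coset indexed by s that is closest to the SI y."""
    coset = [np.array(w) for w in itertools.product([0, 1], repeat=3)
             if sw_encode(np.array(w)) == tuple(s)]
    return min(coset, key=lambda w: int(np.sum(w != y)))

x = np.array([1, 0, 1])      # source
y = np.array([1, 0, 0])      # side information, dH(x, y) <= 1
s = sw_encode(x)             # only 2 bits leave the encoder
assert (sw_decode(s, y) == x).all()  # decoder recovers x exactly
```

Here the coset of s = (1, 0) is {[0,1,0], [1,0,1]}; since its two members differ in all three positions, the side information always points to the right one.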
Asymmetric Binning
• Example: suppose that X and Y are i.i.d. uniform sources of length n = 4 bits each
• Code Y at R_Y = nH(Y) = 4 bits and X at R_X = nH(X|Y) = 2 bits
• X is sent as a 2-bit bin index (2² = 4 bins); Y is sent uncompressed (4 bits ≡ 2⁴ = 16 bins of one element each)
• Total transmission rate: R_Y + R_X = 4 + 2 = 6 bits

From Asymmetric to Symmetric Binning
• Both X and Y are compressed
• Code X and Y at R_X = R_Y = 3 bits each (each sent as a 3-bit bin index)
• Total transmission rate: R_X + R_Y = 3 + 3 = 6 bits
Implementation (Pradhan & Ramchandran '05)
• Generate a channel code C and partition it into nonoverlapping subcodes C1 and C2
1) Binning: x^T is split into sub-vectors u1, u2, u3 and y^T into v1, v2, v3; the (padded) syndromes S_X and S_Y are transmitted
2) Find the codeword c in C closest to t = t1 ⊕ t2 = [0110111]
3) Error-correction decoding: c = x ⊕ y ⊕ t1 ⊕ t2 = [0010111], with c = [u1 v2 | [u1 v2]P] in systematic form
4) Reconstruction: x̂ = u1G1 ⊕ t1 = [0010110] = x; ŷ = u2G2 ⊕ t2 = [0110110] = y
General Syndrome Concept
• Applicable to all linear channel codes (incl. turbo and LDPC codes)
• The key lies in correlation modeling: if the correlation can be modeled as a simple communication channel, existing channel codes can be used
– The SW code will be good if the employed channel code is good for the "correlation channel"
– If the channel code approaches capacity on the "correlation channel", then the SW code approaches the SW limit
• Complexity is close to that of conventional channel coding
Parity-based Binning
• Syndrome approach: to compress an n-bit source, index each bin with a syndrome of an (n,k) linear channel code
• Parity-based approach: to compress an n-bit source, index each bin with the n−k parity bits p of a codeword of a systematic (2n−k, n) channel code
• Compression rate: R_X = (n−k)/n
[Diagram: conventional systematic channel encoding of x produces parity p (the systematic part is removed); the decoder runs a conventional channel decoder on p and y, where y is the correlation-channel output, to obtain x̂]
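The parity route can be sketched the same way as the syndrome one. Below, a toy systematic (5,3) code is used (an illustrative choice, not from the slides: 3 source bits, 2 parity bits, with a parity matrix P of our own making); the encoder sends only the parity bits, and the decoder searches the bin of source words sharing that parity:

```python
import itertools
import numpy as np

# Parity part of a systematic (5,3) generator G = [I | P]; the 2 parity
# bits index 4 bins of 2 source words each, at Hamming distance 3.
P = np.array([[1, 1],
              [1, 0],
              [0, 1]])

def par_encode(x):
    """Send only the n-k = 2 parity bits p = x P (mod 2)."""
    return tuple(int(b) for b in x @ P % 2)

def par_decode(p, y):
    """Among source words whose parity equals p, pick the closest to the SI y."""
    bin_members = [np.array(w) for w in itertools.product([0, 1], repeat=3)
                   if par_encode(np.array(w)) == tuple(p)]
    return min(bin_members, key=lambda w: int(np.sum(w != y)))

x = np.array([1, 1, 1])
y = np.array([1, 1, 0])                    # differs from x in one position
assert (par_decode(par_encode(x), y) == x).all()
```

Unlike a syndrome, the transmitted parity bits are themselves part of a codeword, so on a noisy link they can be protected by the same code, which is exactly the advantage the comparison slide points out.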
Syndrome vs. Parity Binning
• The syndrome-based approach works better because:
– code distance properties are preserved
– for the same compression, the minimum codeword length is used
– a good channel code gives a good SW code of the same performance
• Parity-based binning has advantages:
– good for the noisy SW coding problem because, in contrast to syndromes, parity bits can protect
– simpler (conventional encoding and decoding)
– a simple puncturing mechanism can be used to realize different coding rates
• Practical SW coding with algebraic binning is based on channel codes for discrete sources
• In WZ coding we are dealing with a continuous space, hence the syndrome approach alone will not work!
• Questions: What is a good choice of binning? How can coding be performed efficiently?

Practical Wyner-Ziv Coding (WZC)
• Three types of solutions have been proposed:
– nested quantization
– combined quantization and SW coding (DISCUS, IEEE Trans. IT, March 2003)
– quantization followed by SW coding (Slepian-Wolf coded quantization, SWCQ)
• We will focus on the first and third methods
• We will assume the correlation model X = Y + Z between source X and SI Y, with Z ~ N(0, σ_Z²)

Practical WZC Solutions
[Figure: the three solution classes, ordered by improved performance]
Nested Scalar Quantization (NSQ)
• Uniform scalar quantizer with step size q; it can be seen as four nested uniform scalar quantizers (red, green, blue, yellow), each with step size 4q
• Each nested quantizer defines a bin (bin 0 to bin 3); the encoder sends only the bin index
• D is the fine distortion that remains after decoding
• d is the coarse distortion that can appear if there are decoding errors
[Figure: interleaved quantizer cells labelled bin 0 – bin 3, with the SI density p_X|Y(x|y) centred on the side information]
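The NSQ encode/decode rule can be sketched in a few lines. The step size q, the nesting factor N = 4, and midpoint reconstruction below are illustrative assumptions in the spirit of the slide:

```python
import numpy as np

# Minimal nested scalar quantization sketch: fine step q, N = 4 nested
# coarse quantizers, so the transmitted bin index costs log2(N) = 2 bits.
q, N = 1.0, 4

def nsq_encode(x):
    """Quantize x with the fine quantizer; keep only the cell index mod N."""
    return int(np.floor(x / q)) % N

def nsq_decode(v, y, search=8):
    """Pick the fine cell with index congruent to v (mod N) nearest the SI y."""
    k0 = int(np.floor(y / q))
    candidates = [k for k in range(k0 - search, k0 + search + 1) if k % N == v]
    k = min(candidates, key=lambda k: abs((k + 0.5) * q - y))
    return (k + 0.5) * q                   # midpoint reconstruction

x, y = 2.3, 2.1
v = nsq_encode(x)            # bin index (v = 2 here)
xhat = nsq_decode(v, y)      # decoder resolves the ambiguity using y
assert abs(xhat - x) <= q    # fine distortion D when |x - y| is small
```

If |X − Y| grows beyond about Nq/2, the decoder can lock onto the wrong cell of the bin, which is exactly the coarse distortion d mentioned above.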
Nested Lattice Quantization (LQ)
• Nested lattice:
– a (fine) lattice is partitioned into sublattices (coarse lattices)
– a bin: the union of the original Voronoi regions of the points of a sublattice
[Figure: example nested lattice with bins 3, 4, and 8 marked]
Nested Lattice Quantization
• Encoding: output the index of the bin containing X
– quantize X using the fine lattice
– output the index V of the coarse lattice (sublattice) containing the quantized lattice point
• Decoding: find the lattice point of sublattice V that is closest to Y
– quantize Y using sublattice V
[Figure: example with bin index V = 8; the decoder quantizes Y within sublattice 8 to obtain X^]
SW Coded Quantization (SWCQ)
• Nested lattice quantization is asymptotically optimal as the dimension goes to infinity
– difficult to implement even in low dimensions
• The bin index V and the SI Y are still highly correlated, i.e., H(V) > H(V|Y)
– conventional lossless compression techniques (e.g., Huffman coding) are fruitless since Y is not given to the encoder
– use SW coding to further compress V!
• Further improvement: use estimation instead of reconstructing to a lattice point
[Block diagram: nested quantization → Slepian-Wolf encoding → Slepian-Wolf decoding (with Y) → estimation of X^ from V̂ and Y]
Practical SWCQ
• Practical realization: (nested) quantization followed by channel coding for SW coding
• WZ coding is a source-channel coding problem:
– quantization loss due to source coding
– binning loss due to channel coding
• To approach the WZ limit, one needs:
– strong source codes (e.g., TCVQ and TCQ)
– near-capacity channel codes (e.g., turbo and LDPC codes)
• Estimation of X based on V and the SI helps at low rates; thus rely more on
– the SI Y at lower rates
– V at higher rates
WZC vs. Classic Source Coding (SC)
• Classic SC: entropy-constrained quantization (ECQ) = quantization + entropy coding
• Wyner-Ziv coding (SWCQ):
– nested quantization: quantization with SI
– Slepian-Wolf coding: entropy coding with SI
• Classic source coding is just a special case of WZ coding (since the SI can be assumed to be a constant)
• Same performance limits at high rate (assuming ideal entropy coding and ideal SW coding):

Classic SC   | Gap to D_X(R) | WZC      | Gap to D_WZ(R)
ECSQ         | 1.53 dB       | SWC-SQ   | 1.53 dB
ECLQ (2-D)   | 1.36 dB       | SWC-LQ   | 1.36 dB
ECTCQ        | 0.20 dB       | SWC-TCQ  | 0.20 dB
Layered WZ Coding
[Block diagram: X is quantized by Q(·) and split into bitplanes V1, …, Vn; each bitplane Vk is compressed by a binary SW encoder (SWC); the corresponding SW decoder uses side information y_k = (y', V1, …, V_{k−1}); the decoded bitplanes are combined into V̂, and X̂ is estimated from V̂ and y']
• Side information at the kth level: y_k = (y', V1, …, V_{k−1})
LDPC Codes for Binary SW Coding
• An LDPC code is a linear block code
• LDPC stands for low-density parity-check: "low-density" means its parity-check matrix is sparse
• Message-passing decoding algorithm: suboptimal but effective
• Pros:
– flexible and systematic design techniques exist for arbitrary channels
– designed codes have excellent performance

Tanner Graph
• Consider a length-3 block code with parity-check matrix H = [1 1 1; 1 1 0]
• A binary vector V = [V1, V2, V3] is a codeword if HV^T = 0
[Tanner graph: variable nodes V1, V2, V3; check nodes V1+V2+V3 and V1+V2]
Message Passing Decoding
1. Initialization: compute the "belief" of the actual transmitted bit at each variable node
2. Iteration:
– pass beliefs from variable nodes to check nodes; combine beliefs
– pass beliefs from check nodes to variable nodes; combine beliefs
3. Exit condition: estimate the variable-node values by thresholding the current beliefs; exit if the estimates form a valid codeword, otherwise go back to 2
• The belief is usually in the form of a log-likelihood ratio (LLR) for the channel V → Y:

LLR = log [ p(y | V=0) / p(y | V=1) ]

• If we assume all incoming messages are independent:
– variable-node update: Ψ = log [ p(y | V=0) / p(y | V=1) ] + m1 + m2
– check-node update: tanh(Φ/2) = tanh(m1/2) · tanh(m2/2)
• Message-passing decoding performs well for long block-length codes with relatively few connections (low density)
SW Encoding with LDPC Codes
• Encoding: output the check (syndrome) values S of the uncompressed binary source V
• Compression rate: R = (n−k)/n = 4/6 = 2/3
[Figure: a length-6 binary source at the variable nodes is compressed to 4 syndrome bits at the check nodes]
SW Decoding with LDPC Codes
• Decoding:
– view the SI Y as hypothetical outputs of a channel
– input the received S as the check-node values
– decode to a vector V with syndrome S instead of to a codeword
[Figure: unknown source bits at the variable nodes with side information Y; check nodes set to the received syndrome S]
Message Passing Decoding with Syndromes
• If we assume all incoming messages are independent, the updates are as before except that the check-node update is modulated by the syndrome bit s:

Ψ = log [ p(y | V=0) / p(y | V=1) ] + m1 + m2
tanh(Φ/2) = (1 − 2s) · tanh(m1/2) · tanh(m2/2)

• When v is a codeword of the LDPC code (s is the all-zero sequence), SWC decoding ≡ LDPC channel decoding
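The syndrome-modulated check update above is the only change needed to turn a standard belief-propagation decoder into an SW decoder. A minimal sketch on the toy length-3 code from the Tanner-graph slide (the crossover probability p of the correlation channel and the simple flooding schedule are our assumptions):

```python
import numpy as np

# Toy parity-check matrix from the Tanner-graph slide (checks V1+V2+V3, V1+V2).
H = np.array([[1, 1, 1],
              [1, 1, 0]])
p = 0.1   # assumed BSC crossover probability of the correlation channel V -> Y

def sw_bp_decode(s, y, iters=10):
    """Recover v with syndrome s from side information y via message passing."""
    llr0 = (1 - 2 * y) * np.log((1 - p) / p)       # log p(y|V=0)/p(y|V=1)
    m_vc = np.tile(llr0, (H.shape[0], 1)) * H      # variable-to-check messages
    v = (llr0 < 0).astype(int)
    for _ in range(iters):
        t = np.tanh(m_vc / 2)
        m_cv = np.zeros_like(m_vc, dtype=float)
        for i, j in zip(*np.nonzero(H)):           # check-to-variable messages
            prod = np.prod([t[i, k] for k in np.nonzero(H[i])[0] if k != j])
            # the (1 - 2s) factor flips the message sign for odd syndrome bits
            m_cv[i, j] = (1 - 2 * s[i]) * 2 * np.arctanh(np.clip(prod, -0.999, 0.999))
        total = llr0 + m_cv.sum(axis=0)
        v = (total < 0).astype(int)
        if ((H @ v) % 2 == np.asarray(s)).all():   # valid member of the coset
            break
        m_vc = (total - m_cv) * H                  # exclude own check's message
    return v

v = np.array([1, 0, 1])
s = (H @ v) % 2              # 2-bit syndrome = the compressed representation
assert (sw_bp_decode(s, v) == v).all()
```

Setting s = 0 recovers the conventional LDPC channel decoder, matching the equivalence stated on the slide.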
SW Code Analysis
• Major assumption: the SW decoding error probability is independent of the source vector v
– the same assumption is made in conventional channel coding
• If the assumption holds:
performance of SW coding
= performance of SW coding given that v = 0 (hence s = 0) was transmitted
= performance of the LDPC code on the channel V → Y
– so we can design the SW code using conventional LDPC code design techniques on V → Y
Sign Symmetry
• Code performance is independent of the input code vector if p(y | V=0) = p(−y | V=1)
• Symmetry commonly holds in conventional channels (e.g., BSC)
• "Best" decision: V̂ = 0 if y ≥ 0, V̂ = 1 if y < 0

Intuition of Sign Symmetry
• Under sign symmetry, the error event when V=0 is the mirror image of the error event when V=1, so the error probability is independent of the input
[Figure: mirrored conditional densities p(y|V=0) and p(y|V=1) with matching error tails]
Relaxing Sign Symmetry
• Sign symmetry is too restrictive
• It will not work for the layered setup, where the side information at level k is not even a scalar (an n-tuple in general!): e.g., y = (y', V1) at the second level
• There are obvious non-sign-symmetric distributions, with p(y | V=1) ≠ p(−y | V=0), that still give code performance independent of the input
[Figure: example conditional densities p(y|V=0) and p(y|V=1) without sign symmetry]
Dual Symmetry
• If p(g(y) | V=0) = p(y | V=1)
– for some g(·) with g(g(y)) = y (i.e., g(·) = g⁻¹(·)),
– then the probability of error is independent of the input codeword
[Figure: pairs of densities related by an involution g(·); some satisfy dual symmetry (YES!), others do not (NO!)]
Why Dual Symmetry Works
• Let l be the log-likelihood ratio of the output:

l = log [ p(y | V=0) / p(y | V=1) ]

• There is no loss in taking l as the output instead of y; it is easy to show that if dual symmetry is satisfied,
– then the channel V → L is sign symmetric,
– hence code performance is independent of the input
Toy Problem
• Let X ~ N(µ, 1) ← source
• Y = X + N(0, σ²) ← side information
[Figure: densities p(x) and p(y) centred at µ]
• Q: Given 1 bit to quantize X, what will be a good quantizer for SW coding?
• A: Trust your intuition!
Toy Problem
• Let V = 1 if X ≥ µ and V = 0 if X < µ; then V → Y is dual-symmetric, with g(·) the reflection about µ mapping p(y|V=1) onto p(y|V=0)
[Figure: threshold quantizer at µ; the conditionals p(y|V=1) and p(y|V=0) are related by g(·)]
• A quantizer whose cells are not symmetric about µ (e.g., regions V=1, V=0, V=1) does not have this property: the resulting conditionals are not symmetric
[Figure: asymmetric split with regions V=1, V=0, V=1 and the mismatched conditionals]
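The claimed dual symmetry of the one-bit quantizer is easy to check numerically. In this Monte-Carlo sketch (the parameter values µ = 3 and σ = 0.5 are arbitrary choices of ours), g is the reflection about µ, so the histogram of Y given V=1 should match that of g(Y) given V=0:

```python
import numpy as np

# Toy problem: X ~ N(mu, 1), Y = X + N(0, sigma^2), V = 1{X >= mu}.
# Dual symmetry claim, with g(y) = 2*mu - y: p(y | V=1) = p(g(y) | V=0).
rng = np.random.default_rng(1)
mu, sigma, n = 3.0, 0.5, 1_000_000
X = mu + rng.standard_normal(n)
Y = X + sigma * rng.standard_normal(n)
V = (X >= mu).astype(int)

# Histogram of Y given V=1 vs. histogram of g(Y) = 2*mu - Y given V=0.
h1, _ = np.histogram(Y[V == 1], bins=50, range=(mu - 4, mu + 4), density=True)
h0, _ = np.histogram(2 * mu - Y[V == 0], bins=50, range=(mu - 4, mu + 4), density=True)
assert np.max(np.abs(h1 - h0)) < 0.03   # equal up to Monte-Carlo noise
```

The check works because the map (X, Y) → (2µ−X, 2µ−Y) preserves the joint distribution while flipping V, which is exactly the involution g the dual-symmetry definition asks for.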
Simulation Results
• Asymmetric SW
• Non-asymmetric SW
• Quadratic Gaussian WZ: 1-D lattice, 2-D lattice, trellis-coded quantization (TCQ)
• MT source coding
Asymmetric Binning for SW
[Figure: BER for X_i versus H(X_i|Y_i) = H(p) (bits); curves for regular and irregular LDPC codes (lengths 10^4 and 10^5), the best non-syndrome turbo code, serial and parallel schemes, and the irregular-LDPC threshold, compared against the Slepian-Wolf and Wyner-Ziv limits]
Non-asymmetric Binning for SW
[Figure: results for two sources at code rate 1/2; LDPC and turbo codes, codeword length 20,000 bits]

Gaussian WZC (NSQ, 1-D Lattice)
[Figure: performance 1.53 dB from the limit]

Gaussian WZC (2-D Nested Lattice)

Gaussian WZC (with TCVQ)
MT Source Code Design
[Block diagram: X passes through two correlation channels (+N1, +N2) to Terminal 1 and Terminal 2, producing Y1 and Y2; each terminal applies a quantizer followed by a SW encoder (rates R1, R2); at the central unit, a joint SW decoder recovers the quantization indices V̂1, V̂2 and an estimator forms X̂]
• Conventional quantization + lossless "non-asymmetric" Slepian-Wolf coding of the quantization indices V1 and V2
(Yang, Stankovic, Xiong, Zhao, IEEE Trans. IT, March 2008)
Gaussian MT (with TCQ)
[Figures: direct MT at D1 = D2 = −30 dB, ρ = 0.99; indirect MT at D = −22.58 dB, σn1 = σn2 = 1/99]

DISCOVER Results
• Much lower encoding complexity than H.264/AVC intra, and comparable decoding complexity

Robust Scalable DVC
[Block diagram: base layer: H.264 video encoder → erasure channel → H.264 video decoder; error-resilient Wyner-Ziv layer: x → DCT → SQ → IRA/Raptor (LT) encoder; at the Wyner-Ziv decoder, a Raptor decoder plus estimation reconstruct x̂ using the side information Y]
1. Encode x at very low bitrate with H.26x and send it to the decoder using strong error protection (base layer)
2. Decode the received stream to obtain the SI Y
3. x is compressed/protected again with a Raptor code, treating Y as SI and assuming an erasure packet transmission channel
4. The decoder decodes X using Y as SI
(Xu, Stankovic, Xiong, 2005)
Channel Code Design
• Efficient transmission over two different parallel channels: the actual erasure channel and the correlation (Gaussian) channel between X and Y
• Parity-based binning is called for!
• Low-complexity encoding and decoding are required (hence Raptor codes rather than Reed-Solomon codes)
[Diagram: k source bits pass through the correlation channel to give the SI at the decoder; n−k parity bits of the systematic channel (Raptor) code are sent over the erasure channel to the conventional channel decoder]
[Diagram: IRA encoder + LT encoder at the transmitter; joint Raptor decoder at the receiver with a-priori information from the SI Y; k symbols are compressed to kH(X|Y)(1+ε) symbols]
• A bias p towards selecting IRA parity symbols vs. systematic symbols in forming the bipartite graph of the LT code
Simulation Example (Xu, Stankovic, Xiong, 2005)
• Scalable DVC system vs. H.264 FGS
• Transmission rate 256 kbps; 5% macroblock loss rate in the base layer; 10% packet loss rate for the WZ coding layer
Applications
• Distributed (WZ) video coding
• Stereo Video Coding
• Multimedia streaming over heterogeneous networks
• Wireless sensor networks
Stereo Video Coding (Yang, Stankovic, Zhao, Xiong, 2007)
• The same scene is captured by two cameras and encoded independently
• The high correlation among the views can be exploited with MT source coding
[Diagram: Camera 1 → MT Encoder 1 and Camera 2 → MT Encoder 2, both transmitting to a base station]
Stereo Video Coding (Yang, Stankovic, Zhao, Xiong, 2007)
• Both views compressed with TCQ + LDPC codes using the MT source coding scheme
[Figure: results on the Tunnel stereo video sequence vs. H.264/AVC]
Distributed/Stereo Video Applications
• A new, attractive video compression paradigm:
– video surveillance
– low-complexity networks
– very-low-complexity robust video coding
– multiview/3D video coding
Applications
• Distributed (WZ) video coding
• Stereo Video Coding
• Multimedia streaming over heterogeneous networks
• Wireless sensor networks
Streaming Requirements
• Scarce wireless bandwidth – efficient low-bitrate video coding (e.g., H.264/MPEG-4)
• Error-prone wireless links – strong error protection
• Heavy traffic – extremely high compression and efficient congestion control
• Live broadcast – fast encoding/decoding
• Economic feasibility – fit into current technologies (HDTV, best-effort Internet, DVB-S/DVB-T)
• Quality of service – QoS: digital-TV quality of video
MT Coding-based Streaming
[Diagram: source X observed through correlation channels (+N1, +N2) at Terminal 1 and Terminal 2 (Y1, Y2); the terminals send at rates R1, R2 over a noiseless channel to a central unit, which reconstructs X̂; in the streaming variant, the terminals are servers connected over the Internet to a client that reconstructs X̂]
[System diagram: the video X reaches the servers over wireless channels, giving correlated copies Y1 = X + N1 and Y2 = X + N2; after video coding and DSC encoding at rates R1 and R2 (total rate Rt), the servers stream over the Internet to a client that performs DSC decoding plus video decompression, exploiting the correlation]
Advantages of the System
• Significant rate savings due to the spatial diversity gain and DSC (without distortion penalty)
• Downloading from multiple servers:
– source-independent transmission (not limited to video or multimedia)
– no need for cross-layer optimization
– servers evenly loaded
– robustness to a server failure
– security
• Acceptable system complexity and flexible design
Results: AWGN Channel (Stankovic, Yang, Xiong, 2007)
• DSC with scalar quantization (SQ) + turbo/LDPC codes
[Figure: performance at 1 b/s of BPSK + SQ with turbo and LDPC codes against the theoretical bound; uncoded BPSK, convolutional codes plus BPSK, and the no-DSC baseline shown for comparison]
Multi-station Wireless Streaming
[Diagram: two base stations stream to a common receiver]
• Increased quality of the reconstructed video due to the path diversity gain
• Resilience to station failure
• Traffic control is improved because the servers can be equally loaded
• Problems: multipath fading, interference, noise
• Conventional solution: spread spectrum; but spread spectrum increases the required bandwidth

Solution (Khirallah, Stankovic et al., IEEE Trans. Wireless Comm., 2008)
• Idea: exploit the fact that the two stations stream the same or correlated content
• Use complete complementary (CC) sequences for spreading at the base stations (BS)
• At the encoders, puncture some of the output chips to reduce the rate
• At the decoder, recover the punctured chips using the chips received from the other station as SI
Extending the DSC Idea
[Diagram: input data broken into blocks; Base Station 1 spreads b1 = {b(1,1), …, b(1,4)} into components c11 = f1(b1, w11) and c12 = f1(b1, w12) at rate R1; Base Station 2 spreads b2 = {b(2,1), …, b(2,4)} into c21 = f2(b2, w21) and c22 = f2(b2, w22) at rate R2]

Puncturing at the Encoder
• Problem: puncturing leads to a loss of orthogonality, so conventional CC decoding is not feasible
• CC sequence sets of size N = 2 with K_CC = 4 chips per code:
– User 1: X = [w11, w12], with w11 = [+,+,+,−] and w12 = [+,−,+,+]
– User 2: Y = [w21, w22], with w21 = [+,+,−,+] and w22 = [+,−,−,−]
• Autocorrelation functions:
ψ_XX = w11 ⊗ w11 + w12 ⊗ w12 = [0,0,0,8,0,0,0]
ψ_YY = w21 ⊗ w21 + w22 ⊗ w22 = [0,0,0,8,0,0,0]
• Cross-correlation function:
ψ_XY = w11 ⊗ w21 + w12 ⊗ w22 = [0,0,0,0,0,0,0]
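The correlation identities quoted above are easy to verify directly. This sketch checks them with NumPy's aperiodic correlation (representing chips as ±1 is our choice):

```python
import numpy as np

# The length-4 CC sequences from the slide, as +/-1 chips.
w11 = np.array([+1, +1, +1, -1]); w12 = np.array([+1, -1, +1, +1])   # user 1
w21 = np.array([+1, +1, -1, +1]); w22 = np.array([+1, -1, -1, -1])   # user 2

def corr(a, b):
    """Full aperiodic (cross-)correlation over all 2K-1 lags."""
    return np.correlate(a, b, mode='full')

psi_xx = corr(w11, w11) + corr(w12, w12)   # ideal impulse autocorrelation
psi_yy = corr(w21, w21) + corr(w22, w22)   # ideal impulse autocorrelation
psi_xy = corr(w11, w21) + corr(w12, w22)   # zero cross-correlation
assert list(psi_xx) == [0, 0, 0, 8, 0, 0, 0] and not psi_xy.any()
```

The side-lobes of the two component codes cancel each other at every lag, which is what makes CC spreading attractive here until puncturing destroys the cancellation.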
Iterative Recovery at the Decoder
[Block diagram: the received signals r = {r1, r2} are despread for BS 1 and BS 2; rough estimates {ĥ1ĉ11; ĥ1ĉ12(l+1)} of the chips are respread with w11 and w12, scaled by the channel estimates, subtracted from r1, and the result is despread again; l designates the iteration number, producing b̂1(l+1) and b̂2(l)]
• h1, h2 are fading coefficients; z1 and z2 are AWGN
• Received signal at frequency f1: r1 = h1·c11 + h2·c21 + z1
• Received signal at frequency f2: r2 = 0 + h2·c22 + z2 (the punctured component contributes nothing)
Results (Khirallah, Stankovic et al., IEEE Trans. Wireless Comm., 2008)
[Figures: relative speeds 30 km/h and 120 km/h]
• CPR: channel power ratio (the average power ratio between the second and first path); CPR = 10 dB gives a frequency-selective channel, CPR = ∞ a flat-fading channel
• p = 0: streaming of two identical sources; p > 0: streaming of two sources correlated by a binary symmetric channel with crossover probability p
Applications
• Distributed (WZ) video coding
• Stereo Video Coding
• Multimedia streaming over heterogeneous networks
• Wireless sensor networks
Wireless Sensor Networks (WSN)
• Networks of numerous tiny, low-power, low-cost devices
• Key requirement: reduce power consumption by reducing communication, via efficient distributed compression
• In a dense sensor network, measurements of neighbouring sensors are expected to be correlated; hence DSC is the most efficient compression choice
– R. Cristescu et al., "Networked SW" (joint optimization of placement, routing, and compression in WSN)
– J. Liu et al., "Optimal communication cost in WSN"
The Relay Channel
[Diagram: source (message m, input X) → destination, assisted by a relay; noisy wireless channels assumed for all three links]
• Task: transmit messages m to the destination with the help of a relay

Amplify-and-Forward (AF) Coding
• The relay observes Yr = X + Zsr and forwards A·Yr
• The destination receives Yd1 = X + Zsd (direct link) and Yd2 = A·Yr + Zrd = A·X + A·Zsr + Zrd (relay link)
The Relay Channel
[Diagram: source → relay → destination; Yr = X + Zsr, Yd1 = X + Zsd]

Decode-and-Forward (DF) Coding
• Problem: the rate is limited by the capacity of the source-relay link!