MSc Data Communications
Dec 21, 2015
By R. A. Carrasco
Professor in Mobile Communications
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne
2006
Recommended Text Books
1. “Essentials of Error-Control Coding”, Jorge Castiñeira Moreira, Patrick Guy Farrell
2. “Digital Communications”, John G. Proakis, Fourth Edition
Goals of a Digital Communication System
• Deliver data from the source to the user in a:
• FAST
• INEXPENSIVE (Efficient)
• RELIABLE WAY
Digital Modulation Schemes
Task: to compare different modulation schemes with different values of M
• Choice of modulation scheme involves trading off:
• Bandwidth
• Power
• Complexity
• Define:
• Bandwidth
• Signal-to-noise ratio
• Error probability
Examples: Memoryless Modulation (waveforms are chosen independently – each waveform depends only on mi)

[Figure: example waveforms for the source sequence 0 1 0 1 0 1 0 1 1 0 (source symbol duration Ts):
(a) M = 2, T = Ts: binary antipodal pulses, s1(t) = A, s2(t) = −A, 0 ≤ t < T.
(b) M = 4, T = 2Ts: sinusoids with 4 different phases.
(c) M = 8, T = 3Ts: pulses with 8 different amplitude levels.]
A crucial question is raised: what is the difference?
• If T is kept constant, the waveforms of scheme (c) require less bandwidth than those of (a), because the pulse duration is longer.
• In the presence of noise, and if the same average signal power is used, it is more difficult to distinguish among the waveforms of (c).
AM/AM = Amplitude Modulation – Amplitude Modulation conversion
AM/PM = Amplitude Modulation – Phase Modulation conversion

[Figure: output envelope and output phase shift versus input envelope of a nonlinear amplifier. A: output envelope (AM/AM conversion); B: output phase shift (AM/PM conversion).]

Notice:
• Waveforms (b) have constant envelopes.
• This choice is good for nonlinear radio channels.
TRADE-OFF BETWEEN BANDWIDTH AND POWER
• In a Power-Limited Environment, use low values of M
• In a Band-Limited Environment, use high values of M
What if both Bandwidth and Power are Limited?
• Expand Complexity
• DEFINE:
BANDWIDTH
SIGNAL-TO-NOISE RATIO
ERROR PROBABILITY
Performance of Different Modulation Schemes
DIGITAL MODULATION TRADE-OFFS
SHANNON CAPACITY LIMIT FOR AWGN:
C = W log2(1 + S/N)
• S = signal power = ε/T
• N = noise power = 2N0W
• W = bandwidth
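The capacity formula is easy to check numerically. A minimal sketch (the 3 kHz bandwidth and the SNR of 15 are illustrative values, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """AWGN channel capacity C = W * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 3 kHz channel with S/N = 15 (about 11.8 dB)
print(shannon_capacity(3000.0, 15.0))  # 12000.0 bit/s
```

Note that capacity grows only logarithmically with SNR but linearly with bandwidth, which is the root of the bandwidth/power trade-off discussed below.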
Define Bandwidth W

[Figure: power density spectrum S(f) in dB versus fT, illustrating five bandwidth definitions:
• B1: half-power
• B2: equivalent-noise
• B3: null-to-null
• B4: fractional power containment
• B5: bounded power spectral density]

Different bandwidth definitions of the power density spectrum of (5.2). B1 is the half-power bandwidth; B2 is the equivalent-noise bandwidth; B3 is the null-to-null bandwidth; B4 is the fractional power containment bandwidth at an arbitrary level; B5 is the bounded power spectral density bandwidth at a level of about 18 dB. Notice that the depicted bandwidths are those around the frequency f0.
In general, W = α/T Hz, which depends on the modulation scheme and on the bandwidth definition.
DEFINE SNR
• Signalling rate: 1/T (number of signals per second)
• Rs = (log2 M)/T bits/sec: the rate at which the source outputs binary symbols
• Average signal power: P = ε/T = εb (log2 M)/T, where ε is the average signal energy and εb the average energy per bit
• Signal-to-Noise Ratio:
SNR = P/(N0 W) = εb Rs/(N0 W)
where N0 is the noise power spectral density
• Rs/W is measured in bits/sec per Hz (bandwidth efficiency)
BPS/HZ Comparison
Digital Modulation Trade-Offs (comparison among different schemes)

[Figure: bandwidth efficiency Rs/W (0.125 to 16 bit/s per Hz) versus εb/N0 (−2 to 20 dB) at Pb(e) = 10^−5, for PAM (SSB), (coherent) PSK, AM-PM, DCPSK, coherent FSK and incoherent FSK with M = 2 up to 64, together with the Shannon capacity bound separating the bandwidth-limited and power-limited regions.]
Encoder for the (3, 2) parity-check code (k = 2, n = 3). The whole word is defined by
x1 = u1, x2 = u2, x3 = u1 + u2

u1 u2 | x1 x2 x3
0 0 | 0 0 0
0 1 | 0 1 1
1 0 | 1 0 1
1 1 | 1 1 0

Encoder for the (3, 1) repetition code (k = 1, n = 3):
x1 = u1, x2 = u1, x3 = u1

u1 | x1 x2 x3
0 | 0 0 0
1 | 1 1 1
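Both encoders follow directly from their defining equations. A sketch that reproduces the two code tables (the function names are mine, not from the notes):

```python
def parity_check_encode(u1, u2):
    """(3, 2) parity-check code: x1 = u1, x2 = u2, x3 = u1 + u2 (mod 2)."""
    return (u1, u2, (u1 + u2) % 2)

def repetition_encode(u1):
    """(3, 1) repetition code: the data bit is transmitted three times."""
    return (u1, u1, u1)

# Reproduce the code tables from the slides
for u1, u2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(u1, u2, "->", parity_check_encode(u1, u2))
for u1 in (0, 1):
    print(u1, "->", repetition_encode(u1))
```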
[1], pages 157 - 179
Hamming Code (7, 4)
Block Encoders
Notice that only 16 of the 128 sequences of length 7 are used for transmission.
The codeword is defined by
xi = ui, i = 1, 2, 3, 4
x5 = u1 + u2 + u3
x6 = u2 + u3 + u4
x7 = u1 + u2 + u4

Source symbols → encoded symbols:
0000 0000 000
0001 0001 011
0010 0010 110
0011 0011 101
0100 0100 111
0101 0101 100
0110 0110 001
0111 0111 010
1000 1000 101
1001 1001 110
1010 1010 011
1011 1011 000
1100 1100 010
1101 1101 001
1110 1110 100
1111 1111 111
(7, 4) Hamming Code

[Figure: encoder register holding u4 u3 u2 u1 and output register x7 x6 x5 x4 x3 x2 x1.]
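A sketch of the systematic encoder defined by the parity equations above; the printed output matches the table row for source word 0001:

```python
def hamming74_encode(u1, u2, u3, u4):
    """(7, 4) Hamming encoder: x_i = u_i for i = 1..4, plus the parity bits
    x5 = u1+u2+u3, x6 = u2+u3+u4, x7 = u1+u2+u4 (all mod 2)."""
    x5 = (u1 + u2 + u3) % 2
    x6 = (u2 + u3 + u4) % 2
    x7 = (u1 + u2 + u4) % 2
    return (u1, u2, u3, u4, x5, x6, x7)

print(hamming74_encode(0, 0, 0, 1))  # (0, 0, 0, 1, 0, 1, 1)
```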
Convolutionally encoding the sequence 101000...
Rate 1/2, K = 3
From B. Sklar, Digital Communications, Prentice-Hall, 1988

[Figure: the input bits 1 0 1 0 0 0 shift through the 3-stage register one at a time; at each time ti the two modulo-2 adders produce the output pair x1 x2.]

Time | Encoder output x1 x2
t1 | 1 1
t2 | 1 0
t3 | 0 0
t4 | 1 0
t5 | 1 1
t6 | 0 0

Output sequence: 11 10 00 10 11 00
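The shift-register operation can be checked in a few lines. This sketch assumes the generators 7 and 5 (octal), which reproduce the slide's output sequence:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K = 3 convolutional encoder; the register holds the
    current bit followed by the two previous input bits."""
    reg = (0, 0, 0)
    out = []
    for b in bits:
        reg = (b, reg[0], reg[1])
        x1 = sum(g * r for g, r in zip(g1, reg)) % 2
        x2 = sum(g * r for g, r in zip(g2, reg)) % 2
        out.append(f"{x1}{x2}")
    return out

print(conv_encode([1, 0, 1, 0, 0, 0]))  # ['11', '10', '00', '10', '11', '00']
```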
(7, 5) Convolutional Encoder
Constraint length K = 3, code rate = 1/2

[Figure: the input data bit d1 enters a register holding d1 d2 d3; the two modulo-2 adders produce
c1 = d1 + d2 + d3
c2 = d1 + d3
and a switch (SW) alternates between positions 1 and 2 to serialise c1 and c2 onto the output.]

Time Interval | 1 2 3 4 5 6 7 8
Input | 0 1 1 0 1 0 0 1
Output | 00 11 01 01 00 10 11 11
SW Position | 12 12 12 12 12 12 12 12
The Finite State Machine

States: a = 00, b = 01, c = 10, d = 11, where the state holds the two previous input bits. Each transition is labelled input/output:
a: 0/00 → a, 1/11 → b
b: 0/10 → c, 1/01 → d
c: 0/11 → a, 1/00 → b
d: 0/01 → c, 1/10 → d

The coder starts in state a = 00. A 1 appearing at the input produces 11 at the output, and the system moves to state b = 01.
• If in state b, a 1 at the input produces 01 as the output bits. The system then moves to state d (11).
• If a 0 appears at the input while the system is in state b, the bit sequence 10 will appear at the output, and the system will move to state c (10).
Example message: the input 0 1 1 0 1 0 0 1 traces the states a → a → b → d → c → b → c → a → b and produces the output 00 11 01 01 00 10 11 11, as in the encoder table above.
Tree Representation

[Figure: code tree for the rate-1/2, K = 3 encoder over input intervals 1–4. An input bit 0 produces an upward transition and an input bit 1 a downward transition; each branch is labelled with its output pair (00, 11, 10 or 01) and each node with the encoder state a, b, c or d.]
Signal-flow Graph

[Figure: signal-flow graph of the state diagram, with the self-loop at a removed by splitting node a into a source Xa and a sink Xa'. Branch gains: Xa → Xb: D²; Xb → Xc: D; Xb → Xd: D; Xd → Xd: D; Xd → Xc: D; Xc → Xb: D⁰ = 1; Xc → Xa': D².]

The node equations are:
Xb = D²·Xa + Xc      (1)
Xc = D·Xb + D·Xd     (2)
Xd = D·Xb + D·Xd     (3)
Xa' = D²·Xc
Transfer Function T(D)
From equation (3): Xd = D·Xb / (1 − D)
From equation (2): Xc = D·Xb + D²·Xb/(1 − D) = D·Xb / (1 − D)
From equation (1): Xb = D²·Xa + D·Xb/(1 − D), so Xb = D²(1 − D)·Xa / (1 − 2D)
Therefore,
Xa' = D²·Xc = D³·Xb/(1 − D) = D⁵·Xa / (1 − 2D)
and
T(D) = Xa' / Xa = D⁵ / (1 − 2D)
Transfer Function T(D)
Performing long division gives:
T(D) = D⁵ / (1 − 2D) = D⁵ + 2D⁶ + 4D⁷ + 8D⁸ + ⋯
• 1 path at distance 5
• 2 paths at distance 6
• 4 paths at distance 7
• 8 paths at distance 8
This gives the number of paths in the state diagram with their corresponding distances.
In this case, the minimum distance of the code is 5.
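The path-count interpretation of T(D) can be verified by enumerating error events on the state diagram directly. A sketch (the branch weights below are the Hamming weights of the output pairs from the finite-state-machine slide):

```python
from collections import defaultdict

# Branch weights of the state diagram (a = 00, b = 01, c = 10, d = 11):
# each entry is (next state, Hamming weight of the output pair).
TRANS = {
    "b": [("c", 1), ("d", 1)],   # input 0, input 1
    "c": [("a", 2), ("b", 0)],
    "d": [("c", 1), ("d", 1)],
}

def error_event_weights(max_weight):
    """Count error events (paths that leave state a and remerge with it)
    grouped by total Hamming weight: the coefficients of T(D)."""
    counts = defaultdict(int)
    frontier = [("b", 2)]        # the diverging branch a -> b has weight 2
    while frontier:
        nxt_frontier = []
        for state, w in frontier:
            for nxt, bw in TRANS[state]:
                if nxt == "a":
                    counts[w + bw] += 1          # remerged: one complete event
                elif w + bw <= max_weight:
                    nxt_frontier.append((nxt, w + bw))
        frontier = nxt_frontier
    return {w: n for w, n in sorted(counts.items()) if w <= max_weight}

print(error_event_weights(8))  # {5: 1, 6: 2, 7: 4, 8: 8}
```

The counts reproduce the series coefficients of T(D) = D⁵/(1 − 2D), confirming d_free = 5.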
Block Encoders, by Professor R. A. Carrasco
“State Diagram” of Code
States: S1 = (00), S2 = (01), S3 = (10), S4 = (11), where the state is the pair (Ui−1, Ui−2).

[Figure: rate-1/3 encoder register Ui Ui−1 Ui−2 and its state diagram; each transition for source symbol 0 or 1 is labelled with the 3-bit output X (000, 111, 011, 100, 001, 110, 010 or 101).]

u = (11011.....) corresponds to the path S1 → S3 → S4 → S2 → S3 → S4 through the state diagram, and the output sequence is x = (111 100 010 110 100 ...).
Tree Diagram

[Figure: code tree of the rate-1/3 encoder, branches labelled with the 3-bit outputs and nodes with the states S1–S4. The path corresponding to the input sequence 11011 is shown as an example.]
Signal Flow Graph
Let Xa = S1, Xb = S3, Xc = S4, Xd = S2.

[Figure: signal-flow graph with branch gains D³ (Xa → Xb), D (Xb → Xc), D² (Xc → Xc self-loop), D² (Xb → Xd), D (Xc → Xd), D² (Xd → Xb) and D (Xd → Xa').]

The node equations are:
1) Xb = D³·Xa + D²·Xd
2) Xc = D·Xb + D²·Xc
3) Xd = D²·Xb + D·Xc
with output node Xa' = D·Xd.
Transfer Function T(D)
From equation 2): Xc = D·Xb / (1 − D²)
From equation 3): Xd = D²·Xb + D²·Xb/(1 − D²) = D²(2 − D²)·Xb / (1 − D²)
From equation 1): Xb = D³·Xa + D⁴(2 − D²)·Xb/(1 − D²), so
Xb = D³(1 − D²)·Xa / (1 − D² − 2D⁴ + D⁶)

Transfer Function of the State Diagram
Since Xa' = D·Xd,
T(D) = Xa' / Xa = D⁶(2 − D²) / (1 − D² − 2D⁴ + D⁶) = (2D⁶ − D⁸) / (1 − D² − 2D⁴ + D⁶)
Expanding T(D) by long division:
T(D) = 2D⁶ + D⁸ + 5D¹⁰ + 5D¹² + ⋯
• 2 paths at distance 6
• 1 path at distance 8
• 5 paths at distance 10
• 5 paths at distance 12
We have dfree = 6, for the error events
S1 → S3 → S2 → S1 and S1 → S3 → S4 → S2 → S1.
Trellis Diagram for Code (Periodic from time 2 on)
The minimum distance of the convolutional code over paths of length l = N branches is dmin = dc(N), the column distance. The free distance of the convolutional code is dfree = lim (l → ∞) dc(l).

[Figure: trellis of the rate-1/3 code over times 0–6, states 00, 10, 01, 11; branches labelled with the outputs 000, 111, 100, 011, 001, 110, 101, 010. Legend distinguishes input-0 and input-1 transitions.]
Trellis Diagram for the computation of dfree
Trellis labels are the Hamming distances of the encoder outputs and the all-zero sequence.

[Figure: the same trellis over times 0–5 with each branch labelled 0, 1, 2 or 3, the Hamming weight of its output triple.]
Viterbi Algorithm
We want to compute
min over (ξ0, ξ1, …, ξk−1) of Σ (l = 0 to k−1) λl(ξl)
where the λl are functions whose arguments ξl can take on a finite number of values.
The simplest situation arises when ξ0, ξ1, … are “independent” (the value taken on by each one of them does not influence the other variables).

[Figure: each ξl takes values in a small finite set, e.g. {A, B, C, D}.]

Then
min over (ξ0, …, ξk−1) of Σ (l = 0 to k−1) λl(ξl) = Σ (l = 0 to k−1) min over ξl of λl(ξl)
[1], pages 181 – 185
Viterbi Decoding of Convolutional Codes
1. Observe that for a memoryless channel
P(y|x) = Π over l of P(yl | xl)
where y is the received sequence and x the sequence of transmitted symbols; yl is the received n0-tuple and xl the n0-tuple of coded digits. We choose the x for which P(y|x) is a maximum.
2. We have, for a binary symmetric channel with transition probability P:

[Figure: BSC from Tx to Rx, with P(0→0) = P(1→1) = 1 − P and P(0→1) = P(1→0) = P.]

ln P(yl | xl) = dl · ln(P / (1 − P)) + n0 · ln(1 − P)
where dl is the Hamming distance between xl and yl. The factor ln(P/(1 − P)) is an irrelevant multiplicative constant and n0·ln(1 − P) an irrelevant additive constant, so maximising P(y|x) amounts to minimising Σ dl.
Brute force approach:
• Compute all the values of the function and choose the smallest one.
• We want a sequential algorithm.
What if ξ0, ξ1, … are not independent? For example, each value constrains the next:
ξ0 = A ⇒ ξ1 = C or ξ1 = D
ξ0 = B ⇒ ξ1 = D
Viterbi Algorithm
What is the shortest route from Los Angeles to Denver?

[Figure: road map from Los Angeles to Denver via intermediate cities (Bishop, Las Vegas, Blythe; Ely, Cedar City, Page, Williams; Spanish Forks, Salina, Gallup; Grand Junction, Durango), one stage per day over five days (Day 1 … Day 5), each leg labelled with its mileage (265, 284, 235, 236, 257, 338, 172, 215, 228, 224, 282, 182, 285, 241, 130, 207, 224, 210).]
Viterbi Algorithm

[Figure: panels (a)–(e) show the trellis computation stage by stage for l = 0, …, 5. At each stage the survivor metric of every node is the minimum, over its incoming branches, of (predecessor metric + branch length); only survivor paths are retained.]
Conclusion
• We maximise P(y|x) by minimising Σ dl, the Hamming distance between the received sequence and the coded sequence.
• Brute-Force Decoding: compute all the distances between y and all the possible x's, and choose the x that gives the minimum distance.
• Problems with Brute-Force Decoding:
- Complexity
- Delay
• The Viterbi algorithm solves the complexity problem (complexity increases only linearly with sequence length).
• The truncated Viterbi algorithm also solves the delay problem.
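The survivor-path recursion described above can be sketched for the rate-1/2, K = 3 code used in these notes (hard decisions over a BSC; the function names are mine, not from the notes):

```python
def viterbi_decode(received_pairs, n_bits):
    """Hard-decision Viterbi decoder for the rate-1/2, K = 3 code with
    generators 7 and 5 (octal). `received_pairs` is a list of '00'..'11'."""
    g1, g2 = (1, 1, 1), (1, 0, 1)

    def branch(state, bit):
        # state = (previous input bit, input bit before that)
        reg = (bit,) + state
        x1 = sum(g * r for g, r in zip(g1, reg)) % 2
        x2 = sum(g * r for g, r in zip(g2, reg)) % 2
        return (bit, state[0]), f"{x1}{x2}"

    def hamming(a, b):
        return sum(c1 != c2 for c1, c2 in zip(a, b))

    # survivor (metric, input path) per state; start in the all-zero state
    metrics = {(0, 0): (0, [])}
    for rx in received_pairs:
        new = {}
        for state, (m, path) in metrics.items():
            for bit in (0, 1):
                nxt, out = branch(state, bit)
                cand = (m + hamming(out, rx), path + [bit])
                if nxt not in new or cand[0] < new[nxt][0]:
                    new[nxt] = cand      # keep only the survivor
        metrics = new
    metric, path = min(metrics.values())
    return path[:n_bits]

# Decode the example sequence from the encoding slide (no channel errors)
print(viterbi_decode(["11", "10", "00", "10", "11", "00"], 4))  # [1, 0, 1, 0]
```

With d_free = 5, the same call still decodes correctly if up to two channel bits are flipped.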
Trellis-Coded Modulation
A.K.A.
- Ungerboeck Codes
- Amplitude-Redundant Codes
- Modulation Codes
How to increase transmission efficiency (reliability or speed)?
• Use higher-order modulation schemes (8-PSK instead of 4-PSK): same BW, more bit/s per Hz, more power.
• Use coding: less power, fewer bit/s per Hz, BW expanded.
• Band-limited environment (terrestrial radio communications)
• Power-limited environment (satellite radio communications)
[2], pages 522-532
G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inform. Theory, vol. IT-28, pp. 55-67, 1982.
Construction of TCM
• The constellation is divided into smaller constellations with larger Euclidean distances between constellation points (set partitioning).
• Construction of the trellis (Ungerboeck's rules):
1. Parallel transitions are assigned members of the same partition.
2. Adjacent transitions are assigned members of the next larger partition.
3. All signals are used equally often.

Model for TCM

[Figure: the source symbol an enters an encoder with memory; the encoder state selects the constellation (partition) and the input bits select the signal from that constellation.]
Some examples of TCM schemes
Consider transmission of 2 bits/signal. We examine TCM schemes using 8-PSK.
With uncoded 4-PSK we have dmin = √(2ε').
We use the octonary (8-PSK) constellation, with points 0–7 equally spaced on a circle; the distance between adjacent points is 2√(ε'')·sin(π/8).

[Figure: 8-PSK constellation with points labelled 0 to 7.]
First Key Point
• The free distance of a convolutional code is the minimum Hamming distance between two encoded sequences.
• dfree is a measure of the separation among encoded sequences: the larger dfree, the better the code (at least for large enough SNR).
• Fact: to compute dfree for a linear convolutional code we may consider the distances with respect to the all-zero sequence.

[Figure: an “error event”: a path splits from the all-zero (00) path and later remerges with it.]

A simple algorithm to compute dfree is:
1) Compute dc(l) for l = 1, 2, …
2) If the sequence giving dc(l) merges into the all-zero sequence, store its weight as dfree.
Second Key Point
In a TCM scheme the relevant quantity is the minimum Euclidean distance between coded sequences, dfree, compared with the minimum distance dmin for uncoded transmission.
• We gain: dfree² > dmin².
• We lose: the constellation size must be increased to get the same rate of information, so the energy goes from ε' (without coding) to ε (with coding).
The gain is
(dfree² / ε) / (dmin² / ε')
How to introduce the dependence among signals:
xn = f(an, an−1, …, an−L)
where xn is the transmitted symbol at time nT and an the source symbol at time nT. The previous source symbols define the “state” σn. We write
xn = f(an, σn)   (describes the output as a function of the input symbol and the encoder state)
σn+1 = g(an, σn)   (describes the transitions between states)
TCM Example 1
Consider the 8-PSK TCM scheme, which involves the transmission of 2 bits/symbol using an uncoded 4-PSK constellation and the coded 8-PSK constellation for the TCM scheme as shown below.

[Figure: 8-PSK constellation (points 0–7, symbol energy ε'') and 4-PSK constellation (symbol energy ε', minimum distance dmin).]

Show that dmin² = 2ε' and d0 = 2√(ε'')·sin(π/8).
TCM Example 1: Solution
From the uncoded 4-PSK constellation:
sin 45° = (dmin/2) / √(ε'), so dmin = 2√(ε')·sin(π/4) = √(2ε') and dmin² = 2ε'.
From the coded 8-PSK constellation, the squared distance between adjacent points is
d0² = ε'' + ε'' − 2ε''·cos(π/4) = 2ε''(1 − cos(π/4)).
Using sin²(π/8) = (1 − cos(π/4)) / 2:
d0² = 4ε''·sin²(π/8), i.e. d0 = 2√(ε'')·sin(π/8).
TCM Scheme Based on 2-State Trellis

[Figure: 2-state trellis with states a0 and a1, transitions labelled with the 8-PSK signals 0, 2, 4, 6 and 1, 3, 5, 7; I: uncoded 4-PSK reference, II: 8-PSK with the 2-state trellis code.]

dfree² = d²(0,2) + d²(0,1) = 2ε + 4ε·sin²(π/8) = 2.586ε
Hence, we get the coding gain
10·log10(2.586/2) = 10·log10(1.293) = 1.1 dB
Can this performance gain of trellis-coded QPSK be improved? The answer is yes, by going to more trellis states.
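The numbers above, and the 3 dB figure for the 4-state scheme later in the notes, follow from the 8-PSK geometry. A quick numerical check:

```python
import math

def d2(k, eps=1.0):
    """Squared distance between 8-PSK points 0 and k (symbol energy eps):
    d^2(0, k) = 4 * eps * sin^2(k * pi / 8)."""
    return 4.0 * eps * math.sin(k * math.pi / 8) ** 2

# 2-state trellis: the shortest error event uses the signal pairs (0,2) and (0,1)
dfree2 = d2(2) + d2(1)                        # = 2 + 4 sin^2(pi/8)
gain_2state = 10 * math.log10(dfree2 / 2.0)   # vs uncoded 4-PSK, d_min^2 = 2
print(round(dfree2, 3), round(gain_2state, 1))   # 2.586 1.1

# 4-state trellis: d_free^2 = d^2(0,4) = 4, a 3 dB gain over 4-PSK
print(round(10 * math.log10(d2(4) / 2.0), 1))    # 3.0
```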
TCM Example 2
• Draw the set partition of 8-PSK with maximum Euclidean distance between two points.
• By how much is the distance between adjacent signal points increased as a result of partitioning?
(Distances in the 8-PSK constellation: d(0,1) = 2√(ε')·sin(π/8) and d(0,2) = √(2ε').)
TCM Example 2: Solution
At the top level the minimum distance of 8-PSK is
d0 = 2√(ε')·sin(π/8) = 0.765√(ε').

[Figure: set partitioning of the 8-PSK constellation (points 0–7, binary labels 000–111). The first split gives the subsets (0, 2, 4, 6) and (1, 3, 5, 7), with minimum distance d1 = √(2ε') = 1.41√(ε'); the second split gives the four subsets (0, 4), (2, 6), (1, 5) and (3, 7), with minimum distance d2 = 2√(ε').]

And hence each level of partitioning increases the minimum distance between signal points: from 0.765√(ε') to 1.41√(ε') to 2√(ε').
TCM Scheme Based On A 4-State Trellis
Let us now use a TCM scheme with a more complex structure, in order to increase the coding gain. Take a trellis with four states as shown below.
The free distance between two distinct paths s and s' is
dfree = min over s ≠ s' of [ Σn d²E(sn, s'n) ]^(1/2).
Calculating the Euclidean distance for each path leaving and returning to s0 (one such path passes through the signals s2 – s1 – s2, another through s6 – s1 – s2), the minimum is set by the parallel transition:
dfree² = d²(0,4) = 4ε'
and hence dfree² / dmin² = 4ε' / 2ε' = 2.
TCM Code Worked Example

[Figure: TCM encoder. Inputs a1, a2; a rate-½ 4-state convolutional code (state bits S1, S2) produces the coded bits; the outputs c1–c4 feed the signal mapper (8-PSK encoder / 16-QAM).]

Rate ½ 4-state Convolutional Code
State Table for TCM Code

Inputs a1 a2 | Initial State S1 S2 | Next State S'1 S'2 | Outputs c1 c2 c3
0 0 | 0 0 | 0 0 | 0 0 0
1 0 | 0 0 | 0 1 | 0 1 0
0 1 | 0 0 | 0 0 | 0 0 1
1 1 | 0 0 | 0 1 | 0 1 1
0 0 | 0 1 | 1 0 | 1 0 0
1 0 | 0 1 | 1 1 | 1 1 0
0 1 | 0 1 | 1 0 | 1 0 1
1 1 | 0 1 | 1 1 | 1 1 1
0 0 | 1 0 | 0 1 | 0 0 0
1 0 | 1 0 | 0 0 | 0 1 0
0 1 | 1 0 | 0 1 | 0 0 1
1 1 | 1 0 | 0 0 | 0 1 1
0 0 | 1 1 | 1 1 | 1 0 0
1 0 | 1 1 | 1 0 | 1 1 0
0 1 | 1 1 | 1 1 | 1 0 1
1 1 | 1 1 | 1 0 | 1 1 1
Trellis Diagram of TCM Code

[Figure: 4-state trellis (states 00, 10, 01, 11) with branches labelled by the 8-PSK signal subsets 0 4 2 6, 2 6 4 0, 1 5 3 7 and 3 7 1 5.]

dfree² = d²(0,6) + d²(0,6) = 2ε' + 2ε' = 4ε'
OR, for the parallel transition,
dfree² = d²(0,4) = 4ε'
Coding Gain over Uncoded QPSK Modulation
Uncoded QPSK: dmin² = 2ε'
Gain = (dfree² / ε') / (dmin² / ε') = 4ε' / 2ε' = 2, or 3 dB
TCM Problems

[Figures: three TCM encoder configurations. Each has inputs a1 and a2, a convolutional encoder (with two, three and four state registers S1–S4 respectively), outputs c1–c4, and an 8-PSK / 16-QAM signal mapper.]

The trellis-coded signal is formed as shown below, by encoding one bit using a rate ½ convolutional code with three additional information bits left uncoded. Perform the set partitioning of a 32-QAM (cross) constellation and indicate the subsets in the partition. By how much is the distance between the adjacent signal points increased as a result of partitioning?

[Figure: encoder with inputs a1–a4 and outputs c1–c5.]
TCM and Decoding
• The Viterbi algorithm is used with soft decisions from the demodulator for maximum-likelihood estimation of the sequence being transmitted.
Turbo Encoding / Decoding
By R. A. Carrasco
Professor in Mobile Communications
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne
[1], pages 209 – 215
http://en.wikipedia.org/wiki/Turbo_code
Introduction
• The Turbo Encoder
– Overview
– Component encoders and their construction
– Tail bits
– Interleaving
– Puncturing
• The Turbo Decoder
– Overview
– Scaling
Introduction (Cont'd)
• Results
– AWGN results
• Performance
• Conclusions
Concatenated Coding and Turbo Codes

Serially concatenated codes:
Input data → Outer encoder → Inner encoder → channel → Inner decoder → Outer decoder → Output data

Parallel-concatenated (Turbo encoder):
Input data → encoder 1 (systematic bits and parity bits #1), and via an interleaver → encoder 2 (parity bits #2)
• Convolutional codes
Non-systematic convolutional (NSC) codes:
– have no feedback paths;
– they act like a finite impulse response (FIR) digital filter;
– NSC codes do not lend themselves to parallel concatenation;
– at high SNR the BER performance of a classical NSC code is better than that of systematic convolutional codes of the same constraint length.
The Turbo Encoder
• Recursive systematic convolutional encoders in parallel concatenation, separated by a pseudo-random interleaver.
• The second systematic output is an interleaved version of the first; the interleaving process is known at the decoder, so this second systematic stream is surplus to our needs and is not transmitted.

[Figure: input dk feeds Component Encoder #1 directly and Component Encoder #2 via the interleaver; the outputs are the systematic bits sk = dk and the parity bits pk1 and pk2.]
Component Encoders
• [7;5]8 RSC component encoder: 4-state trellis representation (two delay elements).
• [23;33]8 RSC component encoder: 16-state trellis representation (four delay elements).

[Figure: shift-register diagrams of the two RSC encoders, each with input dk, systematic output sk and parity output pk.]
Systematic convolutional codes
– Recursive Systematic Convolutional (RSC) codes can be generated from NSC codes by feeding the output of the encoder back to the input;
– at low SNR the BER performance of an RSC code is better than that of the NSC code.
The operation of the Turbo encoder is as follows:
1. The input data sequence is applied directly to encoder 1, and the interleaved version of the same input data sequence is applied to encoder 2.
2. The systematic bits (i.e. the original message bits) and the two parity-check bit streams (generated by the two encoders) are multiplexed together to form the output of the encoders.
Turbo code interleavers
The novelty of the parallel-concatenated turbo encoder lies in the use of RSC codes and the introduction of an interleaver between the two encoders;
• The interleaver ensures that two permutations of the same input data are encoded to produce two different parity sequences;
• The effect of the interleaver is to tie together errors that are easily made in one half of the turbo encoder to errors that are exceptionally unlikely to occur in the other half;
• This ensures robust performance in the event that the channel characteristics are not known, and is the reason why turbo codes perform better than traditional codes.
Turbo code interleavers (Cont'd)
• The choice of interleaver is therefore the key to the performance of a turbo coding system;
• Turbo code performance can be analysed in terms of the Hamming distance between the code words;
• If the applied input sequence happens to terminate one of the encoders, it is unlikely that, once interleaved, the sequence will terminate the other, leading to a large Hamming distance in at least one of the two encoders;
• A pseudo-random interleaver is a good choice.
Interleaving
• Shannon showed that large frame-length random codes can achieve channel capacity.
• By their very nature, random codes are impossible to decode.
• Pseudo-random interleavers make turbo codes appear random while maintaining a decodable structure.

Interleaving (cont'd)
• Primary use:
– To increase average codeword weight.
– Altering bit positions does not alter the data-word weight but can increase the codeword weight.
– Thus a low-weight convolutional output from encoder #1 does not mean a low-weight turbo output.

Dataword | Codeword | Codeword weight
01100 | 0011100001 | 4
01010 | 0011011001 | 5
10010 | 1101011100 | 6
Interleavers
• An interleaver takes a given sequence of symbols and permutes their positions, arranging them in a different temporal order;
• The basic goal of an interleaver, when used against burst errors, is to randomise the data sequence;
• In general, data interleavers can be classified into: block, convolutional, random and linear interleavers;
• Block interleaver: data are first written in row format in a permutation matrix, and then read in column format;
• A pseudo-random interleaver is a variation of a block interleaver in which data are stored in a register and read out in a pseudo-random order;
• Convolutional interleavers are characterised by a shift of the data, usually applied in a fixed and cumulative way.
Example: Block interleaver
• Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
read out:
1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16
read in (interleave):
1 5 9 13
2 6 10 14
3 7 11 15
4 8 12 16
read out:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
read in (De-interleave):
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
(transmit → Channel → receive)
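The write-by-rows / read-by-columns rule above is a one-liner; this sketch reproduces the 4 × 4 example:

```python
def block_interleave(data, rows, cols):
    """Write row by row into a rows x cols matrix, read out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    """Write column by column, read row by row (interleave with axes swapped)."""
    return block_interleave(data, cols, rows)

seq = list(range(1, 17))
tx = block_interleave(seq, 4, 4)
print(tx)                                    # [1, 5, 9, 13, 2, 6, 10, 14, ...]
print(block_deinterleave(tx, 4, 4) == seq)   # True
```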
Example: Pseudo-random interleaver
• Data sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
read out (by a random position pattern):
1 6 11 15 2 5 9 13 4 7 12 14 3 8 16 10
read in (interleave):
1 6 11 15
2 5 9 13
4 7 12 14
3 8 16 10
read out: (by the inverse of random position pattern)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
read in (De-interleave):
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
(transmit → Channel → receive)
Convolutional interleaver

[Figure: convolutional interleaver: parallel branches with increasing delays (L, 2L, …, (N−1)/L) before the channel, and the complementary delays ((N−1)/L, …, 2L, L) after it.]
Continued…
Input: …, x0, x1, x2, x3, x4, x5, x6, …
Interleave:

[Figure: four branches with 0, 1, 2 and 3 delay elements D; a commutator cycles through the branches, so at one instant the branch contents are x0; x1, x−3; x2, x−2, x−6; x3, x−1, x−5, x−9.]

Output: …, x0, x−3, x−6, x−9, x4, x1, x−2, x−5, x8, x5, x2, x−1, …

(transmit → Channel → receive)

De-interleave:

[Figure: the complementary structure with 3, 2, 1 and 0 delay elements, whose branch contents are x12, x8, x4, x0; x9, x5, x1; x6, x2; x3.]

Output: …, x0, x1, x2, x3, x4, x5, x6, …
Each D corresponds to a delay of 4 symbols in this example.
Puncturing
• High-rate codes are usually generated by a procedure known as puncturing;
• A change in the code rate to ½ can be achieved by puncturing the two parity sequences prior to the multiplexer: one bit is deleted from each code parity output in turn, such that one parity bit remains for each data bit.
Puncturing
• Used to reduce code rate:
– Omits certain output bits according to a pre-arranged method.
– The standard method reduces the turbo codeword from rate n/k = 1/3 to rate n/k = 1/2.

Puncturer input:
s1,1 s1,2 s1,3 s1,4
p1,1 p1,2 p1,3 p1,4
p2,1 p2,2 p2,3 p2,4
Puncturer output: s1,1 p1,1 s1,2 p2,2 s1,3 p1,3 s1,4 p2,4
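The standard rate-1/3 to rate-1/2 puncturing pattern shown above alternates the two parity streams. A sketch:

```python
def puncture(systematic, parity1, parity2):
    """Rate 1/3 -> 1/2: keep every systematic bit and alternate parity bits,
    taking p1 at odd-numbered positions and p2 at even-numbered ones."""
    out = []
    for i, s in enumerate(systematic):
        out.append(s)
        out.append(parity1[i] if i % 2 == 0 else parity2[i])
    return out

s  = ["s1,1", "s1,2", "s1,3", "s1,4"]
p1 = ["p1,1", "p1,2", "p1,3", "p1,4"]
p2 = ["p2,1", "p2,2", "p2,3", "p2,4"]
print(puncture(s, p1, p2))
# ['s1,1', 'p1,1', 's1,2', 'p2,2', 's1,3', 'p1,3', 's1,4', 'p2,4']
```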
Tail Bits
• Tail bits are added to the dataword so that the first component encoder's codeword terminates at the all-zero state.
– A look-up table is the most common method.

[Figure: trellis paths through states S0–S3 over the data bits and the tail; the tail steers every path back to S0. The legend distinguishes input-1 and input-0 transitions.]
Turbo decoding
• A key component of iterative (Turbo) decoding is the soft-in, soft-out (SISO) decoder;

[Figure: hard-decision receiver (matched filter S+H, binary quantization, error-control decoding on hard decisions) versus soft-decision receiver (matched filter S+H, 8-level / 3-bit quantization, combined soft-decision error-control decoding).]

[1], pages 239 – 244

Hard decision vs. soft decision:
1. Soft (multi-level) decisions: the 3-bit quantizer outputs 000 to 011, indicating a likely digital 0, and 100 to 111, indicating a likely digital 1.
2. Hard (two-level) decisions: the quantizer outputs only 0 or 1.
Each soft decision contains not only information about the most likely transmitted symbol, but also information about the confidence or likelihood which can be placed on this decision.
The log-likelihood ratio
• It is based on the binary random variable u, which is −1 for logic 0 and +1 for logic 1;
• L(u) is the log-likelihood ratio for the binary random variable and is defined as:
L(u) = ln [ P(u = +1) / P(u = −1) ]
This is described as the ‘soft’ value of the binary random variable u. The sign of the value is the hard decision, while the magnitude represents the reliability of this decision;
• As L(u) increases towards +∞, the probability that u = +1 also increases.
• As L(u) decreases towards −∞, the probability that u = −1 increases.
• The conditional log-likelihood ratio L(u|y) is defined as:
L(u|y) = ln [ P(u = +1 | y) / P(u = −1 | y) ]
The log-likelihood ratio
The information u is mapped to the encoded bits x. These encoded bits are received by the decoder as y, all with the time index k. From this, the log-likelihood ratio for the system is:
L(xk | yk) = ln [ P(xk = +1 | yk) / P(xk = −1 | yk) ]
From Bayes' theorem, this is equivalent to:
L(xk | yk) = ln [ P(yk | xk = +1)·P(xk = +1) / ( P(yk | xk = −1)·P(xk = −1) ) ]
= ln [ P(yk | xk = +1) / P(yk | xk = −1) ] + ln [ P(xk = +1) / P(xk = −1) ]
The log-likelihood ratio
• Assuming the ‘channel’ to be flat fading with Gaussian noise, the Gaussian pdf is
G(x) = (1 / √(2πσ²)) · exp( −(x − q)² / (2σ²) )
with q representing the mean and σ² representing the variance. It follows that:
L(yk | xk) = ln [ P(yk | xk = +1) / P(yk | xk = −1) ]
= ln [ exp( −(Eb/N0)(yk − a)² ) / exp( −(Eb/N0)(yk + a)² ) ]
= 4·(Eb/N0)·a·yk
The log-likelihood ratio
20
02
1)|(
kkb axy
N
E
kk eN
xyP
where represent the signal to noise ratio per bit and a being the fading
amplitude (a = 1 for a non- fading Gaussian Channel).
From equation
0N
Eb
)1(
)1(ln)(
uP
uPuL
)()|()1(
)1(ln
)1/(
)1/(ln)/( xLxyL
xP
xP
xyP
xyPyxL c
k
k
kk
kkkk
The log-likelihood ratio
From Bayes' theorem, P(x, y) = P(x|y)·P(y) = P(y|x)·P(x), so:
L(x|y) = ln [ P(x = +1 | y) / P(x = −1 | y) ]
= ln [ ( P(y | x = +1)·P(x = +1) / P(y) ) / ( P(y | x = −1)·P(x = −1) / P(y) ) ]
= ln [ P(y | x = +1)·P(x = +1) / ( P(y | x = −1)·P(x = −1) ) ]
The log-likelihood ratio of xk given yk is therefore
L(xk | yk) = Lc·yk + L(xk)
where Lc = 4·(Eb/N0)·a is the channel reliability. Therefore L(xk | yk) is the weighted received value.
Turbo Decoding: Scaling by Channel Reliability
• Channel reliability Lc = 4·(Eb/N0)·(channel amplitude)
– Channel amplitude for AWGN = 1
– Channel amplitude for fading varies

[Figure: the corrupted, received codeword is multiplied by 4·(Eb/N0)·A to give the scaled, corrupted, received codeword.]
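The scaling step can be sketched directly from the definition above: received soft values are weighted by Lc = 4·(Eb/N0)·a before decoding (the received values below are illustrative):

```python
def channel_reliability(ebn0_db, amplitude=1.0):
    """Lc = 4 * (Eb/N0) * a, with a = 1 for a non-fading AWGN channel."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 4.0 * ebn0 * amplitude

def scale_received(received, ebn0_db, amplitude=1.0):
    """Channel term Lc * y_k of the LLR L(x_k | y_k) = Lc*y_k + L(x_k)."""
    lc = channel_reliability(ebn0_db, amplitude)
    return [lc * y for y in received]

print(channel_reliability(0.0))               # 4.0 at Eb/N0 = 0 dB
print(scale_received([0.9, -1.1, 0.2], 0.0))  # [3.6, -4.4, 0.8]
```

The sign of each scaled value is the hard decision; its magnitude carries the reliability passed to the SISO decoder.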
Performance of Turbo Codes

[Figure: BER (10⁰ down to 10⁻⁶) versus Eb/N0 (−4 to 10 dB) for uncoded transmission and turbo codes, with the Shannon limit marked.]

At a bit error rate of 10⁻⁵, the turbo code is less than 0.5 dB from Shannon's theoretical limit.
Block diagram of Turbo Decoder

[Figure: the noisy systematic bits and the noisy parity-check bits ε1 enter Decoder Stage 1; its output passes through an interleaver to Decoder Stage 2, which also receives the noisy parity-check bits ε2; the stage-2 output is de-interleaved, fed back to stage 1, and hard-limited to produce the decoded bits.]

The figure shows the basic structure of the turbo decoder. It operates on noisy versions of the systematic bits and the two noisy versions of the parity bits, in two decoding stages, to produce an estimate of the original message bits.
Turbo Decoder

[Figure: two BCJR decoders in a loop. Stage 1 receives u, ε1 and the extrinsic information Ĩ2(x) (set to 0 initially) and produces I1(x); subtracting its input gives Ĩ1(x), which is interleaved and passed to Stage 2 together with ε2. Stage 2 produces I2(x); subtracting its input and de-interleaving gives Ĩ2(x), which is fed back to Stage 1. A hard limiter on I2(x) produces the estimate x̂.]
Turbo Decoding
• The BCJR algorithm is a soft-input soft-output decoding algorithm with two recursions, one forward and the other backward.
• At stage 1, the BCJR algorithm uses the extrinsic information Ĩ2(x) added to the input (u). At the output of the decoder the ‘input’ is subtracted from the ‘output’ and only the information generated by the first decoder is passed on as Ĩ1(x). For the first ‘run’, Ĩ2(x) is set to zero as there is no ‘prior’ information.
Turbo Decoding
• At stage 2, the BCJR algorithm uses the extrinsic information Ĩ1(x) added to the input (u). The input is then interleaved so that the data sequence matches the previously interleaved parity (ε2). The decoder output is then de-interleaved and the decoder's ‘input’ is subtracted so that only decoder 2's information is passed on as Ĩ2(x). After this loop has been repeated many times, the output of the second decoder is hard-limited to form the output data.
Turbo Decoding
The first decoding stage uses the BCJR algorithm to produce a soft estimate of the systematic bit xJ, expressed as the log-likelihood ratio:
I1(xJ) = ln [ P(xJ = 1 | u, ε1, Ĩ2(x)) / P(xJ = 0 | u, ε1, Ĩ2(x)) ],  J = 1, 2, 3, …
where u is the set of noisy systematic bits and ε1 is the set of noisy parity-check bits generated by encoder 1.
Turbo Decoding
Ĩ2(x) is the extrinsic information about the set of message bits x derived from the second decoding stage and fed back to the first stage.
The total log-likelihood ratio at the output of the first decoding stage is therefore the set
I1(x) = { I1(xJ) },  J = 1, …, K.
Turbo Decoding
The extrinsic information about the message bits derived from the first decoding stage is:
Ĩ1(x) = I1(x) − Ĩ2(x)
where Ĩ2(x) is to be defined.

[Figure: SISO decoder: the soft input (intrinsic information, raw data and other information) enters the decoder; the soft output minus the soft input gives the extrinsic information.]

At the output of the SISO decoder, the ‘input’ is subtracted from the ‘output’ and only the reliability information generated by the decoder is passed on as extrinsic information to the next decoder.
Turbo Decoding
The extrinsic information fed back to the first decoding stage is therefore:
Ĩ2(x) = I2(x) − Ĩ1(x)
where Ĩ1(x) is itself defined before, and I2(x) is the log-likelihood ratio computed by the second stage:
I2(xJ) = ln [ P(xJ = 1 | u, ε2, Ĩ1(x)) / P(xJ = 0 | u, ε2, Ĩ1(x)) ],  J = 1, 2, …
Turbo Decoding
An estimate of the message bits x is computed by hard-limiting the log-likelihood ratio I2(x) at the output of the second stage, as shown by:
x̂ = sgn( I2(x) )
We set Ĩ2(x) = 0 on the first iteration of the algorithm.
Turbo Decoding: Serial to Parallel Conversion and Erasure Insertion
• The received, corrupted codeword is returned to the original three bit streams.
• Erasures (punctured positions) are replaced with a ‘null’:

Received: s1,1 p1,1 s1,2 p2,2 s1,3 p1,3 s1,4 p2,4
After serial-to-parallel conversion:
s1,1 s1,2 s1,3 s1,4
p1,1 0 p1,3 0
0 p2,2 0 p2,4
Results over AWGN

[Figure: BER versus SNR (0.5 to 3 dB) for the [7;5] punctured turbo code over AWGN with 512 data bits, comparing Log-MAP and SOVA decoding; BER axis from 10⁰ down to 10⁻⁶.]
Questions
• What is the MAP algorithm first of all? Who found it?
• I have heard about the Viterbi algorithm and ML sequence estimation for decoding coded sequences. What is the essential difference between these two methods?
• But I haven't heard about the MAP algorithm until recently (even though it was discovered in 1974). Why?
• What are SISO (Soft-Input-Soft-Output) algorithms first of all?
• Well! I am quite comfortable with the basics of SISO algorithms. But tell me one thing: why should a decoder output soft values? I presume there is no need for it to do that.
• How does the MAP algorithm work?
• Well then! Explain MAP as an algorithm. (Some flow-charts or steps will do.)
• Are there any simplified versions of the MAP algorithm? (The standard one involves a lot of multiplication and log business and requires a number of clock cycles to execute.)
• Is there any demo source code available for the MAP algorithm?
• References
Problem 1
• Let rc1 = p/q1 and rc2 = p/q2 be the code rates of RSC encoders 1 and 2 in the turbo encoder of Figure 1. Determine the code rate of the turbo code.
• The turbo encoder of Figure 1 involves the use of two RSC encoders.
(i) Generalise this encoder to encompass a total of M interleavers.
(ii) Construct the block diagram of the turbo decoder that exploits the M sets of parity-check bits generated by such a generalisation.
[Figure 1: Turbo encoder — the message stream p feeds ENC1 directly and, via an interleaver, ENC2; the encoders output the parity streams q1 and q2 alongside the systematic stream p.]
Problem 2
Consider the following generator matrices for rate-½ turbo codes:

4-state encoder: g(D) = [1, (1 + D + D²)/(1 + D²)]

8-state encoder: g(D) = [1, (1 + D² + D³)/(1 + D + D² + D³)]

(i) Construct the block diagram for each one of these RSC encoders.
(ii) Construct the parity-check equation associated with each encoder.
Problem 3
• Explain the principle of Non-systematic convolutional codes (NSC) and Recursive systematic convolutional codes (RSC) and make comparisons between the two
• Describe the operation of the turbo encoder
• Explain how important the interleaver process is to the performance of a turbo coding system
Problem 4
• Describe the meaning of Hard decision and soft decision for the turbo decoder process
• Discuss the log-likelihood ratio principle for turbo decoding system
• Describe the iterative turbo decoding process
• Explain the operation of the soft-input-soft-output (SISO) decoder
School of Electrical, Electronics and Computer Engineering
Low Density Parity Check Codes: An Overview
By R.A. Carrasco Professor in Mobile Communications
University of Newcastle-upon-Tyne
[1], pages 277–287; http://en.wikipedia.org/wiki/LDPC
Outline
• Parity check codes
• What are LDPC Codes?
• Introduction and Background
• Message Passing Algorithm
• LDPC Decoding Process
  – Sum-Product Algorithm
  – Example of rate 1/3 LDPC (2,3) code
• Construction of LDPC codes
  – Protograph Method
  – Finite Geometries
  – Combinatorial design
• Results of LDPC codes constructed using BIBD design
• A binary parity check code is a block code: i.e., a collection of binary vectors of fixed length n.
• The symbols in the code satisfy m parity check equations of the form:
  – xa ⊕ xb ⊕ xc ⊕ … ⊕ xz = 0
  – where ⊕ means modulo-2 addition and xa, xb, xc, …, xz are the code symbols in the equation.
• Each codeword of length n can contain (n-m)=k information digits and m check digits.
Parity Check Code
For a code word of the form c1, c2, c3, c4, c5, c6, c7, the equations are:
c1 ⊕ c2 ⊕ c3 ⊕ c5 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0
c1 ⊕ c3 ⊕ c4 ⊕ c7 = 0
The parity check matrix for this code is then:
1 1 1 0 1 0 0
1 1 0 1 0 1 0
1 0 1 1 0 0 1
Note that c1 is contained in all three equations while c2 is contained in only the first two equations.
Example: Hamming Code with n=7, k=4, and m=3
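As a quick check of these equations, a short sketch in Python that computes the syndrome for this H (the codeword below was constructed to satisfy all three equations):

```python
# Parity check matrix of the (7,4) Hamming code above: one row per equation.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def syndrome(c):
    # Each component is the modulo-2 sum of the code bits picked out by a row.
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

c = [1, 0, 0, 1, 1, 0, 0]        # satisfies all three equations
print(syndrome(c))               # -> [0, 0, 0]
c[0] ^= 1                        # flip c1, which appears in every equation
print(syndrome(c))               # -> [1, 1, 1]
```

Flipping c1 disturbs all three equations at once, which matches the note that c1 is contained in all three.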
What are Low Density Parity Check Codes?
•The percentage of 1’s in the parity check matrix for a LDPC code is low.
• A regular LDPC code has the property that:
  – every code digit is contained in the same number of equations,
  – each equation contains the same number of code symbols.
•An irregular LDPC code relaxes these conditions.
c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
Equations for Simple LDPC Code with n=12 and m=9
The Parity Check Matrix for the LDPC Code
c1 c2 c3 c4 c5 c6 c7 c8 c9 c10 c11 c12
 0  0  1  0  0  1  1  1  0  0   0   0
 1  1  0  0  1  0  0  0  0  0   0   1
 0  0  0  1  0  0  0  0  1  1   1   0
 0  1  0  0  0  1  1  0  0  1   0   0
 1  0  1  0  0  0  0  1  0  0   1   0
 0  0  0  1  1  0  0  0  1  0   0   1
 1  0  0  1  1  0  1  0  0  0   0   0
 0  0  0  0  0  1  0  1  0  0   1   1
 0  1  1  0  0  0  0  0  1  1   0   0
c3 ⊕ c6 ⊕ c7 ⊕ c8 = 0
c1 ⊕ c2 ⊕ c5 ⊕ c12 = 0
c4 ⊕ c9 ⊕ c10 ⊕ c11 = 0
c2 ⊕ c6 ⊕ c7 ⊕ c10 = 0
c1 ⊕ c3 ⊕ c8 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c9 ⊕ c12 = 0
c1 ⊕ c4 ⊕ c5 ⊕ c7 = 0
c6 ⊕ c8 ⊕ c11 ⊕ c12 = 0
c2 ⊕ c3 ⊕ c9 ⊕ c10 = 0
Introduction
- LDPC codes were originally invented by Robert Gallager in the early 1960s but were largely ignored until they were rediscovered in the mid-1990s by MacKay.
- Defined in terms of a parity check matrix that has a small number of non-zero entries in each column
- Randomly distributed non-zero entries
  – Regular LDPC codes
  – Irregular LDPC codes
- Sum-Product Algorithm used for decoding
- Linear block code with sparse (small fraction of ones) parity-check matrix
- Has a natural representation in terms of a bipartite graph
- Simple and efficient iterative decoding
Introduction
Low Density Parity Check (LDPC) codes are a class of linear block codes characterized by sparse parity check matrices (H).
Review of parity check matrices:
– For an (n, k) code, H is an (n−k, n) matrix of ones and zeros.
– A codeword c is valid if cHT =s= 0
– Each row of H specifies a parity check equation. The code bits in positions where the row is one must sum (modulo-2) to zero.
– In an LDPC code, only a few bits (~4 to 6) participate in each parity check equation.
– From the parity check matrix we obtain the generator matrix G, which is used to generate the LDPC codeword.
– G·HT = 0
– The parity check matrix is arranged in systematic form as H = [Im | P]
– The generator matrix is then G = [PT | Ik]
– A codeword can be expressed as c = x·G
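A small sketch of this construction under the conventions just stated; the parity block P is an arbitrary illustrative choice, not one of the codes discussed here:

```python
import itertools

P = [[1, 0, 1],   # m x k parity block (assumed, for illustration only)
     [1, 1, 1],
     [0, 1, 1]]
m, k = len(P), len(P[0])

# H = [Im | P] and G = [P^T | Ik], so that G.H^T = 0 over GF(2).
H = [[1 if c == r else 0 for c in range(m)] + P[r] for r in range(m)]
G = [[P[r][c] for r in range(m)] + [1 if i == c else 0 for i in range(k)]
     for c in range(k)]

def encode(x):
    # c = x . G (modulo 2)
    return [sum(x[r] * G[r][i] for r in range(k)) % 2 for i in range(m + k)]

# Every codeword generated by G satisfies c.H^T = 0.
for x in itertools.product([0, 1], repeat=k):
    c = encode(list(x))
    assert all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in H)
print("all", 2 ** k, "codewords satisfy c.H^T = 0")
```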
• Representations Of LDPC Codes
parity check matrix
(Soft) Message passing:
Variable nodes communicate to check nodes their reliability (log-likelihoods)
Check nodes decide which variables are not reliable and “suppress” their inputs
Number of edges in graph = density of H
Sparse = small complexity
1 0 0 0 1 1 0 1 0 1 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 1 1 0 0 0 0 1 1 0 1 0
0 1 0 1 0 1 0 1 1 1 0 0
0 1 1 1 1 0 1 0 0 0 0 1
1 0 1 0 0 1 1 0 0 1 0 1
Low Density Parity Check Codes
Parity Check Matrix to Tanner Graph
1 0 0 0 1 1 0 1 0 1 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 1 1 0 0 0 0 1 1 0 1 0
0 1 0 1 0 1 0 1 1 1 0 0
0 1 1 1 1 0 1 0 0 0 0 1
1 0 1 0 0 1 1 0 0 1 0 1
LDPC Codes
• Bipartite graph with connections defined by matrix H
• r′: variable nodes – the corrupted codeword
• s: check nodes – the syndrome, which must be all zero for the decoder to claim no error
• Given the syndromes and the statistics of r′, the LDPC decoder solves the equation r′HT = s in an iterative manner
Construction of LDPC codes
• Random LDPC codes
  – MacKay construction
  – Computer-generated random construction
• Structured LDPC codes
  – Well defined and structured code
  – Algebraic and combinatoric construction
  – Encoding advantage over random LDPC codes
  – Perform as well as random codes
  – Examples:
    • Vandermonde matrix (array codes)
    • Finite geometry
    • Balanced incomplete block design (BIBD)
    • Other methods (e.g. Ramanujan graphs)
Protograph Construction of LDPC codes by J. Thorpe
• A protograph can be any Tanner graph with a relatively small number of nodes.
• The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analysing the protograph.
J. Thorpe, “Low-Density Parity-Check (LDPC) Codes Constructed from Protographs,” IPN Progress Report 42-154, 2003.
Protograph Construction (Continued)
LDPC Decoding (Message passing Algorithm)
•Decoding is accomplished by passing messages along the lines of the graph.
•The messages on the lines that connect to the ith variable node, ri, are estimates of Pr[ri =1] (or some equivalent information).
•Each variable node is furnished an initial estimate of the probability from the soft output of the channel.
•The variable node broadcasts this initial estimate to the check nodes on the lines connected to that variable node.
• Each check node must then make new estimates for the bits involved in that parity equation and send these new estimates (on the lines) back to the variable nodes.
Message passing Algorithm or Sum-Product Algorithm
while (stopping criterion not met) {
  – all variable nodes pass messages to their check nodes
  – all check nodes pass messages to their variable nodes
}
Stopping criteria:
• the equation r′HT = 0 is satisfied
• the maximum number of iterations is reached
LDPC Decoding Process

Check node processing (check node m, neighbouring variable nodes N(m); incoming messages z_mn, outgoing messages L_mn):

  L_mn = 2 tanh⁻¹( ∏_{n′ ∈ N(m)\n} tanh(z_mn′ / 2) )

Variable node processing (variable node n, neighbouring check nodes M(n)):

  z_mn = F_n + Σ_{m′ ∈ M(n)\m} L_m′n

  z_n = F_n + Σ_{m ∈ M(n)} L_mn   (used for the hard decision)

where F_n is the channel reliability value.
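The check node rule can be sketched directly from this formula (the incoming messages below are arbitrary illustrative numbers); note that 2·tanh⁻¹(p) = ln((1+p)/(1−p)), so this is the same update that appears as the logarithm of a ratio in the sum-product steps that follow:

```python
import math

def check_node_message(z_others):
    # L_mn = 2 * atanh( product over n' in N(m)\n of tanh(z_mn'/2) )
    p = 1.0
    for z in z_others:
        p *= math.tanh(z / 2.0)
    return 2.0 * math.atanh(p)

# Identity check: 2*atanh(p) equals ln((1+p)/(1-p)).
z = [1.5, -0.7, 2.2]
p = math.tanh(1.5 / 2) * math.tanh(-0.7 / 2) * math.tanh(2.2 / 2)
assert abs(check_node_message(z) - math.log((1 + p) / (1 - p))) < 1e-12
print(check_node_message(z))
```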
Sum-Product Algorithm
Step 1: Initialisation
The LLR of the (soft) received signal yi, for AWGN:

  L_ji = R_i = 4·y_i·(Eb/N0)

where j represents a check node and i a variable node.

Step 2: Check to Variable Node
Extrinsic message from check node j to bit node i:

  E_ji = ln[ (1 + ∏_{i′∈Bj, i′≠i} tanh(L_ji′/2)) / (1 − ∏_{i′∈Bj, i′≠i} tanh(L_ji′/2)) ]

where Bj represents the set of column locations of the bits in the jth parity-check equation and ln is the natural logarithm.
Sum-Product Algorithm (Continued)
Step 3: Codeword Test (Parity Check)
The combined LLR is the sum of the extrinsic LLRs and the original LLR calculated in Step 1:

  L_i = Σ_{j∈Ai} E_ji + R_i

where Ai is the set of row locations of the parity-check equations which check on the ith bit of the code.
A hard decision is made for each bit:

  z_i = 1 if L_i ≤ 0,  z_i = 0 if L_i > 0
Sum-Product Algorithm (Continued)
Conditions to stop further iterations:
• If z = [z1, z2, …, zn] is a valid codeword, it satisfies H·zT = 0.
• The maximum number of iterations is reached.

Step 4: Variable to Check Node
Variable node i sends an LLR message to check node j without using the information from check node j:

  L_ji = Σ_{j′∈Ai, j′≠j} E_j′i + R_i

Return to Step 2.
Example:
The codeword sent is [0 0 1 0 1 1] through an AWGN channel with Eb/N0 = 1.25, and the vector y = [−0.1 0.5 −0.8 1.0 −0.7 0.5] is received.

Parity Check Matrix:

H =
1 1 0 1 0 0
0 1 1 0 1 0
1 0 0 0 1 1
0 0 1 1 0 1

Iteration 1, Step 1:

R = 4·y·(Eb/N0) = [−0.5 2.5 −4.0 5.0 −3.5 2.5]

After a hard decision on y we get [1 0 1 0 1 0], so 2 bits are in error and we need to apply LDPC decoding to correct the errors.
Example (Continued)
[Tanner graph: the variable nodes and check nodes are connected as specified by H.]

Initialisation: L_ji = R_i wherever H_ji = 1:

L_ji =
−0.5  2.5   0   5.0   0    0
  0   2.5 −4.0   0  −3.5   0
−0.5   0    0    0  −3.5  2.5
  0    0  −4.0  5.0   0   2.5
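The four steps above can be run on this example directly; a sketch (using the H and R just given) that corrects both hard-decision errors within a few iterations:

```python
import math

# H and R_i = 4*y_i*Eb/N0 from the worked example above.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
R = [-0.5, 2.5, -4.0, 5.0, -3.5, 2.5]

def sum_product_decode(H, R, max_iter=10):
    m, n = len(H), len(H[0])
    # Step 1: initialisation, L[j][i] = R[i] wherever H[j][i] = 1.
    L = [[R[i] if H[j][i] else 0.0 for i in range(n)] for j in range(m)]
    z = []
    for _ in range(max_iter):
        # Step 2: check-to-variable messages E[j][i].
        E = [[0.0] * n for _ in range(m)]
        for j in range(m):
            cols = [i for i in range(n) if H[j][i]]
            for i in cols:
                p = 1.0
                for i2 in cols:
                    if i2 != i:
                        p *= math.tanh(L[j][i2] / 2.0)
                E[j][i] = math.log((1.0 + p) / (1.0 - p))
        # Step 3: combined LLRs, hard decision and parity check.
        total = [R[i] + sum(E[j][i] for j in range(m) if H[j][i])
                 for i in range(n)]
        z = [1 if t <= 0 else 0 for t in total]
        if all(sum(H[j][i] * z[i] for i in range(n)) % 2 == 0
               for j in range(m)):
            break  # H.z^T = 0: valid codeword found
        # Step 4: variable-to-check messages, excluding check j's own input.
        for j in range(m):
            for i in range(n):
                if H[j][i]:
                    L[j][i] = R[i] + sum(E[j2][i] for j2 in range(m)
                                         if H[j2][i] and j2 != j)
    return z

print(sum_product_decode(H, R))  # -> [0, 0, 1, 0, 1, 1], the sent codeword
```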
Questions
1. Suppose we have a codeword c = [c1 c2 c3 c4 c5 c6], where each ci is either 0 or 1, and the codeword satisfies three parity-check equations:

c1 ⊕ c2 ⊕ c5 = 0
c1 ⊕ c3 ⊕ c6 = 0
c1 ⊕ c2 ⊕ c4 ⊕ c6 = 0

a) Determine the parity check matrix H from the above equations
b) Show the systematic form of H by applying Gauss-Jordan elimination
c) Determine the generator matrix G from H and prove G·HT = 0
d) Find the dimensions of H and G
e) State whether the matrix is regular or irregular
Questions
2. The parity check matrix H of an LDPC code is given below:

H =
1 0 0 0 1 1 0 1 0 1 1 0
0 0 0 1 1 0 1 0 0 1 1 1
1 1 1 0 0 0 0 1 1 0 1 0
0 1 0 1 0 1 0 1 1 1 0 0
0 1 1 1 1 0 1 0 0 0 0 1
1 0 1 0 0 1 1 0 0 1 0 1

a) Determine the degree of the rows and columns
b) State whether the LDPC code is regular or irregular
c) Determine the rate of the LDPC code
d) Draw the Tanner graph representation of this LDPC code
e) What would be the code rate if we made the number of rows equal to the number of columns?
f) Write down the parity check equations of the LDPC code
Questions
3. Consider parity check matrix H generated in question 1,
a) Determine message bits length k, parity bits length m, codeword length n
b) Use the generator matrix G obtained in question 1 to generate all possible codewords c.
4.
a) What is the difference between regular and irregular LDPC codes?
b) What is the importance of cycles in parity check matrix?
c) Identify the cycles of length 4 in the following Tanner graph.
[Tanner graph with check nodes and variable nodes.]
Solutions
Question 1 a)

H =
1 1 0 0 1 0
1 0 1 0 0 1
1 1 0 1 0 1
Question 1 b)

Applying Gaussian elimination first. Modulo-2 addition of R1 and R2 in equation (A), the matrix from a), gives:

1 1 0 0 1 0
0 1 1 0 1 1
1 1 0 1 0 1

Modulo-2 addition of R1 and R3 in equation (A) gives:

1 1 0 0 1 0
0 1 1 0 1 1
0 0 0 1 1 1

Swap C3 with C4 to obtain the diagonal of 1’s in the first 3 columns:

1 1 0 0 1 0
0 1 0 1 1 1
0 0 1 0 1 1

The first three columns now hold the desired diagonal; the 1 in position (row 1, column 2) needs to be eliminated to achieve the identity matrix I.
Solutions
Now, we need an Identity matrix of 3 x 3 dimensions. As you can see the first 3 columns and rows can become an identity matrix if we somehow eliminate 1 in the position (row =1 and column =2). To do that we apply Jordan elimination to find the parity matrix in systematic form, hence
Modulo-2 addition of R2 into R1 gives:

Hsys =
1 0 0 1 0 1
0 1 0 1 1 1
0 0 1 0 1 1

It is now in systematic form and represented as H = [I | P].
Solutions
The generator matrix can be obtained by using H in the systematic form obtained in b):

G = [PT | I] =
1 1 0 1 0 0
0 1 1 0 1 0
1 1 1 0 0 1

To prove G·HT = 0:

HT =
1 0 0
0 1 0
0 0 1
1 1 0
0 1 1
1 1 1

and G·HT is the 3 × 3 all-zero matrix:

0 0 0
0 0 0
0 0 0
Solutions
d) Dimension of H is (3 × 6) and G is also (3 × 6)
Matrix is irregular because the numbers of 1’s in the rows and columns are not equal: e.g. the number of 1’s in the 1st row is 3 while the 3rd row has 4; similarly, the number of 1’s in the 1st column is 3 while the 2nd column has 2.
e)
Question 2
The parity check matrix H contains 6 ones in each row and 3 ones in each column. The degree of a row is the number of 1’s in the row, which is 6 in this case; similarly, the degree of a column is the number of 1’s in the column, hence 3 in this case.
a)
Regular LDPC, because the number of ones in each row and column are the same b)
Rate = 1 – m / n = 1 – 6/12 = ½ c)
Solutions
d) The Tanner graph is obtained by connecting the check and variable nodes as follows: [Tanner graph figure]
Solutions
e) If we make the number of rows equal to the number of columns (m = n), the code rate 1 − m/n becomes 0: every transmitted bit is then a check bit and the code carries no information bits.
f) The parity check equations of the LDPC code are:

c1 ⊕ c5 ⊕ c6 ⊕ c8 ⊕ c10 ⊕ c11 = 0
c4 ⊕ c5 ⊕ c7 ⊕ c10 ⊕ c11 ⊕ c12 = 0
c1 ⊕ c2 ⊕ c3 ⊕ c8 ⊕ c9 ⊕ c11 = 0
c2 ⊕ c4 ⊕ c6 ⊕ c8 ⊕ c9 ⊕ c10 = 0
c2 ⊕ c3 ⊕ c4 ⊕ c5 ⊕ c7 ⊕ c12 = 0
c1 ⊕ c3 ⊕ c6 ⊕ c7 ⊕ c10 ⊕ c12 = 0
Solutions
message bits length k = 6 − 3 = 3; parity bits length m = 3; codeword length n = 6
Question 3
a)
Since the information or message is 3 bits long therefore, b)
x0 x1 x2
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
The information bits have 8 possibilities as shown in the table below
Solutions
The codeword is generated by multiplying information bits with the generator matrix as follows
c = x G
The Table below shows the code words generated by using G in question 1.
x c
000 000000
001 111001
010 011010
011 100011
100 110100
101 001101
110 101110
111 010111
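The table can be checked mechanically: the sketch below regenerates all eight codewords from G and verifies each against the systematic H from part b):

```python
import itertools

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 1, 1, 0, 0, 1]]
Hsys = [[1, 0, 0, 1, 0, 1],
        [0, 1, 0, 1, 1, 1],
        [0, 0, 1, 0, 1, 1]]

codewords = {}
for x in itertools.product([0, 1], repeat=3):
    # c = x . G (modulo 2)
    c = [sum(x[r] * G[r][i] for r in range(3)) % 2 for i in range(6)]
    # Every generated codeword must satisfy c.H^T = 0.
    assert all(sum(h * b for h, b in zip(row, c)) % 2 == 0 for row in Hsys)
    codewords[''.join(map(str, x))] = ''.join(map(str, c))

print(codewords['101'])  # -> '001101', as in the table
```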
Solutions
Question 4
The regular LDPC code has a constant number of 1’s in the rows and columns of the parity check matrix.
The irregular LDPC code has a variable number of 1’s in the rows and columns of the parity check matrix.
a)
A cycle in a Tanner graph is a sequence of connected vertices that starts and ends at the same vertex, with every other vertex participating only once in the cycle. The length of the cycle is the number of edges it contains, and the girth of a graph is the size of its smallest cycle. A good LDPC code should have a large girth so as to avoid short cycles in the Tanner graph, since short cycles introduce an error floor. Removing short cycles has been proven to be effective in combating the error floor in LDPC codes, so the design criteria of an LDPC code should be such that most of the short cycles in the code are removed.
b)
Solutions
Cycles of length 4 are identified as follows
[Tanner graph with the cycles of length 4 marked.]
c)
Security in Mobile Systems
By Prof R A Carrasco
School of Electrical, Electronic and Computing Engineering
University of Newcastle-upon-Tyne
Security in Mobile Systems
• Air Interface Encryption – provides security over the air interface (Mobile Station to Base Station)
• End-to-end Encryption – provides security over the whole communication path (Mobile Station to Mobile Station)
Air Interface Encryption Protocols
• Symmetric Key
-Use Challenge Response Protocols for authentication and key agreement
• Asymmetric Key
-Use exchange and verification of ‘Certificates’ for authentication and key agreement
Challenge Response Protocol
Where it is used:
• GSM – only authenticates the Mobile Station; the A3 and A8 algorithms are used
• TETRA – authenticates both the Mobile Station and the Network; the TA11, TA12, TA21 and TA22 algorithms are used
• 3G – Authentication and Key Agreement (AKA)
3G
• Advantages
- Simpler than Public key techniques
- Less processing power required in the hand set
• Disadvantages
- Network has to maintain a Database of secret keys of all the Mobile stations supported by it
- The secret key is never changed in normal operation
- Have to share secret keys with MS
Challenge-Response Protocol
1. MS sends its identity to BS
2. BS sends the receiver MS identity to AC
3. AC gets the corresponding key ‘k’ from database
4. AC generates a random number called a challenge
5. By hashing K and the challenge the AC computes a signed response
6. It also generates a session authentication key by hashing K and the challenge (with a different hashing function)
Challenge-Response Protocol
7. AC sends the challenge, response and session key to BS
8. BS sends the challenge to the MS
9. MS computes the response and the session authentication key
10. MS sends the response to the BS
11. If the two responses received by BS from AC and MS are equal, the MS is authentic
12. MS and BS now use the session key to encrypt the communication data between them
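A hypothetical sketch of steps 4–11 in Python, using HMACs as the two hashing functions; the primitives, labels and key sizes here are illustrative stand-ins, not those of any particular standard:

```python
import hashlib
import hmac
import os

def signed_response(k, challenge):
    # Step 5: the signed response is a hash of the shared key K and the
    # challenge ("response" label is an assumed domain separator).
    return hmac.new(k, challenge + b"response", hashlib.sha256).digest()

def session_key(k, challenge):
    # Step 6: the session authentication key uses a different hashing
    # of K and the challenge.
    return hmac.new(k, challenge + b"session", hashlib.sha256).digest()

k = os.urandom(16)          # shared secret key K from the AC database
challenge = os.urandom(16)  # step 4: random challenge generated by the AC

# Steps 5-10: AC and MS compute independently; BS only compares.
res_ac = signed_response(k, challenge)
res_ms = signed_response(k, challenge)
print(res_ac == res_ms)     # step 11: equal -> MS is authentic
```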
Challenge-Response Protocol
[Message sequence chart: MS sends its Identity (i) to BS, which forwards it to AC; AC looks up the key K in its database of (Identity, Key) pairs (I1, K1), (I2, K2), …, generates the challenge, and returns the challenge and Response/KS to BS for the exchange with the MS.]
Challenge response protocol in GSM
1) MS sends its IMSI (International Mobile Subscriber Identity) to the VLR
2) VLR sends the IMSI to AC via the HLR
3) AC looks up its database and gets the authentication key Ki using the IMSI
4) The authentication centre generates RAND
5) It combines Ki with RAND to produce SRES using the A3 algorithm
6) It combines Ki with RAND to produce Kc using the A8 algorithm
Challenge response protocol in GSM
7) The AC provides the HLR with a set of (RAND, SRES, Kc) triplets
8) HLR sends one set to the VLR to authenticate the MS
9) VLR sends RAND to the MS
10) MS computes SRES and Kc using Ki and RAND
11) MS sends SRES to the VLR
12) VLR compares the two SRESs received from the HLR and the MS. If they are equal, the MS is authenticated

SRES = A3(Ki, RAND) and Kc = A8(Ki, RAND)
TETRA Protocol
Protocol flow:
1. MS sends its TMSI (TETRA Mobile Subscriber Identity) to BS (normally a temporary identity)
2. BS sends the TMSI to AC
3. AC looks up its database and gets the authentication key K using the TMSI
4. AC generates an 80-bit random seed (RS)
5. AC computes KS (the 128-bit session authentication key) using K and RS
6. AC sends KS and RS to BS
7. BS generates an 80-bit random challenge called RAND1 and computes a 32-bit expected response called XRES1
TETRA Protocol
8. BS sends RAND1 and RS to the MS
9. MS computes KS using K and RS
10. MS then computes RES1 using KS and RAND1
11. MS sends RES1 to the BS
12. BS compares RES1 and XRES1. If they are equal, the MS is authenticated
13. BS sends the result R1 of the comparison to the MS
Authentication of the user
Protocol Flow
1) A random number called RS is chosen by AC
1a) AC uses K and RS to generate the session key (KS), using the TA11 algorithm
2) AC sends KS and RS to the base station
3) BS generates a random number called RAND1
4) BS computes the expected response (XRES1) and the derived cipher key (DCK1), using TA12
Authentication of the user
5) BS sends RS and RAND1 to MS
6) MS, using its own key K and RS, computes KS (the session key) with TA11, and also uses TA12 to compute RES1 and DCK1
7) MS sends RES1 to BS
8) BS compares XRES1 with RES1
Comparison
• Challenge response Protocol
Advantages:
• Simpler than public key techniques
• Less processing power required in the handset
Disadvantages:
• The network has to maintain a database of the secret keys of all the mobile stations it supports
Comparison
• Public key Protocol
Advantages:
• The network does not have to share secret keys with the MS
• The network does not have to maintain a database of the mobiles’ secret keys
Disadvantages:
• Requires high processing power in the mobile handsets to carry out the complex computations in real time
Hybrid Protocol
• Combines the challenge-response protocol with a Public key scheme.
• Here the AC also acts as the Certification authority
• AC has the public key PAC and private key SAC for a public key scheme.
• MS also has the public key pi and private key si for a public key scheme.
End to End Encryption Requirements
• Authentication
• Key Management
• Encryption Synchronisation for multimedia (e.g. Video)
Secret key Methods
Advantages:
• Less complex compared to public key methods
• Less processing power required for implementation
• Higher encryption rates than the public key techniques
Disadvantages:
• Difficult to manage keys
Public key Methods
Advantages:
• Easy to manage keys
• Capable of providing digital signatures
Disadvantages:
• More complex and time-consuming computations
• Not suitable for bulk encryption of user data
Combined Secret-key and Public-key Systems
• Encryption and Decryption of User Data (Private key technique)
• Session key Distribution (Public key technique)
• Authentication (Public key technique)
Possible Implementation
• Combined RSA and DES
• Encryption and Decryption of Data (DES)
• Session key distribution (RSA)
• Authentication (RSA and MD5 Hash Function)
• Combined Rabin’s Modular Square Root (RMSR) and SAFER
One Way Hashing Functions
• A one-way hash function, H(M), operates on an arbitrary-length pre-image M and returns a fixed-length value h:

h = H(M), where h is of length m.

The pre-image should contain some kind of binary representation of the length of the entire message. This technique overcomes a potential security problem resulting from messages of different lengths possibly hashing to the same value (MD-strengthening).
Characteristics of hash functions
• Given M, it is easy to compute h.
• Given h, it is hard to compute M such that H(M) = h.
• Given M, it is harder to find another message M′ such that H(M) = H(M′).
• MD5 Hashing Algorithm:

variable-length message → MD5 → 128-bit message digest
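These properties can be illustrated with MD5 from Python's hashlib (the pre-images below are arbitrary):

```python
import hashlib

# h = H(M): a fixed-length digest of an arbitrary-length pre-image.
M = b"an arbitrary-length pre-image M"
h = hashlib.md5(M).hexdigest()
print(len(h) * 4)   # digest length in bits -> 128

# Easy direction: recomputing h from M always gives the same value.
assert hashlib.md5(M).hexdigest() == h
# Even a one-character change to M produces a completely different digest.
assert hashlib.md5(b"an arbitrary-length pre-image m").hexdigest() != h
```

Inverting the function, i.e. recovering M from h or finding a second pre-image, is what is assumed to be computationally hard.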
Questions
1) Describe the Challenge Response Protocol for authentication and key agreement.
2) Describe the Challenge Response Protocol for GSM.
3) Describe the TETRA Protocol for authentication and key agreement.
4) Describe the authentication of the user in mobile communications and networking.
5) Describe the End-to-End encryption requirements, Secret Key Methods, Public Key Methods and possible implementations.
6) Explain the principle of public/private key encryption. How can such encryption schemes be used to authenticate a message and check integrity?
7) Describe different types of data encryption standards.