Turbo coding

Kalle Ruttik

Communications Laboratory

Helsinki University of Technology

April 24, 2007

Historical review

Introduction

The channel coding theorem suggests that a randomly generated code with an appropriate distribution is likely a good code if its block length is large.

Problem: decoding.

- For long block lengths, codes without structure are difficult to decode.

Fully random codes can be avoided: codes whose spectrum resembles the spectrum of a random code are also good codes. Such random-like codes can be generated by using an interleaver. The interleaver performs a permutation of the bit sequence. The permutation can be applied either to the information bit sequence or to the parity bits.

Parallel concatenated convolutional codes (PCCC)

Figure: Encoder. The data bits are fed to Encoder 1 directly and to Encoder 2 through the interleaver; both outputs go to the channel.

Decoding turbo codes

• A parallel-concatenated convolutional code cannot be decoded by a standard serial dynamic programming algorithm.

- The number of states to consider in a joint trellis evaluation of the two interleaved codes is the square of the number used by the forward-backward algorithm on one code.

- Changing a symbol in one part of the turbo-coded codeword affects the possible paths in this part in one code, and also in a distant part of the other code, where this bit is "interleaved" to.

- The optimal path in one constituent codeword does not have to be the optimal path in the other codeword.

Berrou approach

• Calculate the likelihood of each bit of the original dataword being 0 or 1 according to the trellis of the first code.

• The second decoder uses the likelihoods from the first decoder to calculate new probabilities of the received bits, but now according to the received sequence of the second encoder, y(2).

• The bit estimates from the second decoder are fed back into the first decoder.

• Instead of serially decoding each of the two trellises once, we decode both of them in a parallel, iterative fashion.

Decoding turbo codes

Figure: The ideas influencing the evolution of turbo coding: random-like codes, interleaving, combining several codes, pseudorandom codes, probability-reassessment-based decoding, and iterative decoding. According to Fig. 1 from Battail97.

Code as a constraint

• The bit estimates calculated based on the coder structure are a posteriori probabilities of the bits constrained by the code.

• Constraint means that among all possible bit sequences only some are allowed - they are the possible codewords. The codewords limit the possible bit sequences.

• The a posteriori probability is calculated over the probabilities of the possible codewords.

• The rule of conditional probability:

P(A|B) = P(A, B) / P(B)

where A corresponds to the event that a certain bit in the codeword is 1 (or zero), and B requires that the bit sequence is an allowable codeword.

Repetition code

Example: repetition code

• We have three samples c1, c2, c3.

• Taken one by one, each sample can be either zero or one.

• We have additional information: the samples are generated as a repetition code.

• Let us denote the valid configurations as S = {(c1, c2, c3)}.

• The set of possible codewords (the constraint set) is S = {(0, 0, 0), (1, 1, 1)}.

• In our case the a posteriori probability for the sample c2 = 1 is

p_2^post = Σ_{(c1,c2,c3)∈S, c2=1} P(c1, c2, c3) / Σ_{(c1,c2,c3)∈S} P(c1, c2, c3)

The numerator is the sum over all configurations in S such that c2 = 1; the denominator is the normalization, the sum of the probabilities of all configurations in S.


The possible codewords and their probabilities are p(0, 0, 0) and p(1, 1, 1).

p^post(c2 = 0) = Σ_{c2=0} p(c1, c2, c3) / (Σ_{c2=0} p(c1, c2, c3) + Σ_{c2=1} p(c1, c2, c3))

p^post(c2 = 1) = Σ_{c2=1} p(c1, c2, c3) / (Σ_{c2=0} p(c1, c2, c3) + Σ_{c2=1} p(c1, c2, c3))

In the repetition code there is only one possible codeword with c2 = 1 and one codeword with c2 = 0.

• If the prior probabilities are independent, the joint probability can be factorised:

P(c1, c2, c3) = P(c1) P(c2) P(c3)

• The values of the prior probabilities could be acquired by measurements. For example, if we are handling channel outputs, then p(c_k) = p(y_k | x_k).

• A numerical example: we have observed the samples and concluded that the samples have value 1 with the following probabilities:

P(c1 = 1) = 1/4, P(c2 = 1) = 1/2, P(c3 = 1) = 1/3,

where c_i stands for the i-th sample, i = 1, 2, 3. What is the probability that the second bit is one?

• The probability of the second sample being one is

p_2^post = p1 p2 p3 / (p1 p2 p3 + (1 - p1)(1 - p2)(1 - p3))

• In numerical values:

p_2^post = (1/4 · 1/2 · 1/3) / (1/4 · 1/2 · 1/3 + (1 - 1/4)(1 - 1/2)(1 - 1/3)) = (1/24) / (1/24 + 6/24) = 1/7 ≈ 0.1429

• The probability p_2(c2 = 0):

p_2(c2 = 0) = (1 - 1/4)(1 - 1/2)(1 - 1/3) / (1/4 · 1/2 · 1/3 + (1 - 1/4)(1 - 1/2)(1 - 1/3)) = (6/24) / (7/24) = 6/7 ≈ 0.8571 = 1 - p_2^post
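This computation is easy to verify by direct enumeration over the constraint set. A minimal Python sketch (the function and variable names are mine, not from the lecture):

```python
def posterior(bit_index, bit_value, constraint_set, priors):
    """A posteriori probability P(c_bit_index = bit_value | codeword in S).

    priors[i] is P(c_i = 1); the bits are assumed independent a priori.
    """
    def joint(word):
        # P(c1, c2, c3) = product of the independent priors
        p = 1.0
        for c, p_one in zip(word, priors):
            p *= p_one if c == 1 else 1.0 - p_one
        return p

    norm = sum(joint(w) for w in constraint_set)
    match = sum(joint(w) for w in constraint_set if w[bit_index] == bit_value)
    return match / norm

S = {(0, 0, 0), (1, 1, 1)}     # repetition-code constraint set
priors = [1/4, 1/2, 1/3]       # P(c_i = 1) from the example

print(posterior(1, 1, S, priors))   # 0.1428... = 1/7
print(posterior(1, 0, S, priors))   # 0.8571... = 6/7
```

The same function works for any constraint set, e.g. the parity-check set used later.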

• By using likelihood ratios we can simplify further:

p_2^post / (1 - p_2^post) = Σ_{(c1,c2,c3)∈S, c2=1} P(c1, c2, c3) / Σ_{(c1,c2,c3)∈S, c2=0} P(c1, c2, c3)

⇒ (p1 · p2 · p3) / ((1 - p1) · (1 - p2) · (1 - p3)) = p1/(1 - p1) · p2/(1 - p2) · p3/(1 - p3)

• In the logarithmic domain:

ln [p(c1 = 1) p(c2 = 1) p(c3 = 1)] / [p(c1 = 0) p(c2 = 0) p(c3 = 0)]
= ln p(c1 = 1)/p(c1 = 0) + ln p(c2 = 1)/p(c2 = 0) + ln p(c3 = 1)/p(c3 = 0)

L^post(c2) = L(c1) + L(c2) + L(c3)

• Probability 0.5 indicates that nothing is known about the bit: L(c2) = 0.

• Even if we do not know anything about the bit itself, if we know the probabilities of the other bits we can calculate the a posteriori probability of the unknown bit:

L^post(c2) = L(c1) + L(c3)

• The a posteriori probability calculation for a bit can thus be separated into two parts:
- a part describing the prior probability;
- a part imposed by the constraint of the code. This latter part is calculated only from the probabilities of the other bits.

Parity-check code

• Assume now that there must be an even number of ones among the three samples: S = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}.

• The first two bits are either 0 or 1; the third bit is calculated as the XOR of the first two bits.

• Assume the measured probability for the second bit is 0.5.

• The posterior probability of the second sample is

p_2^post = (p1 (1 - p3) + (1 - p1) p3) / (p1 p3 + p1 (1 - p3) + (1 - p1) p3 + (1 - p1)(1 - p3))

• The probability that the second sample is 1 is given by the probability that exactly one of the other two samples is 1.

• The probability that the second sample is 0 is given by the probability that both other samples are 0 or both of them are 1.

Example: parity-check code

The possible codewords and their probabilities are p(0, 0, 0), p(0, 1, 1), p(1, 0, 1) and p(1, 1, 0).

p^post(c2 = 0) = Σ_{c2=0} p(c1, c2, c3) / (Σ_{c2=0} p(c1, c2, c3) + Σ_{c2=1} p(c1, c2, c3))

p^post(c2 = 1) = Σ_{c2=1} p(c1, c2, c3) / (Σ_{c2=0} p(c1, c2, c3) + Σ_{c2=1} p(c1, c2, c3))

• The likelihood ratio for the a posteriori probability of the second sample is

p^post(c2 = 1) / p^post(c2 = 0) = p2 (p1 (1 - p3) + (1 - p1) p3) / ((1 - p2)(p1 p3 + (1 - p1)(1 - p3)))

= [1/2 (1/4 (1 - 1/3) + (1 - 1/4) 1/3)] / [(1 - 1/2)(1/4 · 1/3 + (1 - 1/4)(1 - 1/3))]

= (5/24) / (7/24) = 5/7 ≈ 0.714
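The same result can be checked numerically; a short sketch, under the same independence assumption (names are illustrative):

```python
p1, p2, p3 = 1/4, 1/2, 1/3   # P(c_i = 1) from the example

# Numerator: P(c2 = 1) * P(exactly one of c1, c3 is 1)
num = p2 * (p1 * (1 - p3) + (1 - p1) * p3)
# Denominator: P(c2 = 0) * P(c1 and c3 are equal)
den = (1 - p2) * (p1 * p3 + (1 - p1) * (1 - p3))

print(num / den)   # 0.7142... = 5/7
```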

Parity check in log domain

• In the logarithmic domain the prior for the bit and the information from the other bits can be separated:

ln [p2(c2 = 1) · p((c1 ⊕ c3) = 1)] / [p2(c2 = 0) · p((c1 ⊕ c3) = 0)]

= ln p2(c2 = 1)/p2(c2 = 0) + ln [p1(c1 = 1) p3(c3 = 0) + p1(c1 = 0) p3(c3 = 1)] / [p1(c1 = 0) p3(c3 = 0) + p1(c1 = 1) p3(c3 = 1)]

= L2(c2) + ln [p1(c1 = 1) p3(c3 = 0) + p1(c1 = 0) p3(c3 = 1)] / [p1(c1 = 0) p3(c3 = 0) + p1(c1 = 1) p3(c3 = 1)]
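The second term is the extrinsic contribution of the parity constraint. A small sketch of both the direct formula and the equivalent tanh ("box-plus") form; the sign in the tanh form follows from this lecture's convention L = ln p(c = 1)/p(c = 0), and the function names are mine:

```python
import math

def parity_extrinsic(L1, L3):
    """Extrinsic LLR for c2 from the constraint c2 = c1 XOR c3 (direct form)."""
    p1 = 1 / (1 + math.exp(-L1))          # p1(c1 = 1)
    p3 = 1 / (1 + math.exp(-L3))          # p3(c3 = 1)
    num = p1 * (1 - p3) + (1 - p1) * p3   # p(c1 XOR c3 = 1)
    den = (1 - p1) * (1 - p3) + p1 * p3   # p(c1 XOR c3 = 0)
    return math.log(num / den)

def boxplus(L1, L3):
    """Equivalent tanh form; the minus sign comes from L = ln p(1)/p(0)."""
    return -2 * math.atanh(math.tanh(L1 / 2) * math.tanh(L3 / 2))

print(parity_extrinsic(0.5, -1.0))   # 0.2273...
print(boxplus(0.5, -1.0))            # same value
```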

Probabilities in log domain

• Here we give the probability calculation formulas for the binary code, GF(2).

• The log-likelihood ratio (LLR) of c is

L(c) = ln p(c = 1)/p(c = 0) = ln p(c = 1)/(1 - p(c = 1))

p(c = 1) = e^{L(c)} / (1 + e^{L(c)}) = 1 / (1 + e^{-L(c)}) ⇒

p(c = 0) = 1 - p(c = 1) = 1 / (1 + e^{L(c)}) = e^{-L(c)} / (1 + e^{-L(c)})
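These mappings translate directly into code; a minimal sketch (helper names are mine):

```python
import math

def llr(p_one):
    """L(c) = ln p(c = 1)/p(c = 0) for a bit with P(c = 1) = p_one."""
    return math.log(p_one / (1.0 - p_one))

def prob_one(L):
    """Inverse mapping: p(c = 1) = 1 / (1 + exp(-L))."""
    return 1.0 / (1.0 + math.exp(-L))

L = llr(0.25)                # ln(1/3) = -1.0986...
print(prob_one(L))           # 0.25 recovered
print(1.0 - prob_one(L))     # p(c = 0) = exp(-L)/(1 + exp(-L)) = 0.75
```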

Incorporating probabilities from different encoders

• Often we have two or more independent ways to calculate the a posteriori probability of a bit.

• The bit estimates from different sources behave like a repetition code: all the estimates have to have the same bit value.

• Because all the estimates have to be either 0 or 1, in the log domain we can simply sum together the log-likelihood ratios from the different estimations.

Example: Single Parity Check Code

Decoding example ...

7. If the iterations have not finished:
- combine the a posteriori extrinsic information L_v^extr(d̂) from the vertical decoding with the prior information L(d̂) for each bit;
- go back to stage 2.
Otherwise:
- Combine all the information for the bit: the prior and the vertical and horizontal a posteriori extrinsic values,

L_d(d̂) = L_c(d̂) + L_v^extr(d̂) + L_h^extr(d̂)

- Compare the likelihood ratio of each bit with the decision level (0).

The soft output for the received signal corresponding to data d_i is

L(d̂_i) = L_c(x_i) + L(d_i) + L_h^extr(d̂_i)

Decode horizontally:

L_h^extr(d_1) = ln [p(d2 = 1) p(p1h = -1) + p(d2 = -1) p(p1h = 1)] / [p(d2 = -1) p(p1h = -1) + p(d2 = 1) p(p1h = 1)]
= 0.74 = new L(d̂_1)

L_h^extr(d_2) = +0.12 = new L(d̂_2)
L_h^extr(d_3) = -0.60 = new L(d̂_3)
L_h^extr(d_4) = -1.47 = new L(d̂_4)

Channel LLRs L_c(x):

+0.25 +2.0
+5.0  +1.0

Extrinsic information after 1st horizontal decoding:

+0.74 +0.12
-0.60 -1.47

Decode vertically:

L_v^extr(d_1) = +0.33 = new L(d̂_1)
L_v^extr(d_2) = +0.09 = new L(d̂_2)
L_v^extr(d_3) = +0.36 = new L(d̂_3)
L_v^extr(d_4) = -0.26 = new L(d̂_4)

Extrinsic information after 1st vertical decoding:

+0.33 +0.09
+0.36 -0.26

Soft output after 1st iteration, L(d̂) = L_c(x) + L_h^e(d) + L_v^e(d):

+0.25 +2.0    +0.74 +0.12    +0.33 +0.09    +1.31 +2.20
+5.0  +1.0  + -0.60 -1.47  + +0.36 -0.26  = +4.75 -0.74

We can see an iterative process:

1. Decode the first code and calculate the extrinsic information for each bit.
- In the first iteration the information from the other code is zero.
2. Decode the second code by using the extrinsic information from the first decoder.
3. Return to the first step by using the extrinsic information from the second decoder (a sketch of this loop follows below).
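A minimal sketch of this loop for a 2x2 single-parity-check product code as in the example above. The channel LLRs of the parity bits are not given in this excerpt, so Lp_h and Lp_v below are placeholder values, and the box-plus extrinsic rule is assumed for the component decoders; all names are mine:

```python
import math

def boxplus(La, Lb):
    """Extrinsic LLR of (a XOR b) under the convention L = ln p(1)/p(0)."""
    return -2.0 * math.atanh(math.tanh(La / 2.0) * math.tanh(Lb / 2.0))

# Channel LLRs of the data bits from the example: d1 d2 / d3 d4.
Lc = [[0.25, 2.0], [5.0, 1.0]]
# Channel LLRs of the parity bits: placeholders, not from the lecture.
Lp_h = [1.0, -1.5]   # horizontal parities: d1^d2, d3^d4
Lp_v = [0.5, -0.5]   # vertical parities:   d1^d3, d2^d4

Le_h = [[0.0, 0.0], [0.0, 0.0]]   # horizontal extrinsic values
Le_v = [[0.0, 0.0], [0.0, 0.0]]   # vertical extrinsic values

for _ in range(2):   # two iterations as an example
    for r in range(2):   # horizontal code: row r checks d[r][0] ^ d[r][1]
        Le_h[r][0] = boxplus(Lc[r][1] + Le_v[r][1], Lp_h[r])
        Le_h[r][1] = boxplus(Lc[r][0] + Le_v[r][0], Lp_h[r])
    for c in range(2):   # vertical code: column c checks d[0][c] ^ d[1][c]
        Le_v[0][c] = boxplus(Lc[1][c] + Le_h[1][c], Lp_v[c])
        Le_v[1][c] = boxplus(Lc[0][c] + Le_h[0][c], Lp_v[c])

# Soft output: channel value plus both extrinsic terms; decide on the sign.
L_out = [[Lc[r][c] + Le_h[r][c] + Le_v[r][c] for c in range(2)]
         for r in range(2)]
print(L_out)
```

Note how each extrinsic value is computed without the bit's own channel LLR and without the extrinsic value produced by the same component code; only "new" information is passed between the decoders.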

Turbo code

Parallel concatenated convolutional codes (PCCC)

Figure: Encoder. The data are fed to Encoder 1 directly and to Encoder 2 through the interleaver; both outputs go to the channel.

Figure: Decoder. Two SISO decoders exchange extrinsic information, SISO 1 to SISO 2 through the interleaver and SISO 2 back to SISO 1 through the deinterleaver; both receive channel values from the demodulator, and the decision is taken at the output of SISO 2.

• The encoder contains two or more systematic convolutional encoders.

• The constituent encoders encode the same data stream in interleaved order.

• The systematic bits are transmitted only once.

• In the receiver the extrinsic information for an information bit is calculated based on one constituent code and fed to the decoder of the other code.

Serial concatenated convolutional codes (SCCC)

Figure: Encoder. The output of the outer encoder (Encoder 1) is interleaved and fed to the inner encoder (Encoder 2), whose output goes to the channel.

Figure: Decoder. The inner SISO decoder (SISO 1) receives the channel values from the demodulator; the two SISO decoders exchange log-likelihoods through the deinterleaver and interleaver, and the decision is taken at the output of the outer decoder (SISO 2).

- The code is formed by concatenating two encoders.

- The output coded bit stream from the outer encoder is interleaved and fed to the inner encoder.

- The decoder:

- The inner decoder calculates the log-likelihoods of its information symbols and deinterleaves them.

- The outer code is decoded and log-likelihoods for the coded bits are calculated.

- The coded bits' log-likelihoods are interleaved and fed back to the inner decoder.

- After the decoding iterations, the decisions are made on the log-likelihoods of the information bits at the output of the outer decoder.

Algorithms for iterative turbo processing

Algorithms for Iterative (Turbo) Data Processing

Trellis-based detection algorithms:

- Symbol-by-symbol detection: MAP algorithm, with the log-MAP and max-log-MAP variants.
- Sequence detection: Viterbi algorithm, with SOVA and modified SOVA.

Requirements: accept soft inputs in the form of a priori probabilities or log-likelihood ratios, and produce APPs for the output data (Soft-Input Soft-Output).

- MAP: Maximum A Posteriori (symbol-by-symbol)
- SOVA: Soft-Output Viterbi Algorithm

Symbol by symbol detection

MAP algorithm

• The MAP algorithm operates in the probability domain.

• The probability of all codewords passing a particular edge from initial state i to final state j at stage k is

p(b_k, i, j) = (1 / p(Y_1^N)) Σ_{(b_k,i,j)} A_{k-1,i} · M_{k,i,j} · B_{k,j}

• When the probability is expressed as a likelihood value we have to deal with numbers in a very large range (overflows in computers).

Simplification of MAP: Log-MAP algorithm

• The Log-MAP algorithm is a transformation of MAP into the logarithmic domain.

Modification to MAP algorithm

• In the logarithmic domain the MAP algorithm is expressed with replaced computations:
- Multiplication is converted to addition.
- Addition is converted to the max*(·) operation:

max*(x, y) = log(e^x + e^y) = max(x, y) + log(1 + e^{-|x-y|})

• The terms for calculating the probabilities in the trellis are converted:

α_{k,i} = log(A_{k,i})
β_{k,j} = log(B_{k,j})
γ_{k,i,j} = log(M_{k,i,j})
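In code, max*(·) is one line plus the correction term; a minimal sketch:

```python
import math

def max_star(x, y):
    """max*(x, y) = log(exp(x) + exp(y)), computed in a stable form."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

print(max_star(1.0, 2.0))                       # 2.3132...
print(math.log(math.exp(1.0) + math.exp(2.0)))  # same value, naive form
```

The stable form never exponentiates a large positive number, which is exactly the numerical advantage of working in the log domain.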

• The complete log-MAP algorithm is

L(b_k) = log Σ_{b_k=1} A_{k-1,i} · M_{k,i,j} · B_{k,j} - log Σ_{b_k=0} A_{k-1,i} · M_{k,i,j} · B_{k,j}

= max*_{b_k=1} (α_{k-1,i} + γ_{k,i,j} + β_{k,j}) - max*_{b_k=0} (α_{k-1,i} + γ_{k,i,j} + β_{k,j})

α_{k,i} = log Σ_{i1} A_{k-1,i1} · M_{k,i1,i}
β_{k,j} = log Σ_{j1} M_{k+1,j,j1} · B_{k+1,j1}

Max-Log-MAP decoding algorithm

• In the summation of probabilities in the Log-MAP algorithm we use the max*(·) operation.

• The max*(·) operation requires converting the LLR value into the exponential domain and, after adding 1, moving back into the log domain.

• Simplifications:
- We can replace log(1 + e^{-|x-y|}) by a lookup table.
- We can skip the term ⇒ Max-Log-MAP.

α_{k,i} = log Σ_{i1} A_{k-1,i1} · M_{k,i1,i} = log Σ_{i1} e^{α_{k-1,i1} + γ_{k,i1,i}}

= max*_{∀i1} (α_{k-1,i1} + γ_{k,i1,i}) ≈ max_{∀i1} (α_{k-1,i1} + γ_{k,i1,i}) ⇒ Max-Log-MAP

β_{k,j} = log Σ_{j1} M_{k+1,j,j1} · B_{k+1,j1} = log Σ_{j1} e^{γ_{k+1,j,j1} + β_{k+1,j1}}

= max*_{∀j1} (γ_{k+1,j,j1} + β_{k+1,j1}) ≈ max_{∀j1} (γ_{k+1,j,j1} + β_{k+1,j1}) ⇒ Max-Log-MAP
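A sketch of the forward recursion with the exact max*(·) and with the Max-Log-MAP approximation; the trellis layout (gamma[k][i1][i] holding the log branch metric from state i1 to state i, with -inf for missing transitions) is an assumption of this sketch:

```python
import math

def max_star(x, y):
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def forward(gamma, n_states, approx=False):
    """Forward recursion: alpha[k][i] = log metric of state i at stage k.

    gamma[k][i1][i] is the log branch metric from state i1 to state i,
    or -inf where the trellis has no such transition.
    approx=True replaces max* with max (the Max-Log-MAP approximation).
    """
    combine = max if approx else max_star
    alpha = [[-math.inf] * n_states for _ in range(len(gamma) + 1)]
    alpha[0][0] = 0.0                       # the trellis starts in state 0
    for k, g in enumerate(gamma):
        for i in range(n_states):           # new state
            acc = -math.inf
            for i1 in range(n_states):      # previous state
                if g[i1][i] > -math.inf and alpha[k][i1] > -math.inf:
                    acc = combine(acc, alpha[k][i1] + g[i1][i])
            alpha[k + 1][i] = acc
    return alpha
```

The backward recursion for β is symmetric, running from the end of the trellis towards the beginning.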

Example: Max-Log-MAP approximation

Trellis with states 00, 01, 10, 11 and the following log-domain metrics:

α_0(0) = 0.0; α_1(0) = -0.5, α_1(2) = -0.3
γ_1(0,0) = -0.5, γ_1(0,2) = -0.3
γ_2(0,0) = -2.3, γ_2(0,2) = -5.0, γ_2(2,1) = -1.2, γ_2(2,3) = -2.3
α_2(0) = -2.8, α_2(1) = -1.5, α_2(2) = -5.5, α_2(3) = -2.6
γ_3(0,0) = -6.6, γ_3(0,2) = -6.0, γ_3(1,0) = -5.0, γ_3(1,2) = -2.7, γ_3(2,1) = -2.4, γ_3(2,3) = -2.5, γ_3(3,1) = -0.8, γ_3(3,3) = -1.2
α_3(0) = -6.5, α_3(1) = -3.4, α_3(2) = -6.5, α_3(3) = -3.8

α_{k,i} = log Σ_{i1} A_{k-1,i1} · M_{k,i1,i}

α_{3,0} = log (e^{α_{2,0} + γ_{3,0,0}} + e^{α_{2,1} + γ_{3,1,0}})

≈ max(-2.8 - 6.6, -1.5 - 5.0) = max(-9.4, -6.5) = -6.5
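The last step can be checked numerically; the exact log-MAP value differs from the Max-Log-MAP approximation only by the correction term:

```python
import math

exact = math.log(math.exp(-9.4) + math.exp(-6.5))   # log-MAP value
approx = max(-9.4, -6.5)                            # Max-Log-MAP value
print(exact, approx)   # -6.4465... versus -6.5
```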

Soft sequence detection

Soft-output Viterbi algorithm

Two modifications compared to the classical Viterbi algorithm:

• Ability to accept extrinsic information from the other decoder.
- The path metric is modified to account for the extrinsic information. This is similar to the metric calculation in the Max-Log-MAP algorithm.

• Modification for generating soft outputs.

SOVA

• For each state in the trellis the metric M_{k,i,j} is calculated for both merging paths.

• The path with the highest metric is selected to be the survivor.

• For the state (at this stage) a pointer to the previous state along the surviving path is stored.

• The following information, later used for calculating L(b_k | y), is stored:
- the metric difference Δ_k^s between the discarded and the surviving path;
- a binary vector of δ + 1 bits, indicating the last δ + 1 bits that generated the discarded path.

• After the maximum-likelihood (ML) path is found, the update sequences and metric differences are used to calculate L(b_k | y).

Calculation of L(b_k | y):

- For each bit b_k^ML on the ML path we try to find the path merging with the ML path that has, compared to b_k^ML, a different bit value b_k at stage k, and that has minimal metric distance to the ML path.

- We go through the δ + 1 merging paths that follow stage k, i.e. the differences Δ_i^s for i = k ... (k + δ).

- For each merging path in that set we trace back to find out which value of the bit b_k generated this path.

- If the bit b_k on this path is not b_k^ML and Δ_i^s is less than the current Δ_k^min, we set Δ_k^min = Δ_i^s.

- L(b_k | y) ≈ b_k^ML · min_{i = k ... k+δ, b_k^i ≠ b_k^ML} Δ_i^s
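A minimal sketch of this update rule, assuming a Viterbi pass has already produced the ML bit decisions and, for every merge stage i, the metric difference Δ_i^s together with the bits of the discarded path over the last δ + 1 stages (the container layout is mine):

```python
def sova_soft_outputs(ml_bits, delta_s, discarded_bits, delta):
    """L(b_k | y) ~ sign(b_k^ML) * minimal metric difference, as above.

    ml_bits[k]        : hard decision (0/1) on the ML path at stage k
    delta_s[i]        : metric difference at the merge point of stage i
    discarded_bits[i] : dict {stage j: bit} for the last delta+1 stages
                        of the path discarded at stage i
    """
    n = len(ml_bits)
    L = []
    for k in range(n):
        d_min = float("inf")
        # inspect the delta+1 merges that follow stage k
        for i in range(k, min(k + delta + 1, n)):
            # contributes only if the discarded path disagrees at stage k
            if discarded_bits[i].get(k, ml_bits[k]) != ml_bits[k]:
                d_min = min(d_min, delta_s[i])
        sign = 1 if ml_bits[k] == 1 else -1
        L.append(sign * d_min)
    return L
```

If no competing path within the window disagrees at stage k, the magnitude stays infinite, i.e. the bit is considered fully reliable within the traceback depth.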

Comparison of the soft decoding algorithms

MAP

• The MAP algorithm is the optimal decoding algorithm for the component codes.

• It finds the probability of each bit b_k being either +1 or -1 by summing the probabilities of all the codewords where the given bit is +1 and of all those where the bit is -1.

• Complex.

• Because of the exponents in the probability calculations, in practice the MAP algorithm often suffers from numerical stability problems.

LogMAP

• LogMAP is theoretically identical to MAP; only the calculations are made in the logarithmic domain.

• Multiplications are replaced by additions, and summations by the max*(·) operation.

• The numerical problems that occur in MAP are circumvented.

Max-Log-MAP

• Similar to LogMAP but replaces the max*(·) operation with taking the maximum.

• Because at each state in the forward and backward calculations only the path with the maximum value is considered, the probabilities are not calculated over all the codewords.
- In the recursive calculation of α and β only the best transition is considered.
- The algorithm gives the logarithm of the probability that the most likely path reaches the state.

• In MaxLogMAP, L(b_k | y) is a comparison of the probability of the most likely path giving b_k = -1 with that of the most likely path giving b_k = +1.
- In the calculation of the log-likelihood ratio only two codewords (two transitions) are considered: the best transition that would give +1 and the best transition that would give -1.

• MaxLogMAP performs worse than MAP or LogMAP.

SOVA

• The SOVA algorithm finds the ML path.
- The recursion used is identical to the one used for calculating α in the MaxLogMAP algorithm.

• Along the ML path a hard decision on the bit b_k is made.

• L(b_k | y) is the minimum metric difference between the ML path and the path that merges with the ML path and is generated with a different bit value b_k.
- In the L(b_k | y) calculation according to MaxLogMAP, one path is the ML path and the other is the most likely path that gives the different b_k.
- In SOVA the difference is calculated between the ML path and the most likely path that merges with the ML path and gives a different b_k. This merging path may not be the most likely one giving a different b_k.

• The output of SOVA is just more noisy compared to the MaxLogMAP output (SOVA does not have bias).

• When do SOVA and MaxLogMAP give the same output?
- The magnitude of the soft decisions of SOVA will be either identical to or higher than those of MaxLogMAP.
- If the most likely path that gives the different hard decision for b_k has survived and merges with the ML path, the two algorithms are identical.
- If that path does not survive, the path on which the different b_k is based is less likely than the path which should have been used.

Complexity of decoding algorithms

Algorithms complexity

Table: Comparison of complexity of different decoding algorithms

Operations   | MaxLogMAP      | LogMAP         | SOVA
max-ops      | 5 × 2^M - 2    | 5 × 2^M - 2    | 3(M + 1) + 2^M
additions    | 10 × 2^M + 11  | 10 × 2^M + 11  | 2 × 2^M + 8
mult. by ±1  | 8              | 8              | 8
bit comps    | -              | -              | 6(M + 1)
look-ups     | -              | 5 × 2^M - 2    | -

M is the length of the code memory. Table according to reference [1].

Algorithms complexity ...

If we assume that each operation is comparable, we can calculate the total number of operations per bit that each algorithm demands for decoding one code in one iteration.

Table: Number of required operations per bit for different decoding algorithms

memory (M) | MaxLogMAP | LogMAP | SOVA
2          | 77        | 95     | 55
3          | 137       | 175    | 76
4          | 257       | 335    | 109
5          | 497       | 655    | 166
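These totals follow from the formulas in the previous table; a short script reproducing them, assuming each operation counts equally:

```python
def ops_per_bit(M):
    s = 2 ** M   # number of trellis states
    maxlog = (5 * s - 2) + (10 * s + 11) + 8
    logmap = maxlog + (5 * s - 2)                      # adds the look-ups
    sova = (3 * (M + 1) + s) + (2 * s + 8) + 8 + 6 * (M + 1)
    return maxlog, logmap, sova

for M in (2, 3, 4, 5):
    print(M, ops_per_bit(M))   # (77, 95, 55), (137, 175, 76), ...
```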

References

1. P. Robertson, E. Villebrun, P. Hoeher, "Comparison of Optimal and Suboptimal MAP Decoding Algorithms", Proc. ICC 1995, pp. 1009-1013.