MAP decoding, BCJR algorithm

Apr 07, 2018

Fatih Genc
  • 8/4/2019 MAP decoding, BCJR algorithm

    1/25

    1

    MAP decoding: The BCJR algorithm

    Maximum a posteriori probability (MAP) decoding

    Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm (1974)

    Baum-Welch algorithm (1963?)*

    Decoder inputs:

Received sequence r (soft or hard)

A priori L-values: La(ul) = ln( P(ul = +1) / P(ul = −1) )

Decoder outputs:

A posteriori probability (APP) L-values: L(ul) = ln( P(ul = +1 | r) / P(ul = −1 | r) )

L(ul) > 0: ul is most likely to be +1

L(ul) < 0: ul is most likely to be −1
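As a small illustration of these definitions, here is a hypothetical conversion between bit probabilities and L-values (the function names are my own, not from the slides):

```python
import math

def l_value(p_plus1):
    # L(u) = ln( P(u = +1) / P(u = -1) ), with P(u = -1) = 1 - P(u = +1)
    return math.log(p_plus1 / (1.0 - p_plus1))

def prob_plus1(L):
    # inverse mapping: P(u = +1) = e^L / (1 + e^L)
    return 1.0 / (1.0 + math.exp(-L))
```

The sign rule above follows directly: L(u) > 0 exactly when P(u = +1) > 1/2.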


    BCJR (cont.)


    BCJR (cont.)


    BCJR (cont.)


    BCJR (cont.)

    AWGN


    MAP algorithm

Initialize forward and backward recursions α0(s) and βN(s)

Compute branch metrics {γl(s′, s)}

Carry out forward recursion {αl+1(s)} based on {αl(s)}

Carry out backward recursion {βl−1(s)} based on {βl(s)}

Compute APP L-values

Complexity: approximately 3× Viterbi

Requires detailed knowledge of the SNR; Viterbi just maximizes r·v and does not require exact knowledge of the SNR
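The steps above can be sketched in code. The following is a minimal log-MAP forward/backward pass for a hypothetical two-state, rate-1/2 trellis (state = previous input bit, branch output bits (u, u XOR s′), BPSK-mapped); it is an illustration of the recursion structure, not the lecture's exact example:

```python
import math

def max_star(a, b):
    # Jacobian logarithm: ln(e^a + e^b) = max(a,b) + ln(1 + e^-|a-b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def bcjr_llrs(r, Lc):
    """Log-MAP BCJR on a toy 2-state trellis.
    r : list of received pairs (r1, r2); Lc = 4*Es/N0 (channel reliability).
    Returns the APP L-value L(ul) for every trellis step."""
    N, NEG = len(r), -1e9
    bits = lambda sp, u: (u, u ^ sp)          # branch output bits
    bpsk = lambda b: 2 * b - 1                # 0 -> -1, 1 -> +1
    # branch metrics gamma*_l(s', u) for the AWGN channel (correlation form)
    g = [[[0.5 * Lc * sum(ri * bpsk(v) for ri, v in zip(r[l], bits(sp, u)))
           for u in (0, 1)] for sp in (0, 1)] for l in range(N)]
    # forward recursion alpha* (trellis starts in state 0)
    a = [[NEG, NEG] for _ in range(N + 1)]
    a[0][0] = 0.0
    for l in range(N):
        for sp in (0, 1):
            for u in (0, 1):                  # next state = input bit u
                a[l + 1][u] = max_star(a[l + 1][u], a[l][sp] + g[l][sp][u])
    # backward recursion beta* (open trellis end: all final states equal)
    b = [[0.0, 0.0] for _ in range(N + 1)]
    for l in range(N - 1, -1, -1):
        for sp in (0, 1):
            b[l][sp] = max_star(g[l][sp][0] + b[l + 1][0],
                                g[l][sp][1] + b[l + 1][1])
    # APP L-values: max* over u=1 branches minus max* over u=0 branches
    L = []
    for l in range(N):
        num = max_star(a[l][0] + g[l][0][1] + b[l + 1][1],
                       a[l][1] + g[l][1][1] + b[l + 1][1])
        den = max_star(a[l][0] + g[l][0][0] + b[l + 1][0],
                       a[l][1] + g[l][1][0] + b[l + 1][0])
        L.append(num - den)
    return L
```

Note how the channel reliability Lc enters every branch metric: this is exactly the "detailed knowledge of SNR" the slide refers to.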


    BCJR (cont.)

[Trellis diagram: information bits followed by termination bits]


    BCJR (cont.)


    BCJR (cont.)


    Log-MAP algorithm

Initialize forward and backward recursions α0*(s) and βN*(s)

Compute branch metrics {γl*(s′, s)}

Carry out forward recursion {αl+1*(s)} based on {αl*(s)}

Carry out backward recursion {βl−1*(s)} based on {βl*(s)}

Compute APP L-values

    Advantages over MAP algorithm:

    Easier to implement

    Numerically more stable


    Max-log-MAP algorithm

Replace max* by max, i.e., remove the table look-up correction term

Advantage: Simpler and much faster; forward and backward passes are equivalent to a Viterbi decoder

Disadvantage: Less accurate, but the correction term is limited in size by ln(2)

Can improve accuracy by scaling with an SNR-(in)dependent scaling factor*
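The correction term being discarded can be written down directly; a small sketch (function name is my own):

```python
import math

def max_star(x, y):
    # exact Jacobian logarithm: max*(x, y) = ln(e^x + e^y)
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

# max-log-MAP uses plain max(x, y); the discarded correction term
# ln(1 + e^-|x-y|) is largest when x == y, where it equals ln(2)
```

In a log-MAP implementation the correction term is typically stored in a small look-up table over |x − y|; dropping it is what turns log-MAP into max-log-MAP.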


    Example: log-MAP


Example: log-MAP

Assume Es/N0 = 1/4 = −6.02 dB; R = 3/8, so Eb/N0 = 2/3 = −1.76 dB

[Trellis diagram: received L-values on the branches and log-MAP forward/backward node metrics]

Example: log-MAP (cont.)

Assume Es/N0 = 1/4 = −6.02 dB; R = 3/8, so Eb/N0 = 2/3 = −1.76 dB

[Same trellis diagram, now with the resulting APP L-values: +0.48, +0.62, −1.02]


Example: Max-log-MAP

Assume Es/N0 = 1/4 = −6.02 dB; R = 3/8, so Eb/N0 = 2/3 = −1.76 dB

[Trellis diagram: received L-values on the branches and max-log-MAP forward/backward node metrics]


Example: Max-log-MAP (cont.)

Assume Es/N0 = 1/4 = −6.02 dB; R = 3/8, so Eb/N0 = 2/3 = −1.76 dB

[Same trellis diagram, now with the resulting APP L-values: −0.07, +0.10, −0.40]


    Punctured convolutional codes

Recall that an (n, k) convolutional code has a decoder trellis with 2^k branches going into each state

More complex decoding

Solutions:

Bit-level encoders

Syndrome trellis decoding (Riedel)*

Punctured codes:

Start with a low-rate convolutional mother code (rate 1/n?)

Puncture (delete) some code bits according to a predetermined pattern

Punctured bits are not transmitted. Hence, the code rate is increased, but the free distance of the code could be reduced

Decoder inserts dummy bits with a neutral metric contribution
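As a sketch of how puncturing and depuncturing might look in code (the period-2 pattern here is illustrative, chosen to turn a rate-1/2 mother code into rate 2/3; it is not necessarily the pattern used in the slides):

```python
# period-2 puncturing pattern over the two coded streams:
# at even steps transmit both code bits, at odd steps transmit only the first
PATTERN = [(1, 1), (1, 0)]

def puncture(pairs):
    """pairs: list of (v1, v2) mother-code outputs, one pair per trellis step."""
    tx = []
    for l, (v1, v2) in enumerate(pairs):
        p1, p2 = PATTERN[l % 2]
        if p1: tx.append(v1)
        if p2: tx.append(v2)
    return tx

def depuncture(tx, nsteps):
    """Re-insert dummy values (zero LLR = neutral metric) at punctured slots."""
    it = iter(tx)
    out = []
    for l in range(nsteps):
        p1, p2 = PATTERN[l % 2]
        v1 = next(it) if p1 else 0.0
        v2 = next(it) if p2 else 0.0
        out.append((v1, v2))
    return out
```

Per puncturing period, 2 information bits produce 4 mother-code bits of which 3 are sent, giving rate 2/3; a zero L-value at a punctured position contributes nothing to any branch metric, which is exactly the "neutral" dummy the slide mentions.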


    Example: Rate 2/3 punctured from rate 1/2

The punctured code is also a convolutional code

dfree = 3


    Example: Rate 3/4 punctured from rate 1/2

    dfree = 3


    More on punctured convolutional codes

Rate-compatible punctured convolutional (RCPC) codes:

Used for applications that need to support several code rates, e.g., adaptive coding or hybrid ARQ

Sequence of codes is obtained by repeated puncturing

Advantage: One decoder can decode all codes in the family

Disadvantage: Resulting codes may be sub-optimum

Puncturing patterns:

Usually periodic puncturing patterns

Found by computer search

Care must be exercised to avoid catastrophic encoders


    Best punctured codes

[Table: best punctured codes, listing puncturing patterns and octal generators (entries include 5, 7, 27, 75)]


Tailbiting convolutional codes

Purpose: Avoid the terminating tail (rate loss) and maintain a uniform level of protection

Note: Cannot avoid distance loss completely unless the length is sufficiently large. As the length gets larger, the minimum distance approaches the free distance of the convolutional code

Codewords can start in any state: this gives 2^ν times as many codewords

However, each codeword must end in the same state that it started from: this gives 2^−ν times as many codewords

Thus, the code rate is equal to the encoder rate

Tailbiting codes are increasingly popular for moderate-length purposes

Some of the best known linear block codes are tailbiting codes

Tables of optimum tailbiting codes are given in the book

DVB: Turbo codes with tailbiting component codes


    Example: Feedforward encoder

Feedforward encoder: Always possible to find an information vector that ends in the proper state (inspect the last m k-bit input tuples)
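For a feedforward encoder this is particularly simple: preload the shift register with the last m information bits, and the encoder necessarily ends in its starting state. A sketch for the memory-2, rate-1/2 encoder with octal generators (5, 7), a standard textbook example that is not necessarily the one on this slide:

```python
def tailbiting_encode(u):
    # generators (1 + D^2, 1 + D + D^2), octal (5, 7), memory m = 2
    state = [u[-1], u[-2]]            # preload register with the last m bits
    start = list(state)
    out = []
    for b in u:
        s1, s2 = state
        out += [b ^ s2, b ^ s1 ^ s2]  # the two code bits for this input
        state = [b, s1]               # shift the register
    assert state == start             # tailbiting: ends where it started
    return out
```

Since no tail bits are transmitted, an input of length L produces exactly 2L code bits: the code rate equals the encoder rate 1/2.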


Example: Feedback encoder

Feedback encoder: Not always possible, for every length, to construct a tailbiting code

For each u: Must find a unique starting state

L* = 6: not OK

L* = 5: OK

In general, L* should not have the length of a zero-input-weight cycle as a divisor


Circular trellis

Decoding of tailbiting codes:

Try all possible starting states (multiplies complexity by 2^ν), i.e., run the Viterbi algorithm for each of the 2^ν subcodes and compare the best paths from each subcode

Suboptimum Viterbi: Initialize an arbitrary state at time 0 with zero metric and find the best ending state. Continue one round from there with the best subcode

MAP: Similar
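A sketch of the brute-force approach on a toy two-state trellis (state = previous input bit, branch output bits (u, u XOR s′), BPSK-mapped; my own construction, with ν = 1 so there are only 2 subcodes to try):

```python
def viterbi_tailbiting(r):
    """Try Viterbi from every starting state; keep only paths that
    end in their starting state, and return the overall best one."""
    N = len(r)
    best_metric, best_path = None, None
    for s0 in (0, 1):                         # one subcode per starting state
        metric = {s0: 0.0}
        path = {s0: []}
        for l in range(N):
            new_metric, new_path = {}, {}
            for sp, m in metric.items():
                for u in (0, 1):
                    # BPSK correlation branch metric for output bits (u, u^sp)
                    bm = sum(ri * (2 * v - 1)
                             for ri, v in zip(r[l], (u, u ^ sp)))
                    if u not in new_metric or m + bm > new_metric[u]:
                        new_metric[u] = m + bm        # next state = u
                        new_path[u] = path[sp] + [u]
            metric, path = new_metric, new_path
        # tailbiting constraint: the surviving path must end in state s0
        if best_metric is None or metric[s0] > best_metric:
            best_metric, best_path = metric[s0], path[s0]
    return best_path
```

Each subcode run is an ordinary Viterbi pass, which is why the complexity grows by the factor 2^ν rather than changing in kind.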