Transcript
  • Slide 1/51

    EEEC383 Communication Systems

    Cyclic Codes and Convolutional Codes

  • Slide 2/51

    Topics today

    Cyclic codes

     –  systematic and non-systematic codes
     –  generating codes: generator polynomials
     –  decoding: syndrome decoding
     –  encoding/decoding circuits realized by shift registers

    Convolutional codes

     –  presenting codes: convolutional encoder, generator sequences

    Some topics in mod-2 arithmetic

  • Slide 3/51

    Block and convolutional coding

    Block coding: mapping of source bits of length k into (binary)
    channel code words of length n; the (n,k) block coder takes k bits
    in and puts n bits out

    Binary coding produces 2^k code words of length n. The extra bits
    in the code words are used for error detection/correction

    Block and convolutional codes:

     –  (n,k) block codes: the encoder output of n bits depends only on
        the k input bits
     –  (n,k,L) convolutional codes: each source bit influences n(L+1)
        encoder output bits
     –  n(L+1) is the constraint length
     –  L is the memory depth

    The essential difference of block and convolutional coding is in
    the simplicity of design of the encoding and decoding circuits

  • Slide 4/51

    Why cyclic codes?

    For practical applications rather large n and k must be used. This
    is because, in order to correct up to t errors, the number of
    syndromes 2^(n-k) must not be smaller than the number of error
    patterns:

        2^(n-k) - 1 ≈ C(n,1) + C(n,2) + ... + C(n,t) = sum_{i=1..t} C(n,i)

    where C(n,i) is the binomial coefficient. Hence

        1 - R_C ≈ (1/n) log2[ sum_{i=1..t} C(n,i) ],
        note: q = n - k = n(1 - R_C)

    Hence for R_C = k/n ≈ 1, large n and k must be used (next slide)

    Cyclic codes are (n,k) block codes (k bits in, n bits out) that are

     –  linear: sum of any two code words is a code word
     –  cyclic: any cyclic shift of a code word produces another code word

    Advantages: encoding, decoding and syndrome computation are easy to
    realize by shift registers

  • Slide 5/51

    Example

    Consider a relatively high SNR channel such that only 1 or 2 bit
    errors are likely to happen, and consider the ratio

        epsilon = log2[ C(n,1) + C(n,2) ] / (n - k)

    i.e. the number of check bits required to index all 1- and 2-bit
    error patterns divided by the number of check bits n - k of the
    (n,k) block coder (R_C = k/n)

    Take a constant code rate of R_C = k/n = 0.8 and consider epsilon
    with some larger values of n and k: epsilon shrinks as n grows
    (see the sketch below)

    This demonstrates that long codes are more advantageous when a high
    code rate and high error correction capability are required
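    A quick numerical sketch of this trend in Python (the chosen n
    values are illustrative):

        from math import comb, log2

        # Check bits needed to index all 1- and 2-bit error patterns,
        # relative to the q = n - k check bits available at rate R_C.
        def epsilon(n: int, rc: float = 0.8) -> float:
            q = n - round(rc * n)               # number of check bits
            patterns = comb(n, 1) + comb(n, 2)  # 1- and 2-bit error patterns
            return log2(patterns) / q

        for n in (10, 50, 250, 500):
            print(f"n={n:3d}  epsilon={epsilon(n):.2f}")
        # prints 2.89, 1.03, 0.30, 0.17: epsilon falls below one only
        # for long codes at this high code rate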

  • Slide 6/51

    Some block codes that can be realized by cyclic codes

    (n,1) Repetition codes. High coding gain (minimum distance n), but
    very low rate: 1/n

    (n,k) Hamming codes. Detect two errors and correct one error.
    n = 2^m - 1, k = n - m, m >= 3

    Maximum-length codes. For every integer k >= 3 there exists a
    maximum-length code with n = 2^k - 1 and d_min = 2^(k-1)

    BCH codes. For every integer m >= 3 there exists a code with
    n = 2^m - 1, k >= n - mt and d_min >= 2t + 1, where t is the error
    correction capability

    (n,k) Reed-Solomon codes. Work with symbols: a symbol consists of m
    bits that are encoded to yield code words of n symbols. For these
    codes n = 2^m - 1, the number of check symbols is n - k = 2t, and
    d_min = 2t + 1

    Nowadays BCH and RS codes are very popular due to large d_min, a
    large number of codes, and easy generation

    Code selection criteria: number of codes, correlation properties,
    code gain, code rate, error correction/detection properties

    Task: find out from the literature what is meant by dual codes!

  • Slide 7/51

    Defining cyclic codes: code polynomial

    An (n,k) linear code X is called a cyclic code when every cyclic
    shift of a code word X, as for instance X', is also a code word,
    e.g.

        X  = (x_0 x_1 x_2 ... x_{n-2} x_{n-1})
        X' = (x_{n-1} x_0 x_1 ... x_{n-3} x_{n-2})

    Each cyclic code word has the associated code polynomial

        X(p) = x_0 + x_1 p + x_2 p^2 + ... + x_{n-1} p^{n-1}

    Note that the (n,k) code vector has a polynomial of degree n-1 or
    less. The mapping between code vector and code polynomial is
    one-to-one, e.g. they specify the same code uniquely

    Manipulation of the associated polynomial is done in a Galois field
    (for instance GF(2)) having the elements {0,1}, where operations
    are performed mod-2

    For each cyclic code there exists only one generator polynomial,
    whose degree equals the number of check bits q = n - k in the
    encoded word

  • Slide 8/51

    An example of a (7,4) cyclic code,
    generator polynomial G(p) = 1 + p + p^3

    [Table: message polynomials M(p) and the corresponding
    non-systematic code words X(p) = M(p)G(p); for instance
    M(p) = 1 + p gives X(p) = (1 + p)(1 + p + p^3) = 1 + p^2 + p^3 + p^4]
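    A minimal GF(2) sketch of this non-systematic encoding (the
    bit-list representation, lowest degree first, is a convenience
    here):

        # X(p) = M(p) G(p) over GF(2): XOR-accumulate shifted copies.
        def gf2_mul(a: list[int], b: list[int]) -> list[int]:
            out = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                if ai:
                    for j, bj in enumerate(b):
                        out[i + j] ^= bj
            return out

        G = [1, 1, 0, 1]           # G(p) = 1 + p + p^3
        M = [1, 1]                 # M(p) = 1 + p
        print(gf2_mul(M, G))       # [1, 0, 1, 1, 1] -> 1 + p^2 + p^3 + p^4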

  • Slide 9/51

    The common factor of cyclic codes

    GF(2) operations (XOR and AND):

        Modulo-2 Addition        Modulo-2 Multiplication

          + | 0 1                  * | 0 1
          0 | 0 1                  0 | 0 0
          1 | 1 0                  1 | 0 1

    Cyclic codes have a common factor p^n + 1. In order to see this we
    consider summing two (unity shifted) cyclic code vectors:

        X'(p)  = x_{n-1} + x_0 p + x_1 p^2 + ... + x_{n-2} p^{n-1}
                 (right rotated)
        p X(p) = x_0 p + x_1 p^2 + ... + x_{n-2} p^{n-1} + x_{n-1} p^n
                 (right shifted by multiplication)

    The question is how to make a cyclic code from the multiplied code.
    Adding the last two equations together reveals the common factor:

        p X(p) + X'(p) = x_{n-1} p^n + x_{n-1} = x_{n-1} (p^n + 1)

  • Slide 10/51

    Factoring the cyclic code generator polynomial

    Any factor of p^n + 1 with the degree q = n - k generates an (n,k)
    cyclic code

    Example: Consider the polynomial p^7 + 1. This can be factored as

        p^7 + 1 = (1 + p)(1 + p + p^3)(1 + p^2 + p^3)

    For instance the factor 1 + p + p^3 or 1 + p^2 + p^3 can be used to
    generate a unique cyclic code. For the message polynomial 1 + p^2
    the following encoded word is generated:

        (1 + p^2)(1 + p + p^3) = 1 + p + p^2 + p^5

    and the respective code vector (of length n, polynomial degree at
    most n-1) is

        (1 1 1 0 0 1 0)

  • Slide 11/51

    Obtaining a cyclic code from another cyclic code

    A unity cyclic shift is obtained by multiplication by p, where
    after division by the common factor yields a cyclic code:

        X'(p) = p X(p) mod (p^n + 1)

    and by induction, any cyclic shift i is obtained by

        X^(i)(p) = p^i X(p) mod (p^n + 1)

    Example:

        101 -> X(p) = 1 + p^2
        p X(p) = p + p^3        (not a three-bit code:
                                 divide by the common factor p^3 + 1)
        p X(p) mod (p^3 + 1) = 1 + p -> 110

    The important point is that division mod p^n + 1 and multiplication
    by the generator polynomial are both enabled by tapped shift
    registers.
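    The unity shift can be sketched directly as polynomial arithmetic
    (same bit-list convention, lowest degree first):

        # X'(p) = p X(p) mod (p^n + 1) via GF(2) long division.
        def gf2_mod(num: list[int], den: list[int]) -> list[int]:
            num = num[:]
            for i in range(len(num) - len(den), -1, -1):
                if num[i + len(den) - 1]:      # leading term present
                    for j, dj in enumerate(den):
                        num[i + j] ^= dj
            return num[:len(den) - 1]          # remainder

        n = 3
        common = [1] + [0] * (n - 1) + [1]     # p^n + 1
        X = [1, 0, 1]                          # 101 -> 1 + p^2
        print(gf2_mod([0] + X, common))        # [1, 1, 0] -> 110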

  • Slide 12/51

    Using shift registers for multiplication

    The figure shows a shift register that realizes multiplication by
    1 + p^2 + p^3

    In practice, multiplication can be realized by two equivalent topologies:

  • Slide 13/51

    Example: multiplication by using a shift register

    [Figure: step-by-step table of the shift register contents and the
    output bit; the taps are determined by G(p) = 1 + p^2 + p^3, and
    adding the dashed feedback line would enable division by 1 + p^n]

    Word to be encoded: M(p) = 1 + p; the register computes the encoded
    word

        (1 + p)(1 + p^2 + p^3) = 1 + p + p^2 + p^4

  • Slide 14/51

    Examples of cyclic code generator polynomials

    The generator polynomial for an (n,k) cyclic code is defined by

        G(p) = 1 + g_1 p + ... + g_{q-1} p^{q-1} + p^q,   q = n - k

    and G(p) is a factor of p^n + 1. Any factor of p^n + 1 that has the
    degree q may serve as the generator polynomial. We noticed that a
    code is generated by the multiplication

        X(p) = M(p) G(p)

    where M(p) is a block of k message bits. Hence this gives a
    criterion to select the generator polynomial, e.g. it must be a
    factor of p^n + 1

    Only a few of the possible generator polynomials yield high quality
    codes (in terms of their minimum Hamming distance)

    Some cyclic codes: [table of example generator polynomials]

  • Slide 15/51

    Systematic cyclic codes

    Define the length q = n - k check vector C and the length-k message
    vector M by

        M(p) = m_0 + m_1 p + ... + m_{k-1} p^{k-1}
        C(p) = c_0 + c_1 p + ... + c_{q-1} p^{q-1}

    Thus the systematic code word polynomial (of degree n-1 or less) is

        X(p) = p^{n-k} M(p) + C(p)
             = m_0 p^{n-k} + ... + m_{k-1} p^{n-1}     (message bits)
               + c_0 + c_1 p + ... + c_{q-1} p^{q-1}   (check bits)

    The check bits are determined by

        C(p) = p^{n-k} M(p) mod G(p)

    Question: why do the p^{n-k} M(p) terms still denote the message
    bits?

  • Slide 16/51

    Determining check-bits

    Prove that the check bits can be calculated from the message bits
    M(p) by

        C(p) = mod[ p^{n-k} M(p) / G(p) ]      (compare: 10/4 = 2 + 2/4)

    Division yields a quotient Q(p) and the remainder C(p):

        p^{n-k} M(p) / G(p) = Q(p) + C(p) / G(p)

    Adding C(p)/G(p) to both sides (in mod-2 arithmetic
    C/G + C/G = 0) gives

        [ p^{n-k} M(p) + C(p) ] / G(p) = Q(p)
        =>  X(p) = p^{n-k} M(p) + C(p) = Q(p) G(p)

    which must be a systematic code word (message and check parts),
    based on its definition on the previous slide

    Example: (7,4) cyclic code, G(p) = 1 + p^2 + p^3:

        M(p) = p + p^3
        p^{n-k} M(p) = p^3 (p + p^3) = p^4 + p^6
        p^{n-k} M(p) / G(p) = (1 + p^2 + p^3) + 1/G(p)
        =>  Q(p) = 1 + p^2 + p^3,  C(p) = 1
        X(p) = p^{n-k} M(p) + C(p) = 1 + p^4 + p^6
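    A sketch of this systematic encoding (the division helper repeats
    the GF(2) long division used earlier):

        def gf2_mod(num, den):
            num = num[:]
            for i in range(len(num) - len(den), -1, -1):
                if num[i + len(den) - 1]:
                    for j, dj in enumerate(den):
                        num[i + j] ^= dj
            return num[:len(den) - 1]

        n, k = 7, 4
        G = [1, 0, 1, 1]                   # G(p) = 1 + p^2 + p^3
        M = [0, 1, 0, 1]                   # M(p) = p + p^3
        C = gf2_mod([0] * (n - k) + M, G)  # p^{n-k} M(p) mod G(p)
        X = C + M                          # check bits, then message bits
        print(C, X)  # [1, 0, 0] [1, 0, 0, 0, 1, 0, 1] -> 1 + p^4 + p^6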

  • Slide 17/51

    Example: Encoding of systematic cyclic codes

  • Slide 18/51

    Circuit for encoding systematic cyclic codes

    We noticed earlier that cyclic codes can be generated by using
    shift registers whose feedback coefficients are determined directly
    by the generator polynomial

    For cyclic codes the generator polynomial is of the form

        G(p) = 1 + g_1 p + g_2 p^2 + ... + g_{q-1} p^{q-1} + p^q

    In the circuit, the message first flows to the transmitter with the
    feedback switch set to '1'; after that the check-bit switch is
    turned on and the feedback switch is set to '0', enabling the check
    bits to be shifted out

  • Slide 19/51

    Decoding cyclic codes

    Every valid received code word R(p) must be a multiple of G(p);
    otherwise an error has occurred. (Assume that the probability for
    noise to convert code words into other valid code words is very
    small.)

    Therefore dividing R(p) by G(p) and considering the remainder as a
    syndrome can reveal whether an error has happened, and sometimes
    also in which bit, depending on the code strength

    Division is accomplished by a shift register with reversed tap
    order

    The error syndrome (of degree n-k-1 or less) is therefore

        S(p) = mod[ R(p) / G(p) ]

    This can be expressed also in terms of the error E(p) and the code
    word X(p):

        R(p) = X(p) + E(p)

    hence, since X(p) is divisible by G(p),

        S(p) = mod[ (X(p) + E(p)) / G(p) ] = mod[ E(p) / G(p) ]
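    A small sketch of the syndrome test, continuing the (7,4) example
    with the gf2_mod helper from the previous sketch (a nonzero
    remainder flags an error, and it depends only on E(p)):

        G = [1, 0, 1, 1]                    # G(p) = 1 + p^2 + p^3
        X = [1, 0, 0, 0, 1, 0, 1]           # valid code word from above
        E = [0, 0, 0, 0, 0, 1, 0]           # single-bit error at p^5
        R = [x ^ e for x, e in zip(X, E)]   # received word
        print(gf2_mod(X, G))                # [0, 0, 0] -> no error
        print(gf2_mod(R, G))                # [1, 1, 0] -> error detected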

  • Slide 20/51

    Decoding cyclic codes: example

    Using the notation of this example:

        s(x) = mod[ e(x) / g(x) ]      (16.20)

  • Slide 21/51

    Decoding cyclic codes (cont.)

    [Table 16.6: error patterns and the corresponding syndromes
    s(x) = mod[ r(x) / g(x) ] for the generator g(x)]

  • Slide 22/51

    Decoding circuit for (7,4) code

    [Figure: syndrome-computing shift register; received code in,
    syndrome out, switch positions "0" and "1"]

    While first receiving the code, the switch is set to "0"

    The shift register is stepped until all the received code bits have
    entered the register

    This leaves the 3-bit syndrome (n - k = 3) in the register

    Then the switch is turned to direction "1", which drives the
    syndrome out of the register

  • Slide 23/51

    Convolutional coding

    Block codes are memoryless

    Convolutional codes have memory that utilizes previous bits to
    encode or decode the following bits

    Convolutional codes are specified by n, k and the constraint
    length, which is the maximum number of information symbols upon
    which an output symbol depends

    Thus they are denoted by (n,k,L), where L is the code memory depth

    Convolutional codes are commonly used in applications that require
    relatively good performance with low implementation cost

    Convolutional codes are encoded by circuits based on shift
    registers and decoded by several methods such as

      Viterbi decoding, which is a maximum likelihood method
      Sequential decoding (performance depends on decoder complexity)
      Feedback decoding (simplified hardware, lower performance)

  • Slide 24/51

    Example: convolutional encoder

        x'_j  = m_j ⊕ m_{j-1} ⊕ m_{j-2}
        x''_j = m_j ⊕ m_{j-2}

        X_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

    This is an (n,k,L) = (2,1,2) encoder; the register reads the
    message bits in a serial manner

    Thus the generated code word is a function of the input and of the
    state of the encoder, which acts as a finite-state machine

    In this (n,k,L) = (2,1,2) encoder each message bit influences a
    span of n(L+1) = 6 successive output bits, which is the code
    constraint length
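    A sketch of this encoder as a two-bit shift register:

        # x'_j = m_j ^ m_{j-1} ^ m_{j-2}, x''_j = m_j ^ m_{j-2},
        # outputs interleaved serially as on the slide.
        def encode_212(msg: list[int]) -> list[int]:
            m1 = m2 = 0                # register: m_{j-1}, m_{j-2}
            out = []
            for m in msg:
                out += [m ^ m1 ^ m2, m ^ m2]
                m1, m2 = m, m1         # shift the register
            return out

        print(encode_212([1, 0, 1, 1, 0, 0]))
        # [1,1, 1,0, 0,0, 0,1, 0,1, 1,1] (trailing zeros flush the register)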

  • Slide 25/51

    (3,2,1) Convolutional encoder

        x'_j   = m_j ⊕ m_{j-2} ⊕ m_{j-3}
        x''_j  = m_j ⊕ m_{j-1} ⊕ m_{j-3}
        x'''_j = m_j ⊕ m_{j-2}

    Here each message bit influences a span of n(L+1) = 3(1+1) = 6
    successive output bits

  • Slide 26/51

    Generator sequences

    An (n,k,L) convolutional code can be described by the generator
    sequences g^(1), g^(2), ..., g^(n) that are the impulse responses
    for each coder output branch

    (2,1,2) encoder:

        g^(1) = [1 0 1 1]
        g^(2) = [1 1 1 1]

    Note that the generator sequence length exceeds the register depth
    by 1

    Arranged into a generator matrix, the encoded convolutional code is
    produced by matrix multiplication of the input and the generator
    matrix

  • Slide 27/51

    Encoding equations

    Encoder outputs are formed by modulo-2 discrete convolutions:

        v^(1) = u * g^(1),   v^(2) = u * g^(2), ...

    Therefore the l:th bit of the j:th output branch is

        v_l^(j) = sum_{i=0..m} u_{l-i} g_i^(j)
                = u_l g_0^(j) + u_{l-1} g_1^(j) + ... + u_{l-m} g_m^(j),
                  j = 1, 2, ..., n,   m = L + 1

    The input for extraction of the generator sequences is the unit
    impulse u = (1 0 0 0 ...)

    Hence, for this circuit, with g^(1) = [1 0 1 1] and
    g^(2) = [1 1 1 1], the following equations result:

        v_l^(1) = u_l + u_{l-2} + u_{l-3}
        v_l^(2) = u_l + u_{l-1} + u_{l-2} + u_{l-3}

    Encoder output:

        v = [ v_0^(1) v_0^(2)  v_1^(1) v_1^(2)  v_2^(1) v_2^(2) ... ]
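    The same convolutions in a short sketch (the message is flushed
    with m = L + 1 trailing zeros):

        def conv_encode(u: list[int], gens: list[list[int]]) -> list[int]:
            m = len(gens[0]) - 1                   # memory span
            u = u + [0] * m                        # flush bits
            out = []
            for l in range(len(u)):
                for g in gens:                     # one bit per branch
                    out.append(sum(g[i] & (u[l - i] if l >= i else 0)
                                   for i in range(m + 1)) % 2)
            return out

        g1, g2 = [1, 0, 1, 1], [1, 1, 1, 1]
        print(conv_encode([1], [g1, g2]))
        # impulse response: branch-1 bits give g1, branch-2 bits give g2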

  • Slide 28/51

    [Figure]

  • Slide 29/51

    Example of using generator matrix

    [Figure: generator matrix with shifted rows built from
    g^(1) = [1 0 1 1] and g^(2) = [1 1 1 1]; the encoded word is the
    mod-2 sum of the rows selected by the input bits, e.g.
    11 ⊕ 00 ⊕ 01 ⊕ 11 = 01 for one output pair]

    Verify that you can obtain the result shown!

  • Slide 30/51

    Representing convolutional codes: code tree

        x'_j  = m_j ⊕ m_{j-1} ⊕ m_{j-2}
        x''_j = m_j ⊕ m_{j-2}

        X_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 ...

    [Figure: code tree] The tree tells how one input bit is transformed
    into two output bits (initially the register is all zero)

  • Slide 31/51

    Representing convolutional codes compactly:
    code trellis and state diagram

    [Figure: code trellis and state diagram; input bit '1' is indicated
    by a dashed line; the nodes are the shift register states]

  • Slide 32/51

    Convolutional encoding

    The figure shows the general structure of a convolutional encoder:
    with memory depth L = v - 1, k input bits are encoded into n output
    bits of an (n,k,L) code

  • Slide 33/51

    Example of using generator matrix (repeats slide 29)

  • Slide 34/51

    State diagram of a convolutional code

    Each new block of k bits causes a transition into a new state (see
    the previous two slides)

    Hence there are 2^k branches leaving each state

    Assuming the encoder starts in the all-zero state, the encoded word
    for any input can thus be obtained. For instance, below for
    u = (1 1 1 0 1) the encoded word
    v = (11, 10, 01, 01, 11, 10, 11, 11) is produced:

    [Figure: encoder state diagram for an (n,k,L) = (2,1,2) coder, with
    input and output states]
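    The generator-sequence sketch above reproduces this mapping:

        u = [1, 1, 1, 0, 1]
        v = conv_encode(u, [[1, 0, 1, 1], [1, 1, 1, 1]])
        print([f"{v[i]}{v[i + 1]}" for i in range(0, len(v), 2)])
        # ['11', '10', '01', '01', '11', '10', '11', '11']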

  • Slide 35/51

    Extracting the generating function by
    splitting and labeling the state diagram

    The state diagram can be modified to yield information on the code
    distance properties:

     –  (1) Split S0 into an initial and a final state, remove the
        self-loop
     –  (2) Label each branch by the branch gain X^i, where i is the
        weight of the n encoded bits on that branch
     –  (3) Each path connecting the initial state and the final state
        represents a nonzero code word that diverges from and
        re-emerges with S0 only once

    The path gain is the product of the branch gains along a path, and
    the code weight is the power of X in the path gain

    The code weight distribution is obtained by using a weighted gain
    formula to compute its generating function (input-output equation)

        T(X) = sum_i A_i X^i

    where A_i is the number of encoded words of weight i

  • Slide 36/51

    Example of splitting and labeling the state diagram

    [Figure: split state diagram with branch gains (weights 1 and 2)]

    The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0
    has the path gain

        X^2 X^1 X^1 X^1 X^2 X^1 X^2 X^2 = X^12

    and the corresponding code word has the weight 12. The generating
    function is

        T(X) = sum_i A_i X^i
             = X^6 + 3 X^7 + 5 X^8 + 11 X^9 + 25 X^10 + ...

    Where do these terms come from?

  • Slide 37/51

    Distance properties of convolutional codes

    Code strength is measured by the minimum free distance

        d_free = min{ w(X) }

    where w(X) is the weight of the entire encoded sequence X generated
    by a (nonzero) message sequence

    The minimum free distance denotes the minimum weight of all the
    paths in the state diagram that diverge from and remerge with the
    all-zero state S0

        T(X) = X^6 + 3 X^7 + 5 X^8 + 11 X^9 + 25 X^10 + ...
        =>  d_free = 6

    Coding gain:

        G_c = R_C d_free / 2 = k d_free / (2n) > 1
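    A brute-force sketch of d_free for this code: encode all short
    nonzero messages with the conv_encode helper above and take the
    minimum code word weight (short messages suffice for so small a
    code):

        from itertools import product

        gens = [[1, 0, 1, 1], [1, 1, 1, 1]]
        wmin = min(sum(conv_encode(list(bits), gens))
                   for bits in product([0, 1], repeat=8) if any(bits))
        print(wmin)   # 6, matching the X^6 term of T(X)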

  • Slide 38/51

    Decoding convolutional codes

    Maximum likelihood decoding of convolutional codes means finding
    the code branch in the code trellis that was most likely
    transmitted: the decoder compares the Hamming distances between the
    received word and each path forming an encoded word

    Assume that the information symbols are applied to an AWGN channel

    Denote by x the message bits (no errors) and by y the received bits
    to be decoded:

        x_m = x_{m0} x_{m1} x_{m2} ... x_{mj} ...
        y   = y_0 y_1 ... y_j ...

    The probability to decode the sequence y, provided x_m was
    transmitted, is then

        p(y, x_m) = prod_{j=0..inf} p(y_j | x_{mj})

    The most likely path through the trellis will maximize this metric.
    Equivalently the following metric is maximized (the probabilities
    multiply, so the logarithm turns the product into a sum):

        ln p(y, x_m) = sum_{j=0..inf} ln p(y_j | x_{mj})

  • Slide 39/51

    Example of exhaustive maximum likelihood detection

    Assume a three-bit message is to be transmitted. To clear the
    encoder, two zero-bits are appended after the message. Thus 5 bits
    are inserted into the encoder and 10 bits are produced. Assume the
    channel error probability is p = 0.1. After the channel,
    10, 01, 10, 11, 00 is received. What comes out of the decoder, e.g.
    what was most likely the transmitted sequence?

  • Slide 40/51

        p(y, x_m) = prod_{j=0..inf} p(y_j | x_{mj})
        ln p(y, x_m) = sum_{j=0..inf} ln p(y_j | x_{mj})

    [Figure: the candidate paths with their branch outputs; received
    bits in error are marked, and the numbers of correct and erroneous
    bits are tallied for each path]

  • Slide 41/51

    correct: 1 + 1 + 2 + 2 + 2 = 8;   8 · ln(0.9) ≈ 8 · (-0.11) = -0.88
    false:   1 + 1 + 0 + 0 + 0 = 2;   2 · ln(0.1) ≈ 2 · (-2.30) = -4.60

    total path metric: -0.88 - 4.60 = -5.48

    This is the largest metric; verify that you get the same result!
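    The winning metric in one line (ln(0.9) and ln(0.1) were rounded to
    -0.11 and -2.30 on the slide):

        from math import log
        print(8 * log(0.9) + 2 * log(0.1))   # ≈ -5.45, vs. -5.48 rounded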

     

  • Slide 42/51

    Soft and hard decoding

    Regardless of whether the channel outputs hard or soft decisions,
    the decoding rule remains the same: maximize the probability
    p(y, x_m)

    However, in soft decoding the decision region energies must be
    accounted for, and hence the Euclidean metric d_E, rather than the
    Hamming metric d_free, is used; for antipodal signaling

        d_E^2 = 4 R_C E_b d_free

    [Figure: channel transition diagram; the discussed transition is
    indicated by the arrow]

  • Slide 43/51

    Decision regions

    Coding can be realized by the soft-decoding or the hard-decoding
    principle

    For soft-decoding, the reliability (measured by bit energy) of each
    decision region must be accounted for

    Example: decoding a BPSK signal. The matched filter output is a
    continuous number; in AWGN the matched filter output is Gaussian

    [Figure: Gaussian conditional densities with several decision
    region partitions for soft-decoding; the marked transition
    probability p(3|0) is the probability that a transmitted '0' falls
    into region no. 3]

  • Slide 44/51

    The Viterbi algorithm

    The exhaustive maximum likelihood method must search all the paths
    in the trellis of an (n,k,L) code, and their number grows
    exponentially with the message length

    The Viterbi algorithm gets its efficiency by concentrating on the
    2^{kL} surviving paths of the trellis, where 2^{kL} is the number
    of nodes and 2^k is the number of branches coming to each node
    (see the next slide!)

    The path metric is accumulated from the initial stage back to the
    initial stage (below from S0 to S0); the metric of a path is the
    sum of its branch metrics

        ln p(y, x_m) = sum_j ln p(y_j | x_{mj})

    and it is maximized by the correct path (y: channel output sequence
    at the RX; x_m: TX encoder output sequence for the m:th path)

  • Slide 45/51

    The survivor path

    Assume for simplicity a convolutional code with k = 1; then up to
    2^k = 2 branches can enter each state in the trellis diagram

    Assume the optimal path passes through S. The metric comparison is
    done by adding the metric of S to those of S1 and S2. On the
    survivor path the accumulated metric is naturally smaller
    (otherwise it could not be the optimum path)

    Therefore the non-surviving branch into each node can be discarded
    -> all path alternatives need not be considered

    In principle the whole sequence must be received before the
    decision; in practice, however, storing the states only to a
    limited depth suffices (see "How to end up decoding?" below)

    [Figure: 2^k = 2 branches enter each node; 2^{kL} nodes]

  • Slide 46/51

    Example of using the Viterbi algorithm

    Assume the received sequence is

        y = 01 10 11 11 01 00 01

    and the (n,k,L) = (2,1,2) encoder shown below. Determine the
    Viterbi decoded output sequence!

    [Figure: encoder and trellis states]
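    A compact Viterbi sketch for this example. The branch outputs
    (x'_j = m_j ^ m_{j-2}, x''_j = m_j ^ m_{j-1} ^ m_{j-2}) are an
    assumption consistent with the ML result quoted on the next slide:

        def viterbi(pairs: list[tuple[int, int]]) -> list[int]:
            def branch(state, m):              # state = (m1, m2)
                m1, m2 = state
                return (m ^ m2, m ^ m1 ^ m2)
            paths = {(0, 0): (0, [])}          # state -> (metric, bits)
            for r in pairs:
                new = {}
                for (m1, m2), (metric, bits) in paths.items():
                    for m in (0, 1):
                        o = branch((m1, m2), m)
                        d = (o[0] ^ r[0]) + (o[1] ^ r[1])
                        cand = (metric + d, bits + [m])
                        s = (m, m1)            # shifted register
                        if s not in new or cand[0] < new[s][0]:
                            new[s] = cand      # keep the survivor only
                paths = new
            return paths[(0, 0)][1]            # flushed to all-zero state

        y = [(0, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 0), (0, 1)]
        print(viterbi(y))   # [1, 1, 0, 0, 0, 0, 0]: message 110 + flush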

  • Slide 47/51

    The maximum likelihood path

    [Figure: trellis with accumulated metrics; the smaller accumulated
    metric is selected at each node, and after the register length
    L + 1 = 3 the branch pattern begins to repeat; branch Hamming
    distances are shown in parentheses, black circles denote the
    deleted branches, and dashed lines mean that '1' was applied]

    The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming
    distance to the received sequence is 4; the respective decoded
    message is read from the dashed/solid pattern of the winning path

  • Slide 48/51

    How to end up decoding?

    In the previous example it was assumed that the register was
    finally filled with zeros, thus finding the minimum distance path

    This requires appending a sequence of zeros to the end of the
    message bits, which wastes channel capacity & introduces delay

    Alternatively, the decision can be truncated:

     –  Trace all the surviving paths to the depth where they merge
     –  Decide the bit at a memory depth J
     –  J is a random variable whose magnitude, shown in the figure
        (about 5L), has been experimentally tested for a negligible
        error rate increase
     –  Note that this also introduces a delay of J > 5L stages of the
        trellis

  • Slide 49/51

    Error rate of convolutional codes:
    weight spectrum and error-event probability

    Error rate depends on

     –  channel SNR
     –  input sequence length (the number of errors is scaled to the
        sequence length)
     –  code trellis topology

    These determine which path in the trellis was followed while
    decoding

    An error event happens when an erroneous path is followed by the
    decoder

    All the paths producing errors have a distance at least that of the
    path with distance d_free, e.g. there exists an upper bound for
    following any of the erroneous paths (the error-event probability):

        p_e <= sum_{d = d_free .. inf} a_d p(d)

    where a_d is the number of paths (the weight spectrum) at the
    Hamming distance d, and p(d) is the probability of following a path
    at the Hamming distance d

  • Slide 50/51

    Selected convolutional code gains

    The probability to select a path at the Hamming distance d depends
    on the decoding method. For antipodal (polar) signaling in an AWGN
    channel it is

        p(d) = Q( sqrt( 2 R_C d E_b / N_0 ) ),   R_C = k/n

    so that

        p_e <= sum_{d = d_free .. inf} a_d p(d)

    This can be further simplified for low error probability channels
    by remembering that then the following bound works well:

        Q(x) = (1/sqrt(2 pi)) int_x^inf e^{-t^2/2} dt
             ≈ e^{-x^2/2} / ( x sqrt(2 pi) ),   x >> 0

    A table of selected convolutional codes and their associated coding
    gains G_c = R_C d_f / 2 (d_f = d_free) accompanies this slide
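    A numeric sketch of the error-event bound, taking the a_d values
    from T(X) above and assuming a rate R_C = 1/2 code at
    E_b/N_0 = 5 dB:

        from math import erfc, sqrt, pi, exp

        def Q(x: float) -> float:
            return 0.5 * erfc(x / sqrt(2))

        Rc, EbN0 = 0.5, 10 ** (5 / 10)
        a = {6: 1, 7: 3, 8: 5, 9: 11, 10: 25}   # weight spectrum from T(X)
        pe = sum(ad * Q(sqrt(2 * Rc * d * EbN0)) for d, ad in a.items())
        print(f"error-event bound: {pe:.2e}")

        x = sqrt(2 * Rc * 6 * EbN0)             # Q vs. its asymptote
        print(Q(x), exp(-x * x / 2) / (x * sqrt(2 * pi)))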

  • Slide 51/51

    The error-weighted distance spectrum and the bit-error rate

    BER is obtained by multiplying the error-event probability by the
    number of data-bit errors associated with each error event

    Therefore the BER is upper bounded, for instance for polar
    signaling, by

        p_b <= sum_{d = d_free .. inf} e_d p(d),
        p(d) = Q( sqrt( 2 R_C d E_b / N_0 ) )

    where e_d is the error-weighted distance spectrum, e_d = theta_d a_d:

     –  a_d is the number of paths (the weight spectrum) at the Hamming
        distance d
     –  theta_d is the number of data-bit errors for the path at the
        Hamming distance d

    Note: this bound is very loose for low SNR channels

    It has been found by simulations that partial bounds, e.g. taking
    3 - 10 terms of the summation in the p_b expression above, yield a
    good estimate of the BER
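    A partial-bound sketch of the BER (first three terms; the theta_d
    values here are hypothetical placeholders, a_d again from T(X)):

        from math import erfc, sqrt

        def Q(x: float) -> float:
            return 0.5 * erfc(x / sqrt(2))

        Rc, EbN0 = 0.5, 10 ** (5 / 10)
        a     = {6: 1, 7: 3, 8: 5}     # weight spectrum (from T(X))
        theta = {6: 1, 7: 2, 8: 3}     # data-bit errors per path (assumed)
        pb = sum(theta[d] * a[d] * Q(sqrt(2 * Rc * d * EbN0)) for d in a)
        print(f"partial BER bound: {pb:.2e}")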