
CHAPTER 9 Information and Coding

Chapter Outline

9.1 Measure of Information - Entropy
9.2 Source Coding
  9.2.1 Huffman Coding
  9.2.2 Lempel-Ziv-Welch Coding
  9.2.3 Source Coding vs. Channel Coding
9.3 Channel Model and Channel Capacity
9.4 Channel Coding
  9.4.1 Waveform Coding
  9.4.2 Linear Block Coding
  9.4.3 Cyclic Coding
  9.4.4 Convolutional Coding and Viterbi Decoding
  9.4.5 Trellis-Coded Modulation
  9.4.6 Turbo Coding
  9.4.7 Low-Density Parity-Check (LDPC) Coding
  9.4.8 Differential Space-Time Block Coding (DSTBC)
9.5 Coding Gain

9.1 MEASURE OF INFORMATION – ENTROPY

On the premise that the outputs from an information source such as data, speech, or audio/video can be regarded as a random process, information theory defines the self-information of an event x_i with probability P(x_i) as

  I(x_i) = log_2(1/P(x_i)) = -log_2 P(x_i)  [bits]   (9.1.1)

This measure of information is large for an event of low probability and small for an event of high probability. Why is it defined like that? It can be intuitively justified by observing the following:

- A message that there was or will be an earthquake in an area where earthquakes are very rare is big news and therefore can be regarded as carrying a lot of information. But a message that there will be an earthquake in an area where earthquakes are frequent is of much less news value and, accordingly, can be regarded as carrying little information.
- For instance, the information contained in an event that happens with probability P(x_i) = 1 is computed to be zero according to this definition, which fits our intuition.

Source: MATLAB/Simulink for Digital Communication by Won Y. Yang et al. © 2009 ([email protected], http://wyyang53.com.ne.kr)

Why is it defined using the logarithm? Because the logarithm makes the combined information of two independent events x_i and y_j (having joint probability P(x_i)P(y_j)) equal to the sum of the informations of the two events:

  I(x_i, y_j) = log_2(1/P(x_i, y_j)) = log_2(1/(P(x_i)P(y_j))) = log_2(1/P(x_i)) + log_2(1/P(y_j)) = I(x_i) + I(y_j)   (9.1.2)

What makes the logarithm have base 2? It makes the information be measured in bits. According to the definition (9.1.1), the information of a bit x whose value may be 0 or 1 with equal probability 1/2 is

  I(x) = log_2(1/(1/2)) = 1  [bit]   (9.1.3)

Now, suppose we have a random variable x whose value is taken to be x_i with probability P(x_i) from the universe X = {x_i | i = 1 to M}. Then the average information that can be obtained from observing its value is

  H(x) = sum_{x_i in X} P(x_i) I(x_i) = -sum_{x_i in X} P(x_i) log_2 P(x_i)   (9.1.4)

This is called the entropy of x, which is the amount of uncertainty (mystery) contained in x before its value is known and which is lost after the value of x is observed.

Especially in the case of a binary information source whose value is chosen from X = {x_1, x_2} with probabilities

  P(x_1) = p  and  P(x_2) = 1 - p   (9.1.5)

the entropy (9.1.4)

  H(x) = -p log_2 p - (1-p) log_2(1-p)   (9.1.6)

is depicted as a function of p in Fig. 9.1. This shows that the entropy is maximized when the two events x_1 and x_2 are equally likely, so that p = 1 - p = 1/2. More generally, the entropy defined by Eq. (9.1.4) is maximized when all the events are equiprobable (equally probable), i.e.

  P(x_i) = 1/M  for all i = 1 to M   (9.1.7)
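As a quick numerical check of Eqs. (9.1.4) and (9.1.7), here is a minimal MATLAB sketch (the probability vector p below is an arbitrary example, not from the text):

p = [0.5 0.25 0.125 0.125]; % example source probabilities
H = -sum(p.*log2(p)) % entropy by Eq.(9.1.4) -> 1.75 bits
M = length(p);
Hmax = log2(M) % Eq.(9.1.4) with P(xi)=1/M as in Eq.(9.1.7) -> 2 bits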


Problem 9.1 Entropy and Investment Value of a Date

Suppose you are dating someone, who is interested in you. When is your value maximized so that he or she can give you the most valuable present? In connection with the concept of entropy introduced in Sec. 9.1, choose one among the following examples:

(1) When you sit on the fence so that he or she cannot guess whether you are interested in him or her.
(2) When you have shown him or her a negative sign.
(3) When you have shown him or her a positive sign.

In the same context, which fish would you offer the best bait to? Choose one among the following examples:

(1) The fish you are not sure whether you can catch or not.
(2) The fish you are sure you can catch (even with no bait).
(3) The fish you are sure you cannot catch (with any bait).


9.2 SOURCE CODING

Sometimes we want to find an efficient code that uses the fewest bits to represent given information. Shannon's source coding theorem, or noiseless coding theorem, states that we need at least as many bits as the entropy in order to encode an information message in such a way that perfect decoding is possible. In this section we examine Huffman coding and LZW coding.

9.2.1 Huffman Coding

Huffman coding is a variable-length coding whose strategy is to minimize the average number of bits required for coding a set of symbols by assigning a shorter/longer codeword to a more/less probable symbol. It is performed in two steps, merging and splitting, as illustrated in Fig. 9.2.

(1) Merging: Repeat arranging the symbols in descending order of probability, as in each column of Table 9.1, and merging the two least probable entries into a new entry in the next column whose probability equals the sum of their probabilities, until only two entries remain.
(2) Splitting: Assign 0/1 to each of the two remaining entries as the initial parts of their codewords. Then repeat splitting the merged entries back into the two (previous) entries and appending another 0/1 to each of their (unfinished) codewords.

The average codeword length L of any code assigned to the set of symbols X cannot be less than the entropy H(x) defined by Eq. (9.1.4), and the average codeword length of the Huffman code constructed by this procedure is less than H(x) + 1:

  H(x) <= L = sum_{x_i in X} P(x_i) l(c_{x_i}) <= H(x) + 1   (9.2.1)

where l(c_{x_i}) is the length of the codeword c_{x_i} for each symbol x_i.


function [h,L,H]=Huffman_code(p,opt)
% Huffman code generator gives a Huffman code matrix h,
% average codeword length L & entropy H
% for a source with probability vector p given as argin(1)
zero_one=['0'; '1'];
if nargin>1&&opt>0, zero_one=['1'; '0']; end
if abs(sum(p)-1)>1e-6
   fprintf('\n The probabilities in p do not add up to 1!');
end
M=length(p); N=M-1; p=p(:); % Make p a column vector
h={zero_one(1),zero_one(2)};
if M>2
   pp(:,1)=p;
   for n=1:N
      % To sort in descending order
      [pp(1:M-n+1,n),o(1:M-n+1,n)]=sort(pp(1:M-n+1,n),1,'descend');
      if n==1, ord0=o; end % Original descending order
      if M-n>1, pp(1:M-n,n+1)=[pp(1:M-1-n,n); sum(pp(M-n:M-n+1,n))]; end
   end
   for n=N:-1:2
      tmp=N-n+2; oi=o(1:tmp,n);
      for i=1:tmp, h1{oi(i)}=h{i}; end
      h=h1; h{tmp+1}=h{tmp};
      h{tmp}=[h{tmp} zero_one(1)]; h{tmp+1}=[h{tmp+1} zero_one(2)];
   end
   for i=1:length(ord0), h1{ord0(i)}=h{i}; end
   h=h1;
end
L=0;
for n=1:M, L=L+p(n)*length(h{n}); end % Average codeword length
H=-sum(p.*log2(p)); % Entropy by Eq.(9.1.4)


(Example 9.1) Huffman Coding

Suppose there is an information source which generates the symbols from X = {x_1, x_2, ..., x_9} with the corresponding probability vector

  p = P(x) = [0.2 0.15 0.13 0.12 0.1 0.09 0.08 0.07 0.06]   (E9.1.1)

The Huffman codewords together with their average length L and the entropy H of the information source can be found by typing the following statements into the MATLAB Command Window:

>>p=[0.2 0.15 0.13 0.12 0.1 0.09 0.08 0.07 0.06];
>>[h,L,H]=Huffman_code(p) % Fig.9.2 and Eq.(9.1.4)
   h = '11' '001' '010' '100' '101' '0000' '0001' '0110' '0111'
   L = 3.1000, H = 3.0371

This satisfies Eq. (9.2.1): H(x) <= L <= H(x) + 1.


function coded_seq=source_coding(src,symbols,codewords)
% Encode a data sequence src based on the given (symbols,codewords).
no_of_symbols=length(symbols); coded_seq=[];
if length(codewords)<no_of_symbols
   error('The number of codewords must equal that of symbols');
end
for n=1:length(src)
   found=0;
   for i=1:no_of_symbols
      if src(n)==symbols(i), tmp=codewords{i}; found=1; break; end
   end
   if found==0, tmp='?'; end
   coded_seq=[coded_seq tmp];
end

function decoded_seq=source_decoding(coded_seq,codewords,symbols)
% Decode a coded_seq based on the given (codewords,symbols).
M=length(codewords); decoded_seq=[];
while ~isempty(coded_seq)
   lcs= length(coded_seq); found=0;
   for m=1:M
      codeword= codewords{m}; lc= length(codeword);
      if lcs>=lc&codeword==coded_seq(1:lc)
         symbol=symbols(m); found=1; break;
      end
      if found==0, symbol='?'; end
   end
   decoded_seq=[decoded_seq symbol];
   coded_seq=coded_seq(lc+1:end);
end


>>src='12345678987654321'; symbols='123456789';
>>coded_sequence=source_coding(src,symbols,h) % with the Huffman code h
   coded_sequence = 11001010100101000000010110011101100001000010110001000111
>>decoded_sequence=source_decoding(coded_sequence,h,symbols)
   decoded_sequence = 12345678987654321
>>length(src), length(coded_sequence)
   ans= 17    56

It also turns out that, in comparison with the case where each of the nine different symbols {1,2,...,9} is represented by a 4-bit binary number (4 bits suffice since 9 <= 2^4), Huffman coding compresses the source data from 4 x 17 = 68 [bits] to 56 [bits].

9.2.2 Lempel-Ziv-Welch (LZW) Coding

Suppose we have a binary sequence '001011011000011011011000'. We can apply the LZW encoding/decoding procedure to get the following results, which can also be verified by the step-by-step procedures described in Figs. 9.3.1/9.3.2.

>>src='001011011000011011011000'
>>[coded_sequence,dictionary]=LZW_coding(src)
   coded_sequence = 001341425a79
   dictionary = '0' '1' '00' '01' '10' '011' '101' '11' '100' '000' '0110' '01101' '110'
>>[decoded_sequence,dictionary]=LZW_decoding(coded_sequence)
   decoded_sequence = 001011011000011011011000 % agrees with the source
   dictionary = '0' '1' '00' '01' '10' '011' '101' '11' '100' '000' '0110' '01101' '110'
>>length(src), length(coded_sequence)
   ans= 24    12
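The routines LZW_coding()/LZW_decoding() used above are not listed here. For reference, here is a minimal sketch of the dictionary-building encoder (a hypothetical helper, not the book's routine); it emits 1-based dictionary indices, so codes-1 reproduces the index sequence 0 0 1 3 4 1 4 2 5 a(=10) 7 9 shown above, and dict ends up identical to the dictionary above:

function [codes,dict]=lzw_encode_sketch(src)
% LZW encoding of a binary character sequence src
dict={'0','1'}; % initial dictionary for a binary source
codes=[]; w='';
for i=1:length(src)
   c=src(i); wc=[w c];
   if any(strcmp(dict,wc))
      w=wc; % keep extending the current match
   else
      codes(end+1)=find(strcmp(dict,w)); % emit index of longest match
      dict{end+1}=wc; % register the new phrase
      w=c; % restart matching from c
   end
end
codes(end+1)=find(strcmp(dict,w)); % flush the last match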


9.2.3 Source Coding vs. Channel Coding

The simplified block diagram of a communication system depicted in Fig. 9.4 shows the position of the source encoder/decoder and the channel encoder/decoder.

- The purpose of source coding is to reduce the number of data bits that are to be transmitted over the channel for sending message information to the receiver,
- while the objective of channel coding is to make transmission error detection/correction efficient so that the error probability can be decreased.


9.3 CHANNEL MODEL AND CHANNEL CAPACITY

Shannon-Hartley channel capacity theorem:

  Channel capacity: C = B log_2(1 + S/N) = B log_2(1 + S/((N_0/2)*2B)) = B log_2(1 + E_b R/(N_0 B))  [bits/sec]   (9.3.16)

where B [Hz]: the channel bandwidth, S [W]: the signal power, E_b [J/bit]: the signal energy per bit, R [bits/sec]: the data bit rate, N_0/2 [W/Hz]: the noise PSD per unit frequency [Hz] in the passband, and N = N_0 B [W]: the noise power.

The limits of the channel capacity as S/(N_0 B) -> infinity or B -> infinity are as follows:

  lim_{S/(N_0 B)->inf} C = B log_2(S/(N_0 B)) = B log_10(S/(N_0 B))/log_10 2 ~= 0.332*B*SNRdB   (9.3.17a)

  lim_{B->inf} C = lim_{B->inf} B log_2(1 + S/(N_0 B)) = (S/N_0) log_2 e ~= 1.44*S/N_0   (9.3.17b)

In order to taste an implication of this formula, we suppose an ideal situation in which the data transmission rate R reaches its upper bound, the channel capacity C. The relationship between the bandwidth efficiency R/B and EbN0dB = 10 log_10(E_b/N_0) for such an ideal system can be obtained by substituting R = C into Eq. (9.3.16) as

  C = B log_2(1 + E_b R/(N_0 B)) = R  ->  R/B = log_2(1 + (E_b/N_0)(R/B))
  ->  E_b/N_0 = (2^{R/B} - 1)/(R/B);  EbN0dB = 10 log_10((2^{R/B} - 1)/(R/B))  [dB]   (9.3.18)

On the other hand, we have (Data transmission rate R) <= (Channel capacity C)   (9.3.8)
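As a quick numerical illustration of Eq. (9.3.18) (a minimal sketch, not the book's code), the capacity-boundary curve of Fig. 9.7 can be traced as follows:

RB = logspace(-1,1,50); % bandwidth efficiency R/B from 0.1 to 10
EbN0dB = 10*log10((2.^RB-1)./RB); % minimum Eb/N0 [dB] by Eq.(9.3.18)
semilogy(EbN0dB,RB) % capacity boundary in the (EbN0dB, R/B) plane
% As R/B -> 0, (2^x-1)/x -> ln(2), so EbN0dB -> 10*log10(log(2)) = -1.59 dB,
% which is the Shannon limit quoted below.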


This relationship is depicted as the 'capacity boundary' curve in Fig. 9.7, where the bandwidth efficiencies vs. SNR for the PSK/FSK/QAM signalings listed in Table 7.2 are plotted together. Note the following about Fig. 9.7:
- Only the lower-right region of the curve, corresponding to R <= C, can be realized (toward error-free transmission).
- The figure shows possible trade-offs among SNR, bandwidth efficiency, and error probability, which can be useful for communication system design.
- However low the data transmission rate and however wide the channel bandwidth may be made, the SNR (EbN0dB) must be at least -1.6 dB (the Shannon limit) for reliable communication.
- The curve gives a rough estimate of the maximum possible coding gain, where the coding gain is defined as the reduction in SNR that maintains the same BER by virtue of (channel) coding.
- The Shannon limit can be found using the following MATLAB statements:
  >>syms x, Shannon_limit=eval(limit(10*log10((2^x-1)/x),0))


9.4 Channel Coding

In order to achieve reliable communication through a channel corrupted by noise, we might have to make the codewords so conspicuously different from each other that the probability of one symbol being mistaken for another is reduced. The conversion of the message data with this aim is called channel coding. It may bring some side effects such as a decline of the data transmission rate, an increase of the required channel bandwidth, and an increase of complexity in the encoder/decoder. In this section, we will discuss waveform coding, which converts the data into distinguishable waveforms, and structured coding, which adds redundant/extra parity bits for detection and/or correction of transmission errors. Structured coding is divided into two schemes: one is block coding, which converts each block of source data independently of the previous data, and the other is convolutional coding, which converts the data depending on the previous data.

9.4.1 Waveform Coding

There are various methods of waveform coding such as antipodal signaling, orthogonal signaling, bi-orthogonal signaling, etc., most of which were discussed in Chapters 5 and 7. Among those waveform codings, orthogonal signaling can be regarded as converting one bit of data or two bits of data according to its/their value(s) in the following way:

  Data  Codeword        Data  Codeword
   0    0 0              00   0 0 0 0
   1    0 1              01   0 1 0 1
                         10   0 0 1 1
                         11   0 1 1 0

with the codeword matrices H_1 = [0 0; 0 1] and H_2 = [H_1 H_1; H_1 ~H_1], where ~H denotes the bitwise complement of H.


This conversion procedure is generalized for K bits of data into the form of a Hadamard matrix as

  H_K = [H_{K-1}  H_{K-1};  H_{K-1}  ~H_{K-1}]   (9.4.1)

Here, we define the crosscorrelation z_{ij} between codeword i and codeword j as

  z_{ij} = (Number of bits having the same values - Number of bits having different values) / (Total number of bits in a codeword)   (9.4.2)

This can be computed by changing every 0 (of both codewords) into -1 and dividing the inner product of the two bipolar codeword vectors by the codeword length. According to this definition, the crosscorrelation between any two different codewords turns out to be zero, implying the orthogonality among the codewords and their corresponding signal waveforms.

On the other hand, bi-orthogonal signaling can be regarded as converting two bits of data, and generally K bits of data, in the following way:

  Data  Codeword
   00   0 0
   01   0 1
   10   1 1
   11   1 0

where the codeword matrix for K bits of data is B_K = [H_{K-1}; ~H_{K-1}], i.e., H_{K-1} stacked on its bitwise complement.


Since the number of columns of the codeword matrix B_K is the number of bits per codeword, the number of bits spent for transmission with bi-orthogonal signaling is half of that for orthogonal signaling and the required channel bandwidth is also half of that for orthogonal signaling, while still showing comparable BER performance. However, the codeword lengths of both signaling (coding) methods increase geometrically as the number K of bits per symbol increases, and consequently the data transmission rate R or the bandwidth will suffer, resulting in a degradation of the bandwidth efficiency R/B.

9.4.2 Linear Block Coding

Block coding is a mapping of K-bit message vectors (symbols) into N-bit (N > K) code vectors by using an (N,K) block code which consists of 2^K codewords of length N.

The simplest block coding uses a repetition code to assign an N-zero sequence or an N-one sequence (N: an odd positive number) to the one-bit message 0 or 1, respectively:

  Bit message 0 -> 000...0 (N zeros)
  Bit message 1 -> 111...1 (N ones)

A data sequence coded by this code can be decoded simply by the majority rule, which decodes each N-bit subsequence into 0 or 1 depending on which value occurs more often. In this coding-decoding scheme, a symbol error happens only with at least (N+1)/2 transmission bit errors in an N-bit sequence, and therefore the symbol error probability in a BSC with crossover probability p (i.e., channel bit transmission error probability) can be computed as

  P_{e,s} = sum_{k=(N+1)/2}^{N} C(N,k) p^k (1-p)^{N-k}   (9.4.4)


This implies that the symbol error probability can be made close to zero just by increasing the number N of bits per symbol, at the cost of low bandwidth efficiency.

With a crossover probability of p = 0.01, the symbol error probability will be

  P_{e,s} = sum_{k=2}^{3} C(3,k) 0.01^k 0.99^{3-k} = 2.98 x 10^{-4}  for N = 3
  P_{e,s} = sum_{k=3}^{5} C(5,k) 0.01^k 0.99^{5-k} = 9.85 x 10^{-6}  for N = 5
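These two numbers can be reproduced with a quick check of Eq. (9.4.4) (an illustrative sketch, not from the book's code):

p=0.01;
for N=[3 5]
   k=(N+1)/2:N; % at least (N+1)/2 bit errors cause a symbol error
   pes=sum(arrayfun(@(kk) nchoosek(N,kk),k).*p.^k.*(1-p).^(N-k))
end % -> 2.9800e-004 and 9.8506e-006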

What is the linearity of a block code?

A block code is said to be linear if the modulo-2 sum/difference of any two codewords in the block code is another codeword belonging to the block code. A linear block code is represented by its K x N generator matrix G, which is modulo-2 premultiplied by a K-bit message symbol vector m to yield an N-bit codeword c as

  c = m G   (9.4.5)

Accordingly, the encoder and decoder in a linear block coding scheme use matrix multiplications with generator/parity-check matrices instead of using table-lookup methods with the whole 2^K x N codeword matrix. This makes the encoding/decoding processes simple and efficient.

Now, we define the minimum (Hamming) distance of a linear block code c as the minimum of the Hamming distances between two different codewords:

  d_min(c) = Min_{i~=j} d_H(c_i, c_j)   (9.4.6)


Here, the Hamming distance d_H(c_i, c_j) between two different codewords c_i and c_j is the number of bit positions having different values, and can be found as the weight of the modulo-2 sum/difference of the two codewords:

  d_H(c_i, c_j) = w(c_i (+) c_j)   (9.4.7)

where the Hamming weight w(c_k) of a codeword c_k is defined to be the number of its non-zero bits. Besides, since the modulo-2 sum/difference of any two codewords is another codeword in the linear block code c, the minimum distance of a linear block code is the same as the minimum among the nonzero codeword weights:

  d_min(c) = Min_{c_k ~= 0} w(c_k)   (9.4.8)

To describe the strength of a code against transmission errors, the error-detecting/correcting capabilities d_d(c)/d_c(c) are defined to be the maximum numbers of detectable/correctable bit errors, respectively, and they can be obtained from the minimum distance as follows:

  d_d(c) = d_min(c) - 1   (9.4.9)
  d_c(c) = floor((d_min(c) - 1)/2)   (9.4.10)

where floor(x) is the greatest integer less than or equal to x.

In case the crossover probability of a channel, i.e., the probability of channel bit transmission error, is p and the RCVR corrects only the symbol errors caused by at most d_c bits of error, the bit error probability after correction can be found roughly as

  P_{e,b} ~= sum_{k=d_c+1}^{N} (k/N) C(N,k) p^k (1-p)^{N-k}   (9.4.11)

function pemb_t=prob_err_msg_bit(et,N,No_of_correctable_error_bits)
pemb_t=0; % Theoretical message bit error probability by Eq.(9.4.11)
for k=No_of_correctable_error_bits+1:N
   pemb_t= pemb_t +k*nchoosek(N,k)*et.^k.*(1-et).^(N-k)/N;
end
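For instance, for the (7,4) Hamming code of the next example (d_c = 1) with crossover probability 0.01, this routine evaluates Eq. (9.4.11) to about 5.85e-4 (the numeric value is our own computation, shown for orientation):

>>pemb_t=prob_err_msg_bit(0.01,7,1) % -> approximately 5.85e-4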


(Example 9.5) A Linear Block Code of Codeword (Block) Length N = 7 and Message Size K = 4

Find the codewords and the minimum distance of the (7,4) linear block code represented by the generator matrix

  G = [1 1 0 1 0 0 0
       0 1 1 0 1 0 0
       1 1 1 0 0 1 0
       1 0 1 0 0 0 1]   (E9.5.1)

function y=deci2bin1(x,l) % Equivalent to de2bi(x,l,'left-msb')
% Converts a given decimal number x into a binary number of l bits
if x==0, y=0;
else
   y=[];
   while x>=1, y=[rem(x,2) y]; x=floor(x/2); end
end
if nargin>1, y=[zeros(size(x,1),l-size(y,2)) y]; end

%dc09e05.m: A Linear Block Code in Example 9.5
% Constructs the codeword matrix and finds its minimum distance.
clear
K=4; L=2^K; % Message size and the number of codewords
for i=1:L, M(i,:)=deci2bin1(i-1,K); end
M % A series of K-bit binary numbers
% Generator matrix
G=[1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 1 1 1 0 0 1 0; 1 0 1 0 0 0 1];
% To generate codewords
Codewords =rem(M*G,2) % Modulo-2 multiplication Eq.(9.4.5)
% Find the minimum distance by Eq.(9.4.8)
Minimum_distance =min(sum((Codewords(2:L, :))'))

>>dc09e05

   M (message)   Codewords = rem(M*G,2) by Eq.(9.4.5)   (E9.5.2)
   0 0 0 0       0 0 0 0 0 0 0
   0 0 0 1       1 0 1 0 0 0 1
   0 0 1 0       1 1 1 0 0 1 0
   0 0 1 1       0 1 0 0 0 1 1
   0 1 0 0       0 1 1 0 1 0 0
   0 1 0 1       1 1 0 0 1 0 1
   0 1 1 0       1 0 0 0 1 1 0
   0 1 1 1       0 0 1 0 1 1 1
   1 0 0 0       1 1 0 1 0 0 0
   1 0 0 1       0 1 1 1 0 0 1
   1 0 1 0       0 0 1 1 0 1 0
   1 0 1 1       1 0 0 1 0 1 1
   1 1 0 0       1 0 1 1 1 0 0
   1 1 0 1       0 0 0 1 1 0 1
   1 1 1 0       0 1 0 1 1 1 0
   1 1 1 1       1 1 1 1 1 1 1

   Minimum_distance = 3

(cf) Note that every operation involved in adding/subtracting/multiplying the code vectors/matrices is not an ordinary arithmetic operation but a modulo-2 operation; consequently, addition and subtraction need not be distinguished.


<Construction of a Generator Matrix and the Corresponding Parity-Check Matrix>

A generator matrix G representing an (N,K) block code can be constructed as

  G = [P | I_K] =
      [p_{1,1}  p_{1,2}  ...  p_{1,N-K}   1 0 ... 0
       p_{2,1}  p_{2,2}  ...  p_{2,N-K}   0 1 ... 0
        ...
       p_{K,1}  p_{K,2}  ...  p_{K,N-K}   0 0 ... 1]   (9.4.12)

where P is a K x (N-K) parity-generating submatrix and I_K is the K x K identity matrix. With this generator matrix, the N-bit code vector (codeword) c for a K-bit message (or source or information) vector m is generated as

  c = m G = [p_1 p_2 ... p_{N-K}  m_1 m_2 ... m_K] = [p  m]   (9.4.13)

which consists of the first (N-K) parity bits and the last K message bits in a systematic structure. Note that if a block code is represented by a generator matrix containing an identity matrix, then the message vector appears (without being altered) in the code vector and the code is said to be systematic.

function M=combis(N,i)
% Creates an error pattern matrix each row of which is an N-dimensional
% vector having ones (representing bit errors) not more than i
M=[]; m1=0;
for n=1:i
   ind = combnk([1:N],n); % nchoosek([1:N],n);
   for m=1:size(ind,1), m1=m1+1; M(m1,ind(m,:))=1; end
end

>> M=combis(5,2)
M = 1 0 0 0 0
    0 1 0 0 0
    0 0 1 0 0
    0 0 0 1 0
    0 0 0 0 1
    0 0 0 1 1
    0 0 1 0 1
    0 0 1 1 0
    0 1 0 0 1
    0 1 0 1 0
    0 1 1 0 0
    1 0 0 0 1
    1 0 0 1 0
    1 0 1 0 0
    1 1 0 0 0


Corresponding to this generator matrix G, the parity-check matrix H to be used for decoding the received signal vector is defined as

  H = [I_{N-K} | P^T] =
      [1 0 ... 0   p_{1,1}    p_{2,1}    ...  p_{K,1}
       0 1 ... 0   p_{1,2}    p_{2,2}    ...  p_{K,2}
        ...
       0 0 ... 1   p_{1,N-K}  p_{2,N-K}  ...  p_{K,N-K}]   ((N-K) x N)   (9.4.14a)

so that

  H^T = [I_{N-K}; P]   (N x (N-K))   (9.4.14b)

and it satisfies

  G H^T = [P | I_K] [I_{N-K}; P] = P (+) P = O  (a K x (N-K) zero matrix)   (9.4.15)
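As a quick sanity check (a minimal sketch, not from the book), Eq. (9.4.15) can be verified in MATLAB for the (7,4) code of Example 9.5, assuming its parity part is P = G(:,1:3):

G=[1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 1 1 1 0 0 1 0; 1 0 1 0 0 0 1];
P=G(:,1:3); H=[eye(3) P.']; % H = [I_{N-K} | P'] by Eq.(9.4.14a)
GHt=rem(G*H.',2) % -> 4 x 3 zero matrix as claimed by Eq.(9.4.15)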


For a received signal vector r = c (+) e containing an error vector e, the decoder at the RCVR computes

  s = r H^T = (m G (+) e) H^T = e H^T   (9.4.16)   (using Eqs. (9.4.13) and (9.4.15))

which is called the syndrome vector, for the reason that it carries some (distinct) information about the possible symbol error, like a pathological syndrome or symptom of a disease. Then the decoder finds the error pattern e corresponding to the syndrome vector s from a table such as the one shown in Fig. 9.8 and subtracts it from the received signal vector r to hopefully obtain the original code vector as

  c = r (-) e   (9.4.17)

Finally, the RCVR accepts the last K bits of this corrected code vector as the message vector m (see Eq. (9.4.13)).

Fig. 9.8 shows a table which contains the healthy (valid) codewords, the error patterns, the cosets of diseased codewords infected by each error pattern, and the corresponding syndromes for the linear block code given in Example 9.5.


For example, suppose the RCVR has received a signal which is detected to be

  r = [1 1 0 1 1 1 0]

The RCVR multiplies this received signal vector by the transposed parity-check matrix to get the syndrome as

  s = r H^T = [1 1 0 1 1 1 0] [1 0 0; 0 1 0; 0 0 1; 1 1 0; 0 1 1; 1 1 1; 1 0 1] = [1 0 0]   (by Eq. (9.4.16))

Then the error pattern corresponding to this syndrome vector,

  e = [1 0 0 0 0 0 0]

is added to the detected result to yield

  c = r (+) e = [1 1 0 1 1 1 0] (+) [1 0 0 0 0 0 0] = [0 1 0 1 1 1 0]   (by Eq. (9.4.17))

which is one of the healthy (valid) codewords. The last K = 4 bits of this corrected code vector are taken as the decoded message vector by reference to Eq. (9.4.13).
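The hand computation above can be replayed with a few MATLAB statements (an illustrative snippet, not one of the book's routines; the error pattern e is read off from the syndrome table of Fig. 9.8):

G=[1 1 0 1 0 0 0; 0 1 1 0 1 0 0; 1 1 1 0 0 1 0; 1 0 1 0 0 0 1];
H=[eye(3) G(:,1:3).']; % parity-check matrix by Eq.(9.4.14a)
r=[1 1 0 1 1 1 0]; % detected (possibly corrupted) vector
s=rem(r*H.',2) % -> [1 0 0] by Eq.(9.4.16)
e=[1 0 0 0 0 0 0]; % error pattern matching this syndrome (Fig. 9.8)
c=rem(r+e,2) % -> [0 1 0 1 1 1 0], a valid codeword by Eq.(9.4.17)
msg=c(4:7) % -> [1 1 1 0], the last K=4 bits as the message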


The MATLAB routine "do_Hamming_code74.m" can be used to simulate a channel encoding/decoding scheme that is based on the block code represented by the generator matrix (E9.5.1). Actually, the block code is the (7,4) Hamming code, and it can be constructed using the Communication Toolbox function 'hammgen()' (as done in the routine) or a subsidiary routine 'Hamm_gen()' that will soon be introduced. Note the following about the routine:

- The randomly generated K-dimensional message vector is coded by the (7,4) Hamming code and then BPSK-modulated.
- For comparison with the uncoded case, Eq. (7.3.5) is used to compute the BER for BPSK signaling.
- The same equation (Eq. (7.3.5)) is also used to find the crossover probability, but with the SNRb (SNR per bit) multiplied by the code rate Rc = K/N to keep the SNR per message bit the same for uncoded and coded messages. That is, the SNR per transmission bit should be decreased to Rc = K/N times that of the uncoded case, since channel coding based on an (N,K) linear block code increases the number of transmission bits to 1/Rc = N/K times that with no channel coding.
- If there are many syndromes and corresponding error patterns, it is time-consuming to search the syndrome matrix S for a matching syndrome and find the corresponding error pattern in the error pattern matrix E. In that case, the search can be made more efficient by creating an 'error pattern index vector' that is accessed directly by using the number converted from the syndrome vector as the index (see Sec. 9.4.3). This idea is analogous to an address decoding circuit that decodes the syndrome into the address of the memory in which the corresponding error pattern is stored.


function do_Hamming_code74(SNRbdB,MaxIter)
% (7,4) Hamming code
if nargin<2, MaxIter=1e6; end
n=3; N=2^n-1; K=2^n-1-n; % Codeword (Block) length and Message size
Rc=K/N; % Code rate
SNRb=10.^(SNRbdB/10); SNRbc=SNRb*Rc; sqrtSNRbc=sqrt(SNRbc);
pemb_uncoded=Q(sqrt(SNRb)); % Uncoded msg BER with BPSK by Eq.(7.3.5)
et=Q(sqrt(SNRbc)); % Crossover probability by Eq.(7.3.5)
L=2^K;
for i=1:L, M(i,:)=deci2bin1(i-1,K); end % All message vectors
[H,G]=hammgen(n); % [H,G]=Ham_gen(n): Eq.(9.4.12)&(9.4.14)
Hamming_code=rem(M*G,2); % Eq.(9.4.13)
Min_distance=min(sum(Hamming_code(2:L,:)')); % Eq.(9.4.8)
No_of_correctable_error_bits=floor((Min_distance-1)/2); % Eq.(9.4.10)
E= combis(N,No_of_correctable_error_bits); % Error patterns (Fig.9.8)
S= rem(E*H',2); NS=size(S,1); % The syndrome matrix
nombe=0;
for iter=1:MaxIter
   msg=randint(1,K); % Message vector
   coded=rem(msg*G,2); % Coded vector
   modulated=2*coded-1; % BPSK-modulated vector
   r= modulated +randn(1,N)/sqrtSNRbc; % Received vector with noise
   r_sliced=r>0; % Sliced
   r_c=r_sliced; % To be corrected only if it has a syndrome
   s= rem(r_sliced*H',2); % Syndrome
   for m=1:NS % Error correction depending on the syndrome
      if s==S(m,:), r_c=rem(r_sliced+E(m,:),2); break; end
   end
   nombe=nombe+sum(msg~=r_c(N-K+1:N));
   if nombe>100, break; end
end
pemb=nombe/(K*iter); % Message bit error probability
pemb_t=prob_err_msg_bit(et,N,No_of_correctable_error_bits); % Eq.(9.4.11)
fprintf('\n Uncoded Message BER=%8.6f',pemb_uncoded)
fprintf('\n Message BER=%5.4f (Theoretical BER=%5.4f)\n',pemb,pemb_t)


Now, it is time to run the routine and analyze the effect of channel coding on the BER performance. To this end, let us type the statement

>>do_Hamming_code74(5) % with SNRbdB=5dB

into the MATLAB Command Window, which will make the following simulation results appear on the screen:

   Uncoded Message Bit Error Probability=0.037679
   Message Bit Error Probability=0.059133 (theoretically 0.038457)

What happened? There are a couple of observations to make:

- The BER (0.059) obtained from the simulation is unexpectedly higher than the theoretical BER (0.038) obtained from Eq. (9.4.11), where Eq. (7.3.5) with the SNR reduced to Rc = K/N times is used to find the crossover probability. Where does the big gap come from? It comes from the fact that Eq. (9.4.11) does not consider the case where transmitted bit errors exceeding the error-correcting capability of the channel coding result in still more bit errors during the correction process because of wrong diagnosis (see Problem 9.5).
- It is even more surprising that the theoretical BER itself is higher than the BER with no coding. Can you believe that we get worse BER performance for all the hardware complexity, bandwidth expansion, and/or data rate reduction paid for the encoding-decoding process? Do we still need channel coding? Yes, certainly. Is something wrong with the routine? Absolutely not. The reason behind the worse BER is simply that the SNR is not high enough to let the coding-decoding scheme work properly. Even though the parity-check bits are added to the message vector, they may not play their error-checking role, since they (the inspectors) are also subject to noise (corruption), possibly making wrong error detections/corrections that yield more bit errors.

Both of these issues are naturally resolved by increasing the SNR. Let us rerun the routine:

>>do_Hamming_code74(12) % with SNRbdB=12dB
   Uncoded Message Bit Error Probability=0.000034
   Message Bit Error Probability=0.000019 (theoretically 0.000010)


9.4.3 Cyclic Coding

A cyclic code is a linear block code having the property that a cyclic shift (rotation) of any codeword yields another codeword. Due to this additional property, the encoding and decoding processes can be implemented more efficiently using a feedback shift register. An (N,K) cyclic code can be described by an (N-K)th-degree generator polynomial

  g(x) = g_0 + g_1 x + g_2 x^2 + ... + g_{N-K} x^{N-K}   (9.4.24)

The procedure for encoding a K-bit message vector m = [m_0 m_1 ... m_{K-1}], represented by a (K-1)th-degree polynomial

  m(x) = m_0 + m_1 x + m_2 x^2 + ... + m_{K-1} x^{K-1}   (9.4.25)

into an N-bit codeword represented by an (N-1)th-degree polynomial is as follows:

1. Divide x^{N-K} m(x) by the generator polynomial g(x) to get the remainder polynomial r_m(x).
2. Subtract the remainder polynomial r_m(x) from x^{N-K} m(x) to obtain a codeword polynomial

  c(x) = x^{N-K} m(x) (-) r_m(x) = r_0 + r_1 x + ... + r_{N-K-1} x^{N-K-1} + m_0 x^{N-K} + m_1 x^{N-K+1} + ... + m_{K-1} x^{N-1} = q(x) g(x)   (9.4.26)

which has the generator polynomial g(x) as a (multiplying) factor. Then the first (N-K) coefficients constitute the parity vector and the remaining K coefficients make the message vector.

Note that all the operations involved in the polynomial multiplication, division, addition, and subtraction are not the ordinary arithmetic ones, but modulo-2 operations.


(Example 9.6) A Cyclic Code

With a (7,4) cyclic code represented by the generator polynomial

  g(x) = g_0 + g_1 x + g_2 x^2 + g_3 x^3 = 1 + x + 0 x^2 + x^3   (E9.6.1)

find the codeword for a message vector m = [1 0 1 1]. Noting that N = 7, K = 4, and N - K = 3, we divide x^{N-K} m(x) by g(x) as

  x^{N-K} m(x) = x^3 (1 + x^2 + x^3) = x^3 + x^5 + x^6 = q(x) g(x) (+) r_m(x) = (x^3 + x^2 + x + 1)(x^3 + x + 1) (+) 1   (E9.6.2)

to get the remainder polynomial r_m(x) = 1, and add it to x^{N-K} m(x) = x^3 m(x) to make the codeword polynomial

  c(x) = r_m(x) (+) x^3 m(x) = 1 + x^3 + x^5 + x^6
  c = [1 0 0 1 0 1 1]  (parity | message)   (E9.6.3)

The codeword made in this way has N - K = 3 parity bits and K = 4 message bits.
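The modulo-2 long division in (E9.6.2) can be sketched in a few lines of base MATLAB (an illustrative snippet, not one of the book's routines; coefficients are stored in ascending order of power, as in Eq. (9.4.25)):

g=[1 1 0 1]; m=[1 0 1 1]; N=7; K=4; % g(x)=1+x+x^3, m(x)=1+x^2+x^3
r=[zeros(1,N-K) m]; % coefficients of x^(N-K)*m(x)
for i=N:-1:N-K+1 % long division, highest power first
   if r(i), r(i-(N-K):i)=rem(r(i-(N-K):i)+g,2); end % subtract a shifted g(x)
end
parity=r(1:N-K) % remainder r_m(x) -> [1 0 0]
codeword=[parity m] % -> [1 0 0 1 0 1 1] as in (E9.6.3)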


Now, let us consider the procedure for decoding a cyclic coded vector. Suppose the RCVR has received a possibly corrupted code vector r = c (+) e, where c is a codeword and e is an error. Just as in the encoder, this received vector, regarded as a polynomial, is divided by the generator polynomial g(x):

  r(x) = c(x) (+) e(x) = q(x) g(x) (+) e(x) = q'(x) g(x) (+) s(x)   (9.4.27)

to yield the remainder polynomial s(x). This remainder polynomial may not be the same as the error vector e, but at least it is supposed to carry crucial information about e and therefore may well be called the syndrome. The RCVR will find the error pattern e corresponding to the syndrome s, subtract it from the received vector to get (hopefully) the correct codeword

  c = r (-) e   (9.4.28)

and accept only the last K bits (in ascending order) of this corrected codeword as the message.

The polynomial operations involved in encoding/decoding every block of a message/coded sequence may seem to be an unbearable computational load. Fortunately, however, there are divider circuits which can perform such modulo-2 polynomial operations. Fig. 9.9 illustrates the two divider circuits (consisting of linear feedback shift registers), each of which carries out the modulo-2 polynomial operations for encoding/decoding with the cyclic code given in Example 9.6. Note that the encoder/decoder circuits process the data sequences in descending order of polynomial degree.

[Fig. 9.9: encoder/decoder divider circuits for the (7,4) cyclic code with g(x) = 1 + x + 0 x^2 + x^3 (E9.6.1), illustrated for m = [1 0 1 1] -> c = [1 0 0 1 0 1 1] (parity | message) (E9.6.3)]


The encoder/decoder circuits are cast into the MATLAB routines 'cyclic_encoder()' and 'cyclic_decoder0()', respectively, and we make a program "do_cyclic_code.m" that uses the two routines 'cyclic_encoder()' and 'cyclic_decoder()' (including 'cyclic_decoder0()') to simulate the encoding/decoding process with the cyclic code given in Example 9.6. Note a couple of things about the decoding routine 'cyclic_decoder()':

- It uses a table of error patterns in the matrix E, which has every correctable error pattern in its rows. The table is searched for a suitable error pattern by using an error pattern index vector epi, which is indexed by the decimal-coded syndrome and therefore can be addressed efficiently by a syndrome, just like a decoding hardware circuit.
- If the error pattern table E and error pattern index vector epi are not supplied from the calling program, it uses 'cyclic_decoder0()' to construct them itself.

function coded= cyclic_encoder(msg_seq,N,K,g)
% Cyclic (N,K) encoding of input msg_seq m with generator polynomial g
Lmsg=length(msg_seq); Nmsg=ceil(Lmsg/K);
Msg= [msg_seq(:); zeros(Nmsg*K-Lmsg,1)]; Msg= reshape(Msg,K,Nmsg).';
coded= [];
for n=1:Nmsg
   msg= Msg(n,:);
   for i=1:N-K, x(i)=0; end
   for k=1:K
      tmp= rem(msg(K+1-k)+x(N-K),2); % msg(K+1-k)+g(N-K+1)*x(N-K)
      for i=N-K:-1:2, x(i)= rem(x(i-1)+g(i)*tmp,2); end
      x(1)=g(1)*tmp;
   end
   coded= [coded x msg]; % Eq.(9.4.26)
end


function x=cyclic_decoder0(r,N,K,g)
% Cyclic (N,K) decoding of an N-bit code r with generator polynomial g
for i=1:N-K, x(i)=r(i+K); end
for n=1:K
   tmp=x(N-K);
   for i=N-K:-1:2, x(i)=rem(x(i-1)+g(i)*tmp,2); end
   x(1)=rem(g(1)*tmp+r(K+1-n),2);
end


function [decodes,E,epi]=cyclic_decoder(code_seq,N,K,g,E,epi)
% Cyclic (N,K) decoding of received code_seq with generator polynml g
% E: Error Pattern matrix or syndromes
% epi: error pattern index vector
%Copyleft: Won Y. Yang, [email protected], CAU for academic use only
if nargin<6
   nceb=ceil((N-K)/log2(N+1)); % Number of correctable error bits
   E=combis(N,nceb); % All error patterns consisting of 1,...,nceb errors
   for i=1:size(E,1)
      syndrome=cyclic_decoder0(E(i,:),N,K,g);
      synd_decimal=bin2deci(syndrome);
      epi(synd_decimal)=i; % Error pattern indices
   end
end
if (size(code_seq,2)==1) code_seq=code_seq.'; end
Lcode= length(code_seq); Ncode= ceil(Lcode/N);
Code_seq= [code_seq(:); zeros(Ncode*N-Lcode,1)];
Code_seq= reshape(Code_seq,N,Ncode).';
decodes=[]; syndromes=[];
for n=1:Ncode
   code= Code_seq(n,:);
   syndrome= cyclic_decoder0(code,N,K,g);
   si= bin2deci(syndrome); % Syndrome index
   if 0<si&si<=length(epi) % Syndrome index to error pattern index
      k=epi(si);
      if k>0, code=rem(code+E(k,:),2); end % Eq.(9.4.28)
   end
   decodes=[decodes code(N-K+1:N)];
   syndromes=[syndromes syndrome];
end
if nargout==2, E=syndromes; end


% do_cyclic_code : MATLAB script for cyclic code.
clear
N=7; K=4; N=15; K=7; N=31; K=16; lm=1*K; msg= randint(1,lm);
% This msg with the queer 3-bit errors results in 5/4 bit errors
% by the cyclic_decoder()/decode()
nceb=ceil((N-K)/log2(N+1)); %????
g=cyclpoly(N,K); %g_=fliplr(g);
%gBCH=bchgenpoly(N,K); % Galois vector representing (N,K) BCH code
%g=double(gBCH.x) % Extracting the elements from a Galois array
coded = cyclic_encoder(msg,N,K,g); lc=length(coded);
%no_transmitted_bit_errors=3;
Er = randerr(1,lc,nceb);
Er=[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0];
% terrible/fatal error vector for 'encode()'
Er=[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1];
%Er=zeros(1,31); Er([4 29 31])=1; %queer error for the (31,16) cyclic code
r = rem(coded+Er,2);
[decoded0,E,epi] = cyclic_decoder(r,N,K,g); % <-- To be run only the first time - a bit time-consuming
decoded = cyclic_decoder(r,N,K,g,E,epi);
nobe=sum(decoded~=msg)
coded1 = encode(msg,N,K,'cyclic',g); lc=length(coded1);
[coded; coded1]
r1 = rem(coded1+Er,2);
decoded1 = decode(r1,N,K,'cyclic',g);
nobe1=sum(decoded1~=msg)
[H,G] = cyclgen(N,g); % <-- To be run only the first time
syndt = syndtable(H); % To be run only the first time - time-consuming
decoded2=decode(r1,N,K,'cyclic',g,syndt)
nobe2=sum(decoded2~=msg)

decode() is intolerably time-consuming for this big(?) code. But, as if by magic, it works for any case of no more bit errors than nceb, while my routine cyclic_decoder() cannot resolve some of the cases, to my shame. It is really strange that the minimum distance among the codewords produced by encode() as well as cyclic_encoder() with N=31 and K=16 should be 7, but it turns out to be dmin=5 and therefore dc=2. The problem causing wrong correction is that some different error patterns result in the same syndrome.


Table 9.1 Communication Toolbox functions for block coding

  Block coding                         Related Communication Toolbox functions and objects
  Linear block                         encode, decode, gen2par, syndtable
  Cyclic                               encode, decode, cyclpoly, cyclgen, gen2par, syndtable
  BCH (Bose-Chaudhuri-Hocquenghem)     bchenc, bchdec, bchgenpoly
  LDPC (Low-Density Parity-Check)      fec.ldpcenc, fec.ldpcdec
  Hamming                              encode, decode, hammgen, gen2par, syndtable
  Reed-Solomon                         rsenc, rsdec, rsgenpoly, rsencof, rsdecof

Note the following about the functions 'encode' and 'decode' (use MATLAB Help for details):
- They can be used for any linear block coding by putting the string 'linear' and a K x N generator matrix as the fourth and fifth input arguments, respectively.
- They can be used for Hamming coding by putting the string 'hamming' as the fourth input argument or by providing them with only the first three input arguments.

The following example illustrates the usage of 'encode()'/'decode()' for cyclic coding and 'rsenc()'/'rsdec()' for Reed-Solomon coding. Note that the RS (Reed-Solomon) codes are nonbinary BCH codes, which have the largest possible minimum distance for any linear code with the same message size K and codeword length N, yielding the error-correcting capability of d_c = (N-K)/2 symbol errors.


%test_encode_decode.m to try using encode()/decode()
N=7; K=4; % Codeword (Block) length and Message size
g=cyclpoly(N,K); % Generator polynomial for a cyclic (N,K) code
Nm=10; % # of K-bit message vectors
msg = randint(Nm,K); % Nm x K message matrix
coded = encode(msg,N,K,'cyclic',g); % Encoding
% Add bit errors with transmitted BER potbe=0.1
potbe=0.3; received = rem(coded+ randerr(Nm,N,[0 1;1-potbe potbe]),2);
decoded = decode(received,N,K,'cyclic',g); % Decoding
% Probability of message bit errors after decoding/correction
pobe=sum(sum(decoded~=msg))/(Nm*K) % BER

% Usage of rsenc()/rsdec()
M=3; % Galois Field integer corresponding to the # of bits per symbol
N=2^M-1; K=3; dc=(N-K)/2; % Codeword length and Message size
msg = gf(randint(Nm,K,2^M),M); % Nm x K GF(2^M) Galois Field msg matrix
coded = rsenc(msg,N,K); % Encoding
noise = randerr(Nm,N,[1 dc+1]).*randint(Nm,N,2^M);
received = coded + noise; % Add a noise
[decoded,numerr] = rsdec(received,N,K); % Decoding
[msg decoded]
numerr
pose=sum(sum(decoded~=msg))/(Nm*K) % SER


9.4.4 Convolutional Coding and Viterbi Decoding

In the previous sections, we discussed block coding, which encodes every K-bit block of the message sequence independently of the previous message blocks (vectors). In this section, we are going to see convolutional coding, which converts a K-bit message vector into an N-bit channel input sequence depending on the previous (L-1) K-bit message vectors (L: constraint length). The convolutional encoder has the structure of a finite-state machine whose output depends not only on the input but also on the state.

Fig. 9.10 shows a binary convolutional encoder with a K(=2)-bit input, an N(=3)-bit output, and three 2-bit stage registers, which can be described as a finite-state machine having 2^{(L-1)K} = 2^{3x2} = 64 states. At each iteration, this encoder shifts the previous contents of every stage register except the rightmost one into the register on its right and receives a new K-bit input into the leftmost register, sending an N-bit output to the channel for transmission, where the values of the output bits depend on the previous inputs stored in the registers as well as on the current input.


A binary convolutional code is also characterized by N generator sequences g_1, g_2, ..., g_N, each of which has length LK. For example, the convolutional code with the encoder depicted in Fig. 9.10 (N = 3) is represented by the generator sequences

  g_1 = [0 0 1 0 1 0 0 1]
  g_2 = [0 0 0 0 0 0 0 1]   (9.4.29a)
  g_3 = [1 0 0 0 0 0 0 1]

which constitute the N x LK generator (polynomial) matrix

  G = [g_1; g_2; g_3] = [0 0 1 0 1 0 0 1
                         0 0 0 0 0 0 0 1
                         1 0 0 0 0 0 0 1]   (9.4.29b)

where the value of the i-th element of g_j is 1 or 0 depending on whether the i-th one of the LK bits of the shift register (the K current input bits followed by the (L-1)K register bits) is connected to the j-th output combiner or not. The shift register is initialized to the all-zero state before the first bit of an input (message) sequence enters the encoder, and is also finalized to the all-zero state by the (L-1)K zero bits padded onto the tail of each input sequence. Besides, the length of each input sequence (to be processed at a time) is made a multiple of K by zero-padding if necessary. For an input sequence made in this way so that its total length is (M + L - 1)K including the padded zeros, the length of the output sequence is (M + L - 1)N and consequently the code rate will be

  R_c = MK / ((M + L - 1)N) ~= K/N  for M >> L   (9.4.30)


Given an N x LK generator matrix G together with the number K of input bits and a message sequence m, the MATLAB routine 'conv_encoder()' (listed below) pads the input sequence with zeros as needed and then generates the output sequence of the convolutional encoder. The Communication Toolbox has a convolutional encoding function 'convenc()', and its usage will be explained together with that of the convolutional decoding function 'vitdec()' at the end of this section.

function [nxb,yb]=state_eq(xb,u,G)
% To be used as a subroutine for conv_encoder()
K=length(u); LK=size(G,2); L1K=LK-K;
if isempty(xb), xb=zeros(1,L1K);
else
   N=length(xb); %(L-1)K
   if L1K~=N, error('Incompatible Dimension in state_eq()'); end
end
A=[zeros(K,L1K); eye(L1K-K) zeros(L1K-K,K)]; B=[eye(K); zeros(L1K-K,K)];
C=G(:,K+1:end); D=G(:,1:K);
nxb=rem(A*xb'+B*u',2)';
yb=rem(C*xb'+D*u',2)';

With the state bits x11, x12, x21, x22, x31, x32 (the contents of the three 2-bit stage registers) and the input bits u1, u2, the state and output equations of the encoder in Fig. 9.10 realized by state_eq() are (all operations modulo 2):

  x11[n+1] = u1[n]           y1[n] = x11[n] + x21[n] + x32[n]
  x12[n+1] = u2[n]           y2[n] = x32[n]
  x21[n+1] = x11[n]          y3[n] = x32[n] + u1[n]
  x22[n+1] = x12[n]
  x31[n+1] = x21[n]
  x32[n+1] = x22[n]

or, in matrix form, x[n+1] = A x[n] + B u[n] and y[n] = C x[n] + D u[n] with

  [A | B] = [0 0 0 0 0 0 | 1 0        [C | D] = [1 0 1 0 0 1 | 0 0
             0 0 0 0 0 0 | 0 1                   0 0 0 0 0 1 | 0 0
             1 0 0 0 0 0 | 0 0                   0 0 0 0 0 1 | 1 0]
             0 1 0 0 0 0 | 0 0
             0 0 1 0 0 0 | 0 0
             0 0 0 1 0 0 | 0 0]


function [output,state]=conv_encoder(G,K,input,state,termmode)
% generates the output sequence of a binary convolutional encoder
% G : N x LK Generator matrix of a convolutional code
% K : Number of input bits entering the encoder at each clock cycle.
% input: Binary input sequence
% state: State of the convolutional encoder
% termmode='trunc' for no termination with all-0 state
%Copyleft: Won Y. Yang, [email protected], CAU for academic use only
if isempty(G), output=input; return; end
tmp= rem(length(input),K);
input=[input zeros(1,(K-tmp)*(tmp>0))];
[N,LK]=size(G);
if rem(LK,K)>0
   error('The number of columns of G must be a multiple of K!')
end
%L=LK/K;
if nargin<4|(nargin<5 & isnumeric(state))
   input= [input zeros(1,LK)]; %input= [input zeros(1,LK-K)];
end
if nargin<4|~isnumeric(state), state=zeros(1,LK-K); end
input_length= length(input);
N_msgsymbol= input_length/K;
input1= reshape(input,K,N_msgsymbol);
output=[];
for l=1:N_msgsymbol % Convolution output=G*input
   ub= input1(:,l).';
   [state,yb]= state_eq(state,ub,G);
   output= [output yb];
end


<Various Representations of a Convolutional Code>


<Viterbi Decoding of a Convolutional Coded Sequence>

Detector output or decoder input: [1 1 0 1 1 0 1 0 1 0 1 1 0 0]

Decoded result: [1 0 1 1 0 0 0]

Go back through survivor paths


Decoded result: [1 0 1 1 0 0 0]

Don't you wonder what the convolutional encoder would output for the decoded result given as its input?

  Encoder output:  [1 1 0 1 0 0 1 0 1 0 1 1 0 0]
  Detector output: [1 1 0 1 1 0 1 0 1 0 1 1 0 0]

Comparing this most likely encoder output with the detector output, they differ only in the 5th bit, whose error might have been caused by the channel noise.


function decoded_seq=vit_decoder(G,K,detected,opmode,hard_or_soft)
% performs the Viterbi algorithm on detected to get the decoded_seq
% G: N x LK Generator polynomial matrix
% K: Number of encoder input bits
%Copyleft: Won Y. Yang, [email protected], CAU for academic use only
detected = detected(:).';
if nargin<5|hard_or_soft(1)=='h', detected=(detected>0.5); end
[N,LK]=size(G);
if rem(LK,K)~=0, error('Column size of G must be a multiple of K'); end
tmp= rem(length(detected),N);
if tmp>0, detected=[detected zeros(1,N-tmp)]; end
b=LK-K; % Number of bits representing the state
no_of_states=2^b; N_msgsymbol=length(detected)/N;
for m=1:no_of_states
   for n=1:N_msgsymbol+1
      states(m,n)=0; % inactive in the trellis
      p_state(m,n)=0; n_state(m,n)=0; input(m,n)=0;
   end
end
states(1,1)=1; % make the initial state active
cost(1,1)=0; K2=2^K;
% To be continued ...


for n=1:N_msgsymbol
   y=detected((n-1)*N+1:n*N); % Received sequence
   n1=n+1;
   for m=1:no_of_states
      if states(m,n)==1 % active
         xb=deci2bin1(m-1,b);
         for m0=1:K2
            u=deci2bin1(m0-1,K);
            [nxb(m0,:),yb(m0,:)]=state_eq(xb,u,G);
            nxm0=bin2deci(nxb(m0,:))+1;
            states(nxm0,n1)=1;
            dif=sum(abs(y-yb(m0,:)));
            d(m0)=cost(m,n)+dif;
            if p_state(nxm0,n1)==0 % Unchecked state node?
               cost(nxm0,n1)=d(m0); p_state(nxm0,n1)=m;
               input(nxm0,n1)=m0-1;
            else
               [cost(nxm0,n1),i]=min([d(m0) cost(nxm0,n1)]);
               if i==1, p_state(nxm0,n1)=m; input(nxm0,n1)=m0-1; end
            end
         end
      end
   end
end
decoded_seq=[];
if nargin>3 & ~strncmp(opmode,'term',4)
   [min_dist,m]=min(cost(:,n1)); % Trace back from the best-metric state
else
   m=1; % Trace back from the all-0 state
end
for n=n1:-1:2
   decoded_seq= [deci2bin1(input(m,n),K) decoded_seq];
   m=p_state(m,n);
end


Given the generator polynomial matrix G together with the number K of input bits and the channel-DTR output sequence 'detected' as its input arguments, the MATLAB routine 'vit_decoder(G,K,detected)' constructs the trellis diagram and applies the Viterbi algorithm to find the maximum-likelihood decoded message sequence. The following MATLAB program "do_vitdecoder.m" uses the routine 'conv_encoder()' to make a convolutional coded sequence for a message and uses 'vit_decoder()' to decode it to recover the original message.
%do_vitdecoder.m
% Try using conv_encoder()/vit_decoder()
clear, clf
msg=[1 0 1 1 0 0 0]; % msg=randint(1,100)
lm=length(msg); % Message and its length
G=[1 0 1;1 1 1]; % N x LK Generator polynomial matrix
K=1; N=size(G,1); % Size of encoder input/output
potbe=0.02; % Probability of transmitted bit error
% Use of conv_encoder()/vit_decoder()
ch_input=conv_encoder(G,K,msg) % Self-made convolutional encoder
notbe=ceil(potbe*length(ch_input));
error_bits=randerr(1,length(ch_input),notbe);
detected= rem(ch_input+error_bits,2); % Received/modulated/detected
decoded= vit_decoder(G,K,detected)
noe_vit_decoder=sum(msg~=decoded(1:lm))

The following program "do_vitdecoder1.m" uses the Communication Toolbox functions 'convenc()' and 'vitdec()', where 'vitdec()' is called several times with different input argument values to show the reader its various usages. Now, it is time to see the usage of the function 'vitdec()'.


%do_vitdecoder1.m
% shows various uses of Communication Toolbox function convenc()
% with KxN Code generator matrix Gc - octal polynomial representation
clear, clf, %msg=[1 0 1 1 0 0 0]
msg=randint(1,100); lm=length(msg); % Message and its length
potbe=0.02; % Probability of transmitted bit error
Gc=[5 7]; % 1 0 1 -> 5, 1 1 1 -> 7 (octal number)
Lc=3; % 1xK constraint length vector for each input stream
[K,N]=size(Gc); % Number of encoder input/output bits
trel=poly2trellis(Lc,Gc); % Trellis structure
ch_input1=convenc(msg,trel); % Convolutional encoder
notbe1=ceil(potbe*length(ch_input1));
error_bits1=randerr(1,length(ch_input1),notbe1);
detected1= rem(ch_input1+error_bits1,2); % Received/modulated/detected
% with hard decision
Tbdepth=max(Gc)*5; delay=K*Tbdepth; % Traceback depth and decoding delay
decoded1= vitdec(detected1,trel,Tbdepth,'trunc','hard')
noe_vitdec_trunc_hard=sum(msg~=decoded1(1:lm))
decoded2= vitdec(detected1,trel,Tbdepth,'cont','hard');
noe_vitdec_cont_hard=sum(msg(1:end-delay)~=decoded2(delay+1:end))
% with soft decision
ncode= [detected1+0.1*randn(1,length(detected1)) zeros(1,Tbdepth*N)];
quant_levels=[0.001,.1,.3,.5,.7,.9,.999];
NSDB=ceil(log2(length(quant_levels))); % Number of Soft Decision Bits
qcode= quantiz(ncode,quant_levels); % Quantized
decoded3= vitdec(qcode,trel,Tbdepth,'trunc','soft',NSDB);
noe_vitdec_trunc_soft=sum(msg~=decoded3(1:lm))
decoded4= vitdec(qcode,trel,Tbdepth,'cont','soft',NSDB);
noe_vitdec_cont_soft=sum(msg~=decoded4(delay+1:end))


<Usage of the Viterbi Decoding Function 'vitdec()' with 'convenc()' and 'poly2trellis()'>

To apply the MATLAB functions 'convenc()'/'vitdec()', we should first use 'poly2trellis()' to build the trellis structure with an 'octal code generator' describing the connections among the inputs, registers, and outputs. Fig. 9.13 illustrates how the octal code generator matrix Gc as well as the binary generator matrix G and the constraint length vector Lc are constructed for a given convolutional encoder. An example of using 'poly2trellis()' to build the trellis structure for 'convenc()'/'vitdec()' is as follows:

   trellis=poly2trellis(Lc,Gc);

Here is a brief introduction to the usages of the Communication Toolbox functions 'convenc()' and 'vitdec()'. See the MATLAB Help manual or The MathWorks webpage for more details.


(1) coded=convenc(msg,trellis);
   msg: A message sequence to be encoded with a convolutional encoder described by 'trellis'.

(2) decoded=vitdec(coded,trellis,tbdepth,opmode,dectype,NSDB);
   coded: A convolutional coded sequence possibly corrupted by noise. It should consist of binary numbers (0/1), real numbers between 1 (logical zero) and -1 (logical one), or integers between 0 and 2^NSDB - 1 (NSDB: the number of soft-decision bits given as the optional 6th input argument) corresponding to the quantization levels, depending on which one of {'hard', 'unquant', 'soft'} is given as the value of the fifth input argument 'dectype' (decision type).
   trellis: A trellis structure built using the MATLAB function 'poly2trellis()'.
   tbdepth: Traceback depth (length), i.e., the number of trellis branches used to construct each traceback path. It should be given as a positive integer, say, about five times the constraint length. In case the fourth input argument 'opmode' (operation mode) is 'cont' (continuous), it causes the decoding delay, i.e., the number of zero symbols preceding the first decoded symbol in the output 'decoded', and as a consequence, the decoded result should be advanced by tbdepth*K, where K is the number of encoder input bits.
   opmode: Operation mode of the decoding process. If it is set to 'cont' (continuous mode), the internal state of the decoder will be saved for use with the next frame. If it is set to 'trunc' (truncation mode), each frame will be processed independently, and the traceback path starts at the best-metric state and always ends in the all-zero state. If it is set to 'term' (termination mode), each frame is treated independently and the traceback path always starts and ends in the all-zero state. This mode is appropriate when the uncoded message signal has enough zeros, say, K*(max(Lc)-1) zeros at the end of each frame, to fill all memory registers of the encoder.


- dectype: Decision type. It should be set to 'unquant', 'hard', or 'soft' depending on the characteristic of the input coded sequence 'coded' as follows:
  - 'hard' (decision) when the coded sequence consists of binary numbers 0 or 1.
  - 'unquant' when the coded sequence consists of real numbers between -1 (logical 1) and +1 (logical 0).
  - 'soft' (decision) when the optional sixth input argument NSDB is given and the coded sequence consists of integers between 0 and 2^NSDB-1 corresponding to the quantization levels.
- NSDB: Number of soft-decision bits used to represent the input coded sequence. It is needed and active only when dectype is set to 'soft'.

(3) [decoded,m,s,in]=vitdec(code,trellis,tbdepth,opmode,dectype,m,s,in)
This format is used for repetitive use of 'vitdec()' in the continuous operation mode, where the state metric 'm', traceback state 's', and traceback input 'in' are initialized to empty arrays at first and then handed over successively to the next iteration.
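The following is a minimal sketch (added here, not from the book) of the 'term' mode mentioned above: the encoder is driven back to the all-zero state by appending max(Lc)-1 = 2 zero tail bits (K=1 input stream), so the traceback can start from state 0.

trel = poly2trellis(3,[5 7]);             % the same Lc=3, Gc=[5 7] encoder
msg  = [1 0 1 1 0 1];                     % a short message frame
coded = convenc([msg zeros(1,3-1)],trel); % message + zero tail bits
decoded = vitdec(coded,trel,5,'term','hard');
noe_term = sum(decoded(1:length(msg))~=msg) % 0 errors over a clean channel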


%do_vitdecoder1.m
% shows various uses of Communication Toolbox function convenc()
% with KxN code generator matrix Gc - octal polynomial representation
clear, clf
%msg=[1 0 1 1 0 0 0]
msg=randint(1,100); lm=length(msg); % Message and its length
potbe=0.02; % Probability of transmitted bit error
Gc=[5 7]; % 1 0 1 -> 5, 1 1 1 -> 7 (octal number)
Lc=3; % 1xK constraint length vector for each input stream
[K,N]=size(Gc); % Number of encoder input/output bits
trel=poly2trellis(Lc,Gc); % Trellis structure
Tbdepth=3; delay=Tbdepth*K; % Traceback depth and decoding delay
% Repetitive use of vitdec() to process the data block by block
% needs to initialize the message sequence, decoded sequence,
% state metric, traceback state/input, and encoder state.
msg_seq=[]; decoded_seq=[]; m=[]; s=[]; in=[]; encoder_state=[];
N_Iter=100;
for itr=1:N_Iter
   msg=randint(1,1000); % Generate the message sequence in a random way
   msg_seq= [msg_seq msg]; % Accumulate the message sequence
   if itr==N_Iter, msg=[msg zeros(1,delay)]; end % Append with zeros
   [coded,encoder_state]=convenc(msg,trel,encoder_state);
   [decoded,m,s,in]=vitdec(coded,trel,Tbdepth,'cont','hard',m,s,in);
   decoded_seq=[decoded_seq decoded];
end
lm=length(msg_seq);
noe_repeated_use=sum(msg_seq(1:lm)~=decoded_seq(delay+[1:lm]))


9.4.6 Turbo Coding

In order for a linear block code or a convolutional code to approach the theoretical limit imposed by Shannon's channel capacity (see Eq. (9.3.16) or Fig. 9.7) in terms of bandwidth/power efficiency, its codeword or constraint length should be increased to such an intolerable degree that maximum likelihood decoding becomes unrealizable. Possible solutions to this dilemma are two classes of powerful error correcting codes, called turbo codes and LDPC (low-density parity-check) codes, that can achieve a near-capacity (or near-Shannon-limit) performance with a reasonable decoder complexity. The former is the topic of this section and the latter will be introduced in the next section.

$$C = B\log_2\Big(1+\frac{S}{N}\Big) = B\log_2\Big(1+\frac{S}{(N_0/2)\,2B}\Big) = B\log_2\Big(1+\frac{E_b R}{N_0 B}\Big)\ \text{[bits/sec]} \qquad (9.3.16)$$

where $B$ [Hz]: the channel bandwidth, $S$ [W]: the signal power, $E_b$ [J/bit]: the signal energy per bit, $R$ [bits/sec]: the data bit rate, $N_0/2$ [W/Hz]: the noise PSD per unit frequency [Hz] in the passband, and $N = N_0 B$ [W]: the noise power.

Setting the data bit rate equal to the capacity ($R = C$) in Eq. (9.3.16) yields the minimum required SNR per bit:

$$\frac{E_b}{N_0} = \frac{2^{R/B}-1}{R/B}\,; \qquad \text{EbN0dB} = 10\log_{10}\frac{E_b}{N_0} = 10\log_{10}\frac{2^{R/B}-1}{R/B}\ \text{[dB]} \qquad (9.3.18)$$
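As a numerical illustration (an added sketch, values computed from Eq. (9.3.18) itself):

% Minimum Eb/N0 for reliable transmission at spectral efficiency R/B
R_over_B = [0.5 1 2 4];                        % [bits/sec/Hz]
EbN0dB_min = 10*log10((2.^R_over_B - 1)./R_over_B)
% -> about -0.82, 0, 1.76, 5.74 dB; as R/B -> 0 the limit approaches
% 10*log10(log(2)) = -1.59 dB, the ultimate Shannon limit.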


Fig. 9.15(a) shows a turbo encoder consisting of two recursive systematic convolutional (RSC) encoders and an interleaver, where the interleaver permutes the message bits in a random way before input to the second encoder. (Note that the modifier 'systematic' means that the uncoded message bits are embedded in the encoder output stream as they are.) The code rate will be 1/2 or 1/3 depending on whether puncturing is performed or not. (Note that puncturing is to omit transmitting some coded bits for the purpose of increasing the code rate beyond that resulting from the basic structure of the encoder.) Fig. 9.15(b) shows a demultiplexer, which classifies the coded bits into two groups, one from encoder 1 and the other from encoder 2, and applies each of them to the corresponding decoder.


Fig. 9.15(c) shows a turbo decoder consisting of two decoders concatenated and separated by an interleaver, where each decoder processes the systematic (message) bit sequence and its parity bit sequence together with the extrinsic information (provided by the other decoder) to produce its own extrinsic information and provide it to the other decoder in an iterative manner. The turbo encoder and the demultiplexer are cast into the MATLAB routines 'encoderm()' and 'demultiplex()', respectively. Now, let us see how the two types of decoder, implementing the log-MAP (maximum a posteriori probability) algorithm and the SOVA (soft-output Viterbi algorithm), are cast into the MATLAB routines 'logmap()' and 'sova()', respectively.


function x = rsc_encode(G,m,termination)
% encodes a binary data block m (0/1) with a RSC (recursive systematic
% convolutional) code defined by generator matrix G, returns the output
% in x (0/1), terminates the trellis with all-0 state if termination>0
if nargin<3, termination = 0; end
[N,L] = size(G); % Number of output bits, Constraint length
M = L-1; % Dimension of the state
lu = length(m)+(termination>0)*M; % Length of the input
lm = lu-M; % Length of the message
state = zeros(1,M); % initialize the state vector
x = []; % To generate the codeword
for i = 1:lu
   if termination<=0 | (termination>0 & i<=lm) % ('i<=lm' fixes the undefined 'L_info')
     d_k = m(i);
    elseif termination>0 & i>lm, d_k = rem(G(1,2:L)*state.',2);
   end
   a_k = rem(G(1,:)*[d_k state].',2);
   xp = rem(G(2,:)*[a_k state].',2); % 2nd output (parity) bits
   state = [a_k state(1:M-1)]; % Next state
   x = [x [d_k; xp]]; % since systematic, first output is input bit
end



function x = encoderm(m,G,map,puncture)
% map: Interleaver mapping
% If puncture=0 (unpunctured), it operates with a code rate of 1/3.
% If puncture>0 (punctured), it operates with a code rate of 1/2.
% Multiplexer chooses odd/even-numbered parity bits from RSC1/RSC2.
[N,L] = size(G); % Number of output bits, Constraint length
M = L-1; % Dimension of the state
lm = length(m); % Length of the information message block
lu = lm + M; % Length of the input sequence
x1 = rsc_encode(G,m,1); % 1st RSC encoder output
% interleave input to second encoder
mi = x1(1,map);
x2 = rsc_encode(G,mi,0); % 2nd RSC encoder output
% parallel to serial multiplex to get the output vector
x = [];
if puncture==0 % unpunctured, rate = 1/3
   for i=1:lu, x = [x x1(1,i) x1(2,i) x2(2,i)]; end
 else % punctured into rate 1/2
   for i=1:lu
      if rem(i,2), x = [x x1(1,i) x1(2,i)]; % odd parity bits from RSC1
       else x = [x x1(1,i) x2(2,i)]; % even parity bits from RSC2
      end
   end
end
x = 2*x - 1; % into bipolar format (+1/-1)


function y = demultiplex(r,map,puncture)
% Copyright 1998, Yufei Wu, MPRG lab, Virginia Tech. for academic use
% map: Interleaver mapping
Nb = 3-puncture; lu = length(r)/Nb;
if puncture==0 % unpunctured
   for i=1:lu, y(:,2*i) = r(3*i-[1 0]).'; end
 else % punctured
   for i=1:lu
      i2 = i*2;
      if rem(i,2)>0, y(:,i2)=[r(i2); 0];
       else y(:,i2)=[0; r(i2)];
      end
   end
end
sys_bit_seq = r(1,1:Nb:end); % the systematic bits for both decoders
y(:,1:2:lu*2) = [sys_bit_seq; sys_bit_seq(map)];


%turbo_code_demo.m
m = round(rand(1,lm)); % information message bits
[temp,map] = sort(rand(1,lu)); % random interleaver mapping
x = encoderm(m,G,map,puncture); % encoder output x(+1/-1)
noise = sigma*randn(1,lu*(3-puncture));
r = a.*x + noise; % received bits
y = demultiplex(r,map,puncture); % input for decoder 1 and 2
Ly = 0.5*L_c*y; % Scale the received bits
for iter = 1:Niter
   if iter<2, Lu1=zeros(1,lu); % Initialize extrinsic information for Decoder 1
    else Lu1(map)=L_e2; % (deinterleaved) a priori information
   end
   if dec_alg==0, L_A1=logmap(Ly(1,:),G,Lu1,1); % all information
    else L_A1=sova(Ly(1,:),G,Lu1,1); % all information
   end
   L_e1= L_A1-2*Ly(1,1:2:2*lu)-Lu1; % Eq.(9.4.47)
   Lu2 = L_e1(map); % (interleaved) a priori information for Decoder 2
   if dec_alg==0, L_A2=logmap(Ly(2,:),G,Lu2,2); % all information
    else L_A2=sova(Ly(2,:),G,Lu2,2); % all information
   end
   L_e2= L_A2-2*Ly(2,1:2:2*lu)-Lu2; % Eq.(9.4.47)
   mhat(map)=(sign(L_A2)+1)/2; % Estimate the message bits
   noe(iter)=sum(mhat(1:lu-M)~=m); % Number of bit errors
end % End of iter loop


<Log-MAP (Maximum a Posteriori Probability) Decoding cast into 'logmap()'>

To understand the operation of the turbo decoder, let us begin with the definition of the a priori LLR (log-likelihood ratio), called the a priori L-value, which is a soft value measuring how high the probability of a binary random variable $u$ being +1 is in comparison with that of $u$ being $-1$:

$$L(u) = \ln\frac{P(u=+1)}{P(u=-1)} \quad \text{with } P(u)\text{: the probability of } u \qquad (9.4.33)$$

This is the a priori information known before the result $y$ caused by $u$ becomes available. While the sign of the LLR

$$\hat{u} = \mathrm{sign}\{L(u)\} = \begin{cases} +1 & \text{if } P(u=+1) > P(u=-1)\\ -1 & \text{if } P(u=+1) < P(u=-1) \end{cases} \qquad (9.4.34)$$

is a hard value denoting whether or not the probability of $u$ being +1 is higher than that of $u$ being $-1$, the magnitude $|L(u)|$ of the LLR is a soft value describing the reliability of the decision $\hat{u}$ based on $L(u)$. Conversely, $P(u=+1)$ and $P(u=-1)$ can be derived from $L(u)$:

$$P(u=+1) \overset{(9.4.33)}{=} \frac{e^{L(u)}}{1+e^{L(u)}} \quad\text{and}\quad P(u=-1) = 1-P(u=+1) = \frac{1}{1+e^{L(u)}}$$

This can be expressed as

$$P(u) = \frac{e^{(u+1)L(u)/2}}{1+e^{L(u)}} = \begin{cases} e^{L(u)}/(1+e^{L(u)}) & \text{for } u=+1\\ 1/(1+e^{L(u)}) & \text{for } u=-1 \end{cases} \qquad (9.4.35)$$
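A minimal numerical check (added sketch) of the conversion between a probability and its L-value by Eqs. (9.4.33) and (9.4.35):

P1  = 0.9;                       % P(u=+1)
L_u = log(P1/(1-P1))             % Eq.(9.4.33): L(u) = ln(P(u=+1)/P(u=-1))
P1_back  = exp(L_u)/(1+exp(L_u)) % Eq.(9.4.35) with u=+1 -> 0.9
Pm1_back = 1/(1+exp(L_u))        % Eq.(9.4.35) with u=-1 -> 0.1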


Also, we define the conditioned LLR, which is used to detect the value of $u$ based on the value of another random variable $y$ affected by $u$, as the LAPP (log a posteriori probability):

$$L(u|y) = \ln\frac{P(u=+1|y)}{P(u=-1|y)} \overset{(2.1.4)}{=} \ln\frac{p(y|u=+1)P(u=+1)/p(y)}{p(y|u=-1)P(u=-1)/p(y)} = \ln\frac{p(y|u=+1)}{p(y|u=-1)} + \ln\frac{P(u=+1)}{P(u=-1)} \qquad (9.4.36)$$

Now, let $y$ be the output of a fading AWGN (additive white Gaussian noise) channel (with fading amplitude $a$ and SNR per bit $E_b/N_0$) given $u$ as the input. Then, this equation for the conditioned LLR can be written as

$$L(u|y) = \ln\frac{\exp(-(E_b/N_0)(y-a)^2)}{\exp(-(E_b/N_0)(y+a)^2)} + L(u) = 4a\frac{E_b}{N_0}\,y + L(u) = L_c\,y + L(u) \qquad (9.4.37)$$

$$\text{with } L_c = 4a\,\frac{E_b}{N_0}\text{: the channel reliability.}$$

The objective of the BCJR (Bahl-Cocke-Jelinek-Raviv) MAP (maximum a posteriori probability) algorithm proposed in [B-1] is to detect the value of the $k$th message bit $u_k$ depending on the sign of the following LAPP function:

$$L_A(u_k) = \ln\frac{P(u_k=+1|\mathbf{y})}{P(u_k=-1|\mathbf{y})} = \ln\frac{\sum_{(s',s)\in S^+} p(s',s,\mathbf{y})/p(\mathbf{y})}{\sum_{(s',s)\in S^-} p(s',s,\mathbf{y})/p(\mathbf{y})} = \ln\frac{\sum_{(s',s)\in S^+} p(s',s,\mathbf{y})}{\sum_{(s',s)\in S^-} p(s',s,\mathbf{y})} \qquad (9.4.38)$$

with $S^+/S^-$: the set of all the encoder state transitions from $s'$ to $s$ caused by $u_k=+1/-1$.
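A hedged numerical note on Eq. (9.4.37): the channel reliability $L_c$ scales a received sample into a channel LLR. In "turbo_code_demo.m" the code rate is included (L_c = 4*a*EbN0*rate) because $E_b/N_0$ there is defined per message bit while the received samples carry coded bits.

a = 1; EbN0 = 10^(2/10); rate = 1/2;  % Eb/N0 = 2 dB, rate-1/2 code
L_c = 4*a*EbN0*rate                   % channel reliability per coded bit
y = 0.8; Lu = 0;                      % one received sample, no a priori info
L_u_given_y = L_c*y + Lu              % Eq.(9.4.37): soft channel information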


$P(\cdot)$ and $p(\cdot)$ denote the probability of a discrete-valued random variable and the probability density of a continuous-valued random variable, respectively. The numerator/denominator of this LAPP function are the sums of the probabilities that the channel output will be $\mathbf{y} = \{y_{j<k}, y_k, y_{j>k}\}$ with the encoder state transition from $s'$ to $s$, where each joint probability density can be written as

$$p(s',s,\mathbf{y}) = p(s',s,y_{j<k},y_k,y_{j>k}) = \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s) \qquad (9.4.39)$$

where $\alpha_{k-1}(s') = p(s',y_{j<k})$ is the probability that the state at the $(k-1)$th depth level (stage) in the trellis is $s'$ with the output sequence $y_{j<k}$ generated before the $k$th level, $\gamma_k(s',s) = p(s,y_k|s')$ is the probability that the state transition from $s'$ to $s$ is made with the output $y_k$ generated, and $\beta_k(s) = p(y_{j>k}|s)$ is the probability that the state is $s$ with the output sequence $y_{j>k}$ generated after the $k$th level. The first and third factors can be computed in a forward/backward recursive way:

$$\alpha_k(s) = p(s,y_{j\le k}) = \sum_{s'\in S} p(s',s,y_{j<k},y_k) = \sum_{s'\in S} p(s',y_{j<k})\,p(s,y_k|s') = \sum_{s'\in S} \alpha_{k-1}(s')\,\gamma_k(s',s)$$
$$\text{with } \alpha_0(0)=1,\ \alpha_0(s)=0 \text{ for } s\neq 0 \qquad (9.4.40)$$

$$\beta_{k-1}(s') = p(y_{j\ge k}|s') = \sum_{s\in S} p(s,y_k,y_{j>k}|s') = \sum_{s\in S} p(s,y_k|s')\,p(y_{j>k}|s) = \sum_{s\in S} \gamma_k(s',s)\,\beta_k(s) \qquad (9.4.41a)$$

$$\text{with } \beta_K(s) = \begin{cases} \beta_K(0)=1 \text{ and } \beta_K(s)=0 \text{ for } s\neq 0 & \text{if terminated at the all-zero state}\\ 1/N_s & \text{otherwise} \end{cases} \qquad (9.4.41b)$$

where $N_s = 2^{L-1}$ ($L$: the constraint length) is the number of states and $K$ is the number of decoder input symbols.


The second factor $\gamma_k(s',s)$ can be found as

$$\gamma_k(s',s) = p(s,y_k|s') = \frac{p(s',s,y_k)}{p(s')} = \frac{p(s',s)}{p(s')}\,p(y_k|s',s) = P(u_k)\,p(y_k|u_k)$$

where $P(u_k)$ is given by Eq. (9.4.35) and, for the AWGN channel with fading amplitude $a$ and SNR per bit $E_b/N_0$,

$$p(y_k|u_k) \propto \exp\Big(-\frac{E_b}{N_0}\big[(y_k^s-a\,u_k)^2+(y_k^p-a\,x_k^p(u_k))^2\big]\Big)$$

so that

$$\gamma_k(s',s) = A_k\,e^{u_k L(u_k)/2}\,\exp\Big(\frac{L_c}{2}\big[u_k\,y_k^s + x_k^p(u_k)\,y_k^p\big]\Big) \qquad (9.4.42)$$

$$\text{with } A_k = \frac{e^{L(u_k)/2}}{1+e^{L(u_k)}}\exp\Big(-\frac{E_b}{N_0}\big[(y_k^s)^2+(y_k^p)^2+2a^2\big]\Big) \quad\text{and}\quad L_c = 4a\frac{E_b}{N_0}\ \text{(channel reliability)}$$

Note a couple of things about this equation:
- To compute $\gamma_k$, we need to know the channel fading amplitude $a$ and the SNR per bit $E_b/N_0$.
- $A_k$ does not have to be computed since it will be substituted directly or via $\alpha_k$ (Eq. (9.4.40)) or $\beta_k$ (Eq. (9.4.41)) into Eq. (9.4.39), then substituted into both the numerator and the denominator of Eq. (9.4.38), and finally cancelled.


The following MATLAB routine 'logmap()', corresponding to the block named 'Log-MAP or SOVA' in Fig. 9.15(c), uses these equations to compute the LAPP function (9.4.38). Note that in the routine, the multiplications of the exponential terms are done by adding their exponents, and that is why Alpha and Beta (each representing the exponents $\ln\alpha_k$ and $\ln\beta_k$) are initialized to a large negative number -Infty = -100 (corresponding to a nearly-zero $e^{-100}\approx 0$) under the assumption of an initial all-zero state and for the termination of decoder 1 in the all-zero state, respectively. (Q: Why is Beta initialized to $-\log(N_s) = \ln(1/N_s)$ for non-termination of decoder 2?)

function L_A = logmap(Ly,G,Lu,ind_dec)
% Log_MAP algorithm using straightforward method to compute branch cost
% Input: Ly = scaled received bits Ly=0.5*L_c*y=(2*a*rate*Eb/N0)*y
%        G  = code generator for the RSC code in binary matrix form
%        Lu = extrinsic information from the previous decoder.
%        ind_dec = index of decoder=1/2 (assumed to be terminated/open)
% Output: L_A = ln (P(x=1|y)/P(x=-1|y)), i.e., Log-Likelihood Ratio
%        (soft-value) of estimated message input bit at each level
lu=length(Ly)/2; Infty=1e2; EPS=1e-50; % Number of input bits, etc
[N,L] = size(G); Ns = 2^(L-1); % Number of states in the trellis
Le1=-log(1+exp(Lu)); Le2=Lu+Le1; % ln(exp((u+1)/2*Lu)/(1+exp(Lu)))
% Set up the trellis
[nout,ns,pout,ps] = trellis(G);
% Initialization of Alpha and Beta
Alpha(1,2:Ns) = -Infty; % Eq.(9.4.40) (the initial all-zero state)
if ind_dec==1 % for decoder D1 with termination in all-zero state
   Beta(lu+1,2:Ns) = -Infty; % Eq.(9.4.41b) (the final all-zero state)
 else % for decoder D2 without termination
   Beta(lu+1,:) = -log(Ns)*ones(1,Ns);
end
% Compute gamma at every depth level (stage)
for k = 2:lu+1
   Lyk = Ly(k*2-[3 2]); gam(:,:,k) = -Infty*ones(Ns,Ns);
   for s2 = 1:Ns % Eq.(9.4.42)
      gam(ps(s2,1),s2,k) = Lyk*[-1 pout(s2,2)].' +Le1(k-1);
      gam(ps(s2,2),s2,k) = Lyk*[+1 pout(s2,4)].' +Le2(k-1);
   end
end

For use in the routine, these relations are rewritten in the log domain:

$$\ln P(u) \overset{(9.4.35)}{=} \frac{(u+1)L(u)}{2} - \ln(1+e^{L(u)}) = \begin{cases} L(u)-\ln(1+e^{L(u)}) & \text{for } u=+1\\ -\ln(1+e^{L(u)}) & \text{for } u=-1 \end{cases}$$

$$\ln\alpha_0(0) = \ln 1 = 0,\quad \ln\alpha_0(s) = \ln 0 = -\infty \ \text{for } s\neq 0 \qquad (9.4.40)$$

$$\ln\beta_K(s) = \begin{cases} \ln\beta_K(0)=0 \text{ and } \ln\beta_K(s)=-\infty \text{ for } s\neq 0 & \text{if terminated at the all-zero state}\\ \ln(1/N_s) = -\ln N_s & \text{otherwise} \end{cases} \qquad (9.4.41b)$$

$$\ln\gamma_k(s',s) \overset{(9.4.42)}{=} \ln A_k + \frac{u_k L(u_k)}{2} + \frac{L_c}{2}\big[u_k y_k^s + x_k^p(u_k) y_k^p\big] = \begin{cases} \ln A_k' + L(u_k) - \ln(1+e^{L(u_k)}) + \frac{L_c}{2}\big[\,y_k^s + x_k^p(+1)\,y_k^p\big] & \text{for } u_k=+1\\ \ln A_k' - \ln(1+e^{L(u_k)}) + \frac{L_c}{2}\big[-y_k^s + x_k^p(-1)\,y_k^p\big] & \text{for } u_k=-1 \end{cases}$$

where $\ln A_k'$ collects the $u_k$-independent channel terms, dropped in the routine since they cancel in Eq. (9.4.38). These cases correspond to the variables Le2/Le1 and the branch costs gam in 'logmap()'.


% logmap() (continued): forward/backward recursions and the soft output
% Compute Alpha in forward recursion
for k = 2:lu
   for s2 = 1:Ns
      alpha = sum(exp(gam(:,s2,k).'+Alpha(k-1,:))); % Eq.(9.4.40)
      if alpha<EPS, Alpha(k,s2)=-Infty; else Alpha(k,s2)=log(alpha); end
   end
   tempmax(k) = max(Alpha(k,:)); Alpha(k,:) = Alpha(k,:)-tempmax(k);
end
% Compute Beta in backward recursion
for k = lu:-1:2
   for s1 = 1:Ns
      beta = sum(exp(gam(s1,:,k+1)+Beta(k+1,:))); % Eq.(9.4.41)
      if beta<EPS, Beta(k,s1)=-Infty; else Beta(k,s1)=log(beta); end
   end
   Beta(k,:) = Beta(k,:) - tempmax(k);
end
% Compute the soft output LLR for the estimated message input
for k = 1:lu
   for s2 = 1:Ns % Eq.(9.4.39)
      temp1(s2)=exp(gam(ps(s2,1),s2,k+1)+Alpha(k,ps(s2,1))+Beta(k+1,s2));
      temp2(s2)=exp(gam(ps(s2,2),s2,k+1)+Alpha(k,ps(s2,2))+Beta(k+1,s2));
   end
   L_A(k) = log(sum(temp2)+EPS) - log(sum(temp1)+EPS); % Eq.(9.4.38)
end

$$\alpha_k(s) = \sum_{s'\in S}\alpha_{k-1}(s')\,\gamma_k(s',s) = \sum_{s'\in S}\exp\big(\ln\alpha_{k-1}(s') + \ln\gamma_k(s',s)\big) \qquad (9.4.40)$$

$$p(s',s,\mathbf{y}) \overset{(9.4.39)}{=} \alpha_{k-1}(s')\,\gamma_k(s',s)\,\beta_k(s) = \exp\big(\ln\alpha_{k-1}(s') + \ln\gamma_k(s',s) + \ln\beta_k(s)\big)$$

$$\beta_{k-1}(s') = \sum_{s\in S}\gamma_k(s',s)\,\beta_k(s) = \sum_{s\in S}\exp\big(\ln\gamma_k(s',s) + \ln\beta_k(s)\big) \qquad (9.4.41a)$$


function [nout,nstate,pout,pstate] = trellis(G)
% copyright 1998, Yufei Wu, MPRG lab, Virginia Tech for academic use
% set up the trellis with code generator G in binary matrix form.
% G: Generator matrix with feedback/feedforward connection in row 1/2
%    e.g. G=[1 1 1; 1 0 1] for the turbo encoder in Fig. 9.15(a)
% nout(i,1:2): next output [xs=m xp](-1/+1) for state=i, message in=0
% nout(i,3:4): next output [xs=m xp](-1/+1) for state=i, message in=1
% nstate(i,1): next state(1,...2^M) for state=i, message input=0
% nstate(i,2): next state(1,...2^M) for state=i, message input=1
% pout(i,1:2): previous out [xs=m xp](-1/+1) for state=i, message in=0
% pout(i,3:4): previous out [xs=m xp](-1/+1) for state=i, message in=1
% pstate(i,1): previous state having come to state i with message in=0
% pstate(i,2): previous state having come to state i with message in=1
% See Fig. 9.16 for the meanings of the output arguments.
[N,L] = size(G); % Number of output bits and Constraint length
M=L-1; Ns=2^M; % Number of bits per state and Number of states
% Set up next_out and next_state matrices for RSC code generator G
for state_i=1:Ns
   state_b = deci2bin1(state_i-1,M); % Binary state
   for input_bit=0:1
      d_k = input_bit;
      a_k = rem(G(1,:)*[d_k state_b]',2); % Feedback in Fig. 9.15(a)
      out(input_bit+1,:) = [d_k rem(G(2,:)*[a_k state_b]',2)]; % Forward
      state(input_bit+1,:) = [a_k state_b(1:M-1)]; % Shift register
   end
   nout(state_i,:) = 2*[out(1,:) out(2,:)]-1; % bipolarize
   nstate(state_i,:) = [bin2deci(state(1,:)) bin2deci(state(2,:))]+1;
end
% Possible previous states having reached the present state
% with input_bit=0/1
for input_bit=0:1
   bN = input_bit*N; b1 = input_bit+1; % Number of output bits = 2
   for state_i=1:Ns
      pstate(nstate(state_i,b1),b1) = state_i;
      pout(nstate(state_i,b1),bN+[1:N]) = nout(state_i,bN+[1:N]);
   end
end
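A brief usage sketch (added here, not from the book; it relies on the book's deci2bin1()/bin2deci() helpers being on the path):

G = [1 1 1; 1 0 1];                   % RSC generator of Fig. 9.15(a)
[nout,nstate,pout,pstate] = trellis(G);
nstate  % nstate(i,1)/nstate(i,2): next state from state i for input 0/1
nout    % bipolarized (-1/+1) outputs [xs xp] for input 0 and input 1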


<SOVA (Soft-In/Soft-Output Viterbi Algorithm) Decoding cast into 'sova()'> [H-2]

The objective of the SOVA-MAP decoding algorithm is to find the state sequence $\mathbf{s}^{(i)}$ and the corresponding input sequence $\mathbf{u}^{(i)}$ which maximizes the following MAP (maximum a posteriori probability) function

$$P(\mathbf{s}^{(i)}|\mathbf{y}) \overset{(2.1.4)}{=} \frac{P(\mathbf{y}|\mathbf{s}^{(i)})\,P(\mathbf{s}^{(i)})}{p(\mathbf{y})} \;\overset{\text{proportional}}{\sim}\; p(\mathbf{y}|\mathbf{s}^{(i)})\,P(\mathbf{s}^{(i)}) \quad \text{for given } \mathbf{y} \qquad (9.4.43)$$

This probability would be found from the multiplications of the branch transition probabilities $\gamma_k$ defined by Eq. (9.4.42). However, as is done in the routine 'logmap()', we will compute the path metric $M(\mathbf{s}^{(i)})$ by accumulating the logarithm or exponent of only the terms affected by $u_k^{(i)}$ as follows:

$$M_k(\mathbf{s}^{(i)}) = M_{k-1}(\mathbf{s}'^{(i)}) + \frac{u_k^{(i)} L(u_k)}{2} + \frac{L_c}{2}\big[u_k^{(i)}\,y_k^s + x_k^p(u_k^{(i)})\,y_k^p\big] \qquad (9.4.44)$$

The decoding algorithm cast into the routine ‘sova(Ly,G,Lu,ind_dec)’ proceeds as follows:

(Step 0) Find the number of $y_k = [y_k^s\ y_k^p]$'s in Ly given as the first input argument: $l_u$ = length(Ly)/2. Find the number $N$ of output bits of the two encoders and the constraint length $L$ from the row and column dimensions of the generator matrix $G$. Let the number of states be $N_s = 2^{L-1}$, the SOVA window size $\delta = 30$, and the depth level $k = 0$. Under the assumption of all-zero state at the initial stage (depth level zero), initialize the path metric to $M_0(s_0) = 0\ (=\ln 1$, corresponding to probability 1) only for the all-zero state $s_0$ and to $-\infty\ (=\ln 0$, corresponding to probability 0) for the other states $s_j$ ($j \neq 0$).


(Step 1) Increment $k$ by one and determine which one of the hypothetical encoder inputs (messages) $u_k = 0$ or $u_k = 1$ would result in the larger path metric $M_k(s_i)$ (computed by Eq. (9.4.44)) for every state $s_i$ ($i = 0:N_s-1$) at level $k$, choose the corresponding path as the survivor path, and store the estimated value of $u_k$ (into 'pinput(i,k)') and the relative path metric difference

$$\Delta M_k(s_i) = M_k(s_i\,|\,u_k=0/1) - M_k(s_i\,|\,u_k=1/0)\ \ (\ge 0) \qquad (9.4.45)$$

of the survivor path over the other (non-surviving) path ('DM(i,k)') for every state at the stage. Repeat this step (in the forward direction) till $k = l_u$.

(Step 2) Depending on the value of the fourth input argument 'ind_dec', determine the all-zero state $s_0$ or any state belonging to the most likely path (with $\max_s M_k(s)$) to be the final state $\hat{s}(k)$ ('shat(k)').

(Step 3) Find $\hat{u}(k)$ ('uhat(k)') from 'pinput(i,k)' (constructed at Step 1) and the corresponding previous state $\hat{s}(k-1)$ ('shat(k-1)') from the trellis structure. Decrement $k$ by one. Repeat this step (in the backward direction) till $k = 0$.

(Step 4) To find the reliability of $\hat{u}(k)$, let LLR $= \Delta M_k(\hat{s}(k))$. Trace back the non-surviving paths from the optimal states $\hat{s}(k+i)$ (for $i$ such that $1 \le i \le \delta$, $k+i \le l_u$), and find the nearly-optimal inputs $\hat{u}_i(k)$. If $\hat{u}_i(k) \neq \hat{u}(k)$ for some $i$, let LLR $= \min\{\mathrm{LLR},\,\Delta M_{k+i}(\hat{s}(k+i))\}$. In this way, find the LLR estimate and multiply it with the bipolarized value of $\hat{u}(k)$ to determine the soft output or L-value:

$$L_A(\hat{u}(k)) = (2\hat{u}(k)-1)\cdot\mathrm{LLR} \qquad (9.4.46)$$


function L_A = sova(Ly,G,Lu,ind_dec)
% Copyright: Yufei Wu, 1998, MPRG lab, Virginia Tech for academic use
% This implements Soft Output Viterbi Algorithm in trace back mode
% Input: Ly : Scaled received bits Ly=0.5*L_c*y=(2*a*rate*Eb/N0)*y
%        G  : Code generator for the RSC code in binary matrix form
%        Lu : Extrinsic information from the previous decoder.
%        ind_dec: Index of decoder=1/2
%          (assumed to be terminated in all-zero state/open)
% Output: L_A : Log-Likelihood Ratio (soft-value) of
%          estimated message input bit u(k) at each stage,
%          ln (P(u(k)=1|y)/P(u(k)=-1|y))
lu = length(Ly)/2; % Number of y=[ys yp] in Ly
lu1 = lu+1; Infty = 1e2;
[N,L] = size(G); Ns = 2^(L-1); % Number of states
delta = 30; % SOVA window size
% Make decision after 'delta' delay. Tracing back from (k+delta) to k,
% decide bit k when received bits for bit (k+delta) are processed.
% Set up the trellis defined by G.
[nout,ns,pout,ps] = trellis(G);
% Initialize the path metrics to -Infty
Mk(1:Ns,1:lu1)=-Infty; Mk(1,1)=0; % Only initial all-0 state possible
% Trace forward to compute all the path metrics
for k=1:lu
   Lyk = Ly(k*2-[1 0]); k1=k+1;
   for s=1:Ns % Eq.(9.4.44), Eq.(9.4.45)
      Mk0 = Lyk*pout(s,1:2).' -Lu(k)/2 +Mk(ps(s,1),k);
      Mk1 = Lyk*pout(s,3:4).' +Lu(k)/2 +Mk(ps(s,2),k);
      if Mk0>Mk1, Mk(s,k1)=Mk0; DM(s,k1)=Mk0-Mk1; pinput(s,k1)=0;
       else Mk(s,k1)=Mk1; DM(s,k1)=Mk1-Mk0; pinput(s,k1)=1;
      end
   end
end
% Trace back from all-zero state or the most likely state for D1/D2
% to get input estimates uhat(k), and the most likely path (state) shat
if ind_dec==1, shat(lu1)=1; else [Max,shat(lu1)]=max(Mk(:,lu1)); end
for k=lu:-1:1
   uhat(k)=pinput(shat(k+1),k+1); shat(k)=ps(shat(k+1),uhat(k)+1);
end
% As the soft-output, find the minimum DM over a competing path
% with different information bit estimate.
for k=1:lu
   LLR = min(Infty,DM(shat(k+1),k+1));
   for i=1:delta
      if k+i<lu1
        u_ = 1-uhat(k+i); % the competing information bit
        tmp_state = ps(shat(k+i+1),u_+1);
        for j=i-1:-1:0 % trace the non-surviving path back to stage k
           pu=pinput(tmp_state,k+j+1); tmp_state=ps(tmp_state,pu+1);
        end
        if pu~=uhat(k), LLR=min(LLR,DM(shat(k+i+1),k+i+1)); end
      end
   end
   L_A(k)=(2*uhat(k)-1)*LLR; % Eq.(9.4.46)
end



Now, it is time to take a look at the main program "turbo_code_demo.m", which uses the routine 'logmap()' or 'sova()' (corresponding to the block named 'Log-MAP or SOVA' in Fig. 9.15(c)) as well as the routines 'encoderm()' (corresponding to Fig. 9.15(a)), 'rsc_encode()', 'demultiplex()' (corresponding to Fig. 9.15(b)), and 'trellis()' to simulate the turbo coding system depicted in Fig. 9.15. All of the programs listed here in connection with turbo coding stem from the routines developed by Yufei Wu in the MPRG (Mobile/Portable Radio Research Group) of Virginia Tech (Polytechnic Institute and State University). The following should be noted:

- One thing to note is that the extrinsic information $L_e^{j}$ to be presented to one decoder $j$ by the other decoder $i$ should contain only the information of decoder $i$ that is obtained from its own parity bits, which are not available to decoder $j$. Accordingly, each decoder should remove the information about the systematic bits $y^s$ (available commonly to both decoders) and the a priori information $L(u)$ (provided by the other decoder) from the overall information $L_A(u)$ to produce the extrinsic information that will be presented to the other decoder. (Would your friend be glad if you gave his/her present back to him/her or presented him/her what he/she had already got?) To prepare an equation for this information processing job of each decoder, we extract only the terms affected by $u_k$ from Eqs. (9.4.44) and (9.4.42) (each providing the basis for the path metric (Eq. (9.4.45)) and the LLR (Eq. (9.4.38)), respectively) to write

$$\frac{u_k L(u_k)}{2} + \frac{L_c}{2}u_k y_k^s = \frac{u_k}{2}\big[L(u_k) + L_c\,y_k^s\big]$$

which conforms with Eq. (9.4.37) for the conditioned LLR $L(u|y)$. To prepare the extrinsic information for the other decoder, this information should be removed from the overall information $L_A(u)$ produced by the routine 'logmap()' or 'sova()' as

$$L_e(u_k) = L_A(u_k) - L(u_k) - L_c\,y_k^s \qquad (9.4.47)$$


- Another thing to note is that, as shown in Fig. 9.15(c), the basis for the final decision about $u$ is the deinterleaved overall information $L_A^{2}$ that is attributed to decoder 2. Accordingly, the turbo decoder should know the pseudo-random sequence 'map' (that has been used for interleaving by the transmitter) as well as the fading amplitude and SNR of the channel.
- The trellis structure and the output arguments produced by the routine 'trellis()' are illustrated in Fig. 9.16.

Interested readers are invited to run the program "turbo_code_demo.m" with the value of the control constant 'dec_alg' set to 0/1 for the Log-MAP/SOVA decoding algorithm and see the BER becoming lower as the decoding iteration proceeds. How do the turbo codes work? How do the two decoding algorithms, Log-MAP and SOVA, compare? Is there any weak point of turbo codes? What is the countermeasure against the weak point, if any? Unfortunately, answering such questions is difficult for the authors and therefore beyond the scope of this book. As can be seen from the simulation results, turbo codes have an excellent BER performance close to the Shannon limit at low and medium SNRs. However, the decreasing rate of the BER curve of a turbo code can be very low at high SNR depending on the interleaver and the free distance of the code, which is called the 'error floor' phenomenon. Besides, turbo codes need not only a large interleaver and block size but also many iterations to achieve such a good BER performance, which increases the complexity and latency (delay) of the decoder.


%turbo_code_demo.m
% simulates the classical turbo encoding-decoding system.
% 1st encoder is terminated with tail bits. (lm+M) bits are scrambled
% and passed to 2nd encoder, which is left open without termination.
clear
dec_alg = 1; % 0/1 for Log-MAP/SOVA
puncture = 1; % puncture or not
rate = 1/(3-puncture); % Code rate
lu = 1000; % Frame size
Nframes = 100; % Number of frames
Niter = 4; % Number of iterations
EbN0dBs = 2.6; %[1 2 3];
N_EbN0dBs = length(EbN0dBs);
G = [1 1 1; 1 0 1]; % Code generator
a = 1; % Fading amplitude; a=1 in AWGN channel
[N,L]=size(G); M=L-1; lm=lu-M; % Length of message bit sequence
for nENDB = 1:N_EbN0dBs
   EbN0 = 10^(EbN0dBs(nENDB)/10); % convert Eb/N0[dB] to normal number
   L_c = 4*a*EbN0*rate; % reliability value of the channel
   sigma = 1/sqrt(2*rate*EbN0); % standard deviation of AWGN noise
   noes(nENDB,:) = zeros(1,Niter);
   for nframe = 1:Nframes
      m = round(rand(1,lm)); % information message bits
      [temp,map] = sort(rand(1,lu)); % random interleaver mapping
      x = encoderm(m,G,map,puncture); % encoder output x(+1/-1)
      noise = sigma*randn(1,lu*(3-puncture));
      r = a.*x + noise; % received bits
      y = demultiplex(r,map,puncture); % input for decoder 1 and 2
      Ly = 0.5*L_c*y; % Scale the received bits
      for iter = 1:Niter
         ... ... ... ... ... ... ... ...
         mhat(map)=(sign(L_A2)+1)/2; % Estimate the message bits
         noe(iter)=sum(mhat(1:lu-M)~=m); % Number of bit errors
      end % End of iter loop
      % Total number of bit errors for all iterations
      noes(nENDB,:) = noes(nENDB,:) + noe;
      ber(nENDB,:) = noes(nENDB,:)/nframe/(lu-M); % Bit error rate
      for i=1:Niter, fprintf('%14.4e ', ber(nENDB,i)); end
   end % End of nframe loop
end % End of nENDB loop


%do_BCH_BPSK_sim.m
clear, clf
K=16; % Number of input bits to the BCH encoder (message length)
N=31; % Number of output bits from the BCH encoder (codeword length)
Rc=K/N; % Code rate to be multiplied with the SNR in AWGN channel block
b=1; M=2^b; % Number of bits per symbol and modulation order
T=0.001/K; Ts=b*T; % Sample time and Symbol time
EbN0dBs=[0:4:8]; SNRbdBs=EbN0dBs+3; % for simulated BER
EbN0dBs_t=0:0.1:10; EbN0s_t=10.^(EbN0dBs_t/10); % for theoretical BER
SNRbdBs_t=EbN0dBs_t+3;
for i=1:length(EbN0dBs)
   EbN0dB=EbN0dBs(i);
   sim('BCH_BPSK_sim'); % Run the Simulink model
   BERs(i)=BER(1); % just ber among {ber, # of errors, total # of bits}
   fprintf(' With EbN0dB=%4.1f, BER=%10.4e=%d/%d\n', EbN0dB,BER);
end
BER_theory= prob_error(SNRbdBs_t,'PSK',b,'BER');
SNRbcdB_t=SNRbdBs_t+10*log10(Rc);
et=prob_error(SNRbcdB_t,'PSK',b,'BER');
[g_BCH,No_of_correctable_error_bits] = bchgenpoly(N,K);
pemb_theory=prob_err_msg_bit(et,N,No_of_correctable_error_bits);
semilogy(EbN0dBs,BERs,'r*', EbN0dBs_t,BER_theory,'k', ...
         EbN0dBs_t,pemb_theory,'b:')
xlabel('Eb/N0[dB]'); ylabel('BER'); title('BER of BCH code with BPSK');
legend('Simulation','Theoretical-No coding','Theoretical-BCH coding');


$$p_{e,mb} \overset{(9.4.11)}{\approx} \frac{1}{N}\sum_{k=d_c+1}^{N} k\binom{N}{k}\,p^k(1-p)^{N-k} \quad (d_c\text{: the number of correctable error bits})$$

$$\text{SNRbdB} = 10\log_{10}\frac{E_b}{N_0/2} = 10\log_{10}\frac{E_b}{N_0} + 3\ \text{[dB]}$$
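Since the routine prob_err_msg_bit() is used above but not listed, here is a minimal sketch, assuming it simply evaluates Eq. (9.4.11) (variable names illustrative):

et = 0.01; N = 31; dc = 3;   % crossover probability, codeword length,
                             %  number of correctable error bits
pemb = 0;
for k = dc+1:N               % uncorrectable error patterns only
   pemb = pemb + k/N*nchoosek(N,k)*et^k*(1-et)^(N-k);
end
pemb                         % approximate message bit error probability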


%dc09p07.m
% To practice using convenc() and vitdec() for channel coding
clear, clf
Gc=[4 5 11; 1 4 2]; % Octal code generator matrix
K=size(Gc,1); % Number of encoder input bits
% Constraint length vector
Gc_m=max(Gc.');
for i=1:length(Gc_m), Lc(i)=length(deci2bin1(oct2dec(Gc_m(i)))); end
trel=poly2trellis(Lc,Gc);
Tbdepth=sum(Lc)*5; delay=Tbdepth*K;
lm=1e5; msg=randint(1,lm);
transmission_ber=0.02;
notbe=round(transmission_ber*lm); % Number of transmitted bit errors
ch_input=convenc([msg zeros(1,delay)],trel);
% Received/modulated/detected signal
ch_output= rem(ch_input+randerr(1,length(ch_input),notbe),2);
decoded_trunc= vitdec(ch_output,trel,Tbdepth,'trunc','hard');
ber_trunc= sum(msg~=decoded_trunc(????))/lm;
decoded_cont= vitdec(ch_output,trel,Tbdepth,'cont','hard');
ber_cont=sum(msg~=decoded_cont(????????????))/lm;
% It is indispensable to use the delay for the decoding result
% obtained using vitdec(,,,'cont',)
nn=[0:100-1];
subplot(221), stem(nn,msg(nn+1)), title('Message sequence')
subplot(223), stem(nn,decoded_cont(nn+1)), hold on
stem(delay,0,'rx')
decoded_term= vitdec(ch_output,trel,Tbdepth,'term','hard');
ber_term=sum(msg~=decoded_term(????))/lm;
fprintf('\n BER_trunc BER_cont BER_term')
fprintf('\n %9.2e %9.2e %9.2e\n', ber_trunc,ber_cont,ber_term)


function [pemb,nombe,notmb]=Viterbi_QAM(Gc,b,SNRbdB,MaxIter)
if nargin<4, MaxIter=1e5; end
if nargin<3, SNRbdB=5; end
if nargin<2, b=4; end
[K,N]=size(Gc); Rc=K/N; Gc_m=max(Gc.');
% Constraint length vector
for i=1:length(Gc_m), Lc(i)=length(deci2bin1(oct2dec(Gc_m(i)))); end
Nf=144; % Number of bits per frame
Nmod=Nf*N/K/b; % Number of QAM symbols per modulated frame
SNRb=10.^(SNRbdB/10); SNRbc=SNRb*Rc;
% Rc does not need to be multiplied since noise will be added per symbol.
sqrtSNR=sqrt(2*b*SNRb); % Complex noise for b-bit (coded) symbol
trel=poly2trellis(Lc,Gc); Tbdepth=5; delay=Tbdepth*K;
nombe=0; Target_no_of_error=100;
for iter=1:MaxIter
   msg=randint(1,Nf); % Message vector
   coded= convenc(msg,trel); % Convolutional encoding
   modulated= QAM(coded,b); % 2^b-QAM-Modulation
   r= modulated +(randn(1,Nmod)+j*randn(1,Nmod))/sqrtSNR;
   demodulated= QAM_dem(r,b); % 2^b-QAM-Demodulation
   decoded= vitdec(demodulated,trel,Tbdepth,'trunc','hard');
   nombe = nombe+sum(msg~=decoded(1:Nf)); % Number of message bit errors
   if nombe>Target_no_of_error, break; end
end
notmb=Nf*iter; % Number of total message bits
pemb=nombe/notmb; % Message bit error probability
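A possible usage sketch (added; argument values illustrative, and the function stops early once Target_no_of_error=100 message bit errors are counted):

Gc = [133 171]; b = 4; SNRbdB = 6;   % rate-1/2 code with 16-QAM at 6 dB
[pemb,nombe,notmb] = Viterbi_QAM(Gc,b,SNRbdB,1000)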


%do_Viterbi_QAM.m
clear, clf
Nf=144; Tf=0.001; Tb=Tf/Nf; % Frame size, Frame time, and Sample/Bit time
Gc=[133 171]; % Octal code generator matrix
[K,N]=size(Gc); Rc=K/N; % Message/Codeword length and Code rate
% Constraint length vector
Gc_m=max(Gc.');
for i=1:length(Gc_m)
   Lc(i)=length(deci2bin1(oct2dec(Gc_m(i))));
end
Tbdepth=sum(Lc)*5; delay=Tbdepth*K; % Traceback depth and Decoding delay
b=4; M=2^b; % Number of bits per symbol and Modulation order
Ts=b*Rc*Tb; % Symbol time corresponding to b*Rc message bits
N_factor=sqrt(2*(M-1)/3); % Eq.(7.5.4a)
EbN0dBs=[3 6]; Target_no_of_error=50;
for i=1:length(EbN0dBs)
   EbN0dB=EbN0dBs(i); SNRbdB=EbN0dB+3;
   randn('state', 0);
   [pemb,nombe,notmb]=???????_QAM(Gc,b,SNRbdB,Target_no_of_error); % MATLAB
   pembs(i)=pemb;
   sim('Viterbi_QAM_sim'); pembs_sim(i)=BER(1); % Simulink
end
[pembs; pembs_sim] % Compare BERs obtained from MATLAB and Simulink
EbN0dBs_t=0:0.1:10; SNRbdBs_t=EbN0dBs_t+3;
BER_theory=prob_error(SNRbdBs_t,'QAM',b,'BER');
semilogy(EbN0dBs,pembs,'r*', EbN0dBs_t,BER_theory,'b')
xlabel('Eb/N0[dB]'); ylabel('BER');


function qamseq=QAM(bitseq,b)
bpsym = nextpow2(max(bitseq)); % number of bits per symbol
if bpsym>0, bitseq = deci2bin(bitseq,bpsym); end
if b==1, qamseq=bitseq*2-1; return; end % BPSK modulation
% 2^b-QAM modulation
N0=length(bitseq); N=ceil(N0/b);
bitseq=bitseq(:).'; bitseq=[bitseq zeros(1,N*b-N0)];
b1=ceil(b/2); b2=b-b1; b12=2^b1; b22=2^b2;
g_code1=2*gray_code(b1)-b12+1; g_code2=2*gray_code(b2)-b22+1;
% (the next two sums are reconstructed from the garbled original,
%  consistent with QAM_dem() below: sums of squared odd levels)
tmp1=sum([1:2:b12-1].^2)*b22; tmp2=sum([1:2:b22-1].^2)*b12;
M=2^b; Kmod=sqrt(2*(M-1)/3);
%Kmod=sqrt((tmp1+tmp2)/2/(2^b/4)) % Normalization factor
qamseq=[];
for i=0:N-1
   bi=b*i;
   i_real=bin2deci(bitseq(bi+[1:b1]))+1;
   i_imag=bin2deci(bitseq(bi+[b1+1:b]))+1;
   qamseq=[qamseq (g_code1(i_real)+j*g_code2(i_imag))/Kmod];
end

function [g_code,b_code]=gray_code(b)
N=2^b; g_code=0:N-1;
if b>1, g_code=gray_code0(g_code); end
b_code=deci2bin(g_code);

function g_code=gray_code0(g_code)
N=length(g_code); N2=N/2;
if N>=4, g_code(N2+1:N)=fftshift(g_code(N2+1:N)); end
if N>4
   g_code=[gray_code0(g_code(1:N2)) gray_code0(g_code(N2+1:N))];
end


function bitseq=QAM_dem(qamseq,b,bpsym)
% BPSK demodulation
if b==1, bitseq=(qamseq>=0); return; end
% 2^b-QAM demodulation
N=length(qamseq);
b1=ceil(b/2); b2=b-b1;
g_code1=2*gray_code(b1)-2^b1+1; g_code2=2*gray_code(b2)-2^b2+1;
tmp1=sum([1:2:2^b1-1].^2)*2^b2; tmp2=sum([1:2:2^b2-1].^2)*2^b1;
Kmod=sqrt((tmp1+tmp2)/2/(2^b/4)); % Normalization factor
g_code1=g_code1/Kmod; g_code2=g_code2/Kmod;
bitseq=[];
for i=1:N
   [emin1,i1]=min(abs(real(qamseq(i))-g_code1));
   [emin2,i2]=min(abs(imag(qamseq(i))-g_code2));
   bitseq=[bitseq deci2bin1(i1-1,b1) deci2bin1(i2-1,b2)];
end
if (nargin>2)
   N = length(bitseq)/bpsym;
   bitmatrix = reshape(bitseq,bpsym,N).';
   for i=1:N, intseq(i)=bin2deci(bitmatrix(i,:)); end
   bitseq = intseq;
end
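A hedged round-trip check of QAM()/QAM_dem() (added here; it assumes the book's routines above and the era's randint() are on the path):

b = 4;                     % 16-QAM
bits = randint(1,40);      % 40 bits -> 10 QAM symbols
s = QAM(bits,b);           % Gray-mapped, power-normalized symbols
bits_hat = QAM_dem(s,b);   % minimum-distance demapping
isequal(bits,bits_hat)     % should be 1 with no channel noise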
