INFORMATION THEORY AND CODING TECHNIQUES
EXPERIMENT NO. 1
Aim: Implementation of algorithm for determination of various
entropies and mutual information of the given channel.
Test various types of channel such as
1. Noise free channel
2. Error free channel
3. Binary symmetric channel
4. Noisy channel
Equipment: PC with ‘C’ programming tool.
Theory:
1. What is a discrete memoryless source and a discrete memoryless
channel?
2. Write the definition, formula and units for the following:
i) Information
ii) Entropy
iii) Information rate
iv) Mutual information
3. What are the different types of entropies?
Algorithm:
Part: I
1. Input the no. of inputs of a channel.
2. Input the no. of outputs of a channel.
3. Input the channel matrix. Test the condition that sum of all the
entries in each row should be exactly equal to 1.
4. Input the channel input probabilities, i.e. P[X]. The condition
mentioned in step 3 (entries summing to exactly 1) applies here as
well. Calculate the entropy of the channel input, i.e. H(X).
5. Calculate output probability matrix P[Y], by multiplying input
probability matrix by channel matrix. Also calculate entropy
of channel output. i.e. H(Y).
6. Convert input probability matrix into diagonal matrix.
i.e. P[X]d
7. Calculate the joint probability matrix by multiplying input
probability matrix in diagonal form by channel matrix.
8. Calculate the joint entropy with the help of the formula

H(X,Y) = SUM(i=1..m) SUM(j=1..n) p(xi, yj) * log[ 1 / p(xi, yj) ]
9. From the values of H(X),H(Y) and H(X,Y) we can calculate
conditional entropies as
H(Y/X)=H(X,Y)-H(X)
H(X/Y)=H(X,Y)-H(Y)
10. Also we can calculate the mutual information as
I(X;Y) = H(X) - H(X/Y)
I(X;Y) = H(Y) - H(Y/X)
Part-II
A) Noise free channel
1. For a noise free channel enter only the diagonal elements of
the joint probability matrix. The condition mentioned in step 3
should be satisfied.
2. Repeat steps from 4 to 10.
B) Error free channel
1. A channel is said to be error free if the capacity of the
channel is greater than the entropy of the channel input. So first
calculate the capacity of the channel using the formula
Capacity C = log2(M) bits/symbol
where M is the number of inputs of the channel.
2. Calculate entropy of the input.
3. Compare the capacity of the channel with the channel input
entropy. Test the condition for an error free channel.
C) Binary Symmetric Channel
1. A BSC is characterized by
No. of inputs = No. of outputs = 2
2. Its conditional probability matrix is as follows:

P(Y/X) = | 1-P   P  |
         |  P   1-P |

3. Derive the joint probability matrix from this matrix by
multiplying it by
P(X) = [ 0  1 ]
So the matrix which we take as input from the user is

P(X,Y) = | 0   0  |
         | P  1-P |

where P is the probability which the user should enter.
4. Then repeat steps 4 to 8 to calculate all the required
quantities.
D) Noisy channel
1. Repeat the first two steps from Part I.
2. Enter the joint probability matrix, removing the condition
that the sum of all the elements equals 1.
3. Repeat all the remaining steps to calculate the required
quantities.
Conclusion:
1. Why do we calculate entropy?
2. What is the significance of conditional entropy?
EXPERIMENT NO. 2
Aim: Implementation of algorithm for generation and evaluation of
variable length source coding using
1) Shannon-Fano Coding
2) Huffman Coding
Equipment:
PC with C as a programming tool
PART 1) Shannon-Fano coding
Theory:
1) Need of source coding
2) Explain source encoder with block diagram
3) What is variable length source coding? Explain with example.
4) State Shannon’s first theorem.
Algorithm:
1) Accept number of symbols and their respective probabilities.
Sort the probabilities in the descending order.
2) Partition these probabilities such that their sum should be equal
or nearly equal.
3) Assign ‘0’ to upper group symbols and ‘1’ to lower group
symbols.
4) Repeat steps 2 and 3 till all the probabilities are finished.
5) To form the codeword, read the assigned bits from left to right.
Example:
Symbol   Probability   Coding steps (I II III IV)   Codeword
S1       0.4           0                            0
S2       0.3           1 0                          10
S3       0.2           1 1 0                        110
S4       0.05          1 1 1 0                      1110
S5       0.05          1 1 1 1                      1111
PART 2) Huffman Coding
Theory:
1) What are the types of source coding?
2) State advantages of Huffman coding.
3) How do we calculate the average codeword length and efficiency?
Explain with formula and example.
Algorithm:
1) List the source symbols in order of decreasing probability.
2) Combine the probabilities of the two symbols having the lowest
probabilities and reorder the resultant probabilities. The same procedure
is repeated until there are two ordered probabilities remaining.
3) Start encoding with the last reduction, which consists of exactly two
ordered probabilities. Assign 0 as the first digit in the codeword for all
the source symbols associated with the first probability; assign 1 to the
second probability.
4) Now go back and assign 0 and 1 to the second digit for the two
probabilities that were combined in the previous reduction step,
retaining all assignments made in step 3.
5) Keep regressing this way until the first column is reached.
Example:
Symbol   Probability   step 1   step 2   step 3   Codeword
S1       0.4           0.4      0.4      0.6      1
S2       0.2           0.2      0.4      0.4      01
S3       0.2           0.2      0.2               000
S4       0.1           0.2                        0010
S5       0.1                                      0011
Conclusion:
Write your own conclusion.
EXPERIMENT NO. 3
Aim: Implementation of algorithm for generating and decoding of
linear block code.
Apparatus: PC with MATLAB programming tool.
Theory:
Channel Coding: In channel coding, we map the incoming data
sequence to a channel input sequence. This encoding procedure is done
by the channel encoder. The encoder sequence is then transmitted over
the noisy channel. The channel output sequence at the receiver is
inverse mapped back into the input data sequence. This is called the
decoding procedure and is carried out by the channel decoder. Both the
encoder and decoder are under the designer’s control.
The encoder introduces redundancy in the prescribed manner.
The decoder exploits redundancy in a prescribed manner in order to
reconstruct the source sequence as accurately as possible. The channel
coding makes possible reliable communication over unreliable
channels. Channel coding is also called ‘error control coding’.
Questions: 1) Explain the necessity of the channel coding.
2) Define and explain the following terms.
1) Weight of the code
2) Minimum distance
3) Syndrome
4) Hamming bound.
Encoding and decoding procedure of the linear block code:
Consider an (n,k) systematic linear block code (LBC), where n is the
length of the codeword and k is the length of the message.
Encoding: Let us consider the (6,3) LBC with the following parity
matrix

P = | 1 0 1 |
    | 0 1 1 |
    | 1 1 0 |

Here n = 6 and k = 3, hence the number of redundant bits is (n-k) = 3.
The generator matrix G is

G = [ Ik : P ] = | 1 0 0 1 0 1 |
                 | 0 1 0 0 1 1 |
                 | 0 0 1 1 1 0 |
Now from the above generator matrix and message vector we can
calculate the codewords.
No. of message vectors = 2^k = 2^3 = 8
For the first message M0 = [ 0 0 0 ], the codeword is C0 = [ 0 0 0 0 0 0 ].
Similarly, for the second message M1 = [ 0 0 1 ], the codeword is C1 = [ 0 0 1 1 1 0 ].
In the same way calculate the code vectors for all the eight message vectors. All
codewords are as follows
Data Codeword
000 000000
001 001110
010 010011
011 011101
100 100101
101 101011
110 110110
111 111000
Decoding:
The parity matrix P and its transpose PT are (P happens to be symmetric
here, so PT = P)

PT = | 1 0 1 |
     | 0 1 1 |
     | 1 1 0 |

The parity check matrix is given by
H = [ PT : In-k ]
The syndrome table is calculated from HT: the i-th row of HT is the
syndrome produced by a single-bit error in the i-th position.
Let the received codeword R be
R = [ 1 0 1 1 1 0 ]
The syndrome is calculated as
S = R * HT
By calculation S= [1 0 1]
This matches the first row of the syndrome table. It means that an
error has occurred in the first bit. Now take the all-zero vector of
size n = 6 and set its first bit to 1.
Call this vector the error vector E, which is
E = [ 1 0 0 0 0 0 ]
Now corrected codeword Cc is
Cc = R ⊕ E
Cc = [ 0 0 1 1 1 0 ]
Algorithm:
1) Enter the values of n and k.
2) Enter the parity matrix.
3) Calculate generator matrix G.
4) Enter the data matrix M, i.e. any one combination out of the
total possible combinations.
5) Calculate the code using C = M * G
6) In the same way calculate all the codewords.
7) Introduce an error in the m-th bit position, changing it either from
1 to 0 or 0 to 1.
8) Calculate the syndrome.
9) Compare the error pattern with corresponding syndrome.
10) Evaluate correct code Cc.
Conclusion :
What are the other types of linear block codes? Also comment on the
error correcting capability of linear block codes.
EXPERIMENT NO. 4
Aim: Implementation of algorithm for generation and decoding of
cyclic codes.
Equipment: PC with ‘MATLAB’ programming tool.
Theory: Cyclic codes form a subclass of linear block codes. An
advantage of cyclic codes over most other types of codes is that they are
easy to encode. Furthermore, they possess a well defined mathematical
structure which has led to the development of very efficient decoding
schemes for them.
A code C is cyclic if
1. C is linear code and
2. any cyclic shift of a codeword is also a codeword, i.e. if the
codeword a0 a1 … an-1 is in C then an-1 a0 … an-2 is also in C.
Que. List out the properties of the cyclic codes and explain them in
order.
Encoding and decoding procedure of the cyclic codes:
a) Encoding:
Consider the (7,4) cyclic code, i.e. n = 7 and k = 4.
The generator polynomial is
G(P) = P3 + P + 1
The message polynomial is
M(P) = P3 + P2 + 1
C(P) = Q(P)*G(P)
where Q(P) is the quotient polynomial
Q(P) = quotient of [ M(P)*P^(n-k) / G(P) ]
The division of M(P)*P^3 = P6 + P5 + P3 by G(P) = P3 + P + 1 (modulo-2
arithmetic) proceeds as:

                  P3 + P2 + P + 1
             ___________________
P3 + P + 1 ) P6 + P5 + P3
             P6 + P4 + P3
             -------------
                  P5 + P4
                  P5 + P3 + P2
                  ------------
                       P4 + P3 + P2
                       P4 + P2 + P
                       ------------
                            P3 + P
                            P3 + P + 1
                            ----------
                                    1

Q(P) = P3 + P2 + P + 1, remainder = 1
C(P) = G(P) * Q(P)
     = (P3 + P + 1)*(P3 + P2 + P + 1)
     = P6 + P5 + P3 + 1
Hence C = 1101001. This is the codeword generated.
b) Decoding:
Let the received codeword be 1111001
Hence the corresponding polynomial is
R(P) = P6 + P5 + P4 + P3 + 1
The syndrome S(P) is the remainder of R(P) / G(P):

                  P3 + P2 + 1
             _______________________
P3 + P + 1 ) P6 + P5 + P4 + P3 + 1
             P6 + P4 + P3
             -------------
                  P5 + 1
                  P5 + P3 + P2
                  ------------
                       P3 + P2 + 1
                       P3 + P + 1
                       -----------
                            P2 + P

Hence the syndrome is S(P) = P2 + P
In the next step we have to consider error pattern E(P).
Let E(P)= 1000000
In polynomial form E(P) = P6
Dividing E(P) = P6 by G(P):

                  P3 + P + 1
             ______________
P3 + P + 1 ) P6
             P6 + P4 + P3
             -------------
                  P4 + P3
                  P4 + P2 + P
                  -----------
                       P3 + P2 + P
                       P3 + P + 1
                       -----------
                            P2 + 1

Hence the syndrome is S’(P) = P2 + 1
But in this case S(P) != S’(P). So error is not present at position 6. Same
situation occurs when we consider error at position 5.
Now consider the error at position 4, i.e. E(P) = P4:

                  P
             ______
P3 + P + 1 ) P4
             P4 + P2 + P
             ------------
                  P2 + P

Hence the syndrome is S’(P) = P2 + P
Here S(P) = S’(P)
So error is present at position 4.
Valid codeword is
C(P) = R(P) ⊕ E(P)
C(P) = P6 + P5 + P3 + 1
C = 1101001
Algorithm:
1. Start
2. Accept the values of n, k, G(P) and M(P).
3. Evaluate [ M(P)*P^(n-k) ] / G(P).
4. Determine the codeword.
5. Transmit data with error.
6. Determine the received data.
7. Compare the syndrome S with the syndrome table and obtain the
corresponding error pattern.
8. Evaluate corrected code.
9. Stop.
Conclusion:
Que.1. When is a linear code called a cyclic code?
2. What are the properties of the syndrome table?
EXPERIMENT NO. 5
Aim: Implementation of algorithms for generating convolutional code
using
i] Code Tree
ii] Trellis diagram
Equipment: PC with MATLAB programming tool
Theory:
Requirement of convolutional code:-
In block coding, a block of k information symbols is
encoded into a block of n coded symbols. There is always a one-to-one
correspondence between the uncoded block of symbols and the coded
block of symbols. It is used for high data rate applications. A large
block length is important because:
1] Many of the good codes that have large distance properties are of large
block lengths.
2] A larger block length implies that the encoding overhead is small.
But large block lengths cause delays. There is
another coding scheme in which much smaller blocks of uncoded data
of length k0 are used. These are called information frames.
Convolutional codes have memory, which retains the previous incoming
information frames. Thus decoding and encoding in this code are based on
past information, i.e. memory is required.
Convolutional codes are often used to improve the performance of
digital radio, mobile phones and satellite links.
Convolutional Encoding:
To convolutionally encode data, start with k memory registers,
each holding 1 input bit. Unless otherwise specified, all memory
registers start with a value of 0. The encoder has n modulo-2 adders,
and n generator polynomials — one for each adder (see figure below).
An input bit m1 is fed into the leftmost register. Using the generator
polynomials and the existing values in the remaining registers, the
encoder outputs n bits. Now bit shift all register values to the right (m1
moves to m0, m0 moves to m-1) and wait for the next input bit. If there
are no remaining input bits, the encoder continues output until all
registers have returned to the zero state. The figure below is a rate 1/3
(m/n) encoder with constraint length (k) of 3. Generator polynomials are
G1 = (1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output bits
are calculated (modulo 2) as n1 = m1 + m0 + m-1, n2 = m0 + m-1, and
n3 = m1 + m-1.