DHANALAKSHMI COLLEGE OF ENGINEERING, CHENNAI
DEPARTMENT OF INFORMATION TECHNOLOGY
IT6002 - INFORMATION THEORY AND CODING TECHNIQUES
UNIT – I INFORMATION ENTROPY FUNDAMENTALS
PART – A (2 MARKS)
1. Write the Kraft–McMillan inequality for an instantaneous code. (M - 10)
Kraft–McMillan inequality:

K-1
∑ 2^(-lk) ≤ 1
k=0

The codeword lengths of a prefix code satisfy the above equation. Here lk is the codeword length of the k-th symbol sk, and K is the total number of symbols in the alphabet. Applying the equation to a prefix code with four symbols:

3
∑ 2^(-lk) = 2^(-l0) + 2^(-l1) + 2^(-l2) + 2^(-l3)
k=0

With l0 = 1, l1 = 2, l2 = 3 and l3 = 3 bits, substituting these values gives:

3
∑ 2^(-lk) = 2^(-1) + 2^(-2) + 2^(-3) + 2^(-3) = 1
k=0
Thus the Kraft–McMillan inequality is satisfied.
2. What is meant by prefix property?
In prefix coding, no codeword is a prefix of any other codeword.
i. Prefix codes are uniquely decodable. ii. Prefix codes satisfy the Kraft–McMillan inequality.
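The Kraft–McMillan check above can be sketched in a few lines of Python (an illustrative check, not part of the syllabus answer; the lengths 1, 2, 3, 3 are the ones used in question 1):

```python
# Check the Kraft-McMillan inequality: sum of 2^(-lk) must be <= 1
# for any uniquely decodable (e.g. prefix) code.
def kraft_sum(lengths):
    return sum(2 ** -l for l in lengths)

lengths = [1, 2, 3, 3]          # l0..l3 from question 1
s = kraft_sum(lengths)
print(s)                        # 1.0 -> inequality satisfied with equality
print(s <= 1)                   # True
```

A set of lengths such as [1, 2, 2, 2] would give a sum greater than 1, so no prefix code with those lengths exists.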
3. State the properties of entropy. (M - 11)
Properties of entropy:
a. Entropy is zero if the event is certain or impossible: H = 0 if pk = 0 or 1.
b. When pk = 1/M for all the M symbols, the symbols are equally likely; for such a source the entropy is H = log2 M.
c. The upper bound on entropy is Hmax = log2 M.
4. What is Shannon limit? (D - 13) The Shannon-Hartley theorem gives the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise.
5. State the Channel Capacity theorem. (M - 11)(N - 10) The channel capacity of the discrete memoryless channel is given as maximum average mutual information. The maximization is taken with respect to input probabilities P(xi).
C = B log2(1 + S/N) bits/sec
Here B is the channel bandwidth, S is the signal power and N is the noise power.
6. What is the relationship between uncertainty and information?
Uncertainty: i. It is the probability of occurrence of the events. ii. The uncertainty of an event decides the amount of information.
Information: i. It is the content received due to the occurrence of events. ii. Information received from a certain event is zero, while information received from a rare event is
maximum. 7. What is Huffman coding? In Huffman coding, a separate codeword is used for each character. The codewords produced by Huffman coding give an optimum (minimum average length) value. 8. Calculate the entropy of a source emitting symbols with probabilities x = 1/5, y = 1/2, z = 1/3.
H = (1/5) log2 5 + (1/2) log2 2 + (1/3) log2 3 ≈ 1.493 bits/symbol (see question 19)
9. Define – Uncertainty
Uncertainty: i. It is the probability of occurrence of the events. ii. The uncertainty of an event decides the amount of information.
10. Differentiate: Uncertainty, Information and Entropy. (N - 10)

S.No  Uncertainty                            Information                                        Entropy
1.    It is the probability of occurrence    It is the content received due to the              It is the average information received due
      of the events.                         occurrence of events.                              to the occurrence of multiple events.
2.    The uncertainty of an event decides    Information received from a certain event is       Entropy is zero if the event is sure or
      the amount of information.             zero, while that from a rare event is maximum.     impossible.
11. List out the properties of mutual information.
i. The mutual information is symmetric: I(X;Y) = I(Y;X). ii. The mutual information is always non-negative: I(X;Y) ≥ 0.
12. Calculate the amount of information if pk = 1/4.
Amount of information: Ik = log2(1/pk) = log2 4 = log10 4 / log10 2 = 2 bits
13. Define – Shannon-Fano Code Method
The Shannon-Fano method is simpler than Huffman coding but generally less efficient, so it is less commonly used. The symbols are first sorted in order of decreasing probability, and the sorted list is then repeatedly partitioned into two groups of (nearly) equal total probability, assigning a 0 to one group and a 1 to the other. 14. What is average information? Average information, or entropy, is given as
H = ∑ pk log2(1/pk)
Here pk is the probability of the k-th message, and M is the total number of messages generated by the source.
15. Define – Rate of information transmission across the channel Rate of information transmission across the channel is given as, Dt = [H(X) – H(X/Y)] r bits/sec
Here H(X) is the entropy of the source, H(X/Y) is the conditional entropy, and r is the message (symbol) rate.
16. Define – Bandwidth efficiency The ratio of channel capacity to bandwidth is called bandwidth efficiency:
Bandwidth efficiency = Channel capacity (C) / Bandwidth (B)
17. What is the capacity of a channel having infinite bandwidth? The capacity of such a channel is given as,
C = 1.44 (S/N0)
Here S is the signal power and N0 is the noise power spectral density.
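The capacity formula C = B log2(1 + S/N) of question 5 and the infinite-bandwidth limit of question 17 can be checked numerically (a sketch with illustrative values of S and N0, not from the source):

```python
import math

# Shannon-Hartley capacity: C = B * log2(1 + S/N), where the noise
# power over bandwidth B is N = N0 * B.
def capacity(B, S, N0):
    return B * math.log2(1 + S / (N0 * B))

S, N0 = 1.0, 1.0        # illustrative values
for B in (1.0, 10.0, 1000.0, 1e9):
    print(B, capacity(B, S, N0))
# As B grows, C approaches (1/ln 2) * S/N0 = 1.44 * S/N0, the
# infinite-bandwidth limit quoted in question 17.
```

The printed values increase with B but saturate near 1.4427 (S/N0), which is the 1.44 S/N0 limit.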
18. Define – Discrete memoryless channel For a discrete memoryless channel, the input and output are both discrete random variables. The current output depends only on the current input for such a channel.
19. Find the entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively.
p1 = 1/5, p2 = 1/2, p3 = 1/3
H = ∑ pk log2(1/pk) = 1/5 log2 5 + 1/2 log2 2 + 1/3 log2 3
≈ 1.493 bits/symbol
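The entropy computations in questions 8, 19 and 20 can be sketched in Python (a quick numerical check; note the probabilities are used exactly as printed, even though those in question 19 sum to slightly more than 1):

```python
import math

# Entropy H = sum over k of pk * log2(1/pk), in bits/symbol.
def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs)

print(round(entropy([1/5, 1/2, 1/3]), 3))   # question 19: ~1.493
print(round(entropy([1/3, 1/4, 1/4]), 5))   # question 20: ~1.52832
```

Running this confirms the two Part A answers.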
20. An alphabet set contains 3 letters A, B, C transmitted with probabilities of 1/3, 1/4, 1/4. Find the entropy. p1 = 1/3, p2 = 1/4, p3 = 1/4
H = ∑ pk log2(1/pk) = 1/3 log2 3 + 1/4 log2 4 + 1/4 log2 4
= 1.52832 bits/symbol
21. Define – Information
Amount of information : Ik = log2 (1/pk)
22. Write the properties of information.
i. If there is more uncertainty about the message, the information carried is also more.
ii. If the receiver knows the message being transmitted, the amount of information carried is zero.
iii. If I1 is the information carried by message m1, and I2 is the information carried by m2, then the total
information carried by the two independent messages m1 and m2 is I1 + I2.
Part – B (16 MARKS)
1. Explain briefly the source coding theorem. (N - 10)
2. Explain Huffman encoding algorithm. (M - 11)
3. A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence are
as described here :
Symbols S0 S1 S2 S3 S4
Probability 0.4 0.2 0.2 0.1 0.1
Compute the Huffman code for this source, entropy and the average codeword length of the source
encoder. (M - 11)(N - 10)
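A minimal Huffman coder for the source in question 3 can be sketched as follows (an illustrative implementation; tie-breaking between equal probabilities may produce different codewords, but every Huffman code for a given source has the same, optimal average length):

```python
import heapq, itertools

# Build a Huffman code as {symbol: codeword} using a min-heap of
# (probability, tie-breaker, partial code table) entries.
def huffman(probs):
    counter = itertools.count()       # tie-breaker so dicts are never compared
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)     # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

probs = {"S0": 0.4, "S1": 0.2, "S2": 0.2, "S3": 0.1, "S4": 0.1}
codes = huffman(probs)
avg_len = sum(probs[s] * len(c) for s, c in codes.items())
print(codes)
print(avg_len)   # 2.2 bits/symbol for this source
```

The entropy of this source is about 2.122 bits/symbol, so the code efficiency is roughly 2.122/2.2, about 96.5%.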
4. Explain the properties of entropy.
5. Explain channel capacity and derive the channel capacity for binary symmetric channel.
Channel capacity can be expressed in terms of mutual information, which is given as
I(X;Y) = H(X) – H(X/Y), and the channel capacity is the maximum of I(X;Y) over the input probabilities P(xi).
6. Explain mutual information and its properties. (J - 14) (N - 10)
7. In the messages, each letter occurs the following percentage of times:
Letter:          A   B   C   D   E   F
% of occurrence: 23  20  11  9   15  22
(1) Calculate the entropy of this alphabet of symbols.
(2) Devise a codebook using Huffman technique and find the average codeword length.
(3) Devise a codebook using Shannon-Fano technique and find the average codeword length.
(4) Compare and comment on the results of both techniques.
8. A discrete memoryless source has five symbols X1, X2, X3, X4 and X5 with probabilities 0.4, 0.19,
0.16, 0.15 and 0.15 respectively. Calculate a Shannon – Fano code for the source and code efficiency.
The Shannon – Fano algorithm is explained in table.
Message  Probability  I  II  III  Codeword  Bits per message (nK)
X1       0.4          0  -   -    0         1
X2       0.19         1  0   0    100       3
X3       0.16         1  0   1    101       3
X4       0.15         1  1   0    110       3
X5       0.15         1  1   1    111       3
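The average codeword length and code efficiency implied by the Shannon-Fano table above can be checked numerically (probabilities and lengths taken exactly as printed in the question):

```python
import math

probs = [0.4, 0.19, 0.16, 0.15, 0.15]   # as given in the question
lengths = [1, 3, 3, 3, 3]                # nK from the table

avg_len = sum(p * n for p, n in zip(probs, lengths))
H = sum(p * math.log2(1 / p) for p in probs)
print(avg_len)        # 2.35 bits/message
print(H / avg_len)    # code efficiency, roughly 0.95
```

This gives an average length of 2.35 bits/message and an efficiency of about 95%.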
9. Write short notes on:
(a) Binary Communication channel
(b) Binary Symmetric channel.
10. i) How will you calculate channel capacity?
ii) Write channel coding theorem and channel capacity theorem
iii) Calculate the entropy for the given sample data AAABBBCCD
iv) Prove Shannon information capacity theorem
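The entropy of the sample data in 10(iii) can be computed by estimating symbol probabilities from relative frequencies (an illustrative calculation):

```python
import math
from collections import Counter

# Entropy of the sample AAABBBCCD: p(A)=3/9, p(B)=3/9, p(C)=2/9, p(D)=1/9.
data = "AAABBBCCD"
counts = Counter(data)
probs = [c / len(data) for c in counts.values()]
H = sum(p * math.log2(1 / p) for p in probs)
print(round(H, 3))   # about 1.891 bits/symbol
```

This evaluates to approximately 1.891 bits/symbol.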
11. i) Use differential entropy to compare the randomness of random variables
ii) A four symbol alphabet has following probabilities
Pr(a0) = 1/2
Pr(a1) = 1/4
Pr(a2) = 1/8
Pr(a3) = 1/8 and an entropy of 1.75 bits. Find a codebook for this four letter alphabet that satisfies
source coding theorem
iii) Write the entropy for a binary symmetric source
iv) Write down the channel capacity for a binary channel
12. (a) A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence
are as described here
Symbols : X1 X2 X3 X4 X5
Probability: 0.2 0.2 0.1 0.1 0.4
Compute the Huffman code for this source. Also calculate the efficiency of the source encoder.
(b) A voice grade channel of the telephone network has a bandwidth of 3.4 kHz. Calculate
(i) The information capacity of the telephone channel for a signal to noise ratio of 30 dB and
(ii) The min signal to noise ratio required to support information transmission through the
telephone channel at the rate of 9.6Kb/s (8)
13. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are
as described below
Symbol: s0 s1 s2 s3 s4 s5 s6
Prob : 0.25 0.25 0.0625 0.0625 0.125 0.125 0.125
(i) Compute the Huffman code for this source, moving a combined symbol as high as possible
(ii) Calculate the coding efficiency
(iii) Why does the computed code have an efficiency of 100%?
14. (i) Consider the following binary sequence 111010011000101110100. Use the Lempel – Ziv
algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the code
book
(ii) What are the advantages of Lempel – Ziv encoding algorithm over Huffman coding?
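Since question 14 states that the binary symbols 1 and 0 are already in the codebook, an LZW-style encoder and decoder can be sketched as follows (an illustrative implementation; the exact output indices depend on this dictionary convention):

```python
# LZW-style encoding with '0' and '1' pre-loaded in the codebook.
def lzw_encode(s):
    book = {"0": 0, "1": 1}
    out, w = [], ""
    for c in s:
        if w + c in book:
            w += c                     # extend the current match
        else:
            out.append(book[w])        # emit code for longest match
            book[w + c] = len(book)    # new phrase = match + next char
            w = c
    out.append(book[w])
    return out

def lzw_decode(codes):
    book = {0: "0", 1: "1"}
    w = book[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = book[k] if k in book else w + w[0]  # "phrase + first char" corner case
        out.append(entry)
        book[len(book)] = w + entry[0]
        w = entry
    return "".join(out)

seq = "111010011000101110100"
codes = lzw_encode(seq)
print(codes)
print(lzw_decode(codes) == seq)   # True: the sequence round-trips
```

Decoding the emitted indices reconstructs the original sequence exactly, which is the property the question asks to demonstrate.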
15. A discrete memoryless source has an alphabet of five symbols with their probabilities for its output
as given here
[X] = [x1 x2 x3 x4 x5 ]
P[X] = [0.45 0.15 0.15 0.10 0.15]
Compute two different Huffman codes for this source. For these two codes, find
(i) Average code word length
(ii) Variance of the average code word length over the ensemble of source symbols
16. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4,
PART A (2 MARKS) 1. Draw the block diagram for pulse code modulator?
2. Differentiate delta modulation from DPCM.
Delta Modulation: DM encodes each input sample with only one bit. It sends the information about +∂ or -∂, i.e. a step rise or fall.
DPCM: DPCM can have more than one bit for encoding the sample. It sends the information about the difference between the actual sample value and the predicted sample value.
3. What is adaptive DPCM?
Adaptive Differential Pulse-Code Modulation (ADPCM) is a variant of Differential Pulse-Code
Modulation (DPCM) that varies the size of the quantization step, to allow further reduction of the
required bandwidth for a given signal-to-noise ratio.
4. Mention two merits of DPCM.
Bandwidth requirement of DPCM is less compared to PCM.
Quantization error is reduced because of the prediction filter.
5. What is the main difference in DPCM and DM?
DM encodes the input sample by only one bit. It sends the information about +∂ or -∂, i.e. a step rise or
fall. DPCM can have more than one bit for encoding the sample; it sends the information about the
difference between the actual sample value and the predicted sample value.
(ii) How many states are in the trellis diagram of this code
(iii) What is the code rate of this code?
14. Construct a convolutional encoder for the following specifications: code rate = 1/2,
constraint length = 4. The connections from the shift register to the modulo-2 adders are described by the following
equations: g1(x) = 1 + x, g2(x) = x.
Determine the output codeword for the input message [1110]
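The encoder of question 14 can be sketched directly from the generator polynomials: g1(x) = 1 + x gives v1 = u(t) XOR u(t-1), and g2(x) = x gives v2 = u(t-1). No tail (flushing) bits are appended here, which is an assumption, since the question does not specify them:

```python
# Rate-1/2 convolutional encoder with g1(x) = 1 + x and g2(x) = x:
# for each input bit u, output v1 = u XOR prev and v2 = prev.
def conv_encode(bits):
    prev, out = 0, []
    for u in bits:
        out += [u ^ prev, prev]   # (v1, v2) for this input bit
        prev = u
    return out

print(conv_encode([1, 1, 1, 0]))   # [1, 0, 0, 1, 0, 1, 1, 1]
```

For the input message [1 1 1 0] the encoder output is 10 01 01 11.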
UNIT – IV COMPRESSION TECHNIQUES
PART A (2 MARKS)
1. What is Dolby AC-3? (M - 10) The audio signal is not given directly to bit allocation algorithm. Rather, a spectral envelop of audio
signal is provided to bit allocation algorithm as well as decoder. The bit allocation is not sent to the
decoder. But, the decoder reconstructs the audio signals from representation of spectral envelope.
Hence, it doesn't need the bit allocation details directly.
2. What is sub-band coding?
In signal processing, sub-band coding is any form of transform coding that breaks a signal into a
number of different frequency bands and encodes each one independently. This decomposition is
often the first step in data compression for audio and video signals.
3. State the main application of GUI.
The main applications of GUI are:
GUI facilitates a smooth interface between the human users and the operating system.
Parameter handling, controlling and monitoring are done with the help of GUI.
4. Distinguish between global color table and local color table in GIF.
If the whole image refers to a single table of colors, it is said to be a global color table. If only a
portion of the image refers to a table of colors, it is said to be a local color table.
5. State the various methods used for text compression. (M - 10)
The various methods used for text compression are:
Lossless compression:
With lossless compression, every single bit of data that was originally in the file remains after the file
is uncompressed, so all original data can be recovered.
Lossy compression:
Some of the original data is permanently discarded, so the file cannot be recovered exactly when
uncompressed.
6. What is Run-length coding?
Run-length encoding is one of the simplest lossless encoding techniques. It is mainly used to compress text or
digitized documents. Binary data strings are compressed well by run-length encoding. Consider the
binary data string 111111100000011111000…
If we apply run-length coding to the above data string, we get: 7,1; 6,0; 5,1; 3,0; …
Thus there are seven binary 1s, followed by six binary 0s, followed by five binary 1s, three binary 0s and so on.
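Run-length encoding of a binary string, as described in question 6, can be sketched in one line using the standard library (an illustrative check):

```python
from itertools import groupby

# Run-length encode a binary string as (run length, bit) pairs.
def rle(s):
    return [(len(list(g)), b) for b, g in groupby(s)]

print(rle("111111100000011111000"))
# [(7, '1'), (6, '0'), (5, '1'), (3, '0')]
```

This reproduces the 7,1; 6,0; 5,1; 3,0 encoding from the example.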
7. What is arithmetic coding?
The arithmetic coding offers the opportunity to create a code that exactly represents the frequency of
any character. A single character is defined by belonging to a specific interval: larger intervals are
used for more frequent characters and smaller intervals for rarer ones.
The size of those intervals is proportional to the frequency. The arithmetic coding is the most efficient
procedure but its usage will be restricted by patents.
8. What are the types of JPEG algorithms?
There are two types of JPEG algorithms they are:
Baseline JPEG: During decoding, this algorithm draws line until complete image.
Progressive JPEG: During decoding, this JPEG algorithm draws the whole image at once,
but in very poor quality. Then another layer of data is added over the previous image to improve its
quality. Progressive JPEG is used for images on the web. The user can make out the image before it is
fully downloaded.
9. What is TIFF? (D - 13)
Tagged Image File Format (TIFF) is used for the transfer of images as well as digitized documents. It
supports pixel resolutions of up to 48 bits: 16 bits are used for each of the R, G and B colours. A code
number indicates the particular format in TIFF.
10. What is Graphics Interchange Format(GIF)?
Graphics Interchange Format (GIF) is used for representation and compression of graphical images.
Each pixel of a 24-bit colour image is represented by 8 bits for each of the R, G and B colours, so such an
image can contain 2^24 colours. GIF uses only 256 colours selected from the set of 2^24 colours.
11. What is Code Efficiency?
The code efficiency is the ratio of message bits in a block to the transmitted bits for that block by the
encoder, i.e.,
Code efficiency = Message bits / Transmitted bits
= k / n
12. State the main application of Graphics Interchange Format(GIF)
The GIF format is used mainly with internet to represent and compress graphical images. GIF images can be
transmitted and stored over the network in interlaced mode, this is very useful when images are transmitted over
low bit rate channels.
13. What is JPEG standard?
JPEG stands for Joint Photographic Experts Group, which has developed a standard for compression of
monochrome/color still photographs and images. This compression standard is known as the JPEG standard. It is
also known as ISO standard 10918. It provides compression ratios of up to 100:1.
14. Why differential encoding is carried out only for DC coefficient in JPEG?
The DC coefficient represents the average color/luminance/chrominance in the corresponding
block. Therefore it is the largest coefficient in the block.
A very small physical area is covered by each block. Hence the DC coefficients do not vary much
from one block to the next.
Since the DC coefficients vary slowly, differential encoding is the best-suited compression for DC
coefficients. It encodes the difference between each pair of values rather than their absolute values.
15. What do you mean by "GIF interlaced mode"?
The image data can be stored and transferred over the network in an interlaced mode. The data is stored in such
a way that the decompressed image is built up in a progressive way.
16. Write the advantages of Data compression.
Huge amount of data is generated in text, images, audio, speech and video.
Because of compression, transmission data rate is reduced.
Storage becomes less due to compression. Due to video compression, it is possible to store one complete
movie on two CDs.
Transportation of the data is easier due to compression.
17. Write the drawbacks of data compression.
Due to compression, some of the data is lost.
Compression and decompression increases complexity of the transmitter and receiver.
Coding time is increased due to compression and decompression.
18. How compression is taken place in text and audio?
In text, the large volume of information is reduced, whereas in audio the bandwidth is reduced.
19. Specify the various compression principles?
• Source encoders and destination decoders
• Loss less and lossy compression
• Entropy encoding
• Source encoding
20. What is lossy and lossless compression?
The compressed information from the source side is decompressed at the destination side; if
there is loss of information, it is said to be lossy compression.
If there is no loss of information, it is said to be lossless compression. Lossless compression is
also known as reversible compression.
21. What is statistical encoding?
Statistical encoding is used for a set of variable length code words, in which shortest code words are
represented for frequently occurring symbols (or) characters.
22. What is Differential encoding?
Differential encoding is used to represent the difference in amplitude between the current
value/signal being encoded and the immediately preceding value/signal.
23. What is transform coding?
This is used to transform the source information from spatial time domain representation into
frequency domain representation.
24. What is meant by spatial frequency?
The rate of change in magnitude while traversing the matrix is known as spatial frequency.
25. What is a horizontal and vertical frequency component?
If we scan the matrix in horizontal direction then it is said to be horizontal frequency components.
If we scan the matrix in Vertical direction then it is said to be vertical frequency components.
26. What is static and dynamic coding?
After finding the code words these code words are substituted in a particular type of text is known as
static coding.
If the code words may vary from one transfer to another then it is said to be dynamic coding.
27. Let us consider the codeword for A is 1, the codeword for B is 01, the codeword for C is 001 and the
codeword for D is 000. How many bits are needed for transmitting the text AAAABBCD?
4 × 1 + 2 × 2 + 1 × 3 + 1 × 3 = 14 bits
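The arithmetic shown (four A's, two B's, one C and one D) corresponds to the string AAAABBCD; a quick check with the given prefix code:

```python
# Bit count for question 27 using the given prefix code.
code = {"A": "1", "B": "01", "C": "001", "D": "000"}
text = "AAAABBCD"
encoded = "".join(code[ch] for ch in text)
print(len(encoded))   # 14 bits
```

Concatenating the codewords gives a 14-bit string, matching the stated answer.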
28. Give two differences between Arithmetic coding and Huffman coding.
The codewords produced by arithmetic coding can approach the Shannon (entropy) limit, while
the codewords produced by Huffman coding give an optimum integer-length code.
In arithmetic coding a single codeword is used for each encoded string of characters, whereas
in Huffman coding a separate codeword is used for each character.
29. What is global and local color table? (N - 10)
If the whole image is related to the table of colors then it is said to be global color table.
If the portion of the image is related to the table of colors then it is said to be local color table.
30. What is termination code and Make-up code table?
Code words in the termination-code table are for white (or black) run lengths from 0 to 63 pels in
steps of one pel.
Code words in the Make up-code table are for white (or) black run lengths that are multiples of 64
pels.
31. What is meant by over scanning?
Over scanning means all lines start with a minimum of one white pel. Therefore the receiver knows
the first codeword always relates to white pels and then alternates between black and white.
32. What is modified Huffman code?
A coding scheme that uses two sets of code words (termination codes and make-up codes) is known as a
modified Huffman code.
33. What is one-dimensional coding?
If the scan line is encoded independently then it is said to be One-dimensional coding.
34. What is two-dimensional coding?
Two-dimensional coding is also known as Modified Modified Read (MMR) coding. MMR identifies
black and White run lengths by comparing adjacent scan lines.
35. What is meant by pass mode?
If the run lengths in the reference line (b1b2) is to the left of the next run-length in the coding line
(a1a2) (i.e.) b2 is to the left of a1, then it is said to be pass mode.
36. What is meant by vertical mode?
If the run lengths in the reference line (b1b2) overlap the next run-length in the coding line (a1a2) by
a maximum of plus or minus 3 pels, then it is said to be vertical mode.
37. What is meant by horizontal mode?
If the run lengths in the reference line (b1b2) overlap the next run-length in the coding line (a1a2) by
more than plus or minus 3 pels, then it is said to be horizontal mode.
PART – B (16 MARKS)
1. Explain the working of JPEG encoder and decoder, with a block diagram. (N - 10)
2. For a linear block code, prove with example that
(i) The syndrome depends only on error pattern and not on transmitted code word.
(ii) All error patterns that differ by a codeword have the same syndrome.
i. The number of codewords is 2^k, since there are 2^k distinct messages.
ii. The set of vectors {gi} is linearly independent, since we must have a set of unique codewords.
iii. Linear independence means that no vector gi can be expressed as a linear combination of the other
vectors.
iv. These vectors are called the basis vectors of the vector space C.
v. The dimension of this vector space is the number of basis vectors, which is k.
vi. Since gi ∈ C, the rows of G are all legal codewords.
vii. An (n,k) block code, where block means the encoder accepts a block of message symbols and generates a block of
codeword symbols.
viii. Linear: the addition of any two valid codewords results in another valid codeword.
Code rate
i. As the code rate increases, the error-correcting capability decreases but bandwidth efficiency improves.
ii. As the code rate decreases, the error-correcting capability increases at the cost of bandwidth.
3. Consider the generation of a (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3.
Calculate the code word for the message sequence [1001] and construct systematic generator matrix G.
Draw the diagram of encoder and syndrome calculator generated by the polynomial.(M - 11)
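The systematic encoding step of question 3 can be sketched as polynomial division over GF(2): shift the message left by n-k = 3 positions, divide by g(x), and append the remainder as parity bits. The MSB-first bit ordering below is an assumption:

```python
# Systematic cyclic encoding for g(x) = 1 + x + x^3 (bits [1,0,1,1],
# highest degree first) and message [1,0,0,1].
def cyclic_encode(msg, gen, n_minus_k):
    rem = msg + [0] * n_minus_k        # x^(n-k) * m(x)
    for i in range(len(msg)):
        if rem[i] == 1:                # GF(2) long division: XOR in the generator
            for j, gbit in enumerate(gen):
                rem[i + j] ^= gbit
    return msg + rem[-n_minus_k:]      # codeword = message bits + parity bits

g = [1, 0, 1, 1]                       # x^3 + x + 1
codeword = cyclic_encode([1, 0, 0, 1], g, 3)
print(codeword)                        # [1, 0, 0, 1, 1, 1, 0]
```

For the message 1001 the parity bits work out to 110, giving the codeword 1001110.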
4. Explain syndrome and its properties. (M - 10)
5. Explain syndrome decoding in linear block codes with example. (M - 10)
6. Explain the Hamming codes with example.
7. Explain in detail, Cyclic codes.
8. Explain the entropy encoding blocks of JPEG standard.
9. Explain in detail, Adaptive Huffman coding, with the help of an example.
10. Assume that the character set and probabilities are e = 0.3, n = 0.3, t = 0.2, w = 0.1, . = 0.1. Derive the
codeword value for string “entw#”. Explain how the decoder determines original string from the
received codeword value.
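The interval-narrowing step of question 10 can be sketched as follows. Treating '.' as the terminator symbol (the string in the question is written "entw#", but '.' is the symbol listed with probability 0.1) and taking the symbol order e, n, t, w, '.' are both assumptions:

```python
# Arithmetic encoding interval for "entw." under the given probabilities.
probs = [("e", 0.3), ("n", 0.3), ("t", 0.2), ("w", 0.1), (".", 0.1)]

# Assign each symbol a cumulative sub-interval [c_lo, c_hi) of [0, 1).
ranges, c = {}, 0.0
for sym, p in probs:
    ranges[sym] = (c, c + p)
    c += p

low, high = 0.0, 1.0
for sym in "entw.":
    width = high - low
    c_lo, c_hi = ranges[sym]
    low, high = low + width * c_lo, low + width * c_hi   # narrow the interval

print(low, high)   # any number in [low, high) encodes the whole string
```

Under these assumptions the final interval is approximately [0.16002, 0.1602), and the decoder recovers the string by repeatedly locating which symbol's sub-interval the received number falls into.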
11. Explain adaptive Huffman coding for the message “Malayalam”.
The basic Huffman algorithm has been extended, for the following reasons:
(a) The previous algorithms require the statistical knowledge which is often not available (e.g., live audio, video).
(b) Even when it is available, it could be a heavy overhead, especially when many tables must be sent when a non-
order-0 model is used, i.e. one taking into account the impact of the previous symbol on the probability of the current
symbol (e.g., "q" and "u" often come together).
12. (i)Explain the various stages in JPEG standard
(ii)Differentiate loss less and lossy compression technique and give one example for each
(iii)State the prefix property of Huffman code
13. (a) Draw the JPEG encoder schematic and explain
(b) Assuming a quantization threshold value of 16, derive the resulting quantization error for each of
the following DCT coefficients 127, 72, 64, 56,-56,-64,-72,-128.
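The quantization error of question 13(b) can be computed as the difference between each coefficient and its dequantized value. Simple truncation toward zero is assumed here, since the question does not state the rounding convention:

```python
# Quantization error with threshold 16, assuming truncation toward zero:
# level = int(c / 16), dequantized value = level * 16, error = c - dequantized.
coeffs = [127, 72, 64, 56, -56, -64, -72, -128]
errors = [c - int(c / 16) * 16 for c in coeffs]
print(errors)   # [15, 8, 0, 8, -8, 0, -8, 0]
```

Coefficients that are exact multiples of 16 incur no error; the others lose up to 15 in magnitude. A round-to-nearest convention would give different (smaller) errors.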
14. (i) Explain arithmetic coding with suitable example
(ii) Compare arithmetic coding algorithm with Huffman coding
15. (i) Draw JPEG encoder block diagram and explain each block
(ii) Why DC and AC coefficients are encoded separately in JPEG
16. (a) Explain in brief ,the principles of compression
(b) In the context of compression for Text ,Image ,audio and Video which of the compression
techniques discussed above are suitable and Why?
UNIT – V: AUDIO AND VIDEO CODING
PART A (2 MARKS) 1. What is Dolby AC-1? (M – 10)
Dolby AC-1 is used for audio coding. It is a Dolby audio coding standard that uses a psychoacoustic
model at the encoder and has fixed bit allocations for each subband.
2. What is the minimum frame/sec required in MPEG? (M - 10)
MPEG-Motion Pictures Expert Group (MPEG) was formed by the ISO to formulate a set of standards
relating to a range of multimedia applications that involve the use of video with sound.
3. State the main difference between MPEG video compression algorithm and H.261.
MPEG: MPEG stands for Motion Pictures Expert Group. It was formed by the ISO and has developed
standards for the compression of video with audio.
H.261: The H.261 video compression standard has been defined by the ITU-T for the provision of
video telephony and video conferencing over an ISDN.
4. What is MPEG?
MPEG stands for Motion Pictures Expert Group(MPEG). It was formed by ISO. MPEG has
developed the standards for compression of video with audio. MPEG audio coders are used for
compression of audio. This compression mainly uses perceptual coding.
5. State the advantages of Lempel-Ziv algorithm over Huffman coding. (N - 10)
Advantages of Lempel-Ziv algorithm:
i. The codewords are formed by appending new character.
ii. No separate character set is stored in the dictionary initially.
iii. The data characters as well as dictionary indices are transmitted.
Advantages of Huffman coding:
i. Codes for the characters are derived.
ii. Precision of the computer does not affect coding.
iii. Huffman coding is the simple technique.
6. Compare LZW with Huffman coding. (N - 10)
S.No  LZW Coding                                      Huffman Coding
1.    Codewords are new words (strings) appearing     Codes for the characters are derived.
      in the input.
2.    The complete character set is stored in the     The precision of the computer does not
      dictionary initially.                           affect coding.
7. What are vocal tract excitation parameters?
The origin, pitch, period and loudness are known as vocal tract excitation parameters
8. Give the classification of vocal tract excitation parameters
voiced sounds
unvoiced sounds
9. What is CELP?
CELP – code excited Linear Prediction
In this model, instead of treating each digitized segment independently for encoding purpose, just a
limited set of segments is used, each known as waveform template.
10. What are the international standards used in code excited LPC?
ITU-T Recommendations:
G.728
G.729
G.729(A) and
G.723.1
11. What is perceptual coding?
Perceptual encoders are designed for the compression of general audio such as that associated with a
digital television broadcast. This process is called perceptual coding.
12. What is algorithmic delay?
Before the speech samples can be analyzed, it is necessary to store the block of samples in memory
(i.e. in a buffer). The time taken to accumulate the block of samples in memory is known as the
algorithmic delay.
13. What is temporal masking?
After the ear hears a loud sound, it takes a further short time before it can hear a quieter sound. This is
known as temporal masking.
14. What is called critical bandwidth?
The width of each curve at a particular signal level is known as the critical bandwidth for that
frequency. Experiments have shown that for frequencies less than 500 Hz, the critical bandwidth
remains constant at about 100 Hz.
15. What is meant by dynamic range of a signal?
Dynamic range of a signal is defined as the ratio of the maximum amplitude of the signal to the
minimum amplitude, and is measured in decibels (dB).
16. What is MPEG?
MPEG-Motion Pictures Expert Group (MPEG)
MPEG was formed by the ISO to formulate a set of standards relating to a range of multimedia
applications that involves the use of video with sound
17. What is the use of DFT in MPEG audio coder?
DFT-Discrete Fourier transforms
DFT is a mathematical technique by which the 12 sets of 32 PCM samples are first transformed into
an equivalent set of frequency components.
18. What are SMRs?
SMRs – Signal to Mask Ratios
SMRs indicate those frequency components whose amplitude is below the related audible threshold.
19. What is meant by AC in Dolby AC-1?
AC stands for acoustic coder. It was designed for use in satellites to relay FM radio programs and the sound
associated with television programs.
20. What is meant by the backward adaptive bit allocation mode?
The operation mode in which, instead of each frame containing bit allocation information in addition
to the set of quantized samples it contains the encoded frequency coefficients that are present in the
sampled waveform segment. This is known as the encoded spectral envelope and this mode of
operation is the backward adaptive bit allocation mode.
21. List out the various video features used in multimedia applications.
a. Interpersonal - Video telephony and video conference
b. Interactive – Access to stored video in various forms
c. Entertainment – Digital television and movie/video – on demand
22. What does the digitization format define?
The digitization format defines the sampling rate that is used for the luminance (Y) and the two chrominance (Cb and Cr) signals, and their relative position in each frame.
23. What is SQCIF?
SQCIF – Sub-Quarter Common Intermediate Format.
It is a digitization format used for video telephony; by comparison, the 4:2:0 format used for digital television broadcasts requires 162 Mbps.
24. What is motion estimation and motion compensation?
The technique that is used to exploit the high correlation between successive frames is to predict the content of many of the frames. The accuracy of the prediction operation is determined by how well any movement between successive frames is estimated. This operation is known as motion estimation.
Since the motion estimation process is not exact, additional information must also be sent to indicate any small differences between the predicted and actual positions of the moving segments involved. This is known as motion compensation.
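A minimal sketch of motion estimation by exhaustive block matching, using the sum of absolute differences (SAD) as the matching criterion. The block size, search window, and frame representation (lists of pixel rows) are illustrative assumptions, not details from the text:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def estimate_motion(ref, cur, bx, by, n=4, search=2):
    """Find the motion vector (dx, dy) for the n x n block of `cur`
    at (bx, by) by searching a +/-`search` pixel window in the
    reference frame `ref` for the best-matching candidate block."""
    cur_block = [row[bx:bx + n] for row in cur[by:by + n]]
    best = (0, 0)
    best_cost = float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block falls outside the frame
            cand = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

The residual differences that remain after subtracting the best-matching reference block are what motion compensation then encodes.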
25. What are intracoded frames?
Frames that are encoded independently are called intracoded frames or I-frames.
26. What is meant by GOP?
GOP – Group of Pictures. The number of frames/pictures between successive I-frames is known as a group of pictures.
27. What is a macro block?
The digitized contents of the Y matrix associated with each frame are first divided into a two-dimensional matrix of 16 x 16 pixels known as a macroblock.
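A minimal sketch of dividing a Y matrix into 16 x 16 macroblocks in raster-scan order. The frame dimensions are assumed to be multiples of 16 for simplicity:

```python
def split_into_macroblocks(y_matrix, size=16):
    """Divide a Y matrix (a list of pixel rows) into size x size
    macroblocks, returned left to right, top to bottom."""
    rows, cols = len(y_matrix), len(y_matrix[0])
    blocks = []
    for top in range(0, rows, size):
        for left in range(0, cols, size):
            blocks.append([row[left:left + size]
                           for row in y_matrix[top:top + size]])
    return blocks

# Example: a 32 x 48 frame yields (32/16) * (48/16) = 6 macroblocks
frame = [[0] * 48 for _ in range(32)]
print(len(split_into_macroblocks(frame)))  # 6
```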
28. What is H.261?
The H.261 video compression standard has been defined by the ITU-T for the provision of video telephony and video conferencing over an ISDN.
29. What is GOB?
GOB – Group of Blocks. Although the encoding operation is carried out on individual macroblocks, a larger data structure known as a group of blocks is also defined.
30. What are AVOs and VOPs?
AVOs- Audio Visual Objects
VOPs- Video Objects Planes
31. What is the difference between MPEG-4 and the other standards?
The difference between MPEG-4 and the other standards is that MPEG-4 has a number of content-based functionalities.
32. What are blocking artifacts?
A high quantization threshold leads to blocking artifacts, which are caused by macroblocks encoded using high thresholds differing visibly from neighbouring macroblocks quantized using lower thresholds.
33. What are convolutional codes? How are they different from block codes? (N - 10)
Convolutional codes are error-correcting codes used to transmit digital data reliably over communication channels that are unreliable due to channel noise. Unlike block codes, which encode fixed-size blocks of information bits independently, convolutional codes operate on a continuous stream of message bits and have memory: each output bit depends on the current and previous message bits.
34. State the principle of Turbo coding. (N - 10)
The significance of turbo coding is:
i. High-weight code words.
ii. The decoder generates estimates of the code words in two stages of decoding with interleaving and de-interleaving.
iii. This is like the circulation of air in a turbo engine for better performance; hence these codes are called turbo codes.
35. What are the reasons to use an inter leaver in a turbo code? (M - 10)
An interleaver is a device that rearranges the ordering of a sequence of symbols in a deterministic manner. The two main design issues are the interleaver size and the interleaver map. The interleaver is used to feed the encoders with permutations of the message so that the generated redundancy sequences can be assumed independent.
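A minimal sketch of one common deterministic interleaver map, the block (row-in, column-out) interleaver. The 3 x 4 dimensions in the example are illustrative assumptions:

```python
def block_interleave(bits, rows, cols):
    """Write `bits` into a rows x cols matrix row by row,
    then read them out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse mapping: undo block_interleave with the same dimensions."""
    out = [None] * (rows * cols)
    for i, b in enumerate(bits):
        c, r = divmod(i, rows)   # position i was read at column c, row r
        out[r * cols + c] = b
    return out

# Example: interleaving spreads adjacent symbols apart deterministically
data = list(range(12))
shuffled = block_interleave(data, 3, 4)
print(shuffled)                                    # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(block_deinterleave(shuffled, 3, 4) == data)  # True
```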
36. Define – Constraint Length
The constraint length (K) of a convolutional code is defined as the number of shifts required for a single message bit to enter the shift register and finally come out of the encoder output:
K = M + 1
where M is the number of shift-register (memory) stages.
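A minimal sketch of a rate-1/2 convolutional encoder with M = 2 memory stages, so K = M + 1 = 3. The generator taps (111, 101) are an illustrative choice, not taken from the text:

```python
def conv_encode(message, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder with constraint length K = 3.
    Each input bit yields one output bit per tap set, each output
    being a mod-2 sum over the current and two previous input bits."""
    state = [0, 0]  # M = 2 shift-register stages
    out = []
    for bit in message:
        window = [bit] + state  # K = M + 1 = 3 bits influence each output
        for tap in taps:
            out.append(sum(b & t for b, t in zip(window, tap)) % 2)
        state = [bit, state[0]]  # shift the register by one position
    return out

# Example: a 4-bit message produces 2 output bits per input bit
print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Note how each message bit influences the output for K = 3 consecutive shifts, which is exactly the constraint-length definition above.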
37. What are the differences between block and convolution codes?
S.No | Block codes | Convolutional codes
1. | The information bits are followed by the parity bits. | The information bits are spread along the sequence.
2. | There is no data dependency between blocks. | Data passes through convolutional codes in a continuous stream.
3. | Useful for data communications. | Useful for low-latency communications.
38. Define – Constraint Length of a Convolutional Code.
Constraint length is the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits.
39. What are convolutional codes?
A convolutional code is one in which parity bits are continuously interleaved with the information (or message) bits.
40. Define – Turbo Code
Parallel Concatenated Convolutional Codes (PCCC), called turbo codes, have solved the dilemma of structure and randomness through concatenation and interleaving respectively. The introduction of turbo codes has given most of the gain promised by the channel-coding theorem.
41. What is dolby AC-1?
Dolby AC-1 is a standard used for audio coding. Like the MPEG audio coders it uses a psychoacoustic model at the encoder, but it has fixed bit allocations to each subband.
42. What is the need of MIDI standard?
MIDI stands for Musical Instrument Digital Interface. It specifies the details of the digital interface between various musical instruments and a microcomputer. It is essential to access, record or store the music generated by musical instruments.
43. What is perceptual coding?
In perceptual coding only the perceptually relevant features of the sound are stored, which gives a high degree of compression. The human ear is not equally sensitive to all frequencies. Similarly, masking of a weaker signal takes place when a louder signal is present nearby. These properties are exploited in perceptual coding.
PART – B (16 MARKS)
1. Write a short note on CELP principles.
2. Explain the significance of D-frames in video coding.
3. Explain in detail, Turbo decoding.
4. Explain in detail, turbo codes and their uses.
5. Explain the MPEG algorithm for video encoding, with a block diagram. (N - 10)
6. Explain linear predictive coding. (D - 13)(M - 11)
7. Write short notes on the H.261 video compression standard. (D - 13)
8. Explain the concepts of frequency masking and temporal masking. How are they used in perceptual coding? (N - 10)
9. Explain the various versions of Dolby ACs, stating their merits and demerits, with neat illustrations.
10. (i) Explain the principles of perceptual coding
(ii) Why LPC is not suitable to encode music signal?
11. (i) Explain the encoding procedure of I, P and B frames in video encoding with suitable diagrams.
(ii) What are the special features of MPEG-4 standards? (J - 14)
12. Explain the Linear Predictive Coding (LPC) model of analysis and synthesis of speech signals. State the advantages of coding speech signals at low bit rates.
13. Explain the encoding procedure of I, P and B frames in video compression techniques. State the intended applications of the following video coding standards: MPEG-1, MPEG-2, MPEG-3, MPEG-4.
14. (i) What are macro blocks and GOBs?
(ii) On what factors does the quantization threshold depend in H.261 standards?
(iii) Explain the MPEG compression techniques. (M - 11)
15. (i) Explain the various Dolby audio coders.
(ii) Explain any two audio coding techniques used in MPEG.
16. Explain the following audio coders:
i) MPEG audio coders
ii) Dolby audio coders.
17. (i) Explain the “Motion estimation” and “Motion Compensation” phases of P and B frame encoding
process with diagrams wherever necessary
(ii) Write a short note on the “Macro Block” format of H.261 compression standard