 # Course Work 1


Due to the growth of digital media our need to efficiently store large amounts of data has increased. This report will investigate how and why Joint Photographic Expert Group (JPEG) is able to store high quality images in a fraction of the space required for uncompressed storage. The software package MATLAB has been used to carry out the investigation as it contains all the tools required for image processing.


THE UNIVERSITY OF NOTTINGHAM
SCHOOL OF ELECTRICAL AND ELECTRONIC ENGINEERING

JPEG Computing Exercise

Author: Mr Richard Krishna Beharry
Supervisor: Dr James Bonnyman
Date submitted: 09/03/2012

## Introduction

This coursework is intended to give a deeper understanding of JPEG image compression's ability to store high-quality images in a fraction of the space required for uncompressed storage. JPEG (Joint Photographic Experts Group) compression is primarily a lossy method of compression: it throws away data judged to be unimportant during compression, thus obtaining superior compression ratios over most lossless schemes. It was specifically designed to discard information that the human eye cannot easily see. Slight changes in colour are not perceived well by the human eye, while slight changes in intensity (light and dark) are. Therefore JPEG's lossy encoding tends to be more careful with the greyscale part of an image than with the colour. This report documents the operation of JPEG compression, analysing the quality of the results at different stages and the amount of data reduction achieved. The software used to write the code and produce the required results was MATLAB (see appendix for attached code).

## Part A

Figure 1 shows the selected 8 by 8 pixel region of the given Lena image. The region chosen is well suited to this exercise as it contains areas of both low and high contrast.

Figure 1 Selected 8 by 8 pixel region of the image

Figure 2 is a matrix of the 8 by 8 pixel region values. Note that the pixel values of a black and white image range from 0 to 255 in steps of 1, where pure black is represented by 0 and pure white by 255. Thus it can be seen how a photograph can be accurately represented by these 256 shades of grey.

    region =

        204   153    52    43    47    50   116   130
        206   179    81    41    47    57   125   131
        206   190   102    40    42    63   121   132
        209   200   135    46    44    62   128   131
        208   203   160    53    44    70   131   122
        211   205   180    80    45    72   128   124
        210   205   192   112    47    78   124   126
        211   206   202   152    57    87   123   123

Figure 2 Matrix of the pixel values

The discrete cosine transform (DCT) represents an image as a sum of sinusoids of varying magnitudes and frequencies (a frequency-domain representation). The DCT has the property that, for a typical image, most of the visually significant information is concentrated in just a few coefficients. For this reason, the DCT is used in JPEG compression. Because the input data in JPEG compression are two-dimensional, a two-dimensional DCT is required. A 2-D DCT is a one-dimensional DCT performed along the rows and then along the columns, or vice versa. The DCT converts the image into numeric frequency-domain data, so that the image's information exists in a quantitative form that can be manipulated for compression. A 2-D DCT operation on this small image was performed and the following DCT coefficient values were obtained (shown in Figure 3).

Figure 3 - matrix of the DCT coefficient values

DCTregion =

     975.5000   254.2903   345.3094   -40.8565   -69.5000    16.9532   -28.0277    18.1639
    -112.6977   -62.5799    71.8404    77.7430    43.2054   -34.6079   -39.2199     4.9461
       2.9862    -8.4158   -30.4099     0.3610    38.3562    21.2970    -5.3891   -26.5753
     -16.5984    -4.7116     7.5783     6.4973    -0.2745     1.3335     1.5234    10.9329
            0    -1.6046    -7.2093     1.3530     6.0000     4.8341    -1.0728    -0.5922
      -7.0184    -2.3423     4.3685    -1.3663     6.0400    -0.7455     1.6082    -1.1375
      -2.5899    -0.4998    -2.3891     1.5390     4.4072     0.8281     1.4099    -3.6306
      -5.9873    -2.2537     0.4263     3.5338    -1.2701    -1.3476     2.0993    -1.6719
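The row-then-column construction described above can be sketched in plain Python. This is an illustrative reimplementation, not the report's MATLAB code; it uses the orthonormal DCT-II (the transform MATLAB's `dct2` computes), applied to the pixel block of Figure 2.

```python
import math

# 8-by-8 pixel block from Figure 2 of the report.
region = [
    [204, 153,  52,  43,  47,  50, 116, 130],
    [206, 179,  81,  41,  47,  57, 125, 131],
    [206, 190, 102,  40,  42,  63, 121, 132],
    [209, 200, 135,  46,  44,  62, 128, 131],
    [208, 203, 160,  53,  44,  70, 131, 122],
    [211, 205, 180,  80,  45,  72, 128, 124],
    [210, 205, 192, 112,  47,  78, 124, 126],
    [211, 206, 202, 152,  57,  87, 123, 123],
]

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a length-n sequence."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """2-D DCT = 1-D DCT along each row, then along each column."""
    rows = [dct_1d(r) for r in block]        # transform the rows
    cols = [dct_1d(c) for c in zip(*rows)]   # transform the columns
    return [list(r) for r in zip(*cols)]     # transpose back to row-major

D = dct_2d(region)
# The DC term equals (sum of all 64 pixels) / 8, matching Figure 3.
print(round(D[0][0], 4))  # 975.5
```

The DC coefficient of 975.5 and the low-frequency terms can be checked against Figure 3 directly, which is a useful sanity check on the transform's normalisation.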

6. Comment on the distribution of the coefficient values and their positions in the matrix compared with the original 8 by 8 pixel values.

By performing the DCT on the input data, we have concentrated the representation of the image in the upper-left coefficients of the output matrix, with the lower-right coefficients of the DCT matrix containing less useful information. An inverse 2-D DCT operation on the matrix of DCT coefficients was performed and the following values were obtained (shown in Figure 4).

Figure 4 - Inverse DCT of the matrix of DCT coefficients

    InverseDCTregion =

        204.0000  153.0000   52.0000   43.0000   47.0000   50.0000  116.0000  130.0000
        206.0000  179.0000   81.0000   41.0000   47.0000   57.0000  125.0000  131.0000
        206.0000  190.0000  102.0000   40.0000   42.0000   63.0000  121.0000  132.0000
        209.0000  200.0000  135.0000   46.0000   44.0000   62.0000  128.0000  131.0000
        208.0000  203.0000  160.0000   53.0000   44.0000   70.0000  131.0000  122.0000
        211.0000  205.0000  180.0000   80.0000   45.0000   72.0000  128.0000  124.0000
        210.0000  205.0000  192.0000  112.0000   47.0000   78.0000  124.0000  126.0000
        211.0000  206.0000  202.0000  152.0000   57.0000   87.0000  123.0000  123.0000

8. Comment on the comparison and what this indicates about whether the DCT operation is a lossy or lossless stage in the compression process.

It can clearly be seen that the inverse 2-D DCT values (Figure 4) are identical to the original pixel values (Figure 2), indicating that the DCT operation is a lossless stage in the compression process, i.e. it allows the exact data to be reconstructed from the transformed data. After the DCT, the image is described in great detail in the frequency domain. However, the human eye cannot easily pick out small changes in very bright or very dim regions. A JPEG can therefore be compressed further by rounding the frequency components that are indistinguishable to the eye down to zero: this is the quantization stage.
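The losslessness of the transform stage can be checked numerically: applying the inverse 2-D DCT to the forward transform recovers the input to within floating-point error. A minimal illustrative sketch in Python (the synthetic test block is an assumption; any 8 by 8 block behaves the same way):

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II."""
    n = len(x)
    return [(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)) *
            sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
            for k in range(n)]

def idct_1d(X):
    """Orthonormal 1-D inverse DCT (DCT-III), the exact inverse of dct_1d."""
    n = len(X)
    return [sum((math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)) *
                X[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for k in range(n))
            for i in range(n)]

def apply_2d(f, block):
    """Apply a 1-D transform along the rows, then along the columns."""
    rows = [f(r) for r in block]
    return [list(c) for c in zip(*[f(col) for col in zip(*rows)])]

# Any 8x8 block will do; a simple synthetic ramp is used here.
block = [[(10 * r + c) % 256 for c in range(8)] for r in range(8)]
restored = apply_2d(idct_1d, apply_2d(dct_1d, block))
err = max(abs(restored[r][c] - block[r][c])
          for r in range(8) for c in range(8))
print(err < 1e-9)  # True: DCT followed by the inverse DCT is lossless
```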

## Part B

The next step, quantization of the coefficients in the output matrix, "discards" the less useful data and in turn compresses the image data. This stage controls the quality of most JPEG compressors, and it depends on the actual quantization table used (in this case the matrix was taken from the file M1.mat). Figure 5 shows the scaled and quantized DCT coefficients obtained by dividing the matrix of DCT coefficients by the quantization-interval matrix and rounding the result to the nearest integer. Each of the 64 positions of the DCT output block has its own quantization coefficient, with the higher-order terms being quantized more heavily than the lower-order terms (that is, the higher-order terms have larger quantization intervals). The result is that many of the higher-frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent.

    quantizedDCT =

        61    23    35    -3    -3     0    -1     0
        -9    -5     5     4     0     0     0     0
         0    -1    -2     0     2     0    -1     0
        -1     0     0     0    -1     0     0     0
         0     0     0     0     1     0     0     0
         0     0     0     0     0     0     0     0
         0     0     0     0     0     0     0     0
         0     0     0     0     0     0     0     0

Figure 5 Scaled and quantized DCT coefficients
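The divide-and-round step can be sketched as follows. The report's M1.mat table is not reproduced, so the standard JPEG luminance quantization table from the JPEG specification is used here as a stand-in; the resulting values therefore differ slightly from Figure 5 in a few high-frequency positions, but show the same pattern of heavy zeroing.

```python
import math

# DCT coefficients of the 8x8 block, as listed in Figure 3 of the report.
DCTregion = [
    [ 975.5000,  254.2903,  345.3094,  -40.8565,  -69.5000,   16.9532,  -28.0277,   18.1639],
    [-112.6977,  -62.5799,   71.8404,   77.7430,   43.2054,  -34.6079,  -39.2199,    4.9461],
    [   2.9862,   -8.4158,  -30.4099,    0.3610,   38.3562,   21.2970,   -5.3891,  -26.5753],
    [ -16.5984,   -4.7116,    7.5783,    6.4973,   -0.2745,    1.3335,    1.5234,   10.9329],
    [   0.0000,   -1.6046,   -7.2093,    1.3530,    6.0000,    4.8341,   -1.0728,   -0.5922],
    [  -7.0184,   -2.3423,    4.3685,   -1.3663,    6.0400,   -0.7455,    1.6082,   -1.1375],
    [  -2.5899,   -0.4998,   -2.3891,    1.5390,    4.4072,    0.8281,    1.4099,   -3.6306],
    [  -5.9873,   -2.2537,    0.4263,    3.5338,   -1.2701,   -1.3476,    2.0993,   -1.6719],
]

# Standard JPEG luminance quantization table -- a stand-in assumption,
# since the report's M1.mat table is not given.
Q = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

def round_half_away(v):
    """Round halves away from zero, as MATLAB's round() does."""
    return math.floor(v + 0.5) if v >= 0 else math.ceil(v - 0.5)

quantized = [[round_half_away(DCTregion[r][c] / Q[r][c]) for c in range(8)]
             for r in range(8)]

zeros = sum(row.count(0) for row in quantized)
print(quantized[0][0])  # 61 -- the DC term, 975.5 / 16, as in Figure 5
print(zeros)            # most of the 64 coefficients quantize to zero
```

Note that the DC term quantizing to 61 matches Figure 5, which suggests the DC entry of M1.mat is also 16.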

By multiplying the scaled and quantized DCT coefficients by the quantization-interval matrix and then performing an inverse DCT operation, the restored pixel values shown in Figure 6 are obtained.

Figure 6 - Restored pixel values after dequantization and inverse DCT

    restoreDCT =

        191.9059  178.4332   47.6071   38.4608   62.8490   43.0532  109.5809  137.8473
        197.3065  189.3663   65.0366   40.3662   56.4415   49.6299  117.0046  137.7165
        203.7128  203.8066   93.1277   44.3815   45.3941   58.5150  126.2284  135.3976
        206.8807  213.1922  123.5694   52.3790   34.2931   65.5197  131.2778  130.0905
        206.6441  214.2217  151.5020   67.5250   29.5098   70.1997  130.7799  124.4151
        206.0155  209.7150  175.6495   90.4900   34.7489   74.8188  127.7869  122.2119
        207.0527  204.5087  194.8390  115.6723   46.8096   80.3302  125.6370  124.2825
        208.6569  201.5495  205.9741  132.5451   56.7421   84.5213  125.0182  127.2555

Figure 7 - Comparison between original (top) and quantized (bottom)

12. Compare the restored pixel values with the original values and comment on the quality of the restoration process and the potential amount of data reduction.

Comparing the values of the restored matrix in Figure 6 to the original values in Figure 2, it can be seen that the numbers are different, but only slightly so. The higher frequencies have been removed, and the overall shape of the image appears smoother than the original. This stage exploits the fact that human eyes are unable to distinguish fine detail below a certain luminance level. It therefore zeroes the coefficients below a predetermined threshold while quantizing the remainder with decreasing accuracy as frequency increases. Quantization significantly reduces the file size by reducing the number of bits needed to encode the image.
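The restoration step can be made concrete with a one-line sketch: each quantized coefficient is multiplied back by its quantization interval before the inverse DCT. The DC interval of 16 used here is an assumption (M1.mat is not reproduced in the report), chosen because 61 x 16 = 976 is consistent with the DC coefficient of 975.5 in Figure 3.

```python
# Dequantization: multiply each quantized coefficient by its quantization
# interval. The DC interval of 16 is an assumed value (M1.mat is not given).
dc_interval = 16
quantized_dc = 61                       # DC term from Figure 5
restored_dc = quantized_dc * dc_interval
rounding_loss = restored_dc - 975.5     # original DC coefficient, Figure 3
print(restored_dc, rounding_loss)       # 976 0.5
```

The 0.5 lost to rounding can never be recovered, which is exactly why quantization, unlike the DCT itself, is a lossy stage.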

## Part C

Using block processing, the whole Lena image can be operated on, not just a single 8 by 8 pixel region. Block processing iterates over the image, applying the same operations to one specified block at a time until the whole image has been processed.
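The block iteration can be sketched as follows. This is a plain-Python analogue of MATLAB's `blockproc`; the function name and the identity-transform demo are illustrative, not the report's code, and the image is assumed to be an exact multiple of the block size.

```python
def process_blocks(image, block_size, fn):
    """Apply fn to each block_size x block_size tile of a 2-D list.

    Assumes the image dimensions are exact multiples of block_size,
    as is the case for the 8x8 blocks of a 512x512 Lena image.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r0 in range(0, h, block_size):
        for c0 in range(0, w, block_size):
            # Extract one tile, transform it, and write the result back.
            block = [row[c0:c0 + block_size]
                     for row in image[r0:r0 + block_size]]
            result = fn(block)
            for r in range(block_size):
                for c in range(block_size):
                    out[r0 + r][c0 + c] = result[r][c]
    return out

# Demo: a 16x16 ramp image processed 8x8 block at a time with an identity
# transform (a real codec would do DCT -> quantize -> dequantize -> IDCT).
img = [[(r + c) % 256 for c in range(16)] for r in range(16)]
same = process_blocks(img, 8, lambda b: b)
print(same == img)  # True
```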

Figure 8 Restored image

Figure 9 Original Lena image

14. Compare the restored image with the original Lena image and comment on the visual image quality.

By comparing the original Lena image (Figure 9) with the restored image (Figure 8), we can see that the restored image appears smoother, as the higher frequencies have been removed. This is as expected, since the process simply treats the full image as a set of 8 by 8 pixel blocks and performs the same block operations on each (i.e. the comparison shown in Figure 7 for a single 8 by 8 pixel region, repeated across the whole image).

15. Consider/devise how to assess the quality of the restored images in some quantitative way and see how well this agrees with the visual quality.

A quantitative way to assess the quality of the restored image is to calculate the root-mean-square (RMS) error. This is done by taking the difference between corresponding pixel values of the original and restored images, squaring the differences, averaging them, and then taking the square root. This gives a statistical measure of the magnitude of the error: the higher the value, the worse the quality (this is key to understanding the graph in Figure 10). The RMS value obtained was around 4, and as can be seen from the values in Figure 10, this value is a good indication of the visual quality.
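The RMS measure just described can be sketched as follows. The 2x2 sample values are illustrative only, not taken from the Lena image.

```python
import math

def rms_error(a, b):
    """Root-mean-square difference between two equal-size 2-D arrays."""
    total = 0.0
    count = 0
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            total += (va - vb) ** 2
            count += 1
    return math.sqrt(total / count)

# Illustrative values: identical images give an RMS error of exactly 0,
# and any restoration error raises it.
original = [[204, 206], [153, 179]]
restored = [[201, 208], [155, 176]]
print(rms_error(original, original))           # 0.0
print(round(rms_error(original, restored), 3)) # 2.55
```

An RMS of 0 means perfect restoration; the value of about 4 quoted above for the full restored Lena image indicates only small average errors, agreeing with the good visual quality.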

Devise some method to estimate quantitatively the amount of data reduction achieved. Comment on th