Hybrid JPEG Compression Using Histogram Based Segmentation

M. Mohamed Sathik (1), K. Senthamarai Kannan (2) and Y. Jacob Vetha Raj (3)
(1) Department of Computer Science, Sadakathullah Appa College, Tirunelveli, India
(2), (3) Department of Statistics, Manonmaniam Sundaranar University, Tirunelveli, India
Abstract-- Image compression is an inevitable solution for image transmission, since channel bandwidth is limited and the demand is for faster transmission. Storage limitations also force image compression, as color and spatial resolutions increase with quality requirements. JPEG is a widely used compression technique; the JPEG method uses linear quantization and threshold values to maintain a uniform quality across the entire image. The proposed method estimates the vitality of each block of the image and applies variable quantization and threshold values. This ensures that the vital areas of the image are preserved at higher quality than the other areas. This hybrid approach increases the compression ratio while producing a high-quality output image.
Key words-- Image Compression, Edge Detection, Segmentation, Image Transformation, JPEG, Quantization.
I. INTRODUCTION
Every day, an enormous amount of information is stored, processed, and transmitted digitally. Companies provide business associates, investors, and potential customers with financial data, annual reports, inventory, and product information over the Internet. Since much of this online information is graphical or pictorial in nature, the storage and communication requirements are immense. Methods of compressing the data prior to storage and/or transmission are therefore of significant practical and commercial interest.
Compression techniques fall into two broad categories: lossless [1] and lossy [2], [3]. The former is particularly useful in image archiving, as it allows the image to be compressed and decompressed without losing any information. The latter provides higher levels of data reduction but results in a less than perfect reproduction of the original image. Lossy compression is useful in applications such as broadcast television, videoconferencing, and facsimile transmission, in which a certain amount of error is an acceptable trade-off for increased compression performance. The foremost aim of image compression is to reduce the number of bits needed to represent an image. In lossless image compression algorithms, the reconstructed image is identical to the original image. Lossless algorithms, however, are limited by the low compression ratios they can achieve. Lossy compression algorithms, on the other hand, are capable of achieving high compression ratios: though the reconstructed image is not identical to the original, they obtain high compression ratios by exploiting properties of human vision. Vector quantization [4], [5] and wavelet transformation [1], [5]-[10] techniques are widely used, in addition to various other methods [11]-[17], in image compression. The problem with lossless compression is that the compression ratio is very low, whereas in lossy compression the compression ratio is very high but vital information in the image may be lost. Some of the work carried out in hybrid image compression [18], [19] incorporated different compression schemes, such as PVQ and DCTVQ, within a single image compression. The proposed method instead uses lossy compression at different quality levels, selected by context, to compress a single image, avoiding the difficulty of carrying side information for image decompression as in [20].
The proposed method performs a hybrid compression, which balances compression ratio and image quality by compressing the vital parts of the image at high quality. In this approach the main subject of the image is considered more important than the background. Considering the importance of image components, and the effect of smoothness on image compression, this method segments the image into main subject and background; the background is then subjected to low-quality lossy compression while the main subject is compressed with high-quality lossy compression. An enormous amount of work on image compression has been carried out in both lossless [1], [14], [17] and lossy [4], [15] compression, but very little work has addressed hybrid image compression [18]-[20].
In the proposed work, edge detection, segmentation, smoothing, and dilation techniques are used for image compression. A great deal of work has been carried out on edge detection, segmentation [21], [22], smoothing, and dilation [2], [3]. The novel, time-efficient edge detection and segmentation methods used in the proposed work are described in section II; section III gives a detailed description of the proposed method; the results and discussion are given in section IV; and concluding remarks are given in section V.
II. BACKGROUND
A. JPEG Compression
Components of the JPEG image compression system. The image compression system consists of three closely connected components, namely:
- Source encoder (DCT based)
- Quantizer
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 8, November 2010, ISSN 1947-5500, http://sites.google.com/site/ijcsis/
- Entropy encoder
Figure 2 shows the architecture of the JPEG encoder.
Principles behind JPEG compression. A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task, then, is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). The DCT-based JPEG compression standard applies the discrete cosine transform to each 8 x 8 block of the partitioned image; compression is then achieved by quantizing each of those 8 x 8 coefficient blocks.
Image transform coding for the JPEG compression algorithm. In the image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 non-overlapping blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver (or JPEG file reader) decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and then puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded without seriously affecting the quality of the reconstructed image. The two-dimensional DCT of an M-by-N matrix A is defined as follows:
$$B_{pq} = \alpha_p \alpha_q \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} A_{mn} \cos\frac{\pi(2m+1)p}{2M}\,\cos\frac{\pi(2n+1)q}{2N}, \qquad 0 \le p \le M-1,\; 0 \le q \le N-1$$

where

$$\alpha_p = \begin{cases} 1/\sqrt{M}, & p = 0 \\ \sqrt{2/M}, & 1 \le p \le M-1 \end{cases} \qquad \alpha_q = \begin{cases} 1/\sqrt{N}, & q = 0 \\ \sqrt{2/N}, & 1 \le q \le N-1 \end{cases}$$
The DCT is an invertible transformation, and its inverse is given by

$$A_{mn} = \sum_{p=0}^{M-1} \sum_{q=0}^{N-1} \alpha_p \alpha_q B_{pq} \cos\frac{\pi(2m+1)p}{2M}\,\cos\frac{\pi(2n+1)q}{2N}, \qquad 0 \le m \le M-1,\; 0 \le n \le N-1$$

with $\alpha_p$ and $\alpha_q$ defined as above.
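The pair of formulas above can be checked directly with a short NumPy sketch (the helper names below are mine, not from the paper):

```python
import numpy as np

def _basis(M, N):
    """Cosine basis matrices and normalization factors for an M-by-N DCT."""
    m = np.arange(M); n = np.arange(N)
    p = np.arange(M).reshape(-1, 1); q = np.arange(N).reshape(-1, 1)
    C_M = np.cos(np.pi * (2 * m + 1) * p / (2 * M))   # C_M[p, m]
    C_N = np.cos(np.pi * (2 * n + 1) * q / (2 * N))   # C_N[q, n]
    a_p = np.where(p == 0, np.sqrt(1.0 / M), np.sqrt(2.0 / M))    # column vector
    a_q = np.where(q == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N)).T  # row vector
    return C_M, C_N, a_p, a_q

def dct2(A):
    """Forward 2-D DCT of an M-by-N matrix A, term-by-term per the formula above."""
    C_M, C_N, a_p, a_q = _basis(*A.shape)
    return a_p * a_q * (C_M @ A @ C_N.T)

def idct2(B):
    """Inverse 2-D DCT, recovering A from its coefficient matrix B."""
    C_M, C_N, a_p, a_q = _basis(*B.shape)
    return C_M.T @ (a_p * a_q * B) @ C_N
```

With this normalization the transform is orthonormal, so `idct2(dct2(A))` reproduces `A` to machine precision, and a constant 8 x 8 block concentrates all its energy in the DC coefficient.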
The DCT-based encoder can be thought of as essentially compressing a stream of 8 x 8 blocks of image samples. Each 8 x 8 block makes its way through each processing step and yields output in compressed form into the data stream. Because adjacent image pixels are highly correlated, the 'forward' DCT (FDCT) processing step lays the foundation for achieving data compression by concentrating most of the signal in the lower spatial frequencies. For a typical 8 x 8 sample block from a typical source image, most of the spatial frequencies have zero or near-zero amplitude and need not be encoded. In principle, the DCT introduces no loss to the source image samples; it merely transforms them to a domain in which they can be more efficiently encoded. After output from the FDCT, each of the 64 DCT coefficients is uniformly quantized in conjunction with a carefully designed 64-element quantization table. At the decoder, the quantized values are multiplied by the corresponding table elements to recover the original unquantized values. After quantization, all of the quantized coefficients are ordered into the "zig-zag" sequence shown in figure 1. This ordering helps facilitate entropy encoding by placing low-frequency non-zero coefficients before high-frequency coefficients. The DC coefficient, which contains a significant fraction of the total image energy, is differentially encoded.
Figure 1 Zig-Zag Sequence
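The quantization and zig-zag steps described above can be sketched as follows; the scan order falls out of sorting block positions by anti-diagonal (function names are mine, and the table is the standard JPEG luminance quantization table from Annex K of the standard):

```python
import numpy as np

def zigzag_order(n=8):
    """(row, col) indices of an n-by-n block in JPEG zig-zag scan order:
    sorted by anti-diagonal, alternating the direction of traversal."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quantize_and_scan(dct_block):
    """Quantize an 8x8 DCT coefficient block and list it in zig-zag order."""
    q = np.rint(dct_block / Q_LUMA).astype(int)
    return [int(q[r, c]) for r, c in zigzag_order(8)]
```

The first few entries of the scan are (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), matching figure 1: low-frequency coefficients come first, so the long runs of zeros in the high frequencies cluster at the tail where the entropy coder handles them cheaply.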
The JPEG decoder architecture, shown in figure 3, is the reverse of the procedure described for compression.
B. Segmentation
Let D be a matrix of order m x n representing an image of width m and height n. The domain of D_{i,j} is [0..255] for any i = 1..m and j = 1..n.
The architecture of segmentation using the histogram is shown in figure 4. To make the process faster, the high-resolution input image is down-sampled two times; each down-sampling halves each dimension, so the final down-sampled image (D) has dimensions m/4 x n/4. The down-sampled image is smoothed to obtain a smoothed gray-scale image (S) using equation (1). The histogram (H) is computed for the gray-scale image (S). The most frequent gray-scale value (Mh) is determined from the histogram by equation (2), and is shown as a line in figure 5.
Mh = arg max_x H(x) …(2)
In the case of a homogeneous background, the background value of the image has the highest frequency. To accommodate background textures, a range of gray-level values is considered for segmentation. The range is computed using equations (3) and (4).

L = max(Mh - 30, 0) …(3)
U = min(Mh + 30, 255) …(4)

The gray-scale image S is segmented to detect the background area of the image using the function given in equation (5):

B_{i,j} = (S_{i,j} > L) and (S_{i,j} < U) …(5)
Figure – 5
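Equations (2)-(5) amount to a mode-plus-tolerance segmentation, which can be sketched in a few lines (the function name is mine; the +/-30 tolerance is the paper's):

```python
import numpy as np

def segment_background(S):
    """Segment a smoothed gray-scale image S (uint8) per equations (2)-(5):
    find the modal gray value Mh, widen it by +/-30, and mark pixels in
    that range as background (1 in the binary image B)."""
    H = np.bincount(S.ravel(), minlength=256)   # histogram H of eq. (2)
    Mh = int(np.argmax(H))                      # most frequent gray value, eq. (2)
    L = max(Mh - 30, 0)                         # eq. (3)
    U = min(Mh + 30, 255)                       # eq. (4)
    B = (S > L) & (S < U)                       # eq. (5)
    return B.astype(np.uint8), Mh, L, U
```

Clamping L and U to [0, 255] keeps the range valid when the modal value sits near either end of the gray scale.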
After processing, the pixel value in the binary image B is 1 for the background area. To avoid the problem of over-segmentation, the binary image is subjected to a sequence of morphological operations. The binary image is first eroded with a smaller circular structuring element (SE) to remove smaller segments, as given in equation (6):

B = B ⊖ SE …(6)
[Figure 4: segmentation architecture — down-sample 2 times (D) → smooth the image (S) → histogram computation (H) → range calculation (L & U) → binary segmentation (B) → up-sample 2 times]
[Figure 2: JPEG encoder — source image → 8 x 8 blocks → FDCT → Quantizer (Quantizer Table) → Entropy Encoder (Huffman Table) → compressed image data]
[Figure 3: JPEG decoder — compressed image data → Entropy Decoder (Huffman Table) → Dequantizer (Quantizer Table) → IDCT → reconstructed image]
The resultant image is then subjected to a morphological closing operation with a larger circular structuring element, as given in equation (7):

B = B • SE …(7)
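The erosion of equation (6) followed by the closing of equation (7) can be sketched with SciPy's morphology routines. The paper does not give the structuring-element radii, so the values below (and the `disk` helper and function name) are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Circular (disk-shaped) structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def clean_segmentation(B, small_r=2, large_r=7):
    """Erode B with a small disk to drop tiny segments (eq. 6), then close
    with a larger disk to avoid over-segmentation (eq. 7). Radii are
    illustrative; the paper only specifies 'smaller' and 'larger' SEs."""
    B = ndimage.binary_erosion(B, structure=disk(small_r))
    B = ndimage.binary_closing(B, structure=disk(large_r))
    return B
```

The erosion deletes isolated specks smaller than the small disk, while the closing fills gaps and reconnects fragments of the large background region.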
III. PROPOSED HYBRID JPEG COMPRESSION

The input image is initially segmented into background and foreground parts as described in section II.B. The image is then divided into 8x8 blocks and DCT values are computed for each block. Quantization is performed according to the predefined quantization table. The quantized values are then reordered using the zig-zag ordering method described in section II.A. The lower values of the AC coefficients are discarded from the zig-zag ordered list by comparison against the threshold value chosen by the selector, according to the block's location as identified by the classifier: if the block lies in the foreground area the selector sets a higher threshold value; otherwise a lower threshold value is set. After discarding insignificant coefficients, the remaining data are compressed by the standard entropy encoder based on the code table.
Algorithm
1. Input a high-resolution color image.
2. Down-sample the input image 2 times.
3. Convert the down-sampled image to a gray-scale image (G).
4. Find the histogram (H) of the gray-scale image.
5. Find the lower (L) and upper (U) gray-scale values of the background area.
6. Find the binary segmented image (B) from the gray-scale image (G).
7. Up-sample the binary image (B) two times.
8. Divide the input image into 8x8 blocks.
9. Find the DCT coefficients of each block.
10. Quantize the DCT coefficients.
11. Discard lower quantized values based on the threshold value selected by the selector.
12. Compress the remaining DCT coefficients with the entropy encoder.
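Steps 8-11 of the algorithm can be sketched as below. The function name, the threshold values, and the majority-vote classifier are my assumptions: the paper specifies per-block thresholds chosen by the selector but leaves the concrete values open, and this sketch reads "threshold" as the discard cutoff on quantized AC magnitudes (a smaller cutoff preserves more foreground detail):

```python
import numpy as np
from scipy import fft

def hybrid_jpeg_blocks(image, B_mask, q_table, t_fg=1, t_bg=4):
    """Steps 8-11: block DCT, quantization, and threshold-based discarding
    of small AC coefficients. B_mask is the up-sampled binary segmentation
    (1 = background); t_fg/t_bg are illustrative discard cutoffs."""
    h, w = image.shape
    blocks = []
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            block = image[r:r + 8, c:c + 8].astype(float) - 128  # level shift
            coeff = fft.dctn(block, norm='ortho')                # step 9
            q = np.rint(coeff / q_table)                         # step 10
            # Step 11: classify the block by majority vote of the mask;
            # background blocks get the larger cutoff (more ACs dropped).
            background = B_mask[r:r + 8, c:c + 8].mean() >= 0.5
            t = t_bg if background else t_fg
            dc = q[0, 0]
            q[np.abs(q) < t] = 0                                 # drop small ACs
            q[0, 0] = dc                                         # DC kept intact
            blocks.append(q)
    return blocks
```

The surviving coefficients would then go through zig-zag ordering and the entropy encoder (step 12) exactly as in standard JPEG, which is what lets the decoder remain unmodified.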
The architecture of the proposed method is shown in figure 6. The quantization table is a fixed classical table derived from empirical results. The quantizer quantizes the DCT coefficients computed by the FDCT. The classifier identifies the class of each pixel by segmenting the given input image. The selector and limiter work together to find the discard threshold limit. The entropy encoder creates compressed code using the code table. The compressed image may be stored or transmitted faster than with the existing method.
IV. RESULTS AND DISCUSSION
The hybrid JPEG compression method was implemented according to the description in section III and tested with the set of test images shown in figure 8. The results obtained from the implementation of the proposed algorithms are shown in figures 7, 9, and 10 and in table I. Figure 7.a shows the original input image; in figure 7.b the segmented object and background areas are discriminated in black and white. The compressed bit rates of the twelve test images were computed and tabulated in table I. Low-quality (LQ) and high-quality (HQ) JPEG compression were performed and the corresponding compression ratios (CR) and PSNR values tabulated. The PSNR is higher for HQ and the CR is higher for LQ. Hybrid JPEG compression performs HQ compression on the main subject area and LQ compression on the background area; thus the PSNR in the main subject area is the same for hybrid JPEG and HQ JPEG. Figure 9 compares the normalized CRs of hybrid JPEG and HQ JPEG; almost all of the images are compressed better than with classical JPEG compression. Figure 10 shows how much the compression ratio is increased over the classical JPEG compression method.
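The two metrics tabulated above have standard definitions, sketched here for reference (function names are mine):

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(255^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR = uncompressed size / compressed size."""
    return original_bytes / compressed_bytes
```

Because the hybrid method leaves foreground blocks untouched relative to HQ JPEG, the PSNR computed over the main subject region alone is identical for both, while the global CR improves as the background's discarded coefficients shrink the entropy-coded stream.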
[Figure 6. Hybrid JPEG compression method — input image → FDCT → Quantizer (Quantizing Table) → Classifier → Selector → Limiter → Entropy Encoder (Code Table) → compressed image]
Figure – 8 Test Images (1-12 from Left to Right and Top to Bottom)
[Figure – 9 Normalized Compression Ratio Obtained for Test Images (normalized CR, 0-1.2, for images 1-12; series: Hybrid vs. HQ Lossy)]
[Figure – 10 Increased Compression Ratios by Hybrid Compression (compression ratio increase, 0-0.7, for images 1-12)]
V. CONCLUSION
The compression ratio of the hybrid JPEG method is higher than that of the JPEG method in more than 90% of the test cases; in the worst case both methods give the same compression ratio. The PSNR value in the main subject area is the same for both methods. The PSNR value in the background area is lower for the hybrid JPEG method, which is acceptable since the background area is not vital. The hybrid JPEG method is suitable for imagery with a large, trivial background where a certain level of loss is permissible.