Fast Codebook Generation for Vector Quantization using Ordered Pairwise Nearest Neighbor with Multiple Merging

K.Somasundaram Department of Computer Science and Applications

Gandhigram Rural Institute Gandhigram – 624 301

Tamilnadu, India [email protected]

S.Vimala Department of Computer Science

Mother Teresa Women’s University Kodaikanal – 624 101

Tamilnadu, India [email protected]

Abstract – In this paper, we propose two fast codebook generation techniques with iterative clustering for Vector Quantization (VQ): Ordered Pairwise Nearest Neighbor (OPNN) and Ordered Pairwise Nearest Neighbor with Multiple Merging (OPNNMM). The conventional PNN technique is improved by the proposed techniques to reduce the time taken in searching for the nearest neighbors. Codebooks of various sizes are created for images of size 256 x 256 pixels. We also introduce codebook optimization, using clustering, for both methods. Experimental results show that the time taken to generate the codebooks is reduced and the quality of the reconstructed images, in terms of PSNR, is better when compared with the results of earlier PNN based methods. OPNNMM with codebook optimization further improves the quality of the images and is far better than the OPNN method.

Keywords – image compression; vector quantization; nearest neighbor; multiple merging; codebook optimization

I. INTRODUCTION

Images have been used for communication since ancient times, and because of rapid technological growth and the advent of the internet, we are able to store and transmit digital data and images today. The transmission of multimedia content over the net is increasing day by day. The multimedia elements are mostly speech, images, and videos. These elements require large amounts of data, resulting in the consumption of huge bandwidth and storage resources. Vector quantization (VQ) [1]-[3] is an efficient technique for data compression and has been successfully used in various applications involving VQ-based encoding and VQ-based recognition. VQ is an efficient image coding technique achieving bit rates of less than one bit per pixel (bpp). Several VQ algorithms have been extensively investigated and developed for speech and image compression. VQ is a lossy image compression technique and has applications in different areas: protein classification and secondary structure computation [4], speech recognition,

face detection, pattern recognition, real-time video based event detection and anomaly intrusion detection systems [5], etc. Different types of VQ, such as classified VQ [6], [7], address VQ [6], [8], finite state VQ [6], [9], side match VQ [6], [10], mean-removed classified VQ [9], [11], and predictive classified VQ [9], [12], have been used for various purposes. VQ has also been applied to other applications, such as index compression [9], [13] and inverse halftoning [9], [14], [15]. An important component of a VQ algorithm is codebook generation. The most widely used technique for codebook generation is the Linde, Buzo and Gray (LBG) algorithm [16]. With its relatively simple structure and computational complexity, VQ has received much attention in the last decade [17]. The performance of VQ highly depends on the effectiveness of the codebook.

A vector quantizer Q of dimension k and size N is a mapping from a vector in k-dimensional Euclidean space to a finite set C of N points. This can be expressed mathematically as Q: R^k → C, where C = {Y1, Y2, Y3, ..., YN} and Yi ∈ R^k. The set C is called the codebook and the Yi, 1 ≤ i ≤ N, are called codewords. The most widely used Generalized Lloyd Algorithm (GLA) starts with an initial solution, which is iteratively improved using two optimality criteria in turn until a local minimum is reached. A codebook can also be built hierarchically. The iterative splitting algorithm [18], [19] starts with a codebook of size one, which is the centroid of the entire training set. The codebook is then iteratively enlarged by a splitting procedure until it reaches the desired size. Another hierarchical algorithm, the Pairwise Nearest Neighbor (PNN) [20], uses the opposite, bottom-up approach to generate the codebook. It starts by initializing a codebook



where each training vector is considered as its own codevector. The number of codewords is then reduced iteratively. In each iteration, two of the vectors that are very close to each other are merged, and the process is repeated until the desired codebook size is reached. During an iteration, a set of N codevectors is reduced to N-1 codevectors. Of the two hierarchical approaches [21], the PNN has the higher potential because it gives better results with a simpler implementation. It can also be used to produce an initial codebook for the GLA, or it can be embedded into other hybrid methods.

Research efforts in codebook generation have been concentrated in two directions: one to generate a better codebook that approaches the globally optimal solution, and the other to reduce the computational complexity of the LBG algorithm. Many methods for reducing the codebook generation time have appeared in the literature. The subspace distortion method [22] and the PNN algorithm [23] are a few of the important works to cite. The codebooks generated by both methods are slightly degraded even though the computation time is reduced significantly. In general, the generation of better codebooks requires longer computation time, and fast codebook generation methods often suffer from an increase in the overall distortion of the reconstructed image [23]. A few such algorithms are reported in [24]. Kaukoranta et al. [25] introduced a faster variant of PNN. Their main idea is to maintain a pointer to the nearest neighbor of each vector to avoid recalculating the distances between neighbors. After each merge, the pointer table needs to be updated only for the merged vector. This reduces the computation time significantly.

In this paper, we propose two variants of the PNN method for generating codebooks: the Ordered Pairwise Nearest Neighbor (OPNN) and the Ordered PNN with Multiple Merging (OPNNMM). Experimental results show that the proposed methods yield better results than the existing PNN method. The rest of the paper is organized as follows. In Section 2, the proposed methods OPNN and OPNNMM are explained. The results and discussion are given in Section 3. The conclusion is given in Section 4.

II. PROPOSED METHODS

The proposed methods initially treat the training vectors generated from the image as the base for generating the codebook. A set of N training vectors is reduced to a set of N-1 vectors. This is repeated iteratively, reducing the training set by one vector each time until the desired codebook size is reached. In the second method, training vectors are merged in multiples of 32/64/128/256/512/1024 in each iteration till the desired codebook size is reached. These two methods of codebook generation are described below.

A. Ordered PNN (OPNN)

In this method, an image of size M x M pixels is divided into small non-overlapping blocks of size 4 x 4 pixels. Usually M is a power of 2. Each block is then converted to a one-dimensional array of 16 elements. This array becomes a training vector. An image of size M x M therefore gives N training vectors, where

N = (M x M) / 16        (1)
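As an illustration of this blocking step, here is a minimal Python/NumPy sketch; the paper's own implementation was in MATLAB, so the function and variable names below are ours:

```python
import numpy as np

def image_to_training_vectors(img):
    """Split an M x M image into non-overlapping 4 x 4 blocks and flatten
    each block into a 16-element training vector, giving N = (M x M)/16
    vectors as in eqn. (1)."""
    M = img.shape[0]
    assert img.shape == (M, M) and M % 4 == 0
    blocks = img.reshape(M // 4, 4, M // 4, 4).swapaxes(1, 2)  # (M/4, M/4, 4, 4)
    return blocks.reshape(-1, 16).astype(np.float64)           # N x 16

# Example: a 256 x 256 image yields N = 4096 training vectors of length 16.
img = np.random.randint(0, 256, size=(256, 256))
print(image_to_training_vectors(img).shape)                    # (4096, 16)
```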

The set of N training vectors is called the training set TVj, where 1≤j≤N. The sum of the elements in each training vector is then computed as:

Sj = Σ(i=1 to 16) TVj(i),   1 ≤ j ≤ N        (2)

The training set TVj is then sorted in ascending order of the sum of elements Sj, to give TVSj, where,

TVSj = sort(Sj), where Sj < Sj+1 (3)

Now the adjacent vectors in the set TVSj will be closer to each other than to any other vectors. We treat two adjacent training vectors as one pair. We then find the pair of adjacent vectors, among the N vectors, whose difference dxm is the least. The difference between the adjacent vectors of a pair is computed as:

dxm = Σ(i=1 to 16) | TVSm(i) - TVSm+1(i) |,   1 ≤ m ≤ N-1        (4)

The location of the pair of adjacent vectors with the minimum difference is found as:

l = index(min(dxm))        (5)

These two vectors are then merged into a single vector as:

TVSl(i) = ( TVSl(i) + TVSl+1(i) ) / 2,   i = 1, 2, ..., 16        (6)

Now the training set of size N is reduced to N-1. This process is repeated until N reaches n, the desired size of the codebook.

B. The algorithm for the OPNN method

Step 1: Input the given image of size M x M pixels. Divide the image into blocks of size 4 x 4 pixels.
Step 2: Generate the N training vectors using equation (1).
Step 3: Calculate the sum of the components of each training vector using equation (2).
Step 4: Set the size of the codebook as n.
Step 5: Rearrange the training vectors in ascending order based on the sum values using equation (3).
Step 6: Find the difference dxm between consecutive vectors using equation (4).
Step 7: Identify the pair that has the minimum difference using equation (5).
Step 8: Merge that pair of vectors into one using equation (6).
Step 9: Repeat Steps 3 through 8 until N reaches n.
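A minimal Python sketch of the OPNN loop above is given below for illustration only; the paper's implementation was in MATLAB, so the names (`opnn_codebook`, `tv`) are ours, and the per-pass re-sorting follows Step 9:

```python
import numpy as np

def opnn_codebook(tv, n):
    """Ordered PNN (Steps 1-9): repeatedly sort the training vectors by their
    component sums, merge the closest adjacent (sum-ordered) pair, and stop
    when only n vectors -- the codewords -- remain."""
    V = tv.astype(np.float64)
    while len(V) > n:
        V = V[np.argsort(V.sum(axis=1))]             # eqns. (2)-(3): order by sum
        dx = np.abs(np.diff(V, axis=0)).sum(axis=1)  # eqn. (4): adjacent differences
        l = int(np.argmin(dx))                       # eqn. (5): closest adjacent pair
        merged = (V[l] + V[l + 1]) / 2.0             # eqn. (6): merge the pair
        V = np.vstack([V[:l], merged[None, :], V[l + 2:]])
    return V                                         # n x 16 codebook
```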

C. Ordered PNN with Multiple Merging (OPNNMM)

In the above OPNN method, only one pair of training vectors is merged in each iteration, and thus it takes more time to generate the desired codebook. Instead of merging one pair in each iteration, we merge multiple pairs at a time with the intention of reducing the time taken by the OPNN method. In this method, we first compute the differences dxm. The difference values dxm are then rearranged in ascending order using:

dxsm = sort(dxm), such that dxi < dxi+1        (7)

Obviously the pairs at the top of the sorted list will have the least distances. The pairs associated with the top p (merge level) difference values are merged. Every time, after sorting the differences, the top p pairs are merged. By doing so, a set of N vectors is reduced to N-p vectors in a single iteration. After the required number of iterations, the size N reaches n, the desired codebook size.

D. The algorithm for the OPNNMM method

Step 1: Input the given image of size M x M pixels. Divide the image into blocks of size 4 x 4 pixels.
Step 2: Generate the N training vectors using equation (1). Set the size of the codebook as n.
Step 3: Calculate the sum of the elements of each training vector using equation (2).
Step 4: Rearrange the training vectors in ascending order based on the sum values using equation (3).
Step 5: Find the difference between consecutive vectors using equation (4).
Step 6: Sort the difference values dxm in ascending order using equation (7).
Step 7: Merge the top p pairs of vectors using equation (6).
Step 8: Repeat Steps 3 through 7 till the size of the training set N reaches n, the desired codebook size.
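The following Python sketch illustrates the multiple-merging pass; it is not the paper's code (which was written in MATLAB), and its handling of overlapping pairs is our assumption, since the paper does not specify it:

```python
import numpy as np

def opnnmm_codebook(tv, n, p):
    """Ordered PNN with Multiple Merging: in each pass, merge the p adjacent
    (sum-ordered) pairs with the smallest differences (eqn. (7)), so the
    training set shrinks by up to p vectors per pass instead of 1."""
    V = tv.astype(np.float64)
    while len(V) > n:
        V = V[np.argsort(V.sum(axis=1))]             # eqns. (2)-(3): order by sum
        dx = np.abs(np.diff(V, axis=0)).sum(axis=1)  # eqn. (4): adjacent differences
        k = min(p, len(V) - n)                       # never shrink below n
        top = np.sort(np.argsort(dx)[:k])            # eqn. (7): k closest pairs
        merged, used = [], set()
        for l in top:
            l = int(l)
            if l in used or l + 1 in used:           # skip pairs sharing a vector
                continue                             # (a detail the paper leaves open)
            merged.append((V[l] + V[l + 1]) / 2.0)   # eqn. (6): merge the pair
            used.update((l, l + 1))
        keep = [V[i] for i in range(len(V)) if i not in used]
        V = np.array(keep + merged)
    return V
```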

E. Optimization of codebook

The codebook generated by either of the above methods can be optimized using the procedure given below. Let Xi be a sub-image block of size 4 x 4 obtained from the given image of size M x M, and let Yi represent the codevectors generated by either method. Pick any codevector Yi. Find all the sub-image blocks Xi that are closer to Yi than to any other Yj, i.e. find the set of all training vectors Xi that satisfy:

d(Xi,Yi) < d(Xi,Yj) for all j ≠ i, (8)

where the distance between the training vector Xi and the codevector Yi is computed as

d(Xi, Yi) = Σ(j=1 to 16) | Xij - Yij |        (9)

Calculate the sum vector of each cluster by adding all the training vectors Xi that are closer to Yi than to any other codevector. Each component of the sum vector is calculated by adding the corresponding components of all the training vectors of the same cluster:

Sumij = Σ(q=1 to Qi) Xqj,   j = 1, 2, ..., 16        (10)

where Qi is the total number of training vectors in cluster i and Xqj denotes the j-th component of the q-th training vector in that cluster.

The centroid of the cluster is computed as:

Yij = Sumij / Qi,   i = 1, 2, ..., n,  j = 1, 2, ..., 16        (11)

Thus the old codevector Yi is replaced with the newly computed centroid given by eqn. (11). These steps are repeated until the codevectors converge to the desired level.

F. Algorithm for codebook optimization

Step 1: Group the training vectors into n clusters based on the distance between the codevectors and the training vectors, using equations (8) and (9).
Step 2: Compute the sum vector for every cluster using equation (10).
Step 3: Compute the centroid of each cluster using equation (11).
Step 4: Replace the existing codevector with the new codevector (centroid) given by eqn. (11).


Step 5: Repeat Steps 1 through 4 till the codevectors converge to the desired level.
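A minimal Python sketch of this optimization loop follows; again it is only illustrative (the paper's code was in MATLAB), and the treatment of empty clusters is our assumption:

```python
import numpy as np

def optimize_codebook(tv, cb, iterations=20):
    """Codebook optimization (Steps 1-5): cluster the training vectors around
    the current codevectors (eqns. (8)-(9)) and replace each codevector by
    the centroid of its cluster (eqns. (10)-(11))."""
    cb = cb.astype(np.float64).copy()
    tv = tv.astype(np.float64)
    for _ in range(iterations):
        # Step 1 / eqns. (8)-(9): nearest codevector under the summed
        # absolute (L1) component difference.
        nearest = np.empty(len(tv), dtype=int)
        for t, x in enumerate(tv):
            nearest[t] = np.abs(cb - x).sum(axis=1).argmin()
        # Steps 2-4 / eqns. (10)-(11): the sum vector of each cluster divided
        # by its size gives the new codevector (empty clusters are left
        # unchanged, a detail the paper does not address).
        for i in range(len(cb)):
            members = tv[nearest == i]
            if len(members) > 0:
                cb[i] = members.sum(axis=0) / len(members)
    return cb
```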

III. RESULTS AND DISCUSSION

Experiments using the proposed methods were conducted on the standard images Lena, Boats, Bridge and Cameraman of size 256 x 256 pixels. The PSNR values and the time taken for generating the codebooks using the PNN, OPNN and OOPNN (OPNN with codebook optimization) methods are given in Table I for codebook sizes 128, 256, 512 and 1024 for the Lena image.

TABLE I. TIME (IN SECONDS) TO GENERATE THE CODEBOOKS OF VARIOUS SIZES AND PSNR VALUES OF THE RECONSTRUCTED LENA IMAGE USING THE PNN, OPNN AND OOPNN METHODS.

CB Size | Metric | PNN | OPNN | OOPNN
128  | PSNR | 27.02   | 28.73  | 33.34
128  | Time | 9718.20 | 430.42 | 430.42 + 4.13
256  | PSNR | 28.30   | 29.76  | 34.91
256  | Time | 9704.69 | 427.33 | 427.33 + 8.27
512  | PSNR | 31.01   | 34.28  | 37.18
512  | Time | 9694.55 | 422.97 | 422.97 + 16.72
1024 | PSNR | 36.58   | 37.96  | 41.34
1024 | Time | 9627.34 | 414.67 | 414.67 + 35.52

TABLE II. TIME (IN SECONDS) TO GENERATE THE CODEBOOKS OF VARIOUS SIZES AND PSNR VALUES OF THE RECONSTRUCTED BRIDGE IMAGE USING THE PNN, OPNN AND OOPNN METHODS.

CB Size | Metric | PNN | OPNN | OOPNN
128  | PSNR | 24.63   | 25.56  | 27.96
128  | Time | 9786.80 | 434.44 | 434.44 + 4.14
256  | PSNR | 25.33   | 26.82  | 28.90
256  | Time | 9743.58 | 431.13 | 431.13 + 8.30
512  | PSNR | 26.69   | 28.36  | 30.33
512  | Time | 9687.20 | 421.44 | 421.44 + 16.69
1024 | PSNR | 29.09   | 30.18  | 32.79
1024 | Time | 9558.17 | 415.66 | 415.66 + 33.55

TABLE III. TIME (IN SECONDS) TO GENERATE THE CODEBOOKS OF VARIOUS SIZES AND PSNR VALUES OF THE RECONSTRUCTED CAMERAMAN IMAGE USING THE PNN, OPNN AND OOPNN METHODS.

CB Size | Metric | PNN | OPNN | OOPNN
128  | PSNR | 24.92   | 24.32  | 30.46
128  | Time | 9794.81 | 422.44 | 422.44 + 4.13
256  | PSNR | 26.37   | 27.23  | 32.35
256  | Time | 9754.17 | 419.58 | 419.58 + 8.28
512  | PSNR | 29.77   | 32.12  | 35.53
512  | Time | 9698.34 | 414.39 | 414.39 + 16.70
1024 | PSNR | 35.44   | 37.33  | 40.66
1024 | Time | 9551.74 | 410.80 | 410.80 + 33.45

From Table I, we observe that for the Lena image it took about 9718 seconds to generate a codebook of size 128 with the PNN method. The time taken to generate the codebook decreases as the size of the codebook increases; it took 9627 seconds for a codebook of size 1024. The time taken to optimize the codebook increased with the codebook size. The PSNR values for the PNN method increased from 27.02 for a codebook of size 128 to 36.58 for a codebook of size 1024. For OPNN, we found that the time is reduced to about 420 seconds, which is about 23 times faster than PNN.

We observe that for all images, OOPNN took about 434 seconds to generate an optimized codebook of size 128 with 20 iterations. The time taken to optimize a codebook of size 128 (4 seconds) roughly doubles with the codebook size: about 8 seconds for 256, 16 seconds for 512 and about 32 seconds for 1024. The PSNR value for the Lena image is 33.34 for a codebook of size 128 and increases to 41.34 with the codebook of size 1024. This is better than the OPNN method, and at 1/10th of the time. In OOPNN, the total time taken to generate the codebook is the time taken to generate the initial codebook plus the time taken to optimize it. From Table I, we observe that the time taken to generate a codebook of size 128 is 430 seconds and the time taken to optimize it is 4 seconds, leading to a total time of 434 seconds. With just an extra 4 seconds, the PSNR is increased from 28.73 to 33.34 for the Lena image with the codebook of size 128. The same trend is found for the higher codebook sizes. Table II and Table III show the results for the Bridge and Cameraman images; we observe that the results are similar to those for the Lena image.

For OOPNN, we performed experiments with up to 40 iterations. However, we found that beyond 20 iterations not much improvement in the PSNR values was observed. Hence, we fixed the number of iterations for codebook optimization at 20 for all images.

We then carried out experiments on the test images Lena, Boats, Bridge and Cameraman using OPNNMM. The experimental results obtained for different merge levels p = 32, 64, 128 for codebooks of sizes 128, 256, 512 and 1024 are given in Table IV. We note from Table IV that to generate a codebook of size 128, the time taken is 25.66 seconds for a merge level of 32 and 6.97 seconds for a merge level of 128, without much degradation in PSNR. For the high-detail Cameraman image, there is a significant reduction in PSNR; however, this is compensated by the gain in codebook generation time. When the codebook is optimized, by spending about 4 seconds for a codebook of size 128, the PSNR value for the Lena image is significantly increased from 26.64 to 33.19 for a merge level of 32. The same quality of image is obtained for the higher merge levels 64 and 128 after


optimization. With the optimized codebook, we got consistently better PSNR values for all merge levels and for all images. For example, for the Lena image after codebook optimization we got a PSNR value of about 33 for all merge levels for a codebook of size 128, 35 for 256, 37 for 512, and 41 for 1024. The same trend, with increased PSNR values, is found for the higher codebook sizes 256, 512 and 1024.

The time taken by the OPNNMM method to generate a codebook of size 1024 at merge level 32 is about 1/16th of the time taken by the OPNN method. On average, the time taken by the OPNNMM method to generate a codebook of size 1024 for the Lena image is just 1/48th of that taken by the OPNN method.
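The PSNR values in Tables I-IV are measured on images reconstructed from the generated codebooks. The paper does not list its encoding routine, so the sketch below only shows the usual procedure (nearest-codeword coding of each 4 x 4 block and PSNR for 8-bit images); all names are ours:

```python
import numpy as np

def reconstruct(img, cb):
    """Encode every 4 x 4 block of img with its nearest codeword in cb and
    rebuild the image from those codewords."""
    cb = np.asarray(cb, dtype=np.float64)
    M = img.shape[0]
    tv = img.reshape(M // 4, 4, M // 4, 4).swapaxes(1, 2).reshape(-1, 16).astype(np.float64)
    # Nearest-codeword search via ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2.
    d = (tv ** 2).sum(1)[:, None] - 2.0 * tv @ cb.T + (cb ** 2).sum(1)[None, :]
    idx = d.argmin(axis=1)
    return cb[idx].reshape(M // 4, M // 4, 4, 4).swapaxes(1, 2).reshape(M, M)

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio (dB) for 8-bit images."""
    mse = np.mean((orig.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```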

TABLE IV. COMPARISON OF THE TIME (IN SECONDS) TAKEN TO GENERATE THE CODEBOOKS AND THE PSNR VALUES OBTAINED BY VARYING THE NUMBER OF PAIRS MERGED IN A SINGLE ITERATION, FOR CODEBOOKS OF SIZE 128/256/512/1024 AND IMAGES OF SIZE 256 X 256 PIXELS.

Codebook Size | Merge Level | Metric | OPNNMM (Lena / Boats / Bridge / Camera) | With optimization, 20 iterations (Lena / Boats / Bridge / Camera)
128  | 32   | Time | 26.55 / 27.30 / 25.20 / 27.25 | 4.14 / 4.14 / 4.14 / 4.14
128  | 32   | PSNR | 26.64 / 23.10 / 23.89 / 25.57 | 33.19 / 30.54 / 27.64 / 30.30
128  | 64   | Time | 13.39 / 14.03 / 12.88 / 13.94 | 4.13 / 4.16 / 4.14 / 4.10
128  | 64   | PSNR | 25.16 / 23.06 / 24.52 / 19.63 | 33.41 / 30.68 / 27.64 / 30.33
128  | 128  | Time | 6.97 / 7.28 / 6.67 / 7.23 | 4.14 / 4.14 / 4.14 / 4.14
128  | 128  | PSNR | 26.53 / 22.22 / 23.59 / 19.34 | 33.51 / 30.55 / 27.67 / 30.26
256  | 32   | Time | 26.36 / 27.50 / 25.27 / 27.39 | 8.26 / 8.30 / 8.27 / 8.25
256  | 32   | PSNR | 29.47 / 23.26 / 25.15 / 28.76 | 35.49 / 32.21 / 28.93 / 32.20
256  | 64   | Time | 13.45 / 13.84 / 12.75 / 13.83 | 8.27 / 8.32 / 8.27 / 8.27
256  | 64   | PSNR | 28.64 / 23.92 / 25.20 / 28.76 | 35.00 / 32.08 / 28.91 / 32.44
256  | 128  | Time | 6.98 / 7.22 / 6.61 / 7.20 | 3.28 / 8.31 / 8.28 / 8.25
256  | 128  | PSNR | 27.96 / 24.58 / 25.60 / 20.94 | 35.44 / 32.52 / 28.89 / 32.43
256  | 256  | Time | 3.72 / 3.86 / 3.53 / 3.84 | 8.28 / 8.31 / 8.28 / 8.26
256  | 256  | PSNR | 26.39 / 23.00 / 25.36 / 22.71 | 35.10 / 32.36 / 28.89 / 32.44
512  | 32   | Time | 26.93 / 27.38 / 25.13 / 27.23 | 16.56 / 16.61 / 16.64 / 16.52
512  | 32   | PSNR | 31.08 / 28.88 / 27.51 / 31.98 | 37.65 / 34.83 / 30.39 / 35.57
512  | 64   | Time | 13.41 / 14.02 / 12.81 / 13.88 | 16.55 / 16.59 / 16.63 / 16.56
512  | 64   | PSNR | 32.42 / 29.22 / 27.50 / 32.00 | 37.20 / 34.65 / 30.64 / 35.57
512  | 128  | Time | 6.94 / 7.25 / 6.61 / 7.17 | 16.59 / 16.59 / 16.64 / 16.58
512  | 128  | PSNR | 29.78 / 26.82 / 27.30 / 31.68 | 37.58 / 34.70 / 30.60 / 35.40
512  | 256  | Time | 3.70 / 3.84 / 3.55 / 3.81 | 16.56 / 16.61 / 16.67 / 16.58
512  | 256  | PSNR | 30.95 / 26.13 / 26.87 / 31.45 | 37.44 / 34.78 / 30.60 / 35.40
512  | 512  | Time | 2.09 / 2.30 / 1.98 / 2.17 | 16.59 / 16.63 / 16.64 / 16.58
512  | 512  | PSNR | 31.07 / 24.55 / 26.61 / 31.19 | 37.00 / 34.78 / 30.53 / 35.40
1024 | 32   | Time | 26.02 / 27.49 / 24.91 / 26.95 | 33.33 / 33.40 / 33.55 / 33.36
1024 | 32   | PSNR | 36.33 / 33.73 / 29.37 / 37.42 | 41.78 / 38.43 / 33.11 / 40.91
1024 | 64   | Time | 13.19 / 13.81 / 12.64 / 13.69 | 33.31 / 33.42 / 33.55 / 33.38
1024 | 64   | PSNR | 36.20 / 34.00 / 29.31 / 37.33 | 41.62 / 38.46 / 33.12 / 40.65
1024 | 128  | Time | 6.84 / 7.19 / 6.52 / 7.11 | 33.36 / 33.41 / 33.56 / 33.39
1024 | 128  | PSNR | 35.70 / 32.63 / 29.31 / 37.33 | 41.51 / 38.40 / 33.03 / 40.74
1024 | 256  | Time | 3.66 / 3.80 / 3.48 / 3.80 | 33.36 / 33.39 / 33.47 / 33.40
1024 | 256  | PSNR | 35.51 / 32.21 / 29.19 / 36.74 | 41.18 / 38.22 / 33.10 / 40.71
1024 | 512  | Time | 2.05 / 2.14 / 1.95 / 2.09 | 33.38 / 33.40 / 33.48 / 33.38
1024 | 512  | PSNR | 34.77 / 28.62 / 29.59 / 35.94 | 41.20 / 38.29 / 32.98 / 40.53
1024 | 1024 | Time | 1.25 / 1.31 / 1.17 / 1.28 | 33.36 / 33.41 / 33.53 / 33.39
1024 | 1024 | PSNR | 31.57 / 30.49 / 28.91 / 34.58 | 41.54 / 37.83 / 32.94 / 40.07

We further analyzed the time taken for codebook generation and optimization, and the quality of the reconstructed images, for the OPNN and OPNNMM methods. We extracted values from Tables I to IV to generate Table V.

Table V shows the time to generate an optimized codebook and the corresponding image quality of the reconstructed images in terms of PSNR.


TABLE V. COMPARISON OF TIME AND IMAGE QUALITY FOR OOPNN AND OOPNNMM METHODS FOR CODEBOOK SIZE 128 FOR DIFFERENT IMAGES.

Image | OOPNN: CB Generation / Optimization / Total Time / PSNR | OOPNNMM (Merge level = 128): CB Generation / Optimization / Total Time / PSNR
Lena      | 430.42 / 4.13 / 434.55 / 33.34 | 6.64 / 4.14 / 11.08 / 33.51
Bridge    | 434.44 / 4.14 / 438.58 / 27.96 | 6.67 / 4.14 / 10.81 / 27.67
Cameraman | 422.44 / 4.13 / 426.57 / 30.46 | 7.23 / 4.14 / 11.37 / 30.26

We note from Table V that the OOPNN method takes 434.55 seconds to generate an optimized codebook for the Lena image and the reconstructed image has a PSNR value of 33.34, whereas the OOPNNMM method takes only 11.08 seconds to produce the same result. The case is similar for the other images. We further compare the gain factor in time, calculated using equation (12), between OPNN and OPNNMM; the results are given in Table VI.

TABLE VI. THE REDUCTION IN TIME FOR GENERATING A CODEBOOK OF SIZE 1024 BY THE OPNNMM METHOD WHEN COMPARED TO THE OPNN METHOD FOR THE LENA IMAGE.

Method | No. of Pairs Merged | Time (in seconds) | PSNR | Gain Factor in Time
OPNN   | 1    | 420.45 | 37.96 | -
OPNNMM | 32   | 26.02  | 36.33 | 93.81%
OPNNMM | 64   | 13.19  | 36.20 | 96.86%
OPNNMM | 128  | 6.84   | 35.70 | 98.37%
OPNNMM | 256  | 3.66   | 35.51 | 99.13%
OPNNMM | 512  | 2.05   | 34.77 | 99.51%
OPNNMM | 1024 | 1.25   | 31.57 | 99.70%

The gain factor is calculated using the equation:

GF = 100 - (TimeOPNNMM / TimeOPNN) x 100        (12)

The performance of the OPNN and OPNNMM methods with respect to the number of iterations is given in Table VII.

TABLE VII. PERFORMANCE COMPARISON OF THE PROPOSED METHODS WITH RESPECT TO THE NUMBER OF ITERATIONS.

Method | No. of iterations
OPNN   | N - n
OPNNMM | (N - n) / p

The OPNN method performs (N - n) iterations to generate a codebook of size n, whereas the OPNNMM method performs only (N - n)/p iterations. The algorithms were implemented in Matlab 7.0 on the Windows operating system; the hardware used is an Intel Core 2 Duo E7400 @ 2.8 GHz processor with 2 GB RAM. When compared with other methods, the proposed OPNNMM method gives better results; the comparisons are given in Table VIII through Table XI.
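(As a concrete illustration of these iteration counts: for the 256 x 256 test images used here, equation (1) gives N = 65536/16 = 4096 training vectors, so to reach a codebook of size n = 1024, OPNN performs 4096 - 1024 = 3072 merge iterations, whereas OPNNMM with merge level p = 32 needs only (4096 - 1024)/32 = 96 iterations.)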

In the following tables, the time (in seconds) to generate the codebook and the PSNR of the reconstructed images are compared for the Ordered Codebook Generation (OCG), Split and Merge (SM) and OPNNMM methods for codebook sizes 128, 256, 512 and 1024.

TABLE VIII. COMPARISON WITH RESPECT TO CODEBOOK OF SIZE 128.

Codebook Size | Image | Performance | OCG | SM | OPNNMM
128 | Lena   | Time | 4.20 | 24.16 | 11.11
128 | Lena   | PSNR | 33.03 | 32.89 | 33.51
128 | Boats  | Time | 4.22 | 23.98 | 11.42
128 | Boats  | PSNR | 30.59 | 30.18 | 30.55
128 | Bridge | Time | 4.22 | 24.09 | 10.81
128 | Bridge | PSNR | 28.16 | 27.87 | 27.67
128 | Camera | Time | 4.22 | 24.13 | 11.37
128 | Camera | PSNR | 29.94 | 29.18 | 30.26

TABLE IX. COMPARISON WITH RESPECT TO CODEBOOK OF SIZE 256.

Codebook Size | Image | Performance | OCG | SM | OPNNMM
256 | Lena   | Time | 8.89 | 48.61 | 12.00
256 | Lena   | PSNR | 34.82 | 34.60 | 35.10
256 | Boats  | Time | 8.73 | 48.59 | 12.17
256 | Boats  | PSNR | 32.00 | 31.89 | 32.36
256 | Bridge | Time | 8.72 | 48.42 | 11.81
256 | Bridge | PSNR | 29.52 | 29.04 | 28.89
256 | Camera | Time | 8.73 | 48.45 | 12.10
256 | Camera | PSNR | 31.12 | 31.07 | 32.44

TABLE X. COMPARISON WITH RESPECT TO CODEBOOK OF SIZE 512.

Codebook Size | Image | Performance | OCG | SM | OPNNMM
512 | Lena   | Time | 17.21 | 100.64 | 18.68
512 | Lena   | PSNR | 36.86 | 36.76 | 37.00
512 | Boats  | Time | 17.20 | 100.23 | 18.93
512 | Boats  | PSNR | 33.95 | 33.95 | 34.78
512 | Bridge | Time | 17.20 | 100.08 | 18.62
512 | Bridge | PSNR | 31.12 | 30.75 | 30.53
512 | Camera | Time | 17.20 | 100.66 | 18.75
512 | Camera | PSNR | 33.10 | 33.43 | 35.40



TABLE XI. COMPARISON WITH RESPECT TO CODEBOOK OF SIZE 1024.

Codebook Size | Image | Performance | OCG | SM | OPNNMM
1024 | Lena   | Time | 34.33 | 199.28 | 34.61
1024 | Lena   | PSNR | 39.59 | 39.43 | 41.54
1024 | Boats  | Time | 34.34 | 199.95 | 34.72
1024 | Boats  | PSNR | 36.60 | 37.09 | 37.83
1024 | Bridge | Time | 34.45 | 200.33 | 34.70
1024 | Bridge | PSNR | 33.71 | 33.55 | 32.94
1024 | Camera | Time | 34.38 | 201.19 | 34.67
1024 | Camera | PSNR | 35.45 | 35.93 | 40.07

[Figure 1. Training sets taken for the study: (a) Lena, (b) Boats, (c) Bridge, (d) Camera]

[Figure 2. PNN technique with CB Size = 256: Cameraman (26.37), Bridge (25.33)]

[Figure 3. Ordered PNN technique with CB Size = 256: Cameraman (27.23), Bridge (26.82)]

[Figure 4. OPNNMM with Merge Level 32 and CB Size = 256: Cameraman (28.76), Bridge (25.15)]

[Figure 5. Optimized OPNNMM with Merge Level 32 and CB Size = 256: Cameraman (32.20), Bridge (28.93)]

The original images Lena, Boats, Bridge and Camera taken for the study are shown in Figure 1(a) - (d). For visual comparison, the reconstructed images obtained with PNN, Ordered PNN, OPNNMM and Optimized OPNNMM are given in Figure 2 - Figure 6 for a codebook of size 256. The values within brackets give the PSNR values.

IV. CONCLUSION

We have proposed two methods, OPNN and OPNNMM, along with codebook optimization. OPNN takes about 1/23rd of the time taken by the traditional PNN method, with a significant improvement in PSNR value. On average, the OPNNMM method takes only 1/48th of the time taken by the OPNN method, with only a slight reduction in PSNR. Merging multiple pairs of vectors in OPNNMM, instead of a single pair, results in a faster rate than that of OPNN. In certain cases, the PSNR values are also improved by the OPNNMM method.



The codebooks generated by both the OPNN and OPNNMM methods are optimized using the codebook optimization technique. There is a significant improvement in the PSNR value after optimizing the codebook. For any codebook size, the PSNR values obtained for any image after codebook optimization are practically the same for all merge levels. It is therefore possible to generate the codebook at higher merge levels and optimize it to get better results, in terms of PSNR values, than OOPNN. OOPNNMM is a far superior method to OOPNN. Hence the OOPNNMM method may find a place in applications where time is a prime factor.

REFERENCES

[1] Jeng-Shyang Pan, Zhe-Ming Lu, and Sheng-He Sun, "An Efficient Encoding Algorithm for Vector Quantization Based on Subvector Technique," IEEE Transactions on Image Processing, vol. 12, no. 3, March 2003.
[2] R. M. Gray, "Vector quantization," IEEE ASSP Magazine, pp. 4-29, April 1984.
[3] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun., vol. COM-28, no. 1, pp. 84-95, 1980.
[4] Juan J. Merelo Guervos et al., "Protein Classification and Secondary Structure Computation," Proceedings of the International Workshop on Artificial Neural Networks, pp. 415-421, Springer-Verlag, 1991.
[5] C. Garcia and G. Tziritas, "Face Detection using Quantized Skin Color Regions Merging and Wavelet Packet Analysis," IEEE Trans. Multimedia, vol. 1, no. 3, pp. 264-277, September 1999.
[6] Jim Z. C. Lai, Yi-Ching Liaw, and Julie Liu, "A fast VQ codebook generation algorithm using codeword displacement," Pattern Recognition, vol. 41, no. 1, pp. 315-319, 2008.
[7] Y. C. Liaw, J. Z. C. Lai, and W. Lo, "Image restoration of compressed image using classified vector quantization," Pattern Recognition, vol. 35, no. 2, pp. 181-192, 2002.
[8] N. M. Nasrabadi and Y. Feng, "Image compression using address vector quantization," IEEE Trans. Commun., vol. 38, no. 12, pp. 2166-2173, 1990.
[9] J. Foster, R. M. Gray, and M. O. Dunham, "Finite state vector quantization for waveform coding," IEEE Trans. Inf. Theory, vol. 31, no. 3, pp. 348-359, 1985.
[10] T. Kim, "Side match and overlap match vector quantizers for images," IEEE Trans. Image Process., vol. 1, no. 2, pp. 170-185, 1992.
[11] J. Z. C. Lai, Y. C. Liaw, and W. Lo, "Artifact reduction of JPEG coded images using mean-removed classified vector quantization," Signal Processing, vol. 82, no. 10, pp. 1375-1388, 2002.
[12] K. N. Ngan and H. C. Koh, "Predictive classified vector quantization," IEEE Trans. Image Process., vol. 1, no. 3, pp. 269-280, 1992.
[13] C. H. Hsieh and J. C. Tsai, "Lossless compression of VQ index with search order coding," IEEE Trans. Image Process., vol. 5, no. 11, pp. 1579-1582, 1996.
[14] J. C. Lai and J. Y. Yen, "Inverse error-diffusion using classified vector quantization," IEEE Trans. Image Process., vol. 7, no. 12, pp. 1753-1758, 1998.
[15] P. C. Chang, C. S. Yu, and T. H. Lee, "Hybrid LMS-MMSE inverse halftoning technique," IEEE Trans. Image Process., vol. 10, no. 1, pp. 95-103, 2001.
[16] Y. Linde, A. Buzo, and R. M. Gray, "An Algorithm for Vector Quantizer Design," IEEE Trans. Communication, vol. 28, pp. 84-95, January 1980.
[17] I. Katsavounidis, C.-C. Jay Kuo, and Zhen Zhang, "A New Initialization Technique for Generalized Lloyd Iteration," IEEE Signal Processing Letters, vol. 1, no. 10, pp. 144-146, October 1994.
[18] X. Wu and K. Zhang, "A Better Tree-Structured Vector Quantizer," Proc. IEEE Data Compression Conference, Snowbird, UT, 1991, pp. 392-401.
[19] P. Franti, T. Kaukoranta, and O. Nevalainen, "On the Splitting Method for VQ Codebook Generation," Optical Engineering, vol. 36, pp. 3043-3051, November 1997.
