Compressive Data Hiding: An Unconventional Approach for Improved Color Image Coding
Patrizio Campisi
Dipartimento di Ingegneria Elettronica, Università degli Studi di Roma “Roma Tre,” Via della Vasca Navale 84, 00146 Roma, Italy. Email: [email protected]
Deepa Kundur
Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G4. Email: [email protected]
Dimitrios Hatzinakos
Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G4. Email: [email protected]
Alessandro Neri
Dipartimento di Ingegneria Elettronica, Università degli Studi di Roma “Roma Tre,” Via della Vasca Navale 84, 00146 Roma, Italy. Email: [email protected]
Received 30 April 2001 and in revised form 14 September 2001
Traditionally, data hiding and compression have had contradictory goals. The former problem adds perceptually irrelevant information in order to embed data, while the latter removes this irrelevancy and redundancy to reduce storage requirements. In this paper, we use data hiding to help improve signal compression. We take an unconventional approach and consider “piggy-backing” the color information on the luminance component of an image for improved color image coding. Our new technique essentially transforms a given color image into the YIQ color space, where the chrominance information is subsampled and embedded in the wavelet domain of the luminance component. Our technique can be used as preprocessing to improve the performance of popular image compression schemes, such as SPIHT, that are optimized for grayscale image compression. Simulation results demonstrate the superior performance of the proposed technique in comparison to JPEG and straightforward SPIHT.
Keywords and phrases: color images, data hiding, compression, multiresolution analysis.
1. INTRODUCTION
Data hiding within multimedia has received growing interest in recent years due to its potential for signal captioning, maintaining audit trails in media commerce, and copy protection through the development of digital watermarking technology. By embedding key information with the media itself, it is safe from content separation. Data hiding is the general process by which a discrete information stream is merged within media content by imposing imperceptible changes on the original host signal.

One of the main obstacles within the data hiding community has been developing a scheme which is robust to perceptual coding. Perceptual coding refers to the lossy compression of multimedia signals using human perceptual models; the compression mechanism is based on the premise that minor modifications of the signal representation will not be noticeable in the displayed signal content. These modifications are imposed on the signal in such a way as to reduce the number of information bits required for storage of the content. Human perceptual models are often theoretically and experimentally derived to determine the changes on a signal which remain imperceptible. A duality exists between the problems of perceptual coding and data hiding; the former problem attempts to remove irrelevant and redundant information from a signal, while the latter uses the irrelevant information to mask the presence of the hidden data. Thus, the goals of data hiding and perceptual coding can be viewed as being somewhat contradictory. As a result, several papers have dealt with integrating perceptual coding with
data hiding [1, 2, 3, 4, 5, 6, 7], and others have investigated the theoretical relationship between the two processes [8, 9]. In [10], data hiding for media compression is investigated. The method operates in the frequency domain and is based on linear projection, quantization, and perturbation.

The central theme of all the works cited above is that there must be an appropriate compromise between data hiding and compression to develop a method which performs both reasonably. It is assumed that each process hinders, not helps, the objective of the other. Specifically, data hiding decreases the overall possible compression ratio, and perceptual coding tampers with the hidden information, so that extraction is difficult.
In this paper, we take a different, perhaps even eccentric, perspective. We try to identify how data hiding can be used for improved practical compression.
1.1. Objectives of this paper
In this work we present an approach to improve the efficiency of compression by incorporating data hiding principles. On a larger scale, the presented work aims to, in part, investigate the contradictory processes of data hiding and compression in order to derive insights into effective means to merge them.
Specifically, we wish
• to design a compression scheme in which color information is “piggybacked” on the grayscale component to provide the option of viewing the information in color or as a monochrome signal;

• to compare our proposed data hiding-based compression approach practically with JPEG, a popular color image compression format, and SPIHT, an effective wavelet-based compression algorithm.
2. HYPOTHESIS AND INTUITION
Consider the signal f0(x), which represents an audio signal, image, or video sequence. There exists a family of functions in the set P(f0) which are perceptually identical to f0(x). Thus, if fk(x) ∈ P(f0), then we know that fk(x) is perceptually identical to f0(x).
In ideal compression, every possible perceptually identical signal is mapped to the same representation. Thus, all signals in the set P(f0) will be collapsed into one compressed signal. In data hiding, information is embedded into a host signal f0(x) by modifying it so that the resulting signal is perceptually identical to the original; therefore this new signal is also in the set P(f0). It then follows that ideal compression applied to a signal containing hidden information has the effect of annihilating the discrete data.
However, in practice, compression is not completely efficient; that is, there exists some irrelevant information which has not been removed. The nonideality comes from the constraint that the coder have structure, and from inadequate perceptual models that fail to account for all masking characteristics [11]. In terms of our description above, for practical compression not all signals in P(f0) are mapped to the same representation. Thus, there is a small bandwidth available for data hiding. If this bandwidth could be used to transmit information about the signal, such as chrominance components, then even greater practical compression may be achieved. This approach cannot help to improve the compression ratio in the case of ideal perceptual coding, but can improve the situation in the case of practical compression.
The approach can potentially provide improved performance when we are restricted to use a coding scheme which is not very efficient. For example, one of the most popular wavelet image compression schemes, SPIHT [12], is optimized for grayscale compression and may not perform acceptably for color images. If the technique outlined above could be applied to piggy-back the chrominance components in a compressed version of the luminance image, then some performance advantage may be established. Thus, we propose our scheme as a possible improvement to existing approaches which may not perform optimally. This also has the advantage that the color chrominance is embedded in the grayscale, so that it may be viewed later even if color is not initially important.
In Section 3, we present our approach for compressive data hiding. Experimental results are provided in Section 4, in which we compare the proposed technique to SPIHT and JPEG to demonstrate its superior performance. Discussion and final remarks conclude the paper.
3. COMPRESSIVE DATA HIDING
3.1. The problem
As discussed in the introduction, we focus on the problem of improving compression through the use of judicious data hiding. Consider the situation in which we are restricted to use a given compression algorithm. For example, we may be using a specific web browser or media player that supports only a limited number of image compression algorithms, or an online viewer for dynamic information swapping. It would be valuable to be able to improve the quality of the compressed information without interfering with the compression algorithm. One approach would be to preprocess the signal in order to improve upon potential compression artefacts.

In this paper, we consider using robust data hiding at this preprocessing stage in order to pass or tag signal information reliably through the compression stage. The information can then be used, after compression, to improve signal fidelity or provide some other form of added value. As discussed in the introduction, many watermarking algorithms have been proposed which are robust to specific forms of compression, so we believe that this preprocessing stage can be designed effectively.

For this work, we specifically consider the case of color image compression; we embed chrominance information into the luminance component of the image to improve the signal fidelity upon post-compression reconstruction for a fixed compression ratio. We make use of the popular wavelet-based compression algorithm SPIHT, so that data hiding can also be effectively accomplished in this multiresolution-like domain.
154 EURASIP Journal on Applied Signal Processing
[Figure 1 block diagram: the color image X[n1, n2] is split into its Y, I, and Q components; I[n1, n2] and Q[n1, n2] each undergo a two-level DWT, yielding I2LL[n1, n2] and Q2LL[n1, n2], which are embedded into the multiresolution-like wavelet decomposition of Y[n1, n2] to give Yemb[n1, n2]; adaptive compression then produces Ycomp[n1, n2].]

Figure 1: Compressive data hiding scheme for color images.
3.2. Overview of the approach
A specific overview of the proposed compressive data hiding scheme for color images is discussed in this section. Figure 1 provides a block diagram representation of the approach. Our original color image to be compressed is denoted X[n1, n2].

Since the chrominance components of the signal are piggy-backed on the luminance component of the image using data hiding, it is important to judiciously select both the color space and the transform domain for this processing. Selection of these two components can affect the data embedding capacity and, hence, the performance of the proposed approach.

As we see in Figure 1, the YIQ color space and the discrete wavelet transform (DWT) domain [13] are incorporated into our approach. Essentially, the color image X[n1, n2] is converted to the YIQ color space, where the chrominance (or color) components are processed in the DWT domain and are embedded in the wavelet domain of the luminance (or grayscale) component of the original color image. After embedding the information, an adaptive scheme is used to aid in the compression process.

The next two sections discuss why the choices of color space and transform domain provide a good tradeoff between imperceptibility and robustness of the embedded information. Details of the embedding and adaptive compression technique are also provided in the subsequent sections.
3.3. The YIQ color space
The choice of the color space in which to perform the decomposition of the given color image is relevant to the problem we have addressed. In fact, since our goal is compression, color spaces, such as the RGB space, in which there is a significant correlation between the three color components should be avoided. On the contrary, the YIQ, YUV, and YCbCr color spaces provide nearly as much energy compaction as a theoretically optimal decomposition performed using the Karhunen-Loeve transform. Since they are equivalent for our application, we have chosen to split the given color image into its three components in the YIQ color space, where the Y coordinate represents the luminance Y[n1, n2], and the I and Q coordinates represent the chrominance components I[n1, n2] and Q[n1, n2], respectively. The I and Q components jointly represent saturation and hue.
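The YIQ split is a fixed linear transform of RGB. As a small illustration (not taken from the paper; the matrix is the standard NTSC one, and the function names are ours):

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix (RGB assumed in [0, 1]).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance
    [0.596, -0.274, -0.322],   # I: chrominance (orange-cyan axis)
    [0.211, -0.523,  0.312],   # Q: chrominance (purple-green axis)
])

def rgb_to_yiq(rgb):
    """Split an (H, W, 3) RGB image into its Y, I, Q components."""
    yiq = rgb @ RGB_TO_YIQ.T
    return yiq[..., 0], yiq[..., 1], yiq[..., 2]

def yiq_to_rgb(y, i, q):
    """Inverse transform, used at the receiver to rebuild the color image."""
    yiq = np.stack([y, i, q], axis=-1)
    return yiq @ np.linalg.inv(RGB_TO_YIQ).T
```

A pure gray pixel (R = G = B) maps to I = Q = 0, confirming that the I and Q coordinates carry only the color information.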
It is worth noting that the luminance component Y contains a large part of the visual content, whereas the two chrominance components, I and Q, carry less perceptual information. Thus, due to the human visual system’s lower sensitivity to color information, it is possible to subsample the chrominance information and then integrate it back into the overall color image without any loss of perceptual quality [14].

This color space division is useful as the overall signal can be separated into a high volume luminance component, which can act as the host image for data hiding, and two lower volume color information-bearing signals, which can act as the payload. In addition, the luminance component is essentially the grayscale component of the signal, which allows us to incorporate well-established grayscale image data hiding principles in our embedding procedure.
3.4. The compressive data hiding procedure
3.4.1 The transform domain
The chrominance components I[n1, n2] and Q[n1, n2] are subsampled and hidden in the luminance component (see Figure 1). For robust grayscale image data hiding, the embedding is performed in a specific transform domain which is selected to be suitable for a specific application or attack on the hidden information. It has been shown [9, 15] that different domains have significantly different data hiding channel capacities. Selection of a suitable transform can significantly improve the robustness and imperceptibility of the hidden information.

In this work, we incorporate the DWT domain. The spatial and frequency localization of this transform suits the behavior of the human visual system (HVS) to visual stimuli [16]. Embedding information in this domain allows the hidden information to be appropriately shaped for reduced distortion. In particular, we have the flexibility to judiciously select DWT subbands of the luminance component in which to hide the chrominance information imperceptibly. In addition, recent work by Fei et al. [9] demonstrates the advantage of this domain for wavelet-based compression. Furthermore, for the adaptive compression approach shown in Figure 1 and discussed in Section 3.4.3, decomposition of the image to compress into multiresolution-like components is convenient and avoids the unnecessary complexity of transforming the signal into this form for more effective compression.

We next provide the specifics of our proposed data embedding technique. We make use of an unconventional two-level multiresolution-like wavelet decomposition of the luminance component. With reference to Figure 2, the first level of the multiresolution decomposition is obtained by performing a DWT on Y[n1, n2]. In particular, an 8-tap Daubechies filter is used:

Y[n1, n2] --DWT--> {YLL[n1, n2], YHH[n1, n2], YHL[n1, n2], YLH[n1, n2]},   (1)

thus obtaining the subbands YLL[n1, n2], YLH[n1, n2], YHL[n1, n2], and YHH[n1, n2], which represent the image at a coarser resolution plus the “horizontal,” “vertical,” and “diagonal” details of the image at this resolution, respectively.
The subbands YLH[n1, n2] and YHL[n1, n2] are chosen to host the chrominance information. The rationale behind this choice relies on the observation that, in order to obtain a good tradeoff between robustness and transparency, many watermarking techniques (cf. [17, 18] and the references therein) use “middle frequency” coefficients. This makes the subbands YLH[n1, n2] and YHL[n1, n2] suitable to host the data, whereas the subband YHH[n1, n2] is not.
The next step of the method consists in further decomposing the subbands YLH[n1, n2] and YHL[n1, n2] in the wavelet domain, thus leading to the subbands:¹

YHL[n1, n2] --DWT--> {Yll,HL[n1, n2], Yhh,HL[n1, n2], Yhl,HL[n1, n2], Ylh,HL[n1, n2]},
YLH[n1, n2] --DWT--> {Yll,LH[n1, n2], Yhh,LH[n1, n2], Yhl,LH[n1, n2], Ylh,LH[n1, n2]}.   (2)

In particular, Yll,HL[n1, n2] and Yll,LH[n1, n2] represent the low-pass subbands, at coarser resolution, obtained from the high frequency subbands YHL[n1, n2] and YLH[n1, n2], respectively. It is expected that their energy contribution is relatively small compared to the energy of the remaining subbands of the set Yα,β[n1, n2] (α ∈ {ll, hl, lh, hh}, β ∈ {HL, LH}).
Thus, they can be neglected, introducing only a very low mean square error (MSE) that accounts for the variations occurring in the image details, which however do not affect the image in a perceptual sense.
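The two-level multiresolution-like decomposition described above can be sketched as follows. For brevity, this sketch uses an orthonormal Haar filter in place of the paper's 8-tap Daubechies filter, and all function names are ours:

```python
import numpy as np

def dwt2(x):
    """One-level 2-D orthonormal Haar DWT (a stand-in for the paper's
    8-tap Daubechies filter, used here only to keep the sketch short)."""
    s = np.sqrt(2.0)
    lo = (x[0::2, :] + x[1::2, :]) / s      # low-pass along rows
    hi = (x[0::2, :] - x[1::2, :]) / s      # high-pass along rows
    LL = (lo[:, 0::2] + lo[:, 1::2]) / s
    LH = (lo[:, 0::2] - lo[:, 1::2]) / s
    HL = (hi[:, 0::2] + hi[:, 1::2]) / s
    HH = (hi[:, 0::2] - hi[:, 1::2]) / s
    return LL, LH, HL, HH

def multires_like(Y):
    """The unconventional decomposition: the detail subbands YHL and YLH
    (not YLL) are decomposed a second time, exposing Yll,HL and Yll,LH."""
    YLL, YLH, YHL, YHH = dwt2(Y)
    second_HL = dict(zip(("ll", "lh", "hl", "hh"), dwt2(YHL)))
    second_LH = dict(zip(("ll", "lh", "hl", "hh"), dwt2(YLH)))
    return YLL, YHH, second_HL, second_LH
```

Because the filter is orthonormal, the total energy of the subbands equals that of the input, which is what makes the energy comparison between Yll,HL, Yll,LH, and the remaining subbands meaningful.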
This conjecture has been experimentally verified on a wide range of image types using both subjective evaluation criteria, such as visual perceptibility measures, and objective ones like the peak signal-to-noise ratio (PSNR), given by

PSNR = 10 log10(255² / MSE),   (3)

where

MSE = (1/N²) Σ_{n1=0}^{N−1} Σ_{n2=0}^{N−1} (Y[n1, n2] − Ymod[n1, n2])²   (4)

is the mean square error between the original image Y[n1, n2], of N × N pixels, and a modified replica Ymod[n1, n2]. To assess the general perceptual irrelevance of the Yll,HL[n1, n2] and Yll,LH[n1, n2] bands, we consider a series of test images with widely varying characteristics. For each image, a modified replica is produced by zeroing only the subbands Yll,HL[n1, n2] and Yll,LH[n1, n2] and keeping the remaining subbands perfectly intact. The PSNRs of the resulting images (denoted PSNRzeroed) are presented in the first column of Table 1. The values are reasonably high; in addition, there is no perceptual change in the quality of the modified signal Ymod[n1, n2] in each test case.

¹This is the unconventional part of the scheme. Usually, YLL[n1, n2] is further decomposed instead of its details. However, this decomposition is necessary for the imperceptibility of our embedding stage, as we will discuss later in this section.
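The PSNR used in this verification is the standard one; a direct numpy transcription (peak value 255 for 8-bit images):

```python
import numpy as np

def psnr(orig, mod, peak=255.0):
    """Peak signal-to-noise ratio (dB) between an original N x N image
    and a modified replica, based on the mean square error."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(mod, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```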
3.4.2 Subsampling of the chrominance information for embedding
After having thus verified that the subbands Yll,HL[n1, n2] and Yll,LH[n1, n2] are perceptually negligible, we use a very straightforward and computationally simple approach to embed the chrominance components: direct replacement of the Yll,HL[n1, n2] and Yll,LH[n1, n2] information with the chrominance content. However, the chrominance components need to be preprocessed in order to obtain a perceptually lossless, parsimonious representation. This can be performed through subsampling of the color information.

Specifically, the components undergo a two-level wavelet decomposition using, as for the luminance analysis, an 8-tap Daubechies filter. As already outlined, their perceptual contribution is much smaller than the luminance’s, and thus their size may be reduced without significant distortion [19]. The subbands I2LL[n1, n2] and Q2LL[n1, n2], which are the low-pass chrominance replicas at the coarsest resolution of the performed pyramidal decomposition, can be used to reconstruct the color image without any perceptual quality loss. Experiments have verified this conjecture.
Therefore, we can replace the subbands Yll,HL[n1, n2] and Yll,LH[n1, n2] with information from I2LL[n1, n2] and Q2LL[n1, n2] to obtain Yemb[n1, n2], as shown in Figure 2. However, before the embedding, the energies of I2LL[n1, n2] and Q2LL[n1, n2] have to be normalized to the values of the corresponding host subbands, so as not to impair the perceptual appearance of the reconstructed image. It should be noted that the normalization values, say NI and NQ, have to be transmitted to the decoder, since they are necessary to properly reconstruct the color information. To this end, they can be embedded in the header of the image.

Figure 2: Multiresolution-like wavelet decomposition and data embedding scheme.
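The replacement embedding with energy normalization can be sketched as follows. This is a minimal illustration: the function names and the exact form of the normalization (matching the host band's energy) are our assumptions, consistent with the description above:

```python
import numpy as np

def embed_chroma(host_band, chroma_band):
    """Replace a perceptually negligible host subband (Yll,HL or Yll,LH)
    with a subsampled chrominance band (I2LL or Q2LL), scaled so that the
    embedded band has the same energy as the band it replaces."""
    norm = np.sqrt(np.sum(host_band ** 2) / np.sum(chroma_band ** 2))
    return norm * chroma_band, norm   # norm (NI or NQ) goes in the image header

def extract_chroma(embedded_band, norm):
    """Receiver side: undo the normalization to recover the chrominance band."""
    return embedded_band / norm
```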
We have experimentally verified that the aforementioned embedding procedure causes no perceptual degradation to the luminance component of the signal. Some of these results are reported in Figure 3, where Yemb[n1, n2], the luminance image component with the embedded chrominance information, is shown for different test cases.

It is worth noting that the chrominance information (2 ∗ (64 ∗ 64) bytes), represented by I2LL[n1, n2] and Q2LL[n1, n2], is embedded into the luminance ((256 ∗ 256) bytes), having assumed a true color 24-bit representation for the color images used in our experiments (i.e., an 8-bit representation for each component). Nevertheless, the luminance after the embedding appears perceptually indistinguishable from the host image. A quantitative evaluation is also performed by calculating the PSNR, denoted PSNRembed, whose values, for different test images, are presented in the second column of Table 1.
The reader should note that the embedding procedure proposed for our implementation is based on replacement of perceptually irrelevant bands, which is distinct from popular methods such as Spread Spectrum (SS) watermarking proposed by Cox et al. [17] and Quantization Index Modulation (QIM) proposed by Chen and Wornell [20]. SS watermarking suffers from host signal interference, which limits the data hiding capacity necessary for our application. Replacement embedding as performed in this paper is a form of information hiding termed “Low Bit Modulation (LBM)” in [20]. Chen and Wornell show, using coding-theoretic measures such as minimum distance and information capacity, that LBM and SS are inferior to QIM in certain contexts.

However, the QIM method is not appropriate for the data hiding application presented in this paper. One problem is that robust QIM implementations require error correction coding, which increases bandwidth and makes it impractical to embed the necessary volume of color information in the luminance component. A second problem is that the QIM method is designed to reliably embed data bits, not perceptually viewable information such as logos or chrominance image bands. With perceptual information, some errors can be tolerated, unlike with data; hence our data hiding problem is less restricted than that formulated in [20], and their solution is not as appropriate. We clarify this point in the next paragraph.
The natural redundancy of many forms of perceptual information, such as images, often makes it robust to errors. Specifically, when viewed, the received signal is of better perceptual quality if it is transmitted in raw form through certain nonideal channels than it would be if the information were first source and channel coded [21]. Intuitively, the degradations on the embedded information that characterize our effective attack channel come from quantization for compression. This process is designed to be applied to raw data and to leave perceptually salient information intact. Thus, sophisticated high bandwidth error correction codes are not necessarily required. Our embedding scheme of simple replacement is analogous to transmitting the chrominance bands in the raw.
Overall, the low bandwidth requirements, natural robustness to quantization distortions, and low complexity make our embedding technique more appropriate than other popular schemes such as SS and QIM.

Figure 3: Left column: original grayscale images. Right column: images with the chrominance components I and Q embedded.
3.4.3 Adaptive compression
A compression algorithm such as SPIHT, optimized for grayscale image compression, may be applied to the resulting signal in order to produce a coded image. However, we find that further preprocessing can be applied for improved compression by taking into account the diverse nature of the luminance signal component in the different subbands. Our approach follows.
The first step consists of obtaining the subbands Y(e)HL[n1, n2] and Y(e)LH[n1, n2], after having performed the embedding, by calculating the inverse discrete wavelet transform (IDWT) as follows:

{I(n)2LL[n1, n2], Yhh,HL[n1, n2], Yhl,HL[n1, n2], Ylh,HL[n1, n2]} --IDWT--> Y(e)HL[n1, n2],
{Q(n)2LL[n1, n2], Yhh,LH[n1, n2], Yhl,LH[n1, n2], Ylh,LH[n1, n2]} --IDWT--> Y(e)LH[n1, n2].   (5)

[Figure 4 block diagram: the subbands YLL, YHH, Y(e)HL, and Y(e)LH each feed a wavelet coder at bit rates bLL, bHH, bHL, and bLH, derived from btot by a bpp evaluation block; the coded subbands, together with the normalization values NI and NQ, feed the bit stream generator that outputs Ycomp.]

Figure 4: Adaptive compression scheme.
Table 2: Compressive data hiding procedure.

Compressive Data Hiding

Embedding steps
(1) The color image X[n1, n2] is split into its three color components in the YIQ color space.
(2) The luminance Y undergoes a multiresolution-like wavelet decomposition:
    Y --DWT--> (YLL, YHH, YHL, YLH),
    YHL --DWT--> (Yll,HL, Yhh,HL, Yhl,HL, Ylh,HL),
    YLH --DWT--> (Yll,LH, Yhh,LH, Yhl,LH, Ylh,LH).
(3) The chrominance components I and Q undergo a two-level wavelet decomposition and only the “low-pass” subbands at the coarsest resolution, I2LL and Q2LL, are kept.
(4) I2LL and Q2LL are normalized to the energy of Yll,HL and Yll,LH, respectively, thus obtaining I(e)2LL and Q(e)2LL.
(5) The subbands Y(e)HL and Y(e)LH are obtained as in (5).

Compression steps
(6) The global bit rate btot and the bit rate bLL for the subband YLL are chosen by the user.
(7) The bit rates bHH, bHL, bLH corresponding to the remaining subbands are evaluated according to (8) and (9).
(8) Finally, each subband is compressed using the SPIHT coder and the bit stream Ycomp is generated.

Therefore, the luminance and the embedded chrominance components are represented, in a perceptually lossless manner, by the subbands

{YLL[n1, n2], YHH[n1, n2], Y(e)HL[n1, n2], Y(e)LH[n1, n2]}.   (6)

In order to preserve the embedded information, each subband of (6) is coded separately, according to the scheme shown in Figure 4. Specifically, a wavelet-based coder is used instead of a DCT-based coder; it has been proven that wavelet-based coders provide better rate-distortion performance than the DCT-based JPEG and also allow a progressive coding approach [22]. The method of set partitioning in hierarchical trees (SPIHT) [12] has been employed in the proposed scheme. It is well known that the magnitude of the wavelet coefficients varies from band to band; in particular, lower frequency subbands usually have coefficients of higher magnitude than the higher frequency subbands. This suggests that different bit rates must be used according to the specific subband. Consider

{bLL, bHH, bHL, bLH},   (7)

the bits per pixel (bpp) for each of the subbands in (6), respectively. Moreover, let btot be the desired bpp for the compressed color image, which is related to the bpps in (7) as follows:
btot = bLL + bHH + bHL + bLH. (8)
The proposed criterion for adaptive compression consists in specifying the global bit rate btot and the bit rate bLL for the subband YLL[n1, n2], since this latter value plays the most significant role in the decoded image appearance. The remaining bpps bHH, bHL, bLH are automatically assigned by the coder in such a way that a higher bit rate is assured to the subbands having higher energy. Therefore,

bLH = (ELH / EHH) · bHH,    bHL = (EHL / EHH) · bHH,   (9)

where Eγ (γ ∈ {LH, HL, HH}) is the energy of the corresponding subband. After having chosen btot and bLL according to the user’s needs, the bit rates for each subband are obtained from (8) and (9).
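Given btot and bLL, equations (8) and (9) determine the three detail-band rates in closed form; a minimal sketch (function and variable names are ours):

```python
def allocate_rates(b_tot, b_ll, e_hh, e_hl, e_lh):
    """Given the global rate b_tot, the user-chosen b_ll, and the subband
    energies, assign b_hh, b_hl, b_lh so that the rates sum to b_tot as
    in (8) and scale with subband energy as in (9)."""
    b_hh = (b_tot - b_ll) * e_hh / (e_hh + e_hl + e_lh)
    b_hl = (e_hl / e_hh) * b_hh
    b_lh = (e_lh / e_hh) * b_hh
    return b_hh, b_hl, b_lh
```

Substituting (9) into (8) gives bHH = (btot − bLL) · EHH / (EHH + EHL + ELH), which is the closed form used above.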
As a rule of thumb, we have chosen for our experiments a bit rate bLL that is half the global bit rate btot. As pointed out in Section 4, this choice, with no claim of optimality, demonstrates the potential of our approach.

Finally, each subband is compressed, at the rates previously evaluated, using the SPIHT coder. The resulting compressed subbands, along with the energy normalization values, form the bit stream Ycomp. The compressive data hiding procedure is summarized in Table 2.
3.5. Color information retrieval
At the receiver, the color image is reconstructed from the compressed bit stream Ycomp by performing the dual operations of those of the coding stage. The first step consists of extracting the individual subbands from the bit stream, which are then decoded, thus obtaining an estimate (denoted by ˆ)

{ŶLL[n1, n2], ŶHH[n1, n2], Ŷ(e)HL[n1, n2], Ŷ(e)LH[n1, n2]}   (10)

of the corresponding quantities in (6). The estimated chrominance information Î2LL[n1, n2] and Q̂2LL[n1, n2] is extracted from Ŷ(e)HL[n1, n2] and Ŷ(e)LH[n1, n2] by performing the DWT as follows:

Ŷ(e)HL[n1, n2] --DWT--> {Ŷll,HL[n1, n2] = Î2LL[n1, n2], Ŷhh,HL[n1, n2], Ŷhl,HL[n1, n2], Ŷlh,HL[n1, n2]},
Ŷ(e)LH[n1, n2] --DWT--> {Ŷll,LH[n1, n2] = Q̂2LL[n1, n2], Ŷhh,LH[n1, n2], Ŷhl,LH[n1, n2], Ŷlh,LH[n1, n2]}.   (11)
After having zeroed Ŷll,HL[n1, n2] and Ŷll,LH[n1, n2], the subbands ŶHL[n1, n2] and ŶLH[n1, n2] are reconstructed by performing a one-level IDWT:

{Ŷll,HL[n1, n2] = 0, Ŷhh,HL[n1, n2], Ŷhl,HL[n1, n2], Ŷlh,HL[n1, n2]} --IDWT--> ŶHL[n1, n2],
{Ŷll,LH[n1, n2] = 0, Ŷhh,LH[n1, n2], Ŷhl,LH[n1, n2], Ŷlh,LH[n1, n2]} --IDWT--> ŶLH[n1, n2].   (12)
An estimate Ŷ[n1, n2] of the luminance is then obtained according to the following formula:

{ŶLL[n1, n2], ŶHH[n1, n2], ŶHL[n1, n2], ŶLH[n1, n2]} --IDWT--> Ŷ[n1, n2].   (13)
Finally, the chrominance components are upsampled to the image dimensions and combined with the estimated luminance, thus obtaining the color image. Experimental results are presented in the next section.
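The zero-and-invert retrieval step can be sketched with a one-level inverse transform. As before, an orthonormal Haar filter stands in for the paper's 8-tap Daubechies filter, and all names are ours:

```python
import numpy as np

def idwt2(LL, LH, HL, HH):
    """One-level inverse of an orthonormal Haar 2-D DWT (a stand-in for
    the paper's 8-tap Daubechies filter)."""
    s = np.sqrt(2.0)
    lo = np.empty((LL.shape[0], 2 * LL.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (LL + LH) / s, (LL - LH) / s
    hi[:, 0::2], hi[:, 1::2] = (HL + HH) / s, (HL - HH) / s
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2, :], x[1::2, :] = (lo + hi) / s, (lo - hi) / s
    return x

def rebuild_detail_band(ll_hat, lh, hl, hh):
    """Zero the extracted chrominance slot (the ll subband) and invert,
    as in the zeroing step before the final luminance reconstruction."""
    return idwt2(np.zeros_like(ll_hat), lh, hl, hh)
```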
Figure 5: For each row, from left to right: original color image (24 bits/pixel), compressed image using the proposed approach (0.15 bits/pixel), compressed image using SPIHT (0.15 bits/pixel), and compressed image using the JPEG method (0.25 bits/pixel, the maximum compression rate allowed by the JPEG coder). First row: “Baboon,” second row: “Biked,” third row: “Lena,” fourth row: “GoldHill.”
Table 3: Bit rates employed for the different subbands with a global compression rate of 0.30 bpp.

4. EXPERIMENTAL RESULTS

In this section, the effectiveness of the proposed method is discussed.
In Figures 5, 6, and 7, the compressed color images obtained using the proposed approach are shown. The employed bpp values for the different subbands of the images under examination are reported in Table 3 for the case of compression at 0.30 bpp. For the sake of comparison, in Figures 5, 6, and 7, along with the original color images and their compressed replicas obtained using our approach, their JPEG and SPIHT compressed versions at different bit rates (0.15 bpp, 0.30 bpp, and 0.45 bpp) are also provided. It is worth pointing out that in Figure 5 the JPEG images displayed have been compressed at 0.25 bpp, instead of 0.15 bpp like the others in the same figure, since the JPEG coder does not allow further compression.
Figure 6: For each row, from left to right: original color image (24 bits/pixel), compressed image using the proposed approach (0.30 bits/pixel), compressed image using SPIHT (0.30 bits/pixel), and compressed image using the JPEG method (0.30 bits/pixel). First row: “Baboon,” second row: “Biked,” third row: “Lena,” fourth row: “GoldHill.”

The assessment of the performance of our method requires the quantification of the perceptual error between two color images. To this end, it is crucial to adopt color spaces which are related to the perceptual characteristics of the human visual system, thus allowing the definition of simple metrics capable of properly measuring the perceptual distance between two colors. As is well known in the literature [23], the RGB space, although widely used for different applications, is not suitable for accurate perceptual computations. More appropriate color spaces are L∗u∗v∗ and L∗a∗b∗, standardized by the Commission Internationale de l’Éclairage (CIE) in 1976 as perceptually uniform. They are both equally good in providing a quantitative estimation of the perceptual distance between two colors. For our performance evaluations, we resort to the L∗a∗b∗ color space, where the L∗ coordinate corresponds to the luminance, a∗ corresponds to the red-green channel, and b∗ to the blue-yellow channel. The conversion from the RGB space to the L∗a∗b∗ space is given in the appendix. The main property of the L∗a∗b∗ space is that equal Euclidean distances between color points correspond to equal perceptual distances; in particular, the perceptual distance between two colors is well approximated by
\[
\Delta E = \sqrt{\left(\Delta L^{*}\right)^{2} + \left(\Delta a^{*}\right)^{2} + \left(\Delta b^{*}\right)^{2}},
\tag{14}
\]

where ∆E is the color error and ∆L∗, ∆a∗, and ∆b∗ are the differences between the L∗, a∗, and b∗ components, respectively, of the two colors under consideration.
Thus, the perceptual uniformity of L∗a∗b∗ is here exploited to evaluate the perceptual similarity between two color images of dimension N1 × N2 by computing the normalized color distance (NCD) [14] according to the following formula:
\[
\mathrm{NCD} = \frac{\displaystyle\sum_{n_{1}=0}^{N_{1}-1} \sum_{n_{2}=0}^{N_{2}-1} \Delta E\left[n_{1}, n_{2}\right]}{\displaystyle\sum_{n_{1}=0}^{N_{1}-1} \sum_{n_{2}=0}^{N_{2}-1} E\left[n_{1}, n_{2}\right]},
\tag{15}
\]
Figure 7: For each row from left to right: original color image (24 bits/pixel), compressed image using the proposed approach (0.45 bits/pixel), compressed image using SPIHT (0.45 bits/pixel), and compressed image using the JPEG method (0.45 bits/pixel). First row: "Baboon," second row: "Biked," third row: "Lena," fourth row: "GoldHill."
where ∆E[n1, n2] is given by (14), particularized to the colors of the two pixels in position [n1, n2] of the two images under analysis, and E[n1, n2] is the Euclidean norm of the color vector of the pixel in position [n1, n2] belonging to the uncompressed image.
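The error measures (14) and (15) are straightforward to compute once both images are available in L∗a∗b∗ coordinates. The sketch below is a direct transcription of the two formulas using NumPy; the function name and the (N1, N2, 3) array layout are our own choices, not the authors'.

```python
import numpy as np

def ncd(original_lab, compressed_lab):
    """Normalized color distance (15) between two images of shape
    (N1, N2, 3) already expressed in L*a*b* coordinates.
    Delta-E (14) is the per-pixel Euclidean distance between the two
    color vectors; the normalizer E is the Euclidean norm of the
    corresponding pixel of the uncompressed image."""
    delta_e = np.linalg.norm(original_lab - compressed_lab, axis=-1)
    e = np.linalg.norm(original_lab, axis=-1)
    return float(delta_e.sum() / e.sum())
```

Lower NCD values indicate a compressed image perceptually closer to the original, which is how Table 4 should be read.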
This quantitative performance evaluation, at different bit rates (0.15 bpp, 0.30 bpp, and 0.45 bpp), is performed on the images obtained by applying our compressive data hiding approach, JPEG, and SPIHT with respect to the original image, and the results are shown in Table 4.
The quantitative evaluation of the performance of our method in comparison with JPEG and SPIHT is in agreement with a subjective evaluation performed on the images displayed in Figures 5, 6, and 7, from which it is evident that the proposed method outperforms both JPEG and SPIHT.
The pyramidal wavelet-like decomposition described in Section 3.4.1 also provides a tool to devise a progressive coding strategy. In fact, in some applications, such as supervised human retrieval from still-image databases, it is not always necessary to retrieve a fully detailed color image in one shot, because the user can decide at first glance, from a rough gray-scale reproduction of the image, whether it is of interest. Therefore, the progressive approach could lead to dramatic bit savings in critical applications such as progressive display over a bandwidth-limited wireless channel. Using our approach, a first sketch of the image is obtained by transmitting only the Y_LL[n1, n2] component; further details can be added by means of the Y_HH[n1, n2] subband. Moreover, the transmission of Y^(e)_HL[n1, n2] and Y^(e)_LH[n1, n2] allows not only to add
Table 4: NCD evaluation for the compression rates 0.15 bpp, 0.30 bpp, and 0.45 bpp.

Image      Bit rate (bpp)   Data hiding   SPIHT    JPEG
Baboon     0.15             0.1492        0.1985   0.19
           0.30             0.1368        0.1932   0.1793
           0.45             0.1225        0.1896   0.1545
Biked      0.15             0.0859        0.1213   0.1202
           0.30             0.0713        0.1112   0.0938
           0.45             0.0661        0.1052   0.0780
Lena       0.15             0.0932        0.1650   0.1753
           0.30             0.0807        0.1595   0.1504
           0.45             0.0695        0.1563   0.0881
GoldHill   0.15             0.1076        0.1725   0.1630
           0.30             0.0893        0.1030   0.1385
           0.45             0.0798        0.0963   0.0930
more details to the reconstructed image, but also to recover the color information.
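The progressive-refinement idea described above can be illustrated with a generic one-level 2-D Haar transform. This is only a sketch of the general mechanism (transmit LL first for a coarse preview, then add detail subbands), not the authors' unconventional two-level pyramid, and the function names are our own.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar analysis of an image with even dimensions.
    Returns the four subbands (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2        # vertical average
    d = (img[0::2] - img[1::2]) / 2        # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    HL = (a[:, 0::2] - a[:, 1::2]) / 2
    LH = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def haar2d_inv(LL, LH, HL, HH):
    """Inverse of haar2d; pass zeroed arrays for subbands that have
    not been received yet to obtain a partial reconstruction."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + HL; a[:, 1::2] = LL - HL
    d[:, 0::2] = LH + HH; d[:, 1::2] = LH - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2] = a + d; img[1::2] = a - d
    return img

# Progressive decoding: a coarse sketch from LL alone, then the exact
# image once all detail subbands have arrived.
img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar2d(img)
zero = np.zeros_like(LL)
sketch = haar2d_inv(LL, zero, zero, zero)   # preview after stage 1
full = haar2d_inv(LL, LH, HL, HH)           # exact reconstruction
```

The same pattern extends to the paper's scheme, where the later stages additionally carry the embedded chrominance data.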
To summarize, in this paper, a progressive data hiding-based compression scheme, properly designed to trade off between the goals of data hiding and perceptual coding, is proposed. After an unconventional two-level wavelet decomposition of the luminance component of a color image is performed, the perceptually irrelevant subbands are properly selected, zeroed, and replaced with a parsimonious representation of the chrominance components. This leads to a gray-scale image in which the color information is piggybacked without impairing the overall perceptual quality of the embedded image. This gives the opportunity of viewing the image progressively, from a monochrome version at different levels of detail to a color one, according to the user's needs. Moreover, our method achieves better quality than well-established coding schemes such as JPEG and SPIHT at low bit rates.
APPENDIX
L∗a∗b∗ COLOR SPACE
The transform from the RGB space to the L∗a∗b∗ space is as follows [19]:

\[
\begin{aligned}
X &= 0.607R + 0.174G + 0.200B,\\
Y &= 0.299R + 0.587G + 0.114B,\\
Z &= 0.000R + 0.066G + 1.116B,
\end{aligned}
\]
\[
\begin{aligned}
L^{*} &= 25\left(\frac{100\,Y}{Y_{0}}\right)^{1/3} - 16, \quad 1 \leq 100Y \leq 100,\\
a^{*} &= 500\left[\left(\frac{X}{X_{0}}\right)^{1/3} - \left(\frac{Y}{Y_{0}}\right)^{1/3}\right],\\
b^{*} &= 200\left[\left(\frac{Y}{Y_{0}}\right)^{1/3} - \left(\frac{Z}{Z_{0}}\right)^{1/3}\right],
\end{aligned}
\tag{A.1}
\]

with reference to the white tristimulus values X0, Y0, Z0.
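A direct transcription of (A.1) might read as follows. The white point is an assumption here: since the appendix does not fix X0, Y0, Z0, we take them as the tristimulus values of R = G = B = 1, i.e., the row sums of the conversion matrix.

```python
import numpy as np

def rgb_to_lab(rgb, white=(0.981, 1.000, 1.182)):
    """Convert an (..., 3) array of RGB values in [0, 1] to L*a*b*
    via the transform of (A.1).  The default white tristimulus
    values (X0, Y0, Z0) are the row sums of the RGB->XYZ matrix
    (an assumption; the paper leaves the white point unspecified)."""
    M = np.array([[0.607, 0.174, 0.200],    # X
                  [0.299, 0.587, 0.114],    # Y
                  [0.000, 0.066, 1.116]])   # Z
    xyz = rgb @ M.T
    x0, y0, z0 = white
    # cube roots of the normalized tristimulus values
    fx = np.cbrt(xyz[..., 0] / x0)
    fy = np.cbrt(xyz[..., 1] / y0)
    fz = np.cbrt(xyz[..., 2] / z0)
    L = 25.0 * np.cbrt(100.0 * xyz[..., 1] / y0) - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return np.stack([L, a, b], axis=-1)
```

With this white point, pure white maps to L∗ ≈ 100 with a∗ = b∗ = 0, as the perceptual-uniformity discussion in Section 5 requires.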
REFERENCES
[1] J. Lacy, S. R. Quackenbush, A. R. Reibman, D. Shur, and J. H. Snyder, "On combining watermarking with perceptual coding," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, vol. 6, pp. 3725–3728, Seattle, Wash, USA, May 1998.
[2] H.-J. Wang and C.-C. J. Kuo, "An integrated progressive image coding and watermark system," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, vol. 6, pp. 3721–3724, Seattle, Wash, USA, May 1998.
[3] J. Meng and S.-F. Chang, "Embedding visible video watermarks in the compressed domain," in Proc. IEEE Int. Conf. on Image Processing, vol. 1, pp. 474–477, Chicago, Ill, USA, October 1998.
[4] S. Bhattacharjee and M. Kutter, "Compression tolerant image authentication," in Proc. IEEE Int. Conf. on Image Processing, vol. 1, Chicago, Ill, USA, October 1998.
[5] T.-Y. Chung, M.-S. Hong, Y.-N. Oh, D.-H. Shin, and S.-H. Park, "Digital watermarking for copyright protection of MPEG-2 compressed video," IEEE Transactions on Consumer Electronics, vol. 44, no. 3, pp. 895–901, 1998.
[6] J. Lacy, S. R. Quackenbush, A. R. Reibman, and J. H. Snyder, "Intellectual property protection systems and digital watermarking," Optics Express, vol. 3, no. 12, pp. 478–484, 1998.
[7] D. Kundur and D. Hatzinakos, "Mismatching perceptual models for effective watermarking in the presence of compression," in Proc. SPIE, Multimedia Systems and Applications II, vol. 3845, pp. 29–42, September 1999.
[8] D. Kundur, "Energy allocation principles for high capacity data hiding," in Proc. IEEE Int. Conf. on Image Processing, vol. 1, pp. 423–426, Vancouver, Canada, September 2000.
[9] C. Fei, D. Kundur, and R. H. Kwong, "The choice of watermark domain in the presence of compression," in Proc. IEEE Int. Conf. on Information Technology: Coding and Computing, pp. 79–84, April 2001.
[10] B. Zhu and A. H. Tewfik, "Media compression via data hiding," in Thirty-First Asilomar Conf. on Signals, Systems, and Computers, vol. 1, pp. 647–650, 1997.
[11] N. Jayant, J. Johnston, and R. Safranek, "Signal compression based on models of human perception," Proceedings of the IEEE, vol. 81, pp. 1383–1422, October 1993.
[12] A. Said and W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits and Systems for Video Technology, vol. 6, pp. 243–250, 1996.
[13] S. Mallat, "Multifrequency channel decompositions of images and wavelet models," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, pp. 2091–2110, 1989.
[14] K. N. Plataniotis and A. N. Venetsanopoulos, Color Image Processing and Applications, Springer-Verlag, Berlin, 2000.
[15] M. Ramkumar and A. N. Akansu, "Theoretical capacity measures for data hiding in compressed images," in Proc. SPIE, Multimedia Systems and Applications, vol. 3528, pp. 482–492, Boston, Mass, USA, November 1998.
[16] S. Mallat, "Wavelets for a vision," Proceedings of the IEEE, vol. 84, no. 4, pp. 604–614, 1996.
[17] I. J. Cox, J. Kilian, T. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Processing, vol. 6, no. 12, pp. 1673–1687, 1997.
[18] F. Hartung and M. Kutter, "Multimedia watermarking techniques," Proceedings of the IEEE, vol. 87, no. 7, pp. 1079–1107, 1999.
[19] A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, USA, 1989.
[20] B. Chen and G. W. Wornell, "Quantization index modulation: a class of provably good methods for digital watermarking and information embedding," IEEE Transactions on Information Theory, vol. 47, no. 4, pp. 1423–1443, 2001.
[21] T. Cover and J. Thomas, Elements of Information Theory, John Wiley & Sons, Toronto, Canada, 1991.
[22] J. Li, P.-Y. Cheng, and C.-C. J. Kuo, "A wavelet transform approach to video compression," in Proc. SPIE Wavelet Applications II, Orlando, Fla, USA, April 1995.
[23] A. K. Jain, "Color distance and geodesics in color 3 space," Journal of the Optical Society of America, vol. 62, no. 11, pp. 1287–1290, 1972.
Patrizio Campisi received the "Laurea" degree in Electrical Engineering, summa cum laude, at the University of Roma "La Sapienza," Roma, Italy, and received his Ph.D. degree in Electrical Engineering from the University of Roma "Roma Tre," Roma, Italy, in 1995 and 1999, respectively. In 1997 and 2000, he was a visiting research associate at the Communication Laboratory of the University of Toronto, Canada. He presently holds an associate research position at the University of Roma "Roma Tre," where he is also lecturer for the graduate course in Signal Theory. His research interests are in the area of digital signal and image processing with applications to wireless communications and multimedia.
Deepa Kundur is an Assistant Professor in the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto. She holds the title of Bell Canada Junior Chair-holder in Multimedia and is also an Associate of the Nortel Institute for Telecommunications. She received her B.A.Sc., M.A.Sc., and Ph.D. degrees from the Electrical and Computer Engineering Department at the University of Toronto. Prof. Kundur's research interests span the areas of multimedia security, data hiding and covert communications, and nonlinear and adaptive communication algorithms. Deepa is a member of the IEEE (Communications and Signal Processing Societies) and the Professional Engineers of Ontario (PEO).
Dimitrios Hatzinakos received the Diploma degree from the University of Thessaloniki, Greece, in 1983, the M.A.Sc. degree from the University of Ottawa, Canada, in 1986, and the Ph.D. degree from Northeastern University, Boston, MA, in 1990, all in Electrical Engineering. In September 1990 he joined the Department of Electrical and Computer Engineering, University of Toronto, where he now holds the rank of Professor with tenure. He has also served as Chair of the Communications Group of the Department since 1 July 1999. His research interests are in the areas of digital communications and signal processing with applications to wireless communications, image processing, and multimedia. He has organized and taught many short courses on modern signal processing frameworks and applications devoted to continuing engineering education and given numerous seminars in the area of blind signal deconvolution. He is author or coauthor of more than 100 papers in technical journals and conference proceedings and has contributed to 5 books in his areas of interest. His experience includes consulting through Electrical Engineering Consociates Ltd. and contracts with United Signals and Systems Inc., Burns and Fry Ltd., Pipetronix Ltd., Defense Research Establishment Ottawa (DREO), Vaytek Inc., Nortel Networks, and Vivosonic Inc. He has been an Associate Editor for the IEEE Transactions on Signal Processing since July 1998. He was also the Guest Editor for the special issue of Signal Processing, Elsevier, on Signal Processing Technologies for Short Burst Wireless Communications, which appeared in October 2000. He was a member of the IEEE Statistical Signal and Array Processing Technical Committee (SSAP) from 1992 till 1995 and Technical Program co-Chair of the 5th Workshop on Higher-Order Statistics in July 1997. He is a senior member of the IEEE and member of EURASIP, the Professional Engineers of Ontario (PEO), and the Technical Chamber of Greece.
Alessandro Neri was born in Viterbo (Italy) in 1954. In 1977 he received the Doctoral Degree in Electronic Engineering from the University of Rome "La Sapienza." In 1978 he joined the Research and Development Department of Contraves Italiana S.p.A., where he gained specific expertise in the field of radar signal processing and in applied detection and estimation theory, becoming the chief of the advanced systems group. In 1987, he joined the INFOCOM Department of the University of Rome "La Sapienza" as Associate Professor in Signal and Information Theory. In November 1992, he joined the Electronic Department of the University of Rome "Roma Tre" as Associate Professor in Electrical Communications. Since September 2001 he has been full professor in Telecommunications at the University of Rome "Roma Tre." Since 1987, his research activity has mainly been focused on information theory, signal theory, and signal and image processing and their applications to both telecommunications systems and remote sensing.