    Reversible Color Image Watermarking

    in the YCoCg-R Color Space

    Aniket Roy1(B), Rajat Subhra Chakraborty1, and Ruchira Naskar2

1 Secured Embedded Architecture Laboratory (SEAL), Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur 721302, West Bengal, India

[email protected], [email protected]

2 Department of Computer Science and Engineering, National Institute of Technology, Rourkela 769008, Odisha, India

[email protected]

Abstract. Reversible Image Watermarking is a technique to losslessly embed and retrieve information (in the form of a watermark) in a cover image. We have proposed and implemented a reversible color image watermarking algorithm in the YCoCg-R color space, based on histogram bin shifting of the prediction errors, using a weighted mean based prediction technique to predict the pixel values. The motivation for choosing the YCoCg-R color space lies in the fact that its transformation from the traditional RGB color space is reversible, with higher transform coding gain and near-optimal compression performance compared to RGB and other reversible color spaces, resulting in considerably higher embedding capacity. We demonstrate through information theoretic analysis and experimental results that reversible watermarking in the YCoCg-R color space results in higher embedding capacity at lower distortion than RGB and several other color space representations.

    1 Introduction

Digital watermarking [1] is an important technique adopted for copyright protection and authentication. Digital watermarking is the act of hiding secret information (termed a "watermark") in a digital "cover" medium (image, audio or video), such that this information may be extracted later for authentication of the cover. However, the process of watermark embedding in the cover medium usually leads to distortion of the latter, even if it is perceptually negligible. Reversible watermarking [2,3,10] is a special class of digital watermarking, whereby after watermark extraction, both the watermark and the cover medium remain unmodified, bit-by-bit. In traditional reversible watermarking schemes, the watermark to be embedded is usually generated as a cryptographic hash of the cover image. Reversible watermarking is most widely used in industries dealing with highly sensitive data, such as the military, medical and legal industries, where data integrity is the major concern for users [10]. In this paper we focus on reversible watermarking algorithms for digital images.

© Springer International Publishing Switzerland 2015. S. Jajodia and C. Mazumdar (Eds.): ICISS 2015, LNCS 9478, pp. 480–498, 2015. DOI: 10.1007/978-3-319-26961-0_28


A large number of reversible image watermarking algorithms have been previously proposed [2–4]. Most of them have been developed for grayscale images. Although the algorithms developed for grayscale images may trivially be modified to work with color images, most of the time the performance achieved by such trivial extension is not satisfactory. Relatively few works have been proposed for reversible color image watermarking. Moreover, in the existing literature, almost all reversible color image watermarking algorithms [4–7] utilize the RGB color space. Tian et al. [2] introduced Difference Expansion based reversible watermarking for grayscale images. Alattar used that concept for reversibly watermarking color images using difference expansion of triplets [5], quads [6], and later formulated a generalised integer transform [4]. However, these schemes embed the watermark into the individual color components of the RGB color space. Li et al. [7] proposed a prediction error expansion based color image watermarking algorithm where prediction accuracy is enhanced by exploiting the correlation between color components of the RGB color space. Published literature on reversible color image watermarking in other (non-RGB) color spaces is very rare. Investigation of color image watermarking in a non-RGB color space is precisely what we undertake in this paper.

In this paper, we propose a reversible watermarking technique, specifically meant for color images, providing considerably high embedding capacity, by systematically investigating the following questions:

1. What theoretical considerations should determine selection of a color space for reversible watermarking of color images?

2. Which color space is practically the best suited in this context?

3. Is there any additional constraint for selecting a color space to ensure reversibility of the watermarking scheme?

Our key observation in this paper is that, instead of the traditional RGB color space, if we choose a color space having higher transform coding gain (i.e., better compression performance), then the reversible watermarking capacity will be increased significantly. Moreover, better compression along color components increases intra-correlation of individual color components. Hence, the prediction accuracy of a prediction based watermarking scheme improves, which additionally enhances the embedding capacity of the reversible watermarking scheme.

In this paper, we propose a reversible watermarking algorithm for color images, which utilizes the YCoCg-R color space (a modification of the YCoCg color space) having higher transform coding gain and near-optimal compression performance. The transformation from RGB to YCoCg-R, and the reverse transformation from YCoCg-R to RGB, are integer-to-integer transforms which guarantee reversibility [8], and are also implementable very efficiently. The proposed algorithm is based on the principle of histogram-bin-shifting, which is computationally one of the simplest reversible watermarking techniques. Specifically, we use a newer and more efficient enhancement of histogram-bin-shifting, which performs histogram modification of pixel prediction errors [2,9]. In this


technique, image pixel values are predicted from their neighbourhood pixel values, and the prediction error histogram bins are shifted to embed the watermark bits. This technique provides much higher embedding capacity compared to traditional frequency-histogram shifting.

The rest of the paper is organised as follows. We present an information-theoretic analysis of watermark embedding capacity maximization in Sect. 2. The proposed reversible watermarking algorithm is presented in Sect. 3. Experimental results and related discussions are presented in Sect. 4. We conclude in Sect. 5 with some future research directions.

    2 Principle of Embedding Capacity Maximization

Embedding capacity maximization is one of the major challenges in reversible watermarking, given the reversibility criterion. In this section, we explore successively two approaches to enhance embedding capacity:

1. Selection of the target color space offering higher watermarking performance, and,

2. Selection of the watermark embedding algorithm.

    2.1 Color Space Selection

We consider the selection of the color space from three perspectives: information theory, reversibility and compressibility in the transformed color space, and ease of implementation of the color space transformation. We start with a review of a relevant information-theoretic result.

Information Theoretic Justification. The following theorem is of fundamental importance:

Theorem 1 (Slepian-Wolf Coding Theorem). Given two correlated finite alphabet random sequences X and Y, the theoretical bounds on the lossless coding rates for distributed coding of the two sources are related by:

    R_X ≥ H(X|Y),
    R_Y ≥ H(Y|X),
    R_X + R_Y ≥ H(X, Y).    (1)

Thus, ideally the minimum total rate (R_{X,Y}) necessary for lossless encoding of the two correlated random sequences X and Y is equal to their joint entropy (H(X, Y)), i.e. R_{X,Y} = H(X, Y).

The significance of the above result is that for three correlated random sequences X, Y, Z, the total rate R_{X,Y,Z} = H(X, Y, Z) is sufficient for an ideal lossless encoding. This theorem can be extended to a finite number of correlated sources. It can be shown that the same result holds even for i.i.d. and ergodic processes [11].
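As a quick sanity check of these bounds, the small self-contained sketch below (our illustration only; the joint distribution is invented for the example) computes the Slepian-Wolf quantities for a toy pair of correlated binary sources:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a probability list, ignoring zero entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy joint distribution p(x, y) over two correlated binary sources.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

H_XY = entropy(list(joint.values()))
H_X = entropy([joint[(0, 0)] + joint[(0, 1)], joint[(1, 0)] + joint[(1, 1)]])
H_Y = entropy([joint[(0, 0)] + joint[(1, 0)], joint[(0, 1)] + joint[(1, 1)]])

# Conditional entropies via the chain rule: H(X|Y) = H(X,Y) - H(Y).
H_X_given_Y = H_XY - H_Y
H_Y_given_X = H_XY - H_X

# Slepian-Wolf: the minimum total rate H(X,Y) never exceeds the rate
# H(X) + H(Y) needed when the correlation between the sources is ignored.
assert H_XY <= H_X + H_Y
assert H_X_given_Y <= H_X
```

For this distribution H(X) = H(Y) = 1 bit but H(X, Y) ≈ 1.72 bits < 2 bits, so exploiting the correlation saves rate, which is exactly the slack the capacity argument below builds on.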

We make the following proposition relating the choice of color space to the embedding capacity of reversible watermarking:


Fig. 1. Venn diagram to explain the impact of color space transformation on entropy and mutual information.

Proposition 1. If the cover color image is (losslessly) converted into a different color space with higher coding gain (i.e. better compression performance) before watermark embedding, then the watermark embedding capacity in the transformed color space is greater than that in the original color space.

Consider the color components of color images to be finite discrete random variables. Let X, Y, Z be three random variables as depicted in the Venn diagram in Fig. 1, where the area of each circle (corresponding to each random variable) is proportional to its entropy, and the areas of the intersecting segments are proportional to the corresponding mutual information values of the relevant random variables.

Now consider a bijective transformation T applied to the point (X, Y, Z) in the original sample space, to transform it to another point (X′, Y′, Z′) in the transformed sample space, corresponding to the three random variables X′, Y′, Z′:

    T : (X, Y, Z) → (X′, Y′, Z′)    (2)

such that the image in the transformed sample space has higher coding gain. Since higher coding gain implies better compression performance, each element of X′, Y′ and Z′ is the compressed version of the corresponding element in X, Y and Z respectively. Moreover, let T be invertible, lossless, and integer-to-integer.


As a consequence of the properties of the transformation T, both the sample spaces are discrete and contain the same number of points. The values of the pixels in the transformed color space (i.e. X′, Y′ and Z′) get "closer" to each other, as these are the compressed versions of the pixel color channel values in the original sample space (i.e. X, Y and Z). This implies that the random variables corresponding to the color channels in the transformed color space (X′, Y′ and Z′) become more correlated among themselves than those in the original sample space (X, Y and Z). Since for individual random variables higher correlation between values implies less entropy [11], the entropies of the variables X′, Y′ and Z′ in the transformed domain are relatively smaller compared to those of X, Y and Z, i.e.,

    H(X′) ≤ H(X)
    H(Y′) ≤ H(Y)
    H(Z′) ≤ H(Z)    (3)

This is depicted by having the circles corresponding to X′, Y′ and Z′ with smaller areas compared to the circles corresponding to X, Y and Z in Fig. 1. The joint entropy of X, Y and Z, i.e., H(X, Y, Z), is depicted by the union of the three circles corresponding to X, Y and Z. Now, as the circles corresponding to X′, Y′ and Z′ have smaller areas than those corresponding to X, Y and Z, it is evident that the area of the union of the circles corresponding to X′, Y′ and Z′ (i.e., H(X′, Y′, Z′)) must be smaller than that corresponding to X, Y and Z, i.e.,

    H(X′, Y′, Z′) ≤ H(X, Y, Z)    (4)

We can draw an analogy between lossless (reversible) watermarking and lossless encoding, since in reversible watermarking we have to losslessly encode the cover image into the watermarked image such that the cover image can be retrieved bit by bit. So, in that sense we can apply the Slepian-Wolf theorem to estimate the embedding capacity of the reversible watermarking scheme. For lossless encoding of a color image I consisting of color channels X, Y and Z, we need a coding rate greater than or equal to H(X, Y, Z). The total size of the cover image is a constant, say N bits. Then, after an ideal lossless encoding of the image I, which can encode it in H(X, Y, Z) bits, there remain (N − H(X, Y, Z)) bits of space for auxiliary data embedding. Hence, the theoretical embedding capacities of the reversible watermarking scheme in the two color spaces are given by:

    C = N − H(X, Y, Z)    (5)

and

    C′ = N − H(X′, Y′, Z′)    (6)

Since H(X′, Y′, Z′) ≤ H(X, Y, Z), we can conclude that C′ ≥ C. Hence, we conclude that a color space transformation T with certain characteristics can result in higher embedding capacity.
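The capacity argument can be illustrated numerically. The following sketch is our illustration, not from the paper; it uses first-order per-channel entropy estimates as a cheap stand-in for the joint entropy, builds a synthetic correlated RGB signal, applies the YCoCg-R forward transform of Eq. (9), and checks that the total estimated entropy drops, leaving more room in C = N − H:

```python
import random
from collections import Counter
from math import log2

def est_entropy(vals):
    """First-order entropy estimate (bits/sample) from a value histogram."""
    n = len(vals)
    return -sum(c / n * log2(c / n) for c in Counter(vals).values())

def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R transform (Eq. 9); // is floor division, exact on ints."""
    co = r - b
    t = b + co // 2
    cg = g - t
    y = t + cg // 2
    return y, co, cg

random.seed(0)
# Synthetic correlated channels: G free, R and B stay close to G.
G = [random.randint(0, 255) for _ in range(20000)]
R = [min(255, max(0, g + random.randint(-5, 5))) for g in G]
B = [min(255, max(0, g + random.randint(-5, 5))) for g in G]

Y, Co, Cg = zip(*(rgb_to_ycocg_r(r, g, b) for r, g, b in zip(R, G, B)))

h_rgb = est_entropy(R) + est_entropy(G) + est_entropy(B)
h_ycocg = est_entropy(Y) + est_entropy(Co) + est_entropy(Cg)

# Lower total entropy leaves more space (C = N - H) for the watermark.
assert h_ycocg < h_rgb
```

Here Co and Cg concentrate near zero (they encode small channel differences), so their histograms are far more compressible than the raw R and B channels.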


Compressibility in Transformed Color Space and Reversibility of Transformation. When we transform the representation of a color image from one color space to another, the Transform Coding Gain is defined as the ratio of the arithmetic mean to the geometric mean of the variances of the variables in the new transformed domain coordinates, scaled by the norms of the synthesis basis functions for non-unitary transformations [8]. It is usually measured in dB. Transform coding gain is a metric to estimate compression performance [8] – higher transform coding gain implies more compression among the color channels of a color image representation. In general, the Karhunen-Loeve Transform (KL Transform), Principal Component Analysis (PCA), etc. might also be used to decorrelate color channels. However, for reversible watermarking we need to choose an integer-to-integer linear transformation. If C1 = (X, Y, Z)^T denotes the color components in the original color space, and C2 = (X′, Y′, Z′)^T denotes the color components in the transformed color space after a linear transformation, then we can write C2 = T·C1, where T is the transformation matrix. Similarly, the reverse transformation is expressed as C1 = T⁻¹·C2. It is desirable that det T = 1, which is a necessary condition for optimal lossless compression performance [8].

Ease of Color Space Transformation. Color space transformation during the watermark embedding/extraction processes is a computational overhead. Another consideration that determines the selection of a candidate color space is the ease of implementation of the computations involved in the color space transformation, i.e. multiplication by the transformation matrix T. If the operations involved are only integer additions/subtractions and shifts, the color space transformation can be implemented extremely efficiently in both software and hardware.

From the discussion so far, our color space selection for performing the reversible watermarking operations is guided by the following criteria:

– Lower correlation among the color channels,
– Reversibility of transformation from the RGB color space,
– Higher transform coding gain, and,
– Ease of implementation of the transformation.

Some of the reversible color space transformations available in the literature [13,14] are described below in brief.

RCT Color Space. Reversible Color Transform (RCT) is used for lossless color transformation in the JPEG 2000 standard [14]. It is also known as the "reversible YUV color space". Its transformation equations are simple, integer-to-integer and invertible:

    Yr = ⌊(R + 2G + B)/4⌋
    Ur = R − G
    Vr = B − G

⇐⇒

    G = Yr − ⌊(Ur + Vr)/4⌋
    R = Ur + G
    B = Vr + G

    (7)
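A minimal sketch of the RCT pair of Eq. (7), with an exhaustive round-trip check over a sub-cube of RGB values confirming the integer-to-integer reversibility claimed above (⌊·⌋ realized by Python's floor division):

```python
def rgb_to_rct(r, g, b):
    """Forward RCT (Eq. 7): integer-to-integer, as in JPEG 2000 lossless coding."""
    yr = (r + 2 * g + b) // 4
    ur = r - g
    vr = b - g
    return yr, ur, vr

def rct_to_rgb(yr, ur, vr):
    """Inverse RCT (Eq. 7): recovers R, G, B exactly, because
    (Ur + Vr) = R + B - 2G shares its floor-by-4 remainder with Yr."""
    g = yr - (ur + vr) // 4
    r = ur + g
    b = vr + g
    return r, g, b

# Exhaustive reversibility check over a sub-cube of the RGB space.
for r in range(0, 256, 17):
    for g in range(0, 256, 17):
        for b in range(0, 256, 17):
            assert rct_to_rgb(*rgb_to_rct(r, g, b)) == (r, g, b)
```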


O1O2O3 Color Space. This is another color space with higher compression performance, while maintaining integer-to-integer reversibility [13]. Here, the R, G, and B color channels are transformed into the O1, O2, O3 color channels, and conversely:

    O1 = ⌊(R + G + B)/3 + 0.5⌋
    O2 = ⌊(R − B)/2 + 0.5⌋
    O3 = B − 2G + R

⇐⇒

    G = O1 − ⌊O3/3 + 0.5⌋
    B = O1 − O2 + ⌊O3/2 + 0.5⌋ − ⌊O3/3 + 0.5⌋
    R = O1 + O2 + O3 − ⌊O3/2 + 0.5⌋ − ⌊O3/3 + 0.5⌋

    (8)

Our Selection: The YCoCg-R Color Space. In our case, X, Y and Z correspond to the R, G and B color channels of the RGB color space, and X′, Y′ and Z′ correspond to the Y, Co and Cg color channels of the YCoCg-R color space. The well-known YCoCg color space decomposes a color image into three components – Luminance (Y), Chrominance orange (Co) and Chrominance green (Cg) respectively. YCoCg-R is the integer-to-integer reversible version of YCoCg. The transformation T (for RGB to YCoCg-R) and the inverse transformation are given by [8]:

    Co = R − B,
    t = B + ⌊Co/2⌋,
    Cg = G − t,
    Y = t + ⌊Cg/2⌋    (9)

and similarly,

    t = Y − ⌊Cg/2⌋,
    G = Cg + t,
    B = t − ⌊Co/2⌋,
    R = B + Co    (10)

Notice that rolling out the above transformation equations allows us to write the direct transformation equations:

    [Co]   [  1     0    −1  ] [R]
    [Cg] = [ −1/2   1   −1/2 ] [G]    (11)
    [Y ]   [ 1/4   1/2   1/4 ] [B]

    and hence det T = 1, which is desirable for achieving optimal compression ratio,as mentioned in Sect. 2.1. A close look would reveal that the transformations arenothing but repeated difference expansion of the color channels.
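The forward and inverse transforms of Eqs. (9)-(10) are short enough to verify directly. A minimal sketch with a round-trip check (⌊·⌋ realized by Python's floor division, which also floors negative values as required):

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R transform, Eq. (9)."""
    co = r - b
    t = b + co // 2          # floor division, exact on (possibly negative) ints
    cg = g - t
    y = t + cg // 2
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse YCoCg-R transform, Eq. (10): each step undoes one forward step."""
    t = y - cg // 2
    g = cg + t
    b = t - co // 2
    r = b + co
    return r, g, b

# Exhaustive reversibility check over a sub-cube of the RGB space.
for r in range(0, 256, 15):
    for g in range(0, 256, 15):
        for b in range(0, 256, 15):
            assert ycocg_r_to_rgb(*rgb_to_ycocg_r(r, g, b)) == (r, g, b)
```

The inverse works step by step because t, cg and co are recovered in the reverse order in which they were produced, so each floor division is applied to exactly the same integer as in the forward pass.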

To summarize, selection of the YCoCg-R color space has the following consequences:

– Repeated difference expansion of the color channels makes the resultant color channels less correlated in the YCoCg-R color space. It is known that the YCoCg-R representation has higher coding gain [8].


– The RGB to YCoCg-R transformation is an integer-to-integer reversible transform.
– YCoCg-R achieves close to optimal compression performance [8].
– The arithmetic operations of the transformation are simple integer additions/subtractions and shifts, and hence are extremely efficiently implementable in hardware and software.

We establish the superiority of our choice of the YCoCg-R color space over other color space representations through detailed experimental results in Sect. 4. We next discuss the impact of the embedding scheme on the embedding capacity. We justify the selection of the scheme used by us, which is a combination of the well-known histogram-bin-shifting scheme with pixel prediction techniques.

    2.2 Embedding Scheme Selection for Capacity Enhancement

Ni et al. [3] introduced the histogram-bin-shifting based reversible watermarking scheme for grayscale images. In this scheme, first the statistical mode of the distribution, i.e., the most frequently occurring grayscale value, is determined from the frequency histogram of the pixel values; let us call this pixel value the "peak point". Now, the pixels with grayscale value greater than the peak value are searched, and their corresponding grayscale values are incremented by one. This is equivalent to right-shifting the frequency histogram by one unit for the pixels having grayscale value greater than the peak point. Generally, all images from natural sources have one or more pixel values which are absent in the image; let us call these "zero points". The existence of zero points ensures that the partial shift of the frequency histogram does not cause any irreversible change in the pixel values. The shift results in an empty frequency bin just next to the peak point in the image frequency histogram. Next, the whole image is scanned sequentially and the watermark is embedded into the pixels having grayscale value equal to the peak point. When the watermark bit to be embedded is '1', the watermarked pixel occupies the empty bin just next to the peak value in the histogram, and when it is '0', the watermarked pixel value is left unmodified at the peak point. The embedding capacity of the scheme is limited by the number of pixels having the peak grayscale value. Figure 2 shows an example of the classical histogram-bin-shifting based watermarking scheme for an 8-bit grayscale image, where the peak point is 2 and the zero point is 7.
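The classical scheme can be sketched in a few lines. The snippet below is our illustration (function names are ours); it shifts only the bins strictly between the peak point and the first zero point above it, embeds bits at the peak, and then inverts both steps:

```python
from collections import Counter

def embed_hbs(pixels, bits):
    """Classical histogram-bin-shifting (after Ni et al.) on a flat list of
    8-bit pixels.  Assumes a zero point exists above the peak point."""
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)                   # most frequent value
    zero = next(v for v in range(peak + 1, 256) if hist[v] == 0)
    it = iter(bits)
    out = []
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)                        # open the bin next to the peak
        elif p == peak:
            out.append(p + next(it, 0))              # '1' -> peak+1, '0' -> peak
        else:
            out.append(p)                            # values beyond the zero point
    return out, peak, zero

def extract_hbs(pixels, peak, zero):
    """Recover the embedded bits and restore the original pixels."""
    bits = [p - peak for p in pixels if p in (peak, peak + 1)]
    orig = [p - 1 if peak < p <= zero else p for p in pixels]
    return bits, orig
```

On the Fig. 2 example (peak point 2), '1' bits move pixels into the emptied bin at 3 while '0' bits leave them at 2, and shifting back restores the cover exactly.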

To improve the embedding capacity, histogram-bin-shifting is blended with a pixel prediction method [9]. In the pixel prediction technique, some of the cover image pixel values are predicted based on their neighbourhood pixel values. Such prediction gives prediction errors with respect to the original cover image. Generally, the frequency distribution of such prediction errors resembles a Laplacian distribution [9], with peak value at zero, as shown in Fig. 3. Watermark bits are embedded into the prediction errors by histogram shifting of bins "close to zero", where the closeness is pre-defined with respect to some threshold. The bins that are "close to zero" in the prediction error histogram can be both right- and left-shifted to embed watermark bits. This two-way histogram shifting enhances


Fig. 2. Operations in the Histogram-bin-shifting reversible scheme proposed by Ni et al. [3]: (a) histogram before shifting, with peak point = 2 and zero point = 7; (b) histogram after shifting the pixels; (c) histogram after watermark embedding.

the capacity of the scheme significantly, compared to the classical histogram-bin-shifting case. The embedding in the error histogram is shown in Fig. 3. During extraction, prediction errors are computed from the watermarked image, and the watermark bits are extracted from the errors. After watermark extraction, the error histogram bins are shifted back to their original positions. The retrieved errors are combined with the predicted pixel values to get back the original cover image losslessly.

    3 Proposed Algorithm

    Our proposed algorithm consists of the following main steps:

1. Transformation of the cover color image from the RGB color space to the YCoCg-R color space, using transformation (9).
2. Pixel prediction based watermark embedding in the YCoCg-R color space.
3. Watermark extraction and lossless retrieval of the original cover image.
4. Reconversion from the YCoCg-R color space to the RGB color space, using transformation (10).

    The first and the last steps have already been discussed. We now describethe remaining steps.


Fig. 3. Steps of watermark embedding using histogram shifting of prediction errors: (a) prediction error histogram; (b) histogram shifting; (c) watermark embedding.

    3.1 Pixel Prediction Based Watermark Embedding

We use Weighted Mean based Prediction [2] in the proposed scheme. In this scheme, two levels of predicted pixel values are calculated by exploiting the correlation between neighboring pixels. One out of every four pixels in the original cover image is chosen as a "base pixel", as shown in Fig. 4, and the values of these pixels are used to predict the values of their neighboring pixels. Positions of the next levels of predicted pixels are also shown in Fig. 4. The neighborhood of each pixel is partitioned into two directional subsets which are orthogonal to each other. We calculate the "first level predicted pixels" and the "second level predicted pixels" by interpolating the "base pixels" along two orthogonal directions: the 45° diagonal and the 135° diagonal, as shown in Fig. 5. The first level predicted pixels, occupying coordinates (2i, 2j), are computed as follows:

1. First, interpolated values along the 45° and 135° directions are calculated. Let these values be denoted by p45 and p135, calculated as shown in Fig. 5:

    p45 = (p(i, j+1) + p(i+1, j))/2
    p135 = (p(i, j) + p(i+1, j+1))/2    (12)


    Fig. 4.   Locations of (a) base pixels (‘0’s), (b) predicted first set of pixels (‘1’s),(c) predicted second set of pixels (‘2’s).

Fig. 5. (a) Prediction along the 45° and 135° diagonal directions; (b) prediction along the 0° and 90° directions.

2. Interpolation errors corresponding to the pixel at position (2i, 2j) along the 45° and 135° directions are calculated as:

    e45(2i, 2j) = p45 − p(2i, 2j)
    e135(2i, 2j) = p135 − p(2i, 2j)    (13)

3. Sets S45 and S135 contain the neighbouring pixels of the first level predicted pixel along the 45° and 135° directions respectively, i.e.,

    S45 = {p(i, j+1), p45, p(i+1, j)}
    S135 = {p(i, j), p135, p(i+1, j+1)}    (14)

4. The mean value of the base pixels around the pixel to be predicted is denoted by u, and calculated as:

    u = (p(i, j) + p(i+1, j) + p(i, j+1) + p(i+1, j+1))/4    (15)

5. In the weighted mean based prediction, the weights of the means are calculated using the variances along both diagonal directions. The variances along 45° and 135° are denoted as σ(e45) and σ(e135), and calculated as:

    σ(e45) = (1/3) Σ_{k=1..3} (S45(k) − u)²    (16)

and

    σ(e135) = (1/3) Σ_{k=1..3} (S135(k) − u)²    (17)


6. The weights of the means along the 45° and 135° directions are denoted by w45 and w135, and calculated as:

    w45 = σ(e135) / (σ(e45) + σ(e135))
    w135 = 1 − w45    (18)

7. We estimate the first level predicted pixel value p′ as a weighted mean of the diagonal interpolation terms p45 and p135:

    p′ = round(w45 · p45 + w135 · p135)    (19)

Once the first level pixel values are predicted, the values of the second level pixels can be computed from the base pixels and the first level predicted pixels. A similar procedure as described above is used, but now pixel values along the horizontal and vertical directions are used for prediction, i.e. the values along the 0° and 90° directions are used, as shown in Fig. 5. In this way, we can predict the entire image (other than the base pixels) using interpolation and weighted mean of interpolated pixels along two mutually orthogonal directions.
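The seven steps above can be collected into one small function. The sketch below is our illustration: it takes the four diagonal base-pixel neighbours directly rather than indexing the image, and the equal-weight fallback for a flat neighbourhood (where Eq. (18) would divide by zero) is our addition:

```python
def predict_weighted_mean(nw, ne, sw, se):
    """First-level weighted-mean prediction (Eqs. 12-19).  Arguments are the
    four diagonal base-pixel neighbours of the target pixel: north-west,
    north-east, south-west, south-east."""
    p45 = (ne + sw) / 2                  # Eq. (12): 45-degree interpolation
    p135 = (nw + se) / 2                 # Eq. (12): 135-degree interpolation
    u = (nw + ne + sw + se) / 4          # Eq. (15): mean of the base pixels
    s45 = (ne, p45, sw)                  # Eq. (14): directional subsets
    s135 = (nw, p135, se)
    var45 = sum((v - u) ** 2 for v in s45) / 3     # Eq. (16)
    var135 = sum((v - u) ** 2 for v in s135) / 3   # Eq. (17)
    if var45 + var135 == 0:              # flat neighbourhood: equal weights
        w45 = 0.5
    else:
        w45 = var135 / (var45 + var135)  # Eq. (18): low variance wins
    w135 = 1 - w45
    return round(w45 * p45 + w135 * p135)          # Eq. (19)
```

Note how the weight of each diagonal is the variance of the *other* diagonal, so the smoother direction (likely an edge direction) dominates the prediction.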

Embedding Algorithm. After the given color cover image is transformed into the YCoCg-R color space, the given watermark bits are embedded into the color channels Co, Cg and Y in order. We preferentially embed the watermark into the Chroma components (Co and Cg), and then into the Luma component (Y),

Algorithm 1. EMBED_WATERMARK
/* Embed watermark bits into the prediction errors */
Input: Color cover image of size M × N pixels in YCoCg-R color space (I), Watermark bits (W), Embedding Threshold (T)
Output: Watermarked image Iwm in the YCoCg-R color space
1:  for Color channels P ∈ {Co, Cg, Y} in order do
2:    if W is not empty then
3:      for i = 1 to M do
4:        for j = 1 to N do
5:          if P(i, j) is not a base pixel then
6:            P′(i, j) ← Predict_weighted_mean(P(i, j))
7:            Compute prediction error eP(i, j) = P(i, j) − P′(i, j)
8:            if eP(i, j) ≥ 0 then
9:              sign(eP(i, j)) ← 1
10:           else
11:             sign(eP(i, j)) ← −1
12:           end if
13:           if |eP(i, j)| ≤ T then
14:             e′P(i, j) ← sign(eP(i, j)) × [2 × |eP(i, j)| + next bit of W]
15:           else
16:             e′P(i, j) ← sign(eP(i, j)) × [|eP(i, j)| + T + 1]
17:           end if
18:           Pwm(i, j) ← P′(i, j) + e′P(i, j)
19:         else
20:           Pwm(i, j) ← P(i, j)
21:         end if
22:       end for
23:     end for
24:   end if
25: end for
26: Obtain the watermarked image Iwm by combining the watermarked color channels Ywm, Cowm and Cgwm.


to minimize the visual distortion. Moreover, as human vision is least sensitive to changes in the blue color [12], among the chroma components the Co component (mainly a combination of orange and blue) is embedded first, and then we embed in the Cg component (mainly a combination of green and violet).

In each of the color channels, we apply the weighted mean based pixel prediction technique separately. Let P(i, j) denote the value of the color channel at coordinate (i, j), with P ∈ {Co, Cg, Y}, and let P′(i, j) be the corresponding predicted value of P(i, j):

    P′(i, j) ← Predict_weighted_mean(P(i, j))    (20)

Then, the prediction error at the (i, j) pixel position for the P color channel is given by:

    eP(i, j) = P(i, j) − P′(i, j), where P ∈ {Co, Cg, Y}    (21)

Next, the frequency histograms of the prediction errors are constructed. For watermark embedding, prediction errors which are close to zero are selected, considering a threshold T ≥ 0. Hence, the frequency histogram bins of the prediction errors in the range [−T, T] are shifted to embed the watermark bits. The rest of the histogram bins are shifted away from zero by a constant amount of (T + 1) to avoid any overlap of absolute error values.

For embedding watermark bits, prediction errors eP(i, j) are modified by histogram shifting into e′P(i, j) according to the following equation:

    e′P(i, j) = sign(eP(i, j)) × [2 × |eP(i, j)| + b]     if |eP(i, j)| ≤ T
    e′P(i, j) = sign(eP(i, j)) × [|eP(i, j)| + T + 1]     otherwise    (22)

where b ∈ {0, 1} is the next watermark bit to be embedded, and sign(eP(i, j)) is defined as:

    sign(eP(i, j)) = +1 if eP(i, j) ≥ 0, −1 otherwise    (23)

Finally, the modified prediction errors e′P(i, j) are combined with the predicted pixels P′(i, j) in the corresponding color space to obtain the watermarked pixels Pwm(i, j):

    Pwm(i, j) = P′(i, j) + e′P(i, j)    (24)

The same procedure is applied to the three color channels (Co, Cg, Y) of the YCoCg-R color space. Hence, the YCoCg-R color channels are watermarked. Now we transform the watermarked channels from YCoCg-R to RGB losslessly by Eq. (10) to finally obtain the watermarked image Iwm.

    The proposed watermark embedding algorithm is presented as Algorithm 1.
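The per-error embedding rule of Eqs. (22)-(23), at the heart of Algorithm 1, can be sketched as a single function (our illustration; it also reports whether a bit was consumed, a bookkeeping detail the equations leave implicit):

```python
def embed_error(e, b, T):
    """Histogram-shift a prediction error e to carry bit b (Eqs. 22-23).
    Returns (shifted error, bit_consumed)."""
    s = 1 if e >= 0 else -1              # Eq. (23)
    if abs(e) <= T:
        return s * (2 * abs(e) + b), True    # embeddable bin: carry one bit
    return s * (abs(e) + T + 1), False       # shifted away, carries no bit
```

For example, with T = 1 the error 0 carrying bit 1 becomes 1, the error −1 carrying bit 1 becomes −3, and the out-of-range error 5 is merely shifted to 7; absolute values at most 2T + 1 = 3 are thus reserved for bit-carrying errors, which is exactly the extraction condition of Eq. (27).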

    3.2 Extraction Algorithm

The extraction algorithm simply reverses the steps of the embedding algorithm. Watermark extraction is done in order from the Co, Cg and Y color channels


respectively, as used for embedding. In the extraction phase, we also predict all pixels except the base pixels for each color channel P ∈ {Co, Cg, Y}. At each pixel position (i, j) of color channel P of the watermarked image, P′wm(i, j) is calculated as the predicted value of Pwm(i, j):

$$\hat{P}_{wm}(i,j) \leftarrow \mathrm{Predict}_{weighted\ mean}(P_{wm}(i,j)) \quad (25)$$

The prediction error at the (i, j)-th position of the P color channel is denoted by e_{P_wm}(i, j):

$$e_{P_{wm}}(i,j) = P_{wm}(i,j) - \hat{P}_{wm}(i,j) \quad (26)$$

Then the prediction-error frequency histogram is generated, and the watermark bits are extracted from the frequency histogram bins close to zero, as defined by the embedding threshold T:

$$|e_{P_{wm}}(i,j)| \le (2T+1) \quad (27)$$

Hence, the watermark bit b is extracted as:

$$b = |e_{P_{wm}}(i,j)| - 2 \times \left\lfloor \frac{|e_{P_{wm}}(i,j)|}{2} \right\rfloor \quad \text{if } |e_{P_{wm}}(i,j)| \le (2T+1) \quad (28)$$

After extraction, all bins are shifted back to their original positions, so that the prediction errors are restored to their original form as given in the following equation:

$$e'_{P_{wm}}(i,j) = \begin{cases} \mathrm{sign}(e_{P_{wm}}(i,j)) \times \left\lfloor \dfrac{|e_{P_{wm}}(i,j)|}{2} \right\rfloor & \text{if } |e_{P_{wm}}(i,j)| \le (2T+1) \\ \mathrm{sign}(e_{P_{wm}}(i,j)) \times \left(|e_{P_{wm}}(i,j)| - T - 1\right) & \text{otherwise} \end{cases} \quad (29)$$

where the restored error e'_{P_wm}(i, j) is exactly the same as the original prediction error e_P(i, j). Next, the predicted pixels P̂_wm(i, j) are combined with the restored errors e'_{P_wm}(i, j) to obtain each of the retrieved color channels P_ret(i, j) losslessly:

$$P_{ret}(i,j) = \hat{P}_{wm}(i,j) + e'_{P_{wm}}(i,j) = \hat{P}(i,j) + e_P(i,j) = P(i,j) \quad (30)$$

where P ∈ {Co, Cg, Y}. After the color channels Y, Co and Cg are retrieved losslessly, the cover image is transformed back to the RGB color space by the lossless YCoCg-R to RGB transformation. The extraction algorithm is presented as Algorithm 2.
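The reversibility claim of Eqs. 28–30 can be checked with a small round-trip sketch. The `embed`/`extract` helpers are hypothetical names for the shifts of Eq. 22 and Eqs. 28–29; they are not the paper's code.

```python
def sign(e):
    """Eq. 23: +1 for e >= 0, -1 otherwise."""
    return 1 if e >= 0 else -1

def embed(e, T, bit=0):
    """Embedding shift of Eq. 22 (repeated here for a self-contained demo)."""
    if abs(e) <= T:
        return sign(e) * (2 * abs(e) + bit)
    return sign(e) * (abs(e) + T + 1)

def extract(e_marked, T):
    """Eqs. 28-29: recover the bit (None if the error carried no payload)
    and restore the original prediction error."""
    if abs(e_marked) <= 2 * T + 1:
        bit = abs(e_marked) - 2 * (abs(e_marked) // 2)   # LSB of |e'|
        return bit, sign(e_marked) * (abs(e_marked) // 2)
    return None, sign(e_marked) * (abs(e_marked) - T - 1)

# Payload-carrying and shifted-only errors both survive the round trip:
assert extract(embed(2, T=2, bit=1), T=2) == (1, 2)
assert extract(embed(-3, T=2), T=2) == (None, -3)
```

The two branches never collide: payload-carrying errors land in [−(2T + 1), 2T + 1], while shifted-only errors land strictly outside it, so extraction can always tell them apart.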

    3.3 Handling of Overflow and Underflow

An overflow or underflow is said to have occurred if the watermarked pixel P_wm(i, j), as obtained in Eq. 24, is such that P_wm(i, j) ∉ [0, 255]. The underflow condition is P_wm(i, j) < 0, and the overflow condition is P_wm(i, j) > 255. In the embedding phase, we do not embed the watermark into such pixels, to avoid overflow and underflow.

In the extraction phase, we first find out which of the pixels cause overflow or underflow. These pixels indicate two types of possibilities:


Algorithm 2. EXTRACT WATERMARK

/* Extract the watermark bits from the prediction errors */
Input: Color watermarked image of size M × N pixels in YCoCg-R color space (I_wm), Embedding Threshold (T)
Output: Retrieved cover image (I_ret), Watermark (W)

1: for color channels P ∈ {Co, Cg, Y} in order do
2:   for i = 1 to M do
3:     for j = 1 to N do
4:       if P_wm(i, j) is not a base pixel then
5:         P̂_wm(i, j) ← Predict_weighted mean(P_wm(i, j))
6:         Compute prediction error e_Pwm(i, j) ← P_wm(i, j) − P̂_wm(i, j)
7:         if e_Pwm(i, j) ≥ 0 then
8:           sign(e_Pwm(i, j)) ← 1
9:         else
10:          sign(e_Pwm(i, j)) ← −1
11:        end if
12:        if |e_Pwm(i, j)| ≤ (2T + 1) then
13:          (Next bit of W) ← |e_Pwm(i, j)| − 2 × ⌊|e_Pwm(i, j)|/2⌋
14:          e′_Pwm(i, j) ← sign(e_Pwm(i, j)) × ⌊|e_Pwm(i, j)|/2⌋
15:        else
16:          e′_Pwm(i, j) ← sign(e_Pwm(i, j)) × [|e_Pwm(i, j)| − T − 1]
17:        end if
18:        P_ret(i, j) ← P̂_wm(i, j) + e′_Pwm(i, j)
19:      else
20:        P_ret(i, j) ← P_wm(i, j)
21:      end if
22:    end for
23:  end for
24: end for
25: Obtain original cover image I_ret in YCoCg-R color space by combining the Y_ret, Co_ret and Cg_ret color components

Fig. 6. Test images used in our experiments: (a) Bird; (b) Cap; (c) Cycle; (d) House; (e) Sea; and (f) Nature.

1. During embedding, the pixel caused overflow or underflow, and hence was not used for embedding.
2. Previously the pixel did not cause overflow or underflow, hence a watermark bit was embedded. However, after watermark embedding the pixel causes overflow or underflow.


To correctly distinguish which of the two cases has occurred, a binary bit stream called a location map is generally used [9,10]. We assign '0' for the first case and '1' for the second case in the location map. If neither case occurs, the location map remains empty. During extraction, if a pixel with overflow or underflow is encountered, we check the next location map bit. If the bit is '0', we do not use the corresponding pixel for extraction and it remains unchanged. On the other hand, if the bit is '1', we use the corresponding pixel for extraction using Algorithm 2. The size of the location map is generally small, and it can be reduced further using lossless compression. The compressed location map is then inserted into the LSBs of the base pixels, starting from the last base pixel. The original base pixel LSBs are concatenated at the beginning of the watermark and embedded into the cover image, before being replaced with the location map bits.
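One possible reading of this per-pixel bookkeeping is sketched below with hypothetical helpers: `is_suspicious` flags a watermarked pixel that may correspond to either case, and `extract_fn` applies the inverse shift of Algorithm 2. Neither name comes from the paper.

```python
def process_pixel(p_wm, location_map, is_suspicious, extract_fn):
    """Decide whether a suspicious watermarked pixel takes part in extraction.
    location_map is consumed front-to-back, one bit per suspicious pixel."""
    if not is_suspicious(p_wm):
        return extract_fn(p_wm)       # ordinary pixel: always extract
    flag = location_map.pop(0)        # consume the next location-map bit
    if flag == 0:
        return p_wm                   # case 1: never embedded, keep as-is
    return extract_fn(p_wm)           # case 2: embedded, undo the shift
```

The point of the map is only to break the ambiguity between the two cases; every non-suspicious pixel is processed without consuming any map bits.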

    4 Results and Discussion

The proposed algorithm was implemented in MATLAB and tested on several images from the Kodak Image Database [15]: Bird, Cap, Cycle, House, Sea and Nature, as shown in Fig. 6. The performance of the proposed scheme is measured with respect to the following:

1. Maximum embedding capacity, and
2. distortion of the watermarked image with respect to the original cover image.

Maximum embedding capacity is estimated as the number of pure watermark bits that can be embedded into the original cover image. To make the comparison independent of the size of the cover image, we normalize the embedding capacity with respect to the image size, and report it as the average number of bits that can be embedded per pixel, measured in units of bits-per-pixel (bpp).

Fig. 7. Comparison of embedding capacity in different color spaces for several test images.

Fig. 8. Distortion characteristics of test images: (a) Bird; (b) Cap; (c) Cycle; (d) House; (e) Sea; and (f) Nature.

Distortion of the watermarked image is estimated by the "Peak-Signal-to-Noise-Ratio" (PSNR), which is defined as:

$$PSNR = 10 \log_{10} \frac{MAX^2}{MSE} \; \mathrm{dB} \quad (31)$$

where MAX represents the maximum possible pixel value. The "Mean Square Error" (MSE) for color images is defined as:

$$MSE = \frac{1}{3 \cdot M \cdot N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ (R(i,j) - R'(i,j))^2 + (G(i,j) - G'(i,j))^2 + (B(i,j) - B'(i,j))^2 \right] \quad (32)$$

where R(i, j), G(i, j) and B(i, j) represent the red, green and blue color component values at location (i, j) of the original cover image; R'(i, j), G'(i, j) and B'(i, j) represent the corresponding color component values of the watermarked image, and the color image is of size M × N.

The results of watermarking in the YCoCg-R color space using the proposed algorithm, and those obtained by watermarking using the same prediction-based histogram-bin-shifting scheme in the RGB, RCT [14] and O1O2O3 [13] color space representations, are compared for the test images in Fig. 7. The comparison clearly demonstrates that the embedding capacity is higher in the YCoCg-R color space representation than in the RGB, RCT and O1O2O3 color spaces.

Distortion characteristics (i.e., variation of PSNR vs. embedded bpp) for several test images are shown in Fig. 8. Note that the maximum bpp value attempted for each color space corresponds to its embedding capacity. The plots also suggest that the distortion of the images with increasing amount of embedded watermark bits is the least for the YCoCg-R color space representation in most cases. Since no other color space representation reaches the embedding capacity of the YCoCg-R representation, overall we can conclude that the YCoCg-R color space is the best choice for reversible watermarking of color images. This observation was found to hold for most of the images in the Kodak image database [15].

    5 Conclusions

In this paper we have proposed a novel reversible watermarking scheme for color images using histogram-bin-shifting of prediction errors in the YCoCg-R color space. We used a weighted-mean-based prediction scheme to predict the pixel values, and watermark bits were embedded by histogram-bin-shifting of the prediction errors in each color channel of the YCoCg-R color space. The motivations for the choice of the YCoCg-R color space over other color space representations were justified through detailed theoretical arguments and experimental results for several standard test images. Our future work will be directed towards exploring other color space representations, and comparing watermarking performance among them through theoretical and empirical techniques.

    References

1. Cox, I.J., Miller, M.L., Bloom, J.A., Fridrich, J., Kalker, T.: Digital Watermarking and Steganography. Morgan Kaufmann Publishers, San Francisco (2008)
2. Tian, J.: Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 13(8), 890–896 (2003)
3. Ni, Z., Shi, Y.-Q., Ansari, N., Su, W.: Reversible data hiding. IEEE Trans. Circuits Syst. Video Technol. 16(3), 354–362 (2006)
4. Alattar, A.M.: Reversible watermark using the difference expansion of a generalized integer transform. IEEE Trans. Image Process. 13(8), 1147–1156 (2004)
5. Alattar, A.M.: Reversible watermark using difference expansion of triplets. In: Proceedings of International Conference on Image Processing, vol. 1 (2003)
6. Alattar, A.M.: Reversible watermark using difference expansion of quads. In: Proceedings of International Conference on Acoustics, Speech, and Signal Processing, vol. 3 (2004)
7. Li, J., Li, X., Yang, B.: A new PEE-based reversible watermarking algorithm for color image. In: Proceedings of International Conference on Image Processing (2012)


8. Malvar, H.S., Sullivan, G.J., Srinivasan, S.: Lifting-based reversible color transformations for image compression. In: Proceedings of Optical Engineering and Applications (2008)
9. Naskar, R., Chakraborty, R.S.: Histogram-bin-shifting-based reversible watermarking for colour images. IET Image Process. 7(2), 99–110 (2013)
10. Naskar, R., Chakraborty, R.S.: Fuzzy inference rule based reversible watermarking for digital images. In: Venkatakrishnan, V., Goswami, D. (eds.) ICISS 2012. LNCS, vol. 7671, pp. 149–163. Springer, Heidelberg (2012)
11. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, Hoboken (2012)
12. Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science. McGraw-Hill, New York (2000)
13. Nakachi, T., Fujii, T., Suzuki, J.: Lossless and near-lossless compression of still color images. In: Proceedings of International Conference on Image Processing, vol. 1. IEEE (1999)
14. Acharya, T., Tsai, P.-S.: JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley-Interscience, New York (2004)
15. Kodak lossless true color image suite. http://r0k.us/graphics/kodak/