Blind Image Watermark Detection Algorithm based on Discrete Shearlet Transform Using Statistical Decision Theory
Ahmaderaghi, B., Kurugollu, F., Martinez del Rincon, J., & Bouridane, A. (2018). Blind Image Watermark Detection Algorithm based on Discrete Shearlet Transform Using Statistical Decision Theory. IEEE Transactions on Computational Imaging, 4(1), 46-59. https://doi.org/10.1109/TCI.2018.2794065
Published in: IEEE Transactions on Computational Imaging
Document Version: Peer reviewed version
Queen's University Belfast - Research Portal: Link to publication record in Queen's University Belfast Research Portal
Blind Image Watermark Detection Algorithm based on Discrete Shearlet Transform Using Statistical Decision Theory
Baharak Ahmaderaghi, Fatih Kurugollu, Senior Member, IEEE, Jesus Martinez Del Rincon and Ahmed Bouridane, Senior Member, IEEE
Abstract—Blind watermarking targets the challenging recovery of the watermark when the host is not available during the detection stage. This paper proposes the Discrete Shearlet Transform (DST) as a new embedding domain for blind image watermarking. Our novel DST blind watermark detection system uses a non-additive scheme based on statistical decision theory. It first computes the probability density function (PDF) of the DST coefficients modelled as a Laplacian distribution. The resulting likelihood ratio is compared with a decision threshold calculated using the Neyman-Pearson criterion to minimise the missed detection probability subject to a fixed false alarm probability. Our method is evaluated in terms of imperceptibility, robustness and payload against different attacks (Gaussian noise, blurring, cropping, compression and rotation) using 30 standard grayscale images covering different characteristics (smooth, more complex with many edges, and highly detailed textured regions). The proposed method shows greater windowing flexibility and more sensitivity to directional and anisotropic features when compared against Discrete Wavelet and Contourlet transforms.
Index Terms—Digital image watermarking, Frequency domain, Discrete Shearlet Transform (DST), Discrete Wavelet Transform (DWT), Contourlet Transform (CT), Laplacian distribution.
I. INTRODUCTION
In the current globally-connected society, where access to and distribution of digital multimedia files are ubiquitous and pervasive, opportunities to pirate copyrighted files are permanently on the rise. As a consequence, finding protection methods to block or detect any unauthorized access and to keep data transmission safe and secure has become one of the most important challenges of the past decades. Digital watermarking is one method that has been developed to protect ownership of data and digital content and to enable transaction tracking, so that illegal use, modification and distribution of the content can be detected. In this regard, the purpose of digital watermarking is to embed or hide some invisible additional information, called a watermark, into another signal such as an image, audio or video, known as a host or cover, where the visual quality of the embedded host signal should not be significantly degraded. To be effective, watermark detection and extraction should be possible after applying a variety of manipulations and attacks while meeting some criteria in terms of imperceptibility, robustness, security and payload, which are often interdependent.
In general terms, the imperceptibility of a watermark refers to the perceptual similarity between the original and watermarked versions of the host data. This is important so as to keep the degradation of host quality to a minimum, so that no obvious difference in fidelity between the original and watermarked hosts can be noticed [1]. Robustness is a measure of the watermarking method's resistance against different types of attacks that occur in digital signal processing, for instance compression, additive noise, etc. [1]. Payload refers to the total amount of information that can be hidden within the digital media [2]. The purpose of increasing watermarking payload is to find how to transmit more information while satisfying both watermarking robustness and imperceptibility requirements [3]. In particular, the most challenging issue is how to address the trade-off between robustness and imperceptibility, since enhancing robustness necessarily implies increasing the watermark strength and therefore produces a loss of transparency [4]. Finding such an optimized solution still remains a challenge within the watermarking community.
This paper describes a new framework for robust watermarking of image content, since digital images constitute a major component of digital multimedia files. A watermarking system can be divided into two main processes: embedding and extracting. Current watermarking techniques are broadly classified according to the embedding domain: spatial and transform domains. Although spatial domain based methods are easy to implement, such techniques suffer from some disadvantages, including failure to achieve good robustness against various attacks. For instance, in [5], since the watermark information is embedded in the least significant bits, the effects of simple manipulations like lossy compression, adding noise and filtering are severe and impair the detection of the watermark.
In contrast, imperceptibility and robustness requirements against a variety of attacks can be achieved more efficiently in watermarking systems based on various transform domains, since the watermarking information is spread out over the entire host image [4]. In this regard, watermarking algorithms based on different transform domains such as the DFT (Discrete Fourier Transform) [6], DCT (Discrete Cosine Transform) [7], DWT (Discrete Wavelet Transform) [8], Contourlet Transform [9] and others have been proposed [10]. O'Ruanaidh et al. [11] first proposed the use of the DFT phase for watermarking. In their method, the watermark is embedded in the most significant frequency components of an image, where only the DFT phase is used for embedding. Extraction is carried out using a statistical model. Zou et al. [12] developed a watermarking method based on combining the DFT and the Hough transform, which results in a more robust system that can endure severe attacks such as printing-scanning, scaling and rotation. However, as their main drawback, DFT based schemes suffer against cropping attacks, and the watermark cannot survive aspect ratio changes, since these changes significantly affect the frequency content of the image.
The DCT was first applied to watermarking by Koch and Zhao [4]. During the embedding process, some of the host image regions are selected randomly to embed the watermark. These regions are transformed using the DCT and then some medium frequency coefficients are modified. In their seminal paper, Cox et al. [13] proposed a spread spectrum based embedding algorithm selecting the most perceptually significant features, represented by the DCT coefficients of the given image. In this algorithm, a Gaussian watermark sequence is embedded into the 1000 highest magnitude DCT coefficients, while low frequency regions around the upper-left corner are not used, to preserve invisibility. On the other hand, a combination of the DCT and singular value decomposition (SVD) was proposed for watermarking [14] in order to increase the imperceptibility while obtaining the highest possible robustness. In this method SVD is applied so that the singular values of the watermark are embedded into the DCT coefficients of the original image. The authors argue that better imperceptibility can be achieved by embedding only the singular values of the watermark into the original image. Moreover, better robustness can be obtained by embedding the highest singular values, carrying the highest energy of the watermark, into the DC components of the original image. However, the main drawbacks of DCT-based watermarking techniques relate to shortcomings in robustness against high compression levels and poor performance under de-synchronization based attacks such as geometric distortions.
DWT based schemes were proposed in order to overcome some of the drawbacks of DCT- and DFT-based systems by using multi-resolution techniques. A few watermarking schemes were also proposed based on combining DCT and DWT in order to provide better performance against some attacks [15]. Other works have been carried out to further develop DWT-based watermarking methods. In [16] SVD was applied to the watermark and original image coefficients in all the frequency bands of the DWT. During the embedding stage, the original image was first decomposed into 4 sub-bands using the DWT, and then the SVD was applied on each band by modifying their singular values.
In [17] the coefficients of the original image are quantized in the wavelet domain and the binary watermark is embedded into wavelet-blocks obtained by grouping four coefficients from different sub-bands at corresponding coordinates. The method has shown promising results against various types of attack, including geometric and non-geometric attacks. In spite of the success of the DWT and its different variants, such as the dual tree complex wavelet transform (DTCWT) [18] and the non-redundant complex wavelet transform (NRCWT) [19], multi-resolution transforms based on the DWT suffer from limited directionality in their filtering structure [19].
Images to be watermarked usually contain sharp transitions between objects in the scene, such as lines, edges and corners, or textural regions. These structures are formed in multiple fine-grained directions and orientations. The coefficients in DWT based transforms cannot accurately represent these structures because of their limited directionality. Although DTCWT-based methods exhibit relevant advantages in this regard in comparison with the previous transform domains, by having improved directionality with more orientations and approximate shift invariance, it is difficult to design them with perfect reconstruction properties and good filter characteristics to resolve line-like edge discontinuities across curves (curve singularities) and geometrical smoothness issues [18].
To overcome this limitation, a variety of transforms such as Ridgelets, Curvelets [20] and the Contourlet transform (CT) [21] have been deployed to provide a better framework for capturing the directionality and geometry of the scene using multiresolution decomposition. However, unlike the DWT, the construction of curvelets and ridgelets is not associated with a multiresolution analysis. This and other issues make the discrete implementation of curvelets very challenging, as claimed in [22]; therefore two different implementations of it have been suggested [20], [23]. In an attempt to provide a better discrete implementation, the Contourlet transform was developed as an improvement over wavelets, curvelets and ridgelets [21].
Zaboli and Moin [24] proposed a CT based watermarking method using human visual system characteristics. In their method, the host image is first decomposed using the CT into four levels. In order to add the watermark, a binary logo is scrambled through a well-known PN sequence, which enhances system security and provides a random distribution over the original image. More recent research is based on a combination of SVD and CT [25], where the eigenvalues of a QR watermark matrix are embedded into the eigenvalues of the original image's coefficients in the Contourlet domain. This method has shown improved robustness against various types of attacks such as scaling, compression and filtering. Moreover, it has better imperceptibility when compared with other Contourlet based watermarking techniques. Although the Contourlet aims to better capture the directionality of image features, it is still insufficient and causes visual artifacts in the host image, which is not a desirable property in applications such as watermarking [23].
Watermarking techniques can also be classified based on the usage of the original image during the extraction process. If the original image is required during the extraction procedure, the technique is called non-blind watermarking [13], whereas a technique is called blind if it works under the assumption that the original image will not be available at extraction. In this paper, the main focus is on blind digital image watermarking. In a blind scheme the watermark extraction can be obtained by applying statistical methods. Cheng and Huang [26] pointed out that the watermark detection problem can be viewed as a statistical hypothesis testing problem. Therefore, this type of detection requires a suitable model for the probability distribution function (pdf) of the host image. Barni et al. [27] applied the Weibull pdf in order to model the magnitude of a set of full-frame discrete Fourier transform coefficients. The DWT coefficients have been modelled using Generalized Gaussian (GG) [28] or Laplacian pdfs [29].
In this paper we propose a new transform domain, using the Discrete Shearlet Transform (DST), for the problem of blind image watermarking. The DST shows promising results in image processing applications such as edge detection [22] and image denoising [30] in comparison with other transforms such as the DWT and CT, both visually and with respect to PSNR. This leads us to conclude that its directional properties have potential in watermarking. As previously explained, complex structures present in images, such as curves, edges and textural regions, are not easy to capture. The Shearlet transform has the ability to capture image features more precisely. For example, edges can be more accurately captured due to the efficient multi-resolution filter, which produces more specific directional localization for a higher number of directional components. This means that some features that might remain undetected at one resolution can be spotted at another resolution. This can potentially increase the data embedding capacity for watermarking while preserving the imperceptibility requirements and providing higher robustness. This can be achieved by embedding more information in the edges of the image, as the human visual system is less sensitive to changes near edges. The DST offers several advantages for watermarking problems, namely: (i) it captures directional features more precisely, and (ii) it has no restrictions on the number of directions and no constraints on the size of the supports in its filter structure, in comparison with previous transforms [31]. This leads to better watermark adaptation to the host image under consideration. By taking into account these advantages, we explore the usage of the DST for image watermarking in order to achieve high levels of imperceptibility and robustness while still increasing payload.
In our earlier works, we already proposed the DST as the transform domain for a non-blind watermarking framework based on spread spectrum [32] and a further refinement of it using perceptual models based on the human visual system [33]. While these earlier works showed the potential of the DST for watermarking, they were both limited by their non-blind nature, requiring the original image during extraction. To overcome this limitation, this paper proposes a new framework for blind watermarking using the DST.
The novel contributions of this work can be summarized as follows:
• Novel use of the DST for blind watermarking applications. To the best of the authors' knowledge, this transform has not been used in watermarking applications before. The only exceptions are its use in our conference papers [32], for a basic non-blind watermarking framework as part of our preliminary work, and [33], for a basic non-blind perceptual watermarking model.
• A fully new framework for blind watermarking using the DST. This method is derived based on statistical decision theory, Bayes decision theory, the Neyman-Pearson criterion, and the distribution of the DST coefficients in the case of grayscale images.
• The pdf of the DST coefficients is estimated as a Laplacian distribution. This approach is evaluated against different attacks using a variety of images (30 images) having different image content and characteristics.
The rest of the paper is organized as follows: Section II provides a brief description of the Discrete Shearlet Transform. The proposed watermarking system is described in Section III. Section IV covers the implementation details of the proposed method, where results and comparative evaluations against different attacks are given. Finally, Section V concludes the paper.
II. BACKGROUND: THE DISCRETE SHEARLET TRANSFORM
The Shearlet transform is an affine function containing a single mother Shearlet function that is parameterized by scaling, shear and translation parameters, with the shear parameter capturing the direction of the singularities [31]. An important advantage of this transform over other transforms is that there are no restrictions on the number of directions for the shearing. There are also no constraints on the size of the supports for the shearing, unlike, for instance, directional filter banks [22], where using a small window size would result in a performance loss. Therefore, the Shearlet transform is designed to deal with directional and anisotropic features, typically present in images, and has the ability to effectively capture the geometric information of edges.
The Shearlet transform is implemented by applying a Laplacian pyramid scheme and directional filtering [22]. Shearlets are formed by dilating, shearing and translating the mother function ψ ∈ L²(R²) [34]. The Discrete Shearlet transform is obtained by sampling the continuous Shearlet transform on a discrete subset of the Shearlet group S, associated with an orthonormal basis for L²(R²) [31]. The Discrete Shearlet transform (DST) for a mother function ψ is defined as:

SH(ψ) = { ψ_{j,k,l} = 2^{3j/2} ψ(B^k A^j x − l) : j, k ∈ Z, l ∈ Z² }    (1)

where j, k, l are the scale, orientation and location indexes and

A = ( 4 0 ; 0 2 ),   B = ( 1 1 ; 0 1 )

are the dilation matrix and the shear matrix, respectively. For a given image f (N_rows × N_columns), the Discrete Shearlet transform can be expressed as [34]:
⟨f, ψ^d_{j,l,m}⟩ = 2^{3j/2} ∫_{R²} f̂(ξ) V(2^{−2j} ξ) w^d_{j,l}(ξ) e^{2πi ξ A_d^{−j} B_d^{−l} m} dξ    (2)
where

V(ξ₁, ξ₂) = ψ̂₁(ξ₁) X_{D₀}(ξ₁, ξ₂) + ψ̂₁(ξ₂) X_{D₁}(ξ₁, ξ₂)    (3)

and X_D denotes the indicator function of the set D; D₀ and D₁ are the horizontal and vertical trapezoids, respectively; d ∈ {0, 1}; ξ = (ξ₁, ξ₂) ∈ R²; j ≥ 0; l = −2^j, ..., 2^j − 1 indexes the orientations within each pair of trapezoids; w^d_{j,l}(ξ) is a window function localized on a pair of trapezoids; and V defines the pseudo-polar coordinates. The trapezoids are given by:

D₀ = { (ξ₁, ξ₂) ∈ R² : |ξ₁| ≥ 1/8, |ξ₂/ξ₁| ≤ 1 }
D₁ = { (ξ₁, ξ₂) ∈ R² : |ξ₂| ≥ 1/8, |ξ₁/ξ₂| ≤ 1 }    (4)
Thus the Shearlet coefficients can be obtained as

X = 2^{−3j/2} ∫∫ g_j(u, v) w(2^j v − l) exp(2πi(n₁ ξ₁ / 4^j + n₂ ξ₂ / 2^j)) dξ₁ dξ₂    (5)

where g_j(u, v) w(2^j v − l) = f̂(ξ) V(2^{−2j} ξ) w^d_{j,l}(ξ) are the discrete samples on a pseudo-polar grid, W is a window function localized on a pair of trapezoids, g_j(n₁, n₂) are the values of the DFT on the pseudo-polar grid, n₁ and n₂ are finite sequences of values for a given N_rows × N_columns image [31], and (u, v) ∈ R² are the pseudo-polar coordinates defined as follows:

(u, v) = (ξ₁, ξ₂/ξ₁) if (ξ₁, ξ₂) ∈ D₀
(u, v) = (ξ₂, ξ₁/ξ₂) if (ξ₁, ξ₂) ∈ D₁    (6)
In other words, the DST applies filtering to a given image using the Laplacian pyramid algorithm [35], which is implemented in the spatial domain. This is accomplished in the multiscale partition by decomposing an image into a low-pass and a high-pass filtered image and then downsampling the result by 4. In order to extract the frequency components of the input image, directional localization for the different directional components is obtained by translating a window function W. Depending on the chosen shearing filter size, the first level decomposition generates 4 or 8 sub-bands. An illustration of the frequency-domain Shearlet support for 4 scales is shown in Figure 1. Figure 2 shows the structure of the orientations corresponding to each DST sub-band and the corresponding coefficients for an example image. It is worth noticing that the different sub-images have the same size; however, for illustrative purposes they are shown with different sizes in Figure 2(b).
Fig. 1. Frequency support of the basis functions corresponding to the Shearlet fourth level decomposition with 16 directional orientations.
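As a concrete illustration of the multiscale stage described above, the following sketch performs one Laplacian-pyramid split of an image into a low-pass band and a high-pass residual. This is a minimal stand-in, assuming a simple binomial blur rather than the paper's actual filter bank, and it omits the directional (shearing) filtering entirely; the function name is our own.

```python
import numpy as np

def laplacian_pyramid_level(image, downsample=2):
    """One level of a Laplacian-pyramid split: a low-pass band plus a
    high-pass residual. A simplified stand-in for the multiscale stage
    of the DST (binomial blur instead of the paper's filters)."""
    # Separable 5-tap binomial low-pass kernel.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    highpass = image - blurred                     # detail band (edges, texture)
    lowpass = blurred[::downsample, ::downsample]  # coarse band, downsampled
    return lowpass, highpass
```

In a full shearlet implementation the high-pass band would then be split further by the directional windows of equation (3).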
III. DST-BASED BLIND WATERMARKING
In relation to its application to image watermarking, the DST's ability to better represent directional features, as claimed in [36], may allow watermark embedding to adapt to the directional features in the host image more efficiently. In this section, a new DST-based framework for blind watermarking is developed in order to explore the possible improvements in DST performance against signal processing, geometric and compression based attacks. In addition, the proposed new blind watermark detection scheme for DST coefficients is optimal for non-additive schemes relying on statistical decision theory.
A. Digital Image Statistical Watermark Detection Based on the Discrete Shearlet Transform Domain
Non-blind watermarking systems, such as the one proposed in [32], are limited in their field of application, since they require access to the host image during the detection process. However, this is not always possible for some applications such as image authentication [4]. As an alternative, blind watermarking targets the recovery of the watermark when the host (in this case an image) is not available during the detection stage. This makes blind watermarking systems more complicated, but more practical, since the original image is not required at the receiver side. In order to reconstruct the watermark, blind schemes assume that original and watermarked coefficients are strongly correlated [37]. Under this assumption, the watermark detection problem can be viewed as a statistical hypothesis testing problem [37]. Thus, the statistical behaviour of the noisy transformed coefficients can be used to derive a decision rule which decides whether a candidate watermark is actually embedded in the data (hypothesis H1) or not (hypothesis H0). In this section a new blind watermark detection scheme for DST coefficients is proposed, optimal for non-additive schemes relying on statistical decision theory. The proposed method is derived according to the Bayes
Fig. 2. (a) Original grayscale image Lena. (b) An illustration of the DST transform coefficients (the coefficients are multiplied by 30 to enhance the contrast for the sake of visualization). (c) The angles covered by the DST sub-bands.
decision theory, the Neyman-Pearson criterion, which is used to minimize the missed detection probability subject to a fixed false alarm probability (PFA) [38], and the probability density function (pdf) of the DST coefficients.
B. DST coefficient probability distribution function
In order to apply decision theory and derive the optimum behaviour of the ML (maximum-likelihood) detector, a suitable model for the probability distribution function (PDF) of the DST coefficients is required as a first step. We have estimated the PDF of the DST coefficients for thirty images (see the experimental section for more details on the data) for all five resolutions and 49 sub-bands.
Fig. 3. PDF of DST transformed coefficients for the Lena image. Graphs represent the coefficient pdfs corresponding to all 8 sub-bands at the second resolution.
It can be noticed, as shown in Figure 3, that the statistical model of the Shearlet coefficients approximates a Laplacian distribution; therefore this model was chosen. The Laplacian distribution is defined as follows:

f(χ) = (λ/2) exp(−λ|χ|)    (7)
The Laplacian is symmetrical about zero, and it can be readily matched to the sample DST distribution by finding the appropriate value of the parameter λ. It is also worth noticing that the statistical model of the Shearlet coefficients can be modelled as a Normal Inverse Gaussian (NIG) distribution [39]. However, in our case, this distribution is not the best choice due to its high computational cost, caused by the complexity of its mathematical structure (it contains four parameters that need to be estimated simultaneously), which makes it difficult to apply the central limit theorem [40]. This is required in our blind watermarking framework in order to calculate PFA. Therefore the Laplacian distribution was chosen as the most suitable model. An example is shown in Figure 4, where the DST coefficient distribution, averaged over all thirty images and all the fourth level sub-bands, is illustrated and compared with Laplacian, Gaussian and NIG approximations.
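The λ-matching step described above can be sketched as follows: for the zero-mean Laplacian of equation (7), the maximum-likelihood estimate of λ is the reciprocal of the mean absolute coefficient value. A minimal sketch; the function names are our own, not from the paper.

```python
import numpy as np

def fit_laplacian(coeffs):
    """Maximum-likelihood estimate of the rate parameter lambda for the
    zero-mean Laplacian model f(x) = (lambda/2) * exp(-lambda*|x|).
    For this model the ML estimate is 1 / mean(|x|)."""
    coeffs = np.asarray(coeffs, dtype=float)
    return 1.0 / np.mean(np.abs(coeffs))

def laplacian_pdf(x, lam):
    """Evaluate the fitted Laplacian density of equation (7)."""
    return 0.5 * lam * np.exp(-lam * np.abs(x))
```

In practice one would fit λ per sub-band, since the coefficient spread differs across scales and orientations.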
In order to validate the previous findings, the similarity between the real DST coefficient distribution and the hypothetical distribution models using NIG, Laplacian and Gaussian was estimated using the Relative Entropy (Kullback-Leibler divergence). The Relative Entropy D measures how well our hypothetical distribution Q fits the observed real distribution P of the DST coefficients; a smaller value of D implies greater similarity between the two distributions:

D(P ‖ Q) = Σ_χ P(χ) log( P(χ) / Q(χ) )    (8)
The D values obtained were 12, 17 and 25 for the NIG, Laplacian and Gaussian distributions, respectively. These results confirm that the NIG is the nearest distribution to the real one, while the Laplacian distribution remains a good approximation to the NIG model.
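The relative-entropy comparison can be reproduced along the following lines, binning the observed coefficients and evaluating D against each candidate model; the bin count and range below are our own illustrative choices, not the paper's.

```python
import numpy as np

def relative_entropy(samples, model_pdf, bins=101, lo=-10.0, hi=10.0):
    """Kullback-Leibler divergence D(P || Q) between the empirical
    histogram P of the coefficients and a candidate model Q,
    D = sum_i P(i) * log(P(i) / Q(i)); smaller D means a closer fit."""
    p, edges = np.histogram(samples, bins=bins, range=(lo, hi))
    p = p / p.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    q = model_pdf(centers)
    q = q / q.sum()          # normalize the model over the same bins
    mask = p > 0             # 0 * log 0 is taken as 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Evaluated on coefficients that really are Laplacian, the Laplacian model yields a smaller D than a Gaussian of matched variance, mirroring the ranking reported above.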
C. Hypothesis Testing Problem and Formulation
Given an image I, the aim is to verify whether the image I contains the watermark W∗ (chosen from the sequence of possible watermarks W) or not. By applying statistical detection theory, the following hypotheses are under consideration [40]:
Hypothesis H0:
• Case 1: The DST coefficients, Y , do not contain anywatermark.
• Case 2: The DST coefficients, Y, contain a watermark other than W∗. For notation purposes, we will denote that the DST coefficients Y contain a watermark w0, where w0 is another random watermark selected from a set W of watermarks different from W∗.
Hypothesis H1:
• The DST coefficients, Y, contain the watermark W ∗.
The embedding rule adopted in this paper is multiplicative (non-additive) embedding, due to its adaptation to the frequency domain and the fact that it fulfils invisibility constraints, thus increasing system security [41]:
yi = xi(1 + αw∗i ) (9)
where x = (x1, ..., xN) is the sequence of original DST coefficients of image I, w∗ = (w∗1, ..., w∗N) is the watermark sequence, uniformly distributed in [−1, 1], α is a gain factor controlling the watermark strength, and y = (y1, ..., yN) is the sequence of watermarked DST coefficients of the watermarked image I′. Relying on decision theory, the observation variables are the vector Y of possibly marked coefficients. The likelihood ratio of these coefficients being watermarked, l(Y), is obtained as:
l(Y) = fy(Y | w∗) / fy(Y | w0) ≶ T    (10)
where fy(y|w) is the pdf of the vector Y conditioned on w and T is the decision threshold. Note that, for Hypothesis H0, Case 1 and Case 2 can be treated together under the assumption that w0 is allowed to include the null sequence.
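The multiplicative rule of equation (9) translates directly into code. In the sketch below, the seed-based watermark generator is our own illustrative detail (the paper does not specify how the sequence is keyed):

```python
import numpy as np

def embed_multiplicative(x, w, alpha=0.1):
    """Multiplicative embedding rule of equation (9): y_i = x_i * (1 + alpha * w_i),
    so the watermark strength scales with the magnitude of each DST coefficient."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return x * (1.0 + alpha * w)

def make_watermark(n, seed):
    """Watermark sequence uniformly distributed in [-1, 1]; here the seed
    plays the role of the key identifying the watermark W*."""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, n)
```

Because the perturbation is proportional to |x_i|, large coefficients (edges, texture) absorb most of the watermark energy, which is exactly the perceptual masking argument made in the text.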
Since this paper deals with image watermarking, the following assumptions are made for the sake of mathematical tractability.
Lemma 1: The components of Y are independent of each other and Y satisfies fy(Y | w0) > 0. Considering Hypotheses H0 and H1 and the embedding rule of equation (9), it can be shown that:

H0, Case 1: yi = xi    (11)

H0, Case 2: yi = xi(1 + αw0i) ⇒ xi = yi / (1 + αw0i)    (12)
To further calculate the likelihood ratio, the pdf of the DST coefficients is required. Assuming the previously justified Laplacian distribution as the pdf of the DST coefficients:

f(xi) = (√2 / (2σi)) exp(−(√2 / σi) |xi − µi|)    (13)

which is equivalent to the following expression when using √2 / σi = λ:

f(xi) = (λ/2) exp(−λ|xi − µi|)    (14)
where µi and σi² are the mean and variance of the sub-band to which the coefficients belong.
Lemma 2: Barni and Bartolini [40] formulated that, under the assumption of an imperceptible watermark, i.e. when the embedding strength is set to be much smaller than one (α ≪ 1), then:

P(y | w) ≈ P(y | 0)    (15)
In this case the integral is very small and centered at yi, so the component can be linearly approximated using Taylor's theorem. By applying the previous change of notation and Lemma 2, l(y) is defined as follows:
l(y) = [ ∏_{i=1}^{N} (λ/2) exp(−λ |yi/(1 + αw∗i) − µxi|) ] / [ ∏_{i=1}^{N} (λ/2) exp(−λ |yi − µxi|) ]    (16)

T2 = (1/2)^N T    (17)

The detector decides H1 if ln l(y) > ln T2.
The detector decides H0 if ln l(y) < ln T2.
Fig. 4. Distribution of DST transformed coefficients for all thirty images in all fourth level sub-bands, fitted with NIG, Laplacian and Gaussian distribution curves.
By simplifying, with gi = |yi − µxi| − |1 + αw∗i|⁻¹ |yi − µxi − µxi αw∗i|, the decision rule is obtained as follows:

Z(y) = Σ_{i=1}^{N} gi > T3    (18)
D. Decision Threshold
By analysing the decision rule obtained in the previous section, it can be seen that the detector operates by comparing the likelihood ratio against a detection threshold:

T = p0(l | H0) / p1(l | H1)    (19)
where p0 and p1 are the prior probabilities of hypotheses H0 and H1, respectively. In a desirable system, the threshold should be set to minimize the overall error probability Pe. This can be achieved by setting the missed detection probability Pm (failure to detect the presence of the watermark in an image that contains one) and the false alarm probability PFA (detection of the watermark in an image that does not actually contain one) to be equal. However, in the case of an attack, the threshold selected to minimize the error probability Pe will not be suitable, since the missed detection probability Pm becomes higher than the false alarm probability PFA. In order to address this issue, the Neyman-Pearson criterion can be used to obtain the threshold T in such a way that the missed detection probability is minimized, subject to a fixed false alarm probability [38].
Denoting by D = (H1 | R = H0) the event of deciding H1 when H0 is true, the false alarm probability is:

PFA = P(D) = P(Z(y) > T | w0) = P(Z(y) > T) = ∫_T^∞ f_{Z(x)}(z) dz    (20)
where

Z(x) = Z(y)|_{y=x} = Σ_{i=1}^{N} (√2/σi) ( |xi − µxi| − |1 + αw∗i|⁻¹ |xi − µxi − µxi αw∗i| )    (21)
By applying the central limit theorem, the PDF of Z(x) can be assumed to be a normal distribution [38] with mean µz and variance σz².
where Q is the Q-function, or tail probability, of the standard normal distribution of Z(x):

PFA = Q((T − µz)/σz) ⇒ Q⁻¹(PFA) = (T − µz)/σz    (26)
Finally, the threshold will be obtained as below:
T = σz Q⁻¹(PFA) + µz    (27)
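Equation (27) translates directly into code; here Q⁻¹ is obtained from the standard normal inverse CDF via Q⁻¹(p) = Φ⁻¹(1 − p), using only the Python standard library. The function name is our own.

```python
from statistics import NormalDist

def np_threshold(mu_z, sigma_z, p_fa):
    """Neyman-Pearson threshold of equation (27): T = sigma_z * Q^{-1}(P_FA) + mu_z,
    where Q^{-1} is the inverse tail (survival) function of the standard normal.
    mu_z and sigma_z are the mean and standard deviation of Z(x) under H0."""
    q_inv = NormalDist().inv_cdf(1.0 - p_fa)  # Q^{-1}(p) = Phi^{-1}(1 - p)
    return sigma_z * q_inv + mu_z
```

As expected, demanding a smaller false alarm probability pushes the threshold higher, trading missed detections for fewer false alarms.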
The embedding and detecting framework proposed for our blind watermarking system is depicted in Figure 5. During the embedding process [upper block], the original N×N image is first decomposed into 5 levels using the Discrete Shearlet Transform (DST); then the watermark, a sequence of N random real numbers uniformly distributed in the range [−1, 1], is generated and embedded into the original image I. Once the watermark is embedded into the Discrete Shearlet coefficients, the image is recomposed to create the watermarked image I′. The watermarked image is then passed through the attack channel [lower block], where some distortions are applied in order to remove the watermark. This produces the attacked image I′′, which is then passed to
Fig. 5. Proposed watermarking system. The upper block describes the watermarking process while the lower block depicts the detection process.
the detection stage. It is important to remember that in this blind scheme, the original image is not available during detection. Instead, a statistical model is used during the decision stage, calculated directly from the watermarked and possibly attacked image.
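The embedding step of this pipeline can be illustrated with a minimal multiplicative-embedding sketch. This is a hedged illustration: `coeffs` stands in for the selected DST sub-band coefficients (an actual Shearlet decomposition is not shown), and the function and parameter names are ours:

```python
# Sketch: multiplicative embedding of a uniform [-1, 1] watermark
# into a flat list of transform coefficients, y_i = x_i * (1 + alpha * w_i).
import random

def embed(coeffs, alpha=0.2, seed=42):
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in coeffs]        # watermark sequence
    marked = [x * (1.0 + alpha * wi) for x, wi in zip(coeffs, w)]
    return marked, w
```

With a strength parameter alpha much less than 1, the marked coefficients stay close to the originals, which is the approximation the blind detector relies on.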
IV. PERFORMANCE EVALUATION
To verify the effectiveness of the proposed algorithm, a series of experiments were conducted.
A. Dataset
In our experiments, thirty well-known 512 × 512 grayscale images were used as host images. A set of standard test images frequently used in the literature was selected from a wide range of image processing databases [42] to represent different image features (Figure 6). Some of these images are smooth with a lack of detailed features, others are more complex with many edges and some textured regions, and the rest contain high-detail textured regions. This set is selected following [10], [19].
B. Blind Watermarking
In this section, the performance of the blind statistical detector described in Section III.B is tested on the thirty standard greyscale 512 × 512 images (Figure 6). The original image is not available during the detection stage; instead, a statistical model is used during the decision stage, calculated directly from the watermarked and possibly corrupted images. Each image is transformed using DST, and the watermark consists of a sequence of random real numbers uniformly distributed in the range [-1, 1]. The watermark is embedded in the most significant coefficients of the host image at the 5th level of resolution of the DST decomposition, across its sub-bands. The
Fig. 6. Set of images used for embedding watermark
watermark detection is performed in the transform domain using maximum-likelihood detection, whereby the decision threshold is calculated using the Neyman-Pearson criterion. In order to investigate the performance of our proposed method, the results from all the different methods are compared under the same conditions. DWT coefficients were selected from the 3rd level of resolution, as suggested by [43]. CT coefficients were selected from the sub-band at the 3rd level of resolution, as suggested in [3], to optimise imperceptibility and robustness. Three performance aspects were taken into account during this analysis: the imperceptibility of the watermark, using Peak Signal-to-Noise Ratio (PSNR), Root Mean Squared Error (RMSE) and Structural Similarity (SSIM) as fidelity measurements; the probability of false alarm and the probability of missed detection; and the robustness of the watermark against a number of commonly used attacks. In particular, SSIM measures the quality of an image using an initial distortion-free image as reference. SSIM is designed to improve on traditional methods such as PSNR and MSE, which have been proved to be inconsistent with human visual perception [44]. The resulting SSIM index is a decimal value between -1 and 1, where 1 is only reachable in the case of two identical sets of data.
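For reference, the pixel-domain fidelity measures can be sketched as below for images flattened to equal-length lists of pixel values (SSIM is omitted, since it requires local window statistics and is typically taken from an imaging library):

```python
# Sketch: RMSE and PSNR fidelity measures for 8-bit images
# represented as flat, equal-length lists of pixel values.
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)
```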
By comparing the results using SSIM (Figure 7) and RMSE (Figure 8) as metrics, it can be concluded that the proposed DST-based algorithm has better imperceptibility, reflected in a smaller RMSE (indicating that the watermarked image is close to the original on a pixel-by-pixel basis) and a higher SSIM, where values closer to 1 indicate that the watermarked image is more similar to the original. Among the reasons for this improved imperceptibility, we can cite the smaller sizes of the shearing filters (eq. 2) in comparison with the directional filters used by DWT and CT [22], and the greater windowing flexibility (eqs. 3, 4) claimed in [22], which makes it possible to incorporate sub-sampling and to provide additional directional information. In other words, by choosing smaller filters we can represent edges more precisely, and the greater windowing flexibility allows a variety of alternative implementations. This is more noticeable when considering how each transform reacts to different image characteristics. For example, DST is better adapted to images with many edges and textured regions (Barbara). For images with smooth areas and a lack of detailed features (Bunny), DST still adapts better than CT and DWT. DST also adapts very well to images containing mostly high-detail textured regions, such as Baboon.

Fig. 7. SSIM distortions between original and watermarked for all images

Fig. 8. Average RMSE distortions for all images
1) Robustness: To investigate the effects of attacks on the blind watermarking algorithm, different tests were carried out to evaluate its performance. The results are compared against equivalent DWT and CT blind watermarking schemes, as it was shown that the DWT and CT coefficient distributions can also be expressed using a Laplacian model [43]. In order to ensure a fair comparison, given that every method has a different imperceptibility/robustness balance, all the methods were tuned to provide an approximately 43 dB PSNR value before the attack [43]. In this regard, the alpha value is set to 0.25 for DWT, 0.2 for CT and 0.2 for DST. During blind detection, the parameters of the proposed model are estimated directly from the DST, CT and DWT coefficients of the watermarked image, under the assumption that the watermarked image is close to the original one when the strength parameter α is much less than 1 (α << 1). It is to be noted that, in practice, our chosen strength parameter values of 0.2 and 0.25 are acceptable [43] under this approximation while providing acceptable levels of robustness. The embedding was performed in all the coefficients obtained from the 4th level of decomposition for DST and the 3rd level for DWT and CT, in order to provide the best resolution and therefore the largest payload that each method allows.
Table I shows the false alarm rate (FA), which counts watermarks that were detected when no watermark was actually embedded, and the missed detection rate (MD), embedded watermarks that were not detected. The results were obtained based on the number of images in which each of these errors occurred. These results were computed by detecting a watermark w chosen from a set of 100 randomly generated watermarks in each image (based on 3000 trials) for the same distribution model, with PFA = 10−3. PFA is normally set between 10−3 and 10−12, depending on the attack and application [43].
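The trial procedure above can be emulated with a small Monte Carlo harness. Here `detect` is a placeholder for any boolean detector (e.g. comparing the statistic Z against the threshold T), and all function and parameter names are illustrative, not the authors':

```python
# Sketch: Monte Carlo estimate of false-alarm and missed-detection rates.
# `detect(coeffs, w) -> bool` is a placeholder detector supplied by the caller.
import random

def fa_md_rates(detect, coeffs_unmarked, coeffs_marked, w_true,
                n_trials=100, seed=0):
    rng = random.Random(seed)
    # False alarms: random watermarks tested against an unmarked image.
    fa = sum(
        detect(coeffs_unmarked, [rng.uniform(-1, 1) for _ in w_true])
        for _ in range(n_trials)
    )
    # Missed detection: the true watermark tested against the marked image.
    md = 0 if detect(coeffs_marked, w_true) else 1
    return fa / n_trials, md
```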
TABLE I
FA AND MD FOR THE SAME VALUE OF PFA USING LAPLACIAN MODEL FOR DST, CT AND DWT TRANSFORMS BASED ON 30 IMAGES AND 3000 TRIALS
The effect of five attacks, including Additive White Gaussian Noise (AWGN), Compression, Blurring, Cropping and Rotation, is tested on all 30 watermarked images. For each attack, the detector responses were related to the actual embedded watermark. Tables II-VI contain the number of successful detections for the most commonly used attacks on each individual watermarked image, as well as the global average and the average false alarm and missed detection rates. It is worth noting that these results were obtained based on 3000 trials.
In the first attack, Gaussian noise with zero mean and standard deviations 0.01, 0.05 and 0.09 is added to the watermarked image. Larger values than 0.09 are possible, but they distort the image so severely that it is no longer of value to the attacker. From the experimental results presented in Table II, it is found that DST provides strong robustness against the AWGN attack, consistently better than CT and DWT.
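As an illustrative sketch of this attack on an image normalised to [0, 1] (function and parameter names are ours, not the authors'):

```python
# Sketch: AWGN attack on a watermarked image normalised to [0, 1],
# using the standard deviations from the experiments (0.01, 0.05, 0.09).
import random

def awgn_attack(pixels, sigma, seed=0):
    rng = random.Random(seed)
    # Add zero-mean Gaussian noise and clip back to the valid range.
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]
```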
In the second attack, a Gaussian low-pass filter with a 3×3 spatial kernel and standard deviations of 0.3, 0.5 and 0.8 is applied to the watermarked image to analyse the effect of blurring. From the experimental results presented in Table III, it is found that DST also performs better against blurring attacks when compared with its DWT and CT counterparts.

TABLE II
NUMBER OF SUCCESSFUL DETECTIONS (TRUE POSITIVES) AND AVERAGE FA AND MD FOR ALL 30 IMAGES AFTER APPLYING GAUSSIAN NOISE WITH STANDARD DEVIATIONS 0.01, 0.05 AND 0.09 FOR DWT, CT AND DST FOR 3000 TRIALS
TABLE III
NUMBER OF SUCCESSFUL DETECTIONS (TRUE POSITIVES) AND AVERAGE FA AND MD FOR ALL 30 IMAGES AFTER APPLYING BLURRING ATTACK WITH STANDARD DEVIATIONS 0.3, 0.5 AND 0.8 FOR DWT, CT AND DST FOR 3000 TRIALS
In the third attack, the watermarked images are cropped by cutting off 25%, 50% and 75% of a random part of the image. To extract the watermark, the missing part(s) of the image are replaced with the corresponding parts of the original non-watermarked image. The results are shown in Table IV. From these experimental results, it is found that DST provides good robustness against the cropping attack in comparison with DWT and CT.
TABLE IV
NUMBER OF SUCCESSFUL DETECTIONS (TRUE POSITIVES) AND AVERAGE FA AND MD FOR ALL 30 IMAGES AFTER APPLYING CROPPING ATTACK BY CUTTING OFF 25%, 50% AND 75% FOR DWT, CT AND DST FOR 3000 TRIALS
In the fourth attack, the watermarked image is JPEG-compressed to output qualities of 50%, 70% and 90% of the original images, with no smoothing applied. According to Table V, it can be concluded that DST performs very well against JPEG compression in comparison with DWT, but under severe compression CT provides slightly better results than DST.
Finally, the watermarked image is slightly rotated, by 1, 2, 5 and 7 degrees in a counter-clockwise direction, and then cropped to discard areas of the image that contain less useful information, such as the black areas resulting from the rotation. According to Table VI, it can be concluded that DST provides very good robustness against rotation attacks in comparison with DWT and CT. More precisely, this is due to DST's improved ability to capture more directions and its shift-invariant structure, which allows the Shearlet transform to represent the image more efficiently.
Based on the results described above, it can be concluded that the proposed DST blind watermarking method provides better robustness than DWT- and CT-based watermarking techniques using the same statistical model (Laplacian). This is due to the fact that DST has greater windowing flexibility that can be utilized to capture image characteristics such as curves and edges. This is more noticeable when considering how each transform reacts to different image characteristics. For example, DST is better adapted to images with many edges and textured regions (Barbara). For images with smooth areas and a lack of detailed features (Bunny), DST still adapts better than DWT and CT. DST also adapts very well to images containing mostly high-detail textured regions, such as Baboon.

TABLE V
NUMBER OF SUCCESSFUL DETECTIONS (TRUE POSITIVES) AND AVERAGE FA AND MD FOR ALL 30 IMAGES AFTER APPLYING JPEG ATTACK USING QUALITY OF 50%, 70% AND 90% FOR DWT, CT AND DST FOR 3000 TRIALS

TABLE VI
NUMBER OF SUCCESSFUL DETECTIONS (TRUE POSITIVES) AND AVERAGE FA AND MD FOR ALL 30 IMAGES AFTER APPLYING ROTATION ATTACK 1, 2, 5 AND 7 DEGREES FOR DWT, CT AND DST FOR 3000 TRIALS
V. CONCLUSION

In this paper we have proposed a novel framework based on the Discrete Shearlet Transform for blind image watermarking. This idea is justified by its structure and its potential to provide a higher payload and better imperceptibility. A blind system framework was implemented to test the suitability of DST for watermarking based on decision theory. This system presents theoretical novelties in the filter structure and the probabilistic model in order to allow DST to be integrated. As a main advantage, this blind watermarking method does not require the transmission of the original clean image. To achieve this, the distribution of the Discrete Shearlet Transform coefficients for different sub-bands and resolutions was investigated, and the PDF of the DST coefficients is modeled using a Laplacian distribution. This model has proved to be effective and simple, allowing the corresponding mathematical description of the full framework. Finally, a maximum-likelihood detection scheme based on Laplacian modelling of the DST coefficients is implemented under a hypothesis-testing formulation, using detection rules based on the Neyman-Pearson criterion, in order to improve the robustness as well as to adapt the watermark strength to the host image by considering visual sensitivity. The proposed method is less sensitive to fine parameter tuning in comparison with non-blind methods [33], i.e. parameters can remain unchanged even under different attacks, and the original image is not required during the detection stage. From the experimental results, it is found that DST-based embedding provides good imperceptibility and an improved payload, as predicted. In terms of robustness, the results demonstrate superior robustness against common image processing manipulations compared to DWT and CT, most obviously for compression, noise and rotation attacks.
REFERENCES
[1] A. Al-Gindy, A. M. Zorrilla, and B. Beyrouti, “DCT watermarking technique using image normalization,” in Developments of E-Systems Engineering (DeSE), 2015 International Conference on. IEEE, 2015, pp. 145–149.
[2] C. Namratha and S. Kareemulla, “Multi image watermarking using Lagrangian support vector regression,” in Recent Trends in Electronics, Information & Communication Technology (RTEICT), IEEE International Conference on. IEEE, 2016, pp. 513–516.
[3] H. Sadreazami, M. O. Ahmad, and M. Swamy, “Multiplicative watermark decoder in contourlet domain using the normal inverse Gaussian distribution,” IEEE Transactions on Multimedia, vol. 18, no. 2, pp. 196–207, 2016.
[4] I. Cox, M. Miller, J. Bloom, J. Fridrich, and T. Kalker, Digital Watermarking and Steganography. Morgan Kaufmann, 2007.
[5] T. Chen and H. Lu, “Robust spatial LSB watermarking of color images against JPEG compression,” in Advanced Computational Intelligence (ICACI), 2012 IEEE Fifth International Conference on. IEEE, 2012, pp. 872–875.
[6] C.-M. Pun, “A novel DFT-based digital watermarking system for images,” in 2006 8th International Conference on Signal Processing, vol. 2. IEEE, 2006.
[7] T. K. Das, S. Maitra, and J. Zhou, “Cryptanalysis of Chu's DCT based watermarking scheme,” IEEE Transactions on Multimedia, vol. 8, no. 3, pp. 629–632, 2006.
[8] G. Tianming and W. Yanjie, “DWT-based digital image watermarking algorithm,” in Electronic Measurement & Instruments (ICEMI), 2011 10th International Conference on, vol. 3. IEEE, 2011, pp. 163–166.
[9] S. R. Chalamala, K. R. Kakkirala, and R. G. B. Mallikarjuna, “Analysis of wavelet and contourlet transform based image watermarking techniques,” in Advance Computing Conference (IACC), 2014 IEEE International. IEEE, 2014, pp. 1122–1126.
[10] L. Ghouti, A. Bouridane, M. K. Ibrahim, and S. Boussakta, “Digital image watermarking using balanced multiwavelets,” IEEE Transactions on Signal Processing, vol. 54, no. 4, pp. 1519–1536, 2006.
[11] J. Ruanaidh, W. Dowling, and F. M. Boland, “Phase watermarking of digital images,” in Image Processing, 1996. Proceedings., International Conference on, vol. 3. IEEE, 1996, pp. 239–242.
[12] J. Zou, X. Yang, and S. Niu, “A novel robust watermarking method for certificates based on DFT and Hough transforms,” in Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2010 Sixth International Conference on. IEEE, 2010, pp. 438–441.
[13] I. J. Cox, J. Kilian, F. T. Leighton, and T. Shamoon, “Secure spread spectrum watermarking for multimedia,” IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1673–1687, 1997.
[14] F. Huang and Z.-H. Guan, “A hybrid SVD-DCT watermarking method based on LPSNR,” Pattern Recognition Letters, vol. 25, no. 15, pp. 1769–1775, 2004.
[15] W. Huai-bin, Y. Hong-liang, W. Chun-dong, and W. Shao-ming, “A new watermarking algorithm based on DCT and DWT fusion,” in Electrical and Control Engineering (ICECE), 2010 International Conference on. IEEE, 2010, pp. 2614–2617.
[16] A. Furqan and M. Kumar, “Study and analysis of robust DWT-SVD domain based digital image watermarking technique using MATLAB,” in Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on. IEEE, 2015, pp. 638–644.
[17] T. Huynh-The, S. Lee, P.-C. Hieu, and T. Le-Tien, “A DWT-based image watermarking approach using quantization on filtered blocks,” in 2014 International Conference on Advanced Technologies for Communications (ATC 2014). IEEE, 2014, pp. 280–285.
[18] I. W. Selesnick, R. G. Baraniuk, and N. C. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Processing Magazine, vol. 22, no. 6, pp. 123–151, 2005.
[19] A. I. Thompson, A. Bouridane, F. Kurugollu, and C. Tanougast, “Watermarking for multimedia security using complex wavelets,” Journal of Multimedia, vol. 5, no. 5, pp. 443–457, 2010.
[20] J.-L. Starck, E. J. Candes, and D. L. Donoho, “The curvelet transform for image denoising,” IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
[21] M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
[22] G. Easley, D. Labate, and W.-Q. Lim, “Sparse directional image representations using the discrete shearlet transform,” Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.
[23] H. Shan, J. Ma, and H. Yang, “Comparisons of wavelets, contourlets and curvelets in seismic denoising,” Journal of Applied Geophysics, vol. 69, no. 2, pp. 103–115, 2009.
[24] S. Zaboli and M. S. Moin, “CEW: A non-blind adaptive image watermarking approach based on entropy in contourlet domain,” in 2007 IEEE International Symposium on Industrial Electronics. IEEE, 2007, pp. 1687–1692.
[25] P. Mitra, R. Gunjan, and M. S. Gaur, “A multi-resolution watermarking based on contourlet transform using SVD and QR decomposition,” in Recent Advances in Computing and Software Systems (RACSS), 2012 International Conference on. IEEE, 2012, pp. 135–140.
[26] Q. Cheng and T. S. Huang, “Blind digital watermarking for images and videos and performance analysis,” in Multimedia and Expo, 2000. ICME 2000. 2000 IEEE International Conference on, vol. 1. IEEE, 2000, pp. 389–392.
[27] M. Barni, F. Bartolini, A. De Rosa, and A. Piva, “A new decoder for the optimum recovery of nonadditive watermarks,” IEEE Transactions on Image Processing, vol. 10, no. 5, pp. 755–766, 2001.
[28] T. M. Ng and H. K. Garg, “Wavelet domain watermarking using maximum-likelihood detection,” in Electronic Imaging 2004. International Society for Optics and Photonics, 2004, pp. 816–826.
[29] T. Ng and H. Garg, “Maximum-likelihood detection in DWT domain image watermarking using Laplacian modeling,” IEEE Signal Processing Letters, vol. 12, no. 4, pp. 285–288, 2005.
[30] W.-Q. Lim, “The discrete shearlet transform: A new directional transform and compactly supported shearlet frames,” IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1166–1180, 2010.
[31] G. Kutyniok et al., Shearlets: Multiscale Analysis for Multivariate Data. Springer Science & Business Media, 2012.
[32] B. Ahmederahgi, F. Kurugollu, P. Milligan, and A. Bouridane, “Spread spectrum image watermarking based on the discrete shearlet transform,” in Visual Information Processing (EUVIP), 2013 4th European Workshop on. IEEE, 2013, pp. 178–183.
[33] B. Ahmaderaghi, J. M. Del Rincon, F. Kurugollu, and A. Bouridane, “Perceptual watermarking for discrete shearlet transform,” in Visual Information Processing (EUVIP), 2014 5th European Workshop on. IEEE, 2014, pp. 1–6.
[35] Y. Teng, F. Liu, and R. Wu, “The research of image detail enhancement algorithm with Laplacian pyramid,” in Green Computing and Communications (GreenCom), 2013 IEEE and Internet of Things (iThings/CPSCom), IEEE International Conference on and IEEE Cyber, Physical and Social Computing. IEEE, 2013, pp. 2205–2209.
[36] S. Yi, D. Labate, G. R. Easley, and H. Krim, “A shearlet approach to edge analysis and detection,” IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 929–941, 2009.
[37] Y. Chen and J. Chen, “A novel blind watermarking scheme based on neural networks for image,” in Information Theory and Information Security (ICITIS), 2010 IEEE International Conference on. IEEE, 2010, pp. 548–552.
[38] T. S. Ferguson, Mathematical Statistics: A Decision Theoretic Approach. Academic Press, 2014, vol. 1.
[39] O. E. Barndorff-Nielsen, “Normal inverse Gaussian distributions and stochastic volatility modelling,” Scandinavian Journal of Statistics, vol. 24, no. 1, pp. 1–13, 1997.
[40] M. Barni and F. Bartolini, Watermarking Systems Engineering: Enabling Digital Assets Security and Other Applications. CRC Press, 2004.
[41] Q. Cheng and T. S. Huang, “Robust optimum detection of transform domain multiplicative watermarks,” IEEE Transactions on Signal Processing, vol. 51, no. 4, pp. 906–924, 2003.
[42] R. Gonzalez, R. Woods, and S. Eddins, “Image Database,” http://www.imageprocessingplace.com/root_files_V3/image_databases.htm, 2004, [Online].
[43] K. Zebbiche, F. Khelifi, and A. Bouridane, “Maximum-likelihood watermarking detection on fingerprint images,” in Bio-inspired, Learning, and Intelligent Systems for Security, 2007. BLISS 2007. ECSIS Symposium on. IEEE, 2007, pp. 15–18.
[44] Y. A. Al-Najjar, D. C. Soong et al., “Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI,” International Journal of Scientific & Engineering Research, vol. 3, no. 8, p. 1, 2012.