
Optical Engineering 45(3), 037002 (March 2006)

Robust image watermarking using local invariant features

Hae-Yeoun Lee
Korea Advanced Institute of Science and Technology
Department of Electrical Engineering and Computer Science
373-1 Guseong-dong, Yuseong-gu
Daejeon, Republic of Korea
E-mail: [email protected]

Hyungshin Kim
Chungnam National University
Department of Computer Science and Engineering
220 Gung-dong, Yuseong-gu
Daejeon, Republic of Korea

Heung-Kyu Lee
Korea Advanced Institute of Science and Technology
Department of Electrical Engineering and Computer Science
373-1 Guseong-dong, Yuseong-gu
Daejeon, Republic of Korea

Abstract. This paper addresses a novel robust watermarking method for digital images using local invariant features. Most previous watermarking algorithms are unable to resist geometric distortions that desynchronize the location where copyright information is inserted. We propose a watermarking method that is robust to geometric distortions. In order to resist geometric distortions, we use a local invariant feature of the image called the scale-invariant feature transform (SIFT), which is invariant to translation and scaling distortions. The watermark is inserted into the circular patches generated by the SIFT. Rotation invariance is achieved using the translation property of the polar-mapped circular patches. Our method is a blind watermarking scheme, because we do not need the original image during detection. We have performed an intensive simulation to show the robustness of the proposed method. The simulation results support the contention that our method is robust against geometric distortion attacks as well as signal-processing attacks. We have compared our results with those of other methods, and our method outperforms them. © 2006 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.2181887]

Subject terms: robust watermarking; watermark synchronization; local invariant features.

Paper 040984R received Dec. 22, 2004; revised manuscript received Jul. 4, 2005; accepted for publication Jul. 21, 2005; published online Mar. 20, 2006.


1 Introduction

Following developments in computer technology that have made digital imaging techniques widely available, a variety of multimedia contents have been digitized. Digital multimedia have many advantages over analog media: they can be easily accessed, copied, and distributed multiple times without degradation of quality. However, the widespread use of digital multimedia has also brought with it problems regarding the preservation of copyright.

Digital watermarking is an efficient solution for copyright protection: it inserts copyright information, the watermark, into the contents themselves. Ownership of the contents can be established by retrieving the inserted watermark. Various attacks have been reported to be effective against watermarking methods.1 Among them, geometric distortion is known as one of the most difficult attacks to resist. Geometric distortion desynchronizes the location of the watermark and hence causes incorrect watermark detection. In such cases, a watermark synchronization process that calculates the watermark location before watermark insertion and detection is required and is crucial for the robustness of the watermarking system.

A significant amount of research related to watermark synchronization has been conducted. The use of a periodic sequence,2 the use of templates,3 and the use of an invariant transform4,5 have been reported.


The use of media contents is another solution for watermark synchronization,6 and our method belongs to this approach. Media contents represent an invariant reference for geometric distortions, so that referring to the content can solve the problem of watermark synchronization; i.e., the location of the watermark is not related to image coordinates, but to image semantics.7 In what follows, we refer to the location for watermark insertion and detection as the patch.

Bas et al.6 proposed a content-based synchronization method, in which they first extract salient feature points and then decompose the image into a set of disjoint triangles through Delaunay tessellation. The sets of triangles (the patches) are used to insert and detect the watermark in the spatial domain. The drawback of this method is that the feature points extracted from the original image and from distorted images do not match. Therefore, the sets of triangles generated during watermark insertion and detection are different, and the resulting patches do not match.

Nikolaidis and Pitas8 proposed an image-segmentation-based synchronization method. By applying an adaptive k-means clustering technique, they segment images, select the several largest regions, and fit those regions as ellipsoids. The bounding rectangles of the ellipsoids are used as the patches. The problem with this method is that the image segmentation depends on the image contents, so that image distortions severely affect the segmentation results.

Tang and Hang9 introduced a synchronization method


that uses intensity-based feature extraction and image normalization. They extract feature points by Mexican hat wavelet scale interaction, and the disks of fixed radius centered at each feature point are normalized. These normalized disks are used as the patches. However, the normalization is sensitive to the image contents used, so the robustness of these patches will decrease when the image is distorted.

The selection of features is important for robust watermarking in content-based synchronization methods. We believe that local image characteristics are more useful than global ones. The scale-invariant feature transform (SIFT) extracts features by considering local image properties and is invariant to rotation, scaling, translation, and partial illumination changes.10 In this paper, we propose a watermarking method, using the SIFT, that is robust to geometric distortions. Using the SIFT, we generate circular patches that are invariant to translation and scaling distortions. The watermark is inserted into the circular patches in an additive way in the spatial domain. Rotation invariance is achieved using the translation property of the polar-mapped circular patches. We have performed an intensive simulation to show the robustness of the proposed method with 100 test images. The simulation results confirm that our method is robust against geometric distortion attacks as well as signal-processing attacks.

This paper is organized as follows. In Sec. 2, we explain the SIFT and how to extract circular patches using it. In Sec. 3, our watermarking method is described along with an analysis of the detector performance. Simulation results are shown in Sec. 4. Section 5 concludes.

2 Local Invariant Features

Affine-invariant local features have recently been studied for object recognition and image retrieval applications.10–12

These local invariant features are highly distinctive and can be matched with high probability even under large image distortions. In content-based watermark synchronization, the robust extraction of patches is directly related to the robustness of the watermarking system, and the consideration of local image characteristics aids the reliable extraction of patches. We propose a new synchronization method based on the SIFT.

2.1 The SIFT

The SIFT was proposed by Lowe10 and has proved to be invariant to image rotation, scaling, translation, partial illumination changes, and projective transformations. Considering local image characteristics, the SIFT descriptor extracts features and their properties, such as the location (t1, t2), the scale s, and the orientation θ.

The basic idea of the SIFT is to extract features through a staged filtering that identifies stable points in the scale space: (1) select candidates for features by searching for peaks in the scale space of the difference-of-Gaussians (DoG) function, (2) localize each feature using measures of its stability, and (3) assign orientations based on local image gradient directions.

In order to extract candidate locations for features, the scale space D(x, y, σ) is computed using a DoG function.

As shown in Fig. 1, they successively smooth the original image with a variable-scale (σ1, σ2, and σ3) Gaussian function and calculate the scale-space images by subtracting two successive smoothed images. The parameter σ is the variance (called a scale) of the Gaussian function. The scale of each scale-space image is determined by the nearby scale (σ1, σ2, or σ3) of the Gaussian-smoothed image. In these scale-space images, they retrieve all local maxima and minima by checking the closest eight neighbors in the same scale and nine neighbors in the scales above and below. These extrema determine the location (t1, t2) and the scale s of the SIFT features, which are invariant to the scale and orientation change of images. In our experiment, to generate the scale-space images, we apply scales of the Gaussian function from 1.0 to 32.0 and increase the scale by multiplying by a constant factor √2.
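To make the staged filtering concrete, here is a minimal sketch of the DoG scale-space construction and extrema search described above, assuming NumPy/SciPy. The scale range and the √2 factor follow the text; the contrast floor and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def dog_extrema(image, sigma_min=1.0, sigma_max=32.0, factor=np.sqrt(2)):
    """Return (y, x, scale) candidates that are local extrema of the DoG stack."""
    image = image.astype(np.float64)
    sigmas, s = [], sigma_min
    while s <= sigma_max:
        sigmas.append(s)
        s *= factor                                   # scales grow by a constant factor sqrt(2)
    smoothed = [ndimage.gaussian_filter(image, sig) for sig in sigmas]
    # DoG images: difference of two successively smoothed images.
    dog = np.stack([smoothed[i + 1] - smoothed[i] for i in range(len(smoothed) - 1)])
    # A pixel is a candidate if it is the maximum or minimum of its 3x3x3 neighborhood,
    # i.e., 8 neighbors in its own DoG image and 9 in the scales above and below.
    maxima = dog == ndimage.maximum_filter(dog, size=(3, 3, 3))
    minima = dog == ndimage.minimum_filter(dog, size=(3, 3, 3))
    extrema = (maxima | minima) & (np.abs(dog) > 0.01)   # small contrast floor (assumption)
    candidates = []
    for k, y, x in zip(*np.nonzero(extrema)):
        if 0 < k < dog.shape[0] - 1:                  # skip the boundary scales
            candidates.append((int(y), int(x), sigmas[k]))
    return candidates
```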

After candidate locations have been found, a detailed model is fitted by a 3-D quadratic function to determine accurately the location (t1, t2) and scale s of each feature. In addition, candidate locations that have a low contrast or are poorly localized along edges are removed by measuring the stability of each feature using a 2-by-2 Hessian matrix H as follows:

$$\text{stability} = \frac{(D_{xx} + D_{yy})^2}{D_{xx}D_{yy} - D_{xy}^2} < \frac{(e+1)^2}{e}, \qquad \text{where } H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}. \tag{1}$$

Here e is the ratio of the largest to the smallest eigenvalue and is used to control stability. They use e = 10. The quantities Dxx, Dxy, and Dyy are the derivatives of the scale-space images.
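The edge-rejection test of Eq. (1) can be sketched as follows; the Hessian entries are estimated here with finite differences on a single DoG image, which is an assumption, while the threshold with e = 10 follows the text.

```python
def is_stable(dog, y, x, e=10.0):
    """Eq. (1) at pixel (y, x) of one DoG image; (y, x) must not lie on the image border."""
    Dxx = dog[y, x + 1] - 2.0 * dog[y, x] + dog[y, x - 1]
    Dyy = dog[y + 1, x] - 2.0 * dog[y, x] + dog[y - 1, x]
    Dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0
    trace = Dxx + Dyy
    det = Dxx * Dyy - Dxy ** 2
    if det <= 0:                                  # curvatures of opposite sign: reject
        return False
    return trace ** 2 / det < (e + 1.0) ** 2 / e  # keep only well-localized candidates
```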

In order to achieve invariance to image rotation, they assign a consistent orientation to each feature. In the Gaussian-smoothed image with the scale of the extracted SIFT feature, they calculate the gradient orientations of all sample points within a circular window about the feature location and form an orientation histogram. The peak in this histogram corresponds to the dominant direction of that feature.

Fig. 1 Scale space by using the difference-of-Gaussians function and neighbors of a pixel.

2.2 Modifications for Watermarking

In this section, we describe how to formulate circular patches for watermark insertion and detection using this descriptor.


The local features from the SIFT descriptor are not directly applicable to watermarking, because the number and distribution of the features depend on the image contents and textures. Moreover, the SIFT descriptor was originally devised for image-matching applications, so it extracts many features that are densely distributed over the whole image. Therefore, we adjust the number, distribution, and scale of the features and remove those features that are susceptible to watermark attacks.

The SIFT descriptor extracts features with such properties as their location (t1, t2), scale s, and orientation θ. In practice, the orientation properties computed from the original image and from distorted images do not match precisely. Hence, we build a circular patch by using only the location (t1, t2) and scale s of each extracted SIFT feature, as follows:

$$(x - t_1)^2 + (y - t_2)^2 = (ks)^2, \tag{2}$$

where k is a magnification factor that controls the radius of the circular patches. The way in which this factor is determined is explained in the fourth paragraph of this subsection. These patches are invariant to image scaling and translation as well as to spatial modifications. By applying a prefilter, such as a Gaussian filter, before feature extraction, we can reduce the interference of noise and increase the robustness of the extracted circular patches.

The scale of features derived from the SIFT descriptor is related to the scaling factor of the Gaussian function in the scale space. In our analysis, features whose scale is small have a low probability of being redetected, because they disappear easily when image contents are modified. Features whose scale is large also have a low probability of being redetected in distorted images, because they move easily to other locations. Moreover, using large-scale features means that patches will overlap each other, which will degrade the perceptual quality of watermarked images. Therefore, we remove features whose scale is below a or above b. In our experiments, we set a and b at 2.0 and 10.0, respectively.

The SIFT descriptor considers image contents, so extracted SIFT features have different values of the scale s, depending on the image contents. Watermark insertion and detection necessarily require interpolation to transform the rectangular watermark to match the shape of the patches, and vice versa. In order to minimize the distortion of the watermark through interpolation, the size (radius) of the patches must be similar to, or larger than, the size of the watermark. The scale s of the extracted SIFT features varies from 2.0 to 10.0. Therefore, we divide the scale of features into two ranges and apply different magnification factors k1 and k2, which are determined empirically on the assumption that the size of the watermarked images will not be changed excessively. Although features whose size is near the boundary of a range may be susceptible to scaling attacks, there are a number of circular patches in an image, so the effect of these features on the watermarking is small.
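As a rough illustration of how the adjusted SIFT features become the circular patches of Eq. (2), the sketch below filters keypoints by scale and applies one of two magnification factors. The keypoint format and the values of k1, k2, and the split point are placeholders, since the paper states only that the factors were chosen empirically.

```python
def keypoints_to_patches(keypoints, a=2.0, b=10.0, split=6.0, k1=12.0, k2=6.0):
    """keypoints: iterable of (t1, t2, s); returns (t1, t2, radius) circular patches."""
    patches = []
    for t1, t2, s in keypoints:
        if s < a or s > b:            # small- and large-scale features are unreliable: drop
            continue
        k = k1 if s < split else k2   # two scale ranges, two magnification factors (Eq. 2)
        patches.append((t1, t2, k * s))
    return patches
```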

The distribution of local features is related to the performance of watermarking systems. In other words, the distance between adjacent features must be determined carefully. If the distance is small, patches will overlap over large areas, and if the distance is large, the number of patches will not be sufficient for the effective insertion of the watermark. To control the distribution of extracted features, we apply a circular neighborhood constraint similar to that used by Bas et al.,6 in which the features whose strength is the largest are used to generate circular patches. The value from the DoG function is used to measure the strength of each feature. The distance D between adjacent features depends on the dimensions of the image and is quantized by the r value as follows:

$$D = \frac{w + h}{r}. \tag{3}$$

The width and height of the image are denoted by w and h, respectively. The r value is a constant that controls the distance between adjacent features and is set at 16 and 32 in the insertion and detection processes, respectively.
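The circular neighborhood constraint built around Eq. (3) can be sketched as a greedy selection: keep the strongest features (by their DoG value) and discard any feature that lies within the distance D of one already kept. The tuple layout and the greedy strategy are assumptions.

```python
import math

def enforce_min_distance(patches, width, height, r=16):
    """patches: list of (t1, t2, radius, strength); returns a subset at least D apart."""
    D = (width + height) / float(r)                       # Eq. (3)
    kept = []
    for t1, t2, rad, strength in sorted(patches, key=lambda p: p[3], reverse=True):
        if all(math.hypot(t1 - q[0], t2 - q[1]) >= D for q in kept):
            kept.append((t1, t2, rad, strength))
    return kept
```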

Figure 2 shows a circular patch from our proposed synchronization method under spatial filtering, additive uniform noise, rotation, and scaling of the image. For convenience of identification, we show only one patch and find that the patch is formulated robustly, even when the image is distorted.

3 Watermarking Scheme

This section describes our watermarking scheme. We extract circular patches by the synchronization method described in Sec. 2. The 2-D watermark is generated and transformed into circular form for each patch. The circular watermarks are added to the patches. We first describe the watermark generation procedure and then explain watermark insertion and detection.

Fig. 2 Circular patch from our proposed method in (a) the original image, (b) the mean-filtered image, (c) the median-filtered image, (d) the additive-uniform-noise image, (e) the 10-deg-rotated image, and (f) the 1.2× scaled image.

3.1 Watermark Generation

We generate a 2-D rectangular watermark that follows a Gaussian distribution, using a random number generator. To be inserted into circular patches, this watermark should be transformed so that its shape is circular. We consider the rectangular watermark to be a polar-mapped watermark and inversely polar-map it to assign the insertion location within the circular patches.


In this way, a rotation attack is mapped as a translation of the rectangular watermark, and the watermark can still be detected using the correlation detector. Note that the size of the circular patches differs, so we should generate a separate circular watermark for each patch.

Let the x and y dimensions of the rectangular watermark be denoted by M and N, respectively. Let r be the radius (size) of a circular patch. As shown in Fig. 3, we divide a circular patch into homocentric regions. To generate the circular watermark, the x- and y-axes of the rectangular watermark are inversely polar-mapped into the radius and angle directions of the patch. The relation between the coordinates of the rectangular watermark and those of the circular watermark is represented as follows:

$$x = \frac{r_i - r_0}{r_M - r_0}\, M, \quad y = \frac{\theta}{\pi}\, N \quad \text{if } 0 \le \theta \le \pi,$$

$$x = \frac{r_i - r_0}{r_M - r_0}\, M, \quad y = \frac{\theta - \pi}{\pi}\, N \quad \text{if } \pi \le \theta \le 2\pi, \tag{4}$$

where x and y are the rectangular watermark coordinates, ri and θ are the coordinates of the circular watermark, rM is equal to the radius of the patch, and r0 is a fixed fraction of rM. In our experiments, we set r0 to rM/4. For an effective transformation, r0 should be larger than M/π, and the difference between rM and r0 should be larger than N. If these constraints are not satisfied, the rectangular watermark must be subsampled, which makes an efficient transformation difficult.

To increase the robustness and invisibility of the inserted watermark, we transform the rectangular watermark so that it is mapped to only the upper half of the patch, i.e., the y-axis of the rectangular watermark is scaled by the angle of a half circle (π), not the angle of a full circle (2π). The lower half of the patch is set symmetrically with respect to the upper half (see Fig. 3).
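A hedged sketch of the inverse polar mapping of Eq. (4): each pixel of the patch annulus looks up a pixel of the rectangular watermark, with r0 = rM/4 and the lower half of the patch reusing the upper-half mapping. Nearest-neighbor sampling and the output layout are assumptions.

```python
import numpy as np

def circular_watermark(rect_wm, r_M):
    """rect_wm: (M, N) array; returns a (2*r_M+1, 2*r_M+1) circular watermark, 0 outside."""
    M, N = rect_wm.shape
    r_0 = r_M / 4.0
    size = 2 * int(r_M) + 1
    out = np.zeros((size, size))
    cy = cx = int(r_M)
    for yy in range(size):
        for xx in range(size):
            r_i = np.hypot(xx - cx, yy - cy)
            if r_i < r_0 or r_i > r_M:
                continue                              # only the annulus carries the watermark
            theta = np.arctan2(yy - cy, xx - cx) % (2 * np.pi)
            if theta > np.pi:
                theta -= np.pi                        # lower half reuses the upper-half mapping
            x = (r_i - r_0) / (r_M - r_0) * (M - 1)   # radius -> x axis of the rectangle
            y = theta / np.pi * (N - 1)               # angle  -> y axis of the rectangle
            out[yy, xx] = rect_wm[int(round(x)), int(round(y))]
    return out
```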

From the perspective of the image, the watermark constitutes a kind of noise. When noise of similar strength gathers together, we can perceive it. In our scheme, a pixel in the rectangular watermark is mapped to several adjacent pixels in the circular watermark during polar mapping. In other words, the same noise is inserted into a homocentric region of a circular patch. Therefore, if the size of the homocentric region is large, the inserted watermark becomes visible (as an embossing effect). Through the symmetrical mapping, we can make the size of the homocentric region small and thus render the watermark invisible. Moreover, we can increase the likelihood that the watermark will survive attacks such as cropping.

Fig. 3 Polar mapping between the rectangular watermark and the circular watermark.

3.2 Watermark Insertion

The first step in watermark insertion is to analyze the image contents to extract the patches. Then, the watermark is inserted repeatedly into all patches. Our watermark insertion process is shown in Fig. 4(A).

Step a. To extract circular patches, we use the SIFT descriptor, as explained in Sec. 2. A single image may contain a number of patches. We insert the watermark into all patches to increase the robustness of our scheme.

Step b.1. We generate a circular watermark dependent on the radius (size) of each patch, using the method described in Sec. 3.1. We have endeavored to construct the patches so that their radius is similar to, or larger than, the x and y sizes of the rectangular watermark; thus, during extraction of the patches, a pixel w in the rectangular watermark is mapped to several pixels wc in the circular watermark. This compensates for errors in the alignment of the circular patches regarding location and scale during watermark detection.

Step b.2. The insertion of the watermark must not affect the perceptual quality of images. This constraint has a bearing on the insertion strength of the watermark, inasmuch as it must be imperceptible to the human eye. We apply the perceptual mask as follows13:

$$\alpha = \beta_1 (1 - \mathrm{NVF}) + \beta_2\, \mathrm{NVF}, \tag{5}$$

where β1 is the weighting factor that controls the watermark strength in edged and textured regions and β2 controls it in flat and smooth regions. The noise visibility function is calculated as follows:

$$\mathrm{NVF}(i,j) = \frac{1}{1 + \theta\, \sigma_x^2(i,j)}, \qquad \theta = \frac{D}{\sigma_{x,\max}^2}, \tag{6}$$

where σx²(i, j) denotes the local variance of the neighboring pixels within a five-pixel window, σ²x,max is its maximum over the image, and D is a scaling constant.

Step b.3. Finally, we insert this circular watermark additively in the spatial domain. The insertion of the watermark is represented as the spatial addition between the pixels of the image and the pixels of the circular watermark as follows:

$$v_i' = v_i + \alpha_i w_{ci}, \qquad \text{where } w_{ci} \sim N(0, 1). \tag{7}$$

Here vi and wci denote the pixels of the image and of the circular watermark, respectively, and α denotes the perceptual mask that controls the insertion strength of the watermark.
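Steps b.2 and b.3 can be sketched as follows, assuming NumPy/SciPy: the NVF mask of Eqs. (5) and (6) is built from local variances in a five-pixel window, and Eq. (7) adds the masked circular watermark in the spatial domain. The value of D is a placeholder, and the weighting values 5.0 and 1.0 are taken from Sec. 4.2.

```python
import numpy as np
from scipy import ndimage

def nvf_mask(image, D=75.0, beta1=5.0, beta2=1.0, win=5):
    """Per-pixel insertion strength alpha of Eq. (5); D is an illustrative scaling constant."""
    img = image.astype(np.float64)
    mean = ndimage.uniform_filter(img, size=win)
    var = ndimage.uniform_filter(img ** 2, size=win) - mean ** 2   # local variance, 5-pixel window
    theta = D / max(var.max(), 1e-12)                              # Eq. (6): theta = D / sigma_max^2
    nvf = 1.0 / (1.0 + theta * var)                                # Eq. (6)
    return beta1 * (1.0 - nvf) + beta2 * nvf                       # Eq. (5)

def embed(image, circ_wm):
    """Eq. (7): circ_wm is an image-sized N(0,1) watermark that is zero outside the patches."""
    alpha = nvf_mask(image)
    return image.astype(np.float64) + alpha * circ_wm
```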

3.3 Watermark Detection

Similarly to watermark insertion, the first step in watermark detection is to analyze the image contents to extract the patches. The watermark is then detected from the patches. If the watermark is detected correctly from at least one patch, we can prove ownership successfully.


Our watermark detection process is shown in Fig. 4(B).

Step a. To extract circular patches, we use the SIFT descriptor, as described in Sec. 2. There are several patches in an image, and we try to detect the watermark from all patches.

Fig. 4 Frameworks for watermark insertion and detection.

Step b.1. The additive watermarking method in the spatial domain inserts the watermark into the image contents as noise. Therefore, we first apply a Wiener filter to extract this noise by calculating the difference between the watermarked image and its Wiener-filtered image, and then regard that difference as the retrieved watermark.6 As in the watermark insertion process, we compensate for the modification by the perceptual mask, but such compensation does not greatly affect the performance of watermark detection.
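Step b.1 can be sketched with SciPy's Wiener filter: the retrieved watermark is taken as the difference between the watermarked image and its Wiener-filtered version, optionally rescaled by the perceptual mask. The window size and the mask handling are assumptions.

```python
import numpy as np
from scipy.signal import wiener

def retrieve_noise(watermarked, mask=None, win=3):
    """Estimate the embedded noise field from a (possibly attacked) watermarked image."""
    img = watermarked.astype(np.float64)
    denoised = wiener(img, mysize=win)           # estimate of the image content
    noise = img - denoised                       # the difference is regarded as the watermark
    if mask is not None:
        noise = noise / np.maximum(mask, 1e-6)   # undo the perceptual-mask scaling (approximate)
    return noise
```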

Step b.2. To measure the similarity between the reference watermark generated during watermark insertion and the retrieved watermark, the retrieved circular watermark should be converted into a rectangular watermark by applying the polar mapping introduced in Sec. 3.1. Considering the fact that the watermark is inserted symmetrically, we take the mean value of the two semicircular areas.


By this mapping, the rotation of circular patches is represented as a translation, and hence we achieve rotation invariance for our watermarking scheme.

Step b.3. We apply circular convolution to the reference watermark and the retrieved watermark. The degree of similarity between the two, called the response of the watermark detector, is given by the maximum value of the circular convolution as follows:

$$\text{similarity} = \max_{0 \le r < N} \frac{\sum_{m,n} w(m,n)\, w^{*}(m, n - r)}{\Big[\sum_{m,n} w(m,n)\, w(m,n) \sum_{m,n} w^{*}(m,n)\, w^{*}(m,n)\Big]^{1/2}}, \tag{8}$$

where w is the reference watermark and w* is the retrieved watermark. The range of similarity values is from −1.0 to 1.0. We can identify the rotation angle of the patch from the shift r that yields the maximum value. If the similarity exceeds a predefined threshold, we can be satisfied that the reference watermark has been inserted. The method for determining the threshold is described in the following section.
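A small sketch of the detector response of Eq. (8): the normalized correlation between the reference and retrieved rectangular watermarks is maximized over cyclic shifts along the angle axis, so a rotation of the patch appears as the best shift r. Normalizing by both norms is an assumption consistent with the stated [−1.0, 1.0] range.

```python
import numpy as np

def detector_response(w, w_star):
    """Return (similarity, best_shift) for two M x N rectangular watermarks."""
    denom = np.sqrt(np.sum(w * w) * np.sum(w_star * w_star)) + 1e-12
    best_sim, best_r = -1.0, 0
    N = w.shape[1]
    for r in range(N):
        shifted = np.roll(w_star, r, axis=1)     # w*(m, n - r): cyclic shift along the angle axis
        sim = np.sum(w * shifted) / denom
        if sim > best_sim:
            best_sim, best_r = sim, r
    return best_sim, best_r
```

Comparing best_sim against the threshold of Sec. 3.4 decides whether the watermark is present, and best_r indicates the rotation of the patch.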

Step c. As mentioned, there are several circular patches in an image. Therefore, if the watermark is detected from at least one patch, ownership is proved; otherwise it is not. The fact that we insert the watermark into several circular patches, rather than just one, makes it highly likely that the proposed scheme will detect the watermark, even after image distortions.

Our watermarking scheme is robust against geometric distortion attacks as well as signal-processing attacks. Scaling and translation invariance is achieved by extracting circular patches with the SIFT descriptor. Rotation invariance is achieved by using the translation property of the polar-mapped circular patches.

3.4 Probability of Error

Since ownership is verified by deciding whether or not the similarity exceeds a predefined threshold, the probability that our watermark detector will generate errors depends on what threshold is selected. We should consider both false-positive and false-negative error rates. The false-positive error rate is the probability that the watermark will be detected in images that are not watermarked or are watermarked with other watermarks. The false-negative error rate is the probability that the method will fail to retrieve the inserted watermark from watermarked images. In practice, it is difficult to analyze false-negative errors because of the wide variety of possible attacks. It is thus common to select the threshold based on the false-positive error rate.

In order to estimate the false-positive error probability of our watermark detector, we attempted to retrieve 100 random watermarks from 100 randomly collected images that included natural scenes and portraits. Altogether 168,500 circular patches were processed to detect the watermark, because each image contained several patches. The size of the rectangular watermark was 32 by 32 pixels. The histogram of the similarity values (normalized correlation) and its gamma distribution are shown in Fig. 5.

In most cases, the simplest way to estimate the error probability is to assume that the distribution of detection values follows a Gaussian, or approximately Gaussian, distribution model.14 However, the distribution of the response of our watermark detector more nearly follows a gamma distribution model, since we take the maximum value of the circular convolution. The gamma distribution is defined as follows:

$$f(x; \alpha, \beta) = \begin{cases} \dfrac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha - 1} \exp(-x/\beta), & x > 0, \\ 0, & \text{otherwise}, \end{cases} \tag{9}$$

where x is a continuous random variable. The parameters α and β satisfy α > 0 and β > 0 and are calculated from the mean and variance of the random variable x as follows:

$$E[x] = \bar{x} = \alpha\beta, \qquad V[x] = \sigma_x^2 = \alpha\beta^2. \tag{10}$$

Fig. 5 Histogram of the similarity values and its gamma distribution.

Based on these results, the mean and variance of the detector response x were 0.1012 and 1.2191 × 10^-4, and the values of α and β were 83.9865 and 0.0012, respectively. Table 1 shows the probability of our watermark detector generating a false-positive error when using this gamma distribution model and the corresponding threshold of the watermark detector.
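The threshold selection can be sketched with SciPy's gamma distribution: fit α and β from the empirical mean and variance of the detector response via Eq. (10) (method of moments), then invert the upper tail for a target false-positive probability. The helper name is hypothetical.

```python
import numpy as np
from scipy.stats import gamma

def threshold_for(responses, p_fa):
    """responses: detector outputs on unwatermarked content; p_fa: target false-positive rate."""
    mean, var = np.mean(responses), np.var(responses)
    alpha = mean ** 2 / var                    # from E[x] = alpha*beta and V[x] = alpha*beta^2
    beta = var / mean
    return gamma.isf(p_fa, alpha, scale=beta)  # threshold T with P(x > T) = p_fa
```

With the reported mean 0.1012 and variance 1.2191 × 10^-4 (α ≈ 84, β ≈ 0.0012), a target rate of 10^-9 should reproduce a threshold close to the 0.1812 listed in Table 1.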


4 Simulation Results

This section reports on the performance of the proposed watermarking scheme. Two experiments were carried out, to measure (1) the performance of the watermark synchronization based on the SIFT descriptor and (2) the robustness of our watermarking scheme. We used 100 randomly collected 512 × 512 images from the internet that included such images as "Lena," "Baboon," "Lake," and "Bridge."

4.1 Performance of the SIFT Descriptor

We show the performance of our synchronization method based on the SIFT descriptor by comparing it with the method of Bas et al.6 During watermark detection, the redetection of patches that were extracted during watermark insertion is important for robustness. In order to measure the redetection ratio, we first extracted circular patches from both the original image and the attacked images, and then compared the locations and radii of the patches from the original image with those of the patches from the attacked images. Prior to the comparison, we transformed the locations and radii of the circular patches from images subjected to geometric attack back to the coordinates of the original image. If the difference between a circular patch from the original image and one from an attacked image was below 2 pixels, we regarded the patch as having been redetected correctly.
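The redetection measurement can be sketched as below: patches found in an attacked image are mapped back into the original geometry (the inverse transform is known for each synthetic attack), and a patch counts as redetected if its center and radius agree within 2 pixels. The inverse_map argument and the tuple layout are placeholders.

```python
import math

def redetection_ratio(orig_patches, attacked_patches, inverse_map, tol=2.0):
    """Patches are (x, y, radius) tuples; inverse_map sends an attacked patch to original coords."""
    mapped = [inverse_map(p) for p in attacked_patches]
    hits = 0
    for ox, oy, orad in orig_patches:
        if any(math.hypot(ox - mx, oy - my) <= tol and abs(orad - mrad) <= tol
               for mx, my, mrad in mapped):
            hits += 1
    return hits / float(len(orig_patches)) if orig_patches else 0.0
```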

We applied various attacks: median filtering (2×2, 3×3, and 4×4), JPEG compression (quality factors 40, 50, 60, 70, 80, and 90), Gaussian filtering (3×3), additive uniform noise, centered cropping (5%, 10%, 15%, 20%, and 25%), rotation plus cropping (0.25, 0.5, 1, 5, 10, and 30 deg), and scaling (0.75×, 0.9×, 1.1×, 1.3×, and 1.5×).

The average number of extracted circular patches from the 100 original images was 22.60 in Bas et al.'s approach and 16.85 in our approach. Comparison results under attacks are shown in Table 2. Detection ratio refers to the ratio of the number of correctly redetected patches in the attacked images to the number of patches extracted from the original images; the detection ratio is higher when watermark synchronization performs more robustly. Detection failures refers to the number of images, among the 100, in which no patch is redetected. As shown in the table, the detection ratio fell in proportion to the strength of the attacks for both approaches, but our synchronization method performs better than Bas et al.'s approach for all distortions.

Table 1 Error probability of our watermark detector and its threshold.

Error probability    Threshold
10^-6                0.1617
10^-7                0.1685
10^-8                0.1750
10^-9                0.1812
10^-10               0.1871


Table 2 Fraction of correctly redetected patches and number of failure images under various attacks.

                               Bas et al.6                            Proposed approach
Attack                         Detection ratio   Detection failures   Detection ratio   Detection failures
Median filter 2×2              37.6%             2                    85.6%             0
Median filter 3×3              35.1%             2                    83.8%             0
Median filter 4×4              24.7%             9                    81.4%             0
JPEG compression 40            70.1%             0                    92.3%             0
JPEG compression 50            69.6%             0                    92.6%             0
JPEG compression 60            74.7%             0                    93.1%             0
JPEG compression 70            74.6%             0                    93.7%             0
JPEG compression 80            77.6%             0                    95.8%             0
JPEG compression 90            81.4%             0                    96.3%             0
Gaussian filter 3×3            59.7%             0                    89.3%             0
Additive uniform noise         56.5%             0                    88.5%             0
Centered cropping 5%           49.3%             0                    84.7%             0
Centered cropping 10%          41.3%             0                    75.4%             0
Centered cropping 15%          36.1%             0                    64.0%             0
Centered cropping 20%          26.2%             2                    56.2%             0
Centered cropping 25%          20.9%             5                    47.4%             0
Rotation 0.25 deg + cropping   58.7%             0                    86.0%             0
Rotation 0.5 deg + cropping    53.6%             0                    85.7%             0
Rotation 1 deg + cropping      47.8%             0                    83.8%             0
Rotation 5 deg + cropping      40.4%             0                    75.2%             0
Rotation 10 deg + cropping     33.1%             2                    71.5%             0
Rotation 30 deg + cropping     20.1%             7                    63.7%             0
Scaling 0.75×                  14.2%             22                   62.4%             0
Scaling 0.90×                  39.6%             3                    77.2%             0
Scaling 1.1×                   46.7%             1                    76.9%             0
Scaling 1.3×                   19.1%             10                   65.8%             0
Scaling 1.5×                   8.7%              35                   56.0%             0


In particular, as the strength of the attacks becomes greater, the feature points of Bas et al.'s approach are removed or newly created. The detection ratio of patches generated by considering the relation among the feature points decreased drastically, and the number of failure images increased.6 However, our method performs reliably, because we consider only local image characteristics.

These results support the contention that the SIFT descriptor is a reliable technique for extracting features (patches) and is useful for robust watermarking against signal-processing attacks and geometric distortion attacks.

4.2 Performance of the Watermarking Scheme

We tested the performance of our watermarking scheme. The size of the rectangular watermark was 32 by 32 pixels, and the weighting factors β1 and β2 of the noise visibility function were set at 5.0 and 1.0, respectively. We achieved a 10^-9 error probability (reliability) for the proposed scheme by setting the threshold at 0.182, and we tried to detect the inserted watermark by adjusting the radius of the circular patches in the range r − 2 to r + 2 to compensate for alignment errors induced by our synchronization method. There are several overlap areas between circular patches, but the inserted watermark in these areas is invisible to the naked eye, because we refined the modification of each pixel by perceptual masking.

The overall PSNR values between the original images and the watermarked images were greater than 40 dB. We inserted the watermark into circular patches, so images were modified only in parts. As a result, our scheme achieved high PSNR values. In highly textured images, such as the well-known "Baboon" image, PSNR values were relatively low, because we inserted the watermark more strongly where the noise remains imperceptible.

We applied most of the attacks listed in Stirmark 3.1: median filtering, JPEG compression, Gaussian filtering, additive uniform noise, linear geometric transformation, random bending, row-column removal, shearing, centered cropping, rotation + cropping, scaling, and rotation + scaling + cropping. These attacks attempt to remove or attenuate the inserted watermark and to desynchronize its location.



Simulation results under attacks are shown in Table 3 and Table 4. Detection ratio refers to the ratio of the number of correctly detected watermarked patches in watermark detection to the number of watermarked patches inserted. Among the 100 images, detection failures refers to the number of images in which the inserted watermark is not detected from any patch, so that we fail to prove ownership. Similarity is the average of the similarity values from the correctly detected watermarked patches.

In most of the attacks, our watermarking scheme could detect the inserted watermark from a considerable number of circular patches, and the similarity between the inserted and the detected watermark was high enough to prove ownership. When we transform a rectangular watermark into a circular watermark, a pixel of the rectangular watermark is mapped to a homocentric region of a circle, and that compensates for small alignment errors of the circular patches. Consequently, our watermarking scheme was able to survive even linear geometric transformation, random bending, and shearing attacks. Since we consider only the local properties of features, our scheme could survive cropping attacks.

Table 3 Fraction of correctly detected watermark patches, number of failure images, and similarity under common signal-processing attacks.

Signal-processing attack         Detection ratio   Detection failures   Similarity
Watermarked image (no attack)    90.2%             0                    0.727
Median filter 2×2                70.1%             0                    0.611
Median filter 3×3                70.6%             0                    0.629
Median filter 4×4                66.1%             0                    0.609
JPEG compression 40              43.4%             1                    0.477
JPEG compression 50              54.4%             0                    0.532
JPEG compression 60              65.7%             0                    0.579
JPEG compression 70              76.3%             0                    0.623
JPEG compression 80              84.7%             0                    0.660
JPEG compression 90              88.7%             0                    0.698
Gaussian filter 3×3              74.8%             0                    0.671
Additive uniform noise           56.0%             0                    0.532

In rotation + cropping and rotation + scaling + cropping attacks, the strength of the cropping and scaling increased in proportion to the degree of rotation, and hence the detection ratio fell.



Although watermark synchronization performs well, when images are attacked strongly by such methods as JPEG compression 40, scaling 0.70×, and centered cropping 50%, the additive watermarking method in the spatial domain fails to survive in several images. In these attacks, the inserted watermark is almost removed, attenuated, or partly cropped by the distortions. The performance in these circumstances is, nevertheless, more robust than that of other representative content-based schemes such as those of Bas et al.6 and Tang and Hang.9


Table 4 Fraction of correctly detected watermark patches, number of failure images, and similarity under geometric distortion attacks.

Geometric distortion attack               Detection ratio   Detection failures   Similarity
Linear geometric transform 1.008          71.3%             0                    0.624
Linear geometric transform 1.011          71.7%             0                    0.617
Linear geometric transform 1.012          73.2%             0                    0.629
Random bending                            58.8%             0                    0.563
Row 1 and column 1 removal                81.4%             0                    0.705
Row 1 and column 5 removal                77.6%             0                    0.674
Row 5 and column 1 removal                76.0%             0                    0.666
Row 17 and column 5 removal               61.2%             0                    0.555
Row 5 and column 17 removal               61.4%             0                    0.558
Shearing x 0% y 1%                        80.1%             0                    0.691
Shearing x 0% y 5%                        43.9%             0                    0.417
Shearing x 1% y 0%                        81.6%             0                    0.686
Shearing x 5% y 0%                        39.8%             0                    0.389
Shearing x 1% y 1%                        72.6%             0                    0.645
Centered cropping 5%                      73.8%             0                    0.703
Centered cropping 10%                     64.3%             0                    0.683
Centered cropping 15%                     55.7%             0                    0.665
Centered cropping 20%                     49.0%             0                    0.645
Centered cropping 25%                     41.4%             0                    0.621
Centered cropping 50%                     16.2%             2                    0.446
Rotation 0.25 deg + cropping              78.8%             0                    0.682
Rotation 0.5 deg + cropping               76.9%             0                    0.673
Rotation 0.75 deg + cropping              75.4%             0                    0.662
Rotation 1 deg + cropping                 73.8%             0                    0.643
Rotation 2 deg + cropping                 63.0%             0                    0.550
Rotation 5 deg + cropping                 68.0%             0                    0.641
Rotation 10 deg + cropping                61.8%             0                    0.600
Rotation 15 deg + cropping                55.9%             0                    0.538
Rotation 30 deg + cropping                48.6%             0                    0.514
Rotation 45 deg + cropping                45.9%             0                    0.550
Scaling 0.70×                             37.5%             1                    0.472
Scaling 0.75×                             45.9%             0                    0.516
Scaling 0.80×                             51.7%             0                    0.539
Scaling 0.90×                             65.3%             0                    0.605
Scaling 1.1×                              58.9%             0                    0.606
Scaling 1.2×                              66.1%             0                    0.614
Scaling 1.3×                              61.5%             0                    0.594
Scaling 1.4×                              54.7%             0                    0.564
Scaling 1.5×                              47.8%             0                    0.524
Rotation 0.25 deg + scaling + cropping    79.8%             0                    0.681
Rotation 0.5 deg + scaling + cropping     76.7%             0                    0.661
Rotation 0.75 deg + scaling + cropping    73.7%             0                    0.644
Rotation 1 deg + scaling + cropping       70.5%             0                    0.629
Rotation 2 deg + scaling + cropping       56.8%             0                    0.531
Rotation 5 deg + scaling + cropping       53.4%             0                    0.589
Rotation 10 deg + scaling + cropping      46.4%             0                    0.533
Rotation 15 deg + scaling + cropping      34.5%             3                    0.446



As described in Sec. 4.1, the synchronization method of Bas et al. shows lower performance than our synchronization method; hence, their watermarking scheme will not perform well. According to the results of Tang and Hang, their approach could survive several signal-processing attacks, such as JPEG compression, Gaussian filtering, and additive uniform noise. However, when images are geometrically distorted or pixels of images are removed by median filtering, row-column removal, or cropping, the detection ratio falls considerably and the probability of detection failure is high.

These simulation results support the contention that our proposed watermarking scheme is resilient to various image attacks. A particular merit of our scheme is that we insert the watermark into an image multiple times, which results in survival of the watermark under attack and hence in proof of ownership.

5 Discussion and Conclusions

The difference between the corresponding entries of Tables 2–4 represents the number of circular patches that are well synchronized but for which the additive watermarking method fails to detect the inserted watermark. As mentioned before, our watermark synchronization, based on the SIFT descriptor, can extract patches correctly even after image attacks and in images with complex textures. However, under strong attacks the additive watermarking method in the spatial domain may still fail to detect the inserted watermark from such patches.

Figure 6 shows several original images, watermarked images, and the residuals between the original and watermarked images. We have modified the histogram of the residual images for ease of viewing. As may be seen from Figs. 6(a) and 6(b), we inserted the watermark into the images so that it is not visible to the naked eye. As shown in Fig. 6(c), the residuals show the locations and radii of the circular patches, how the rectangular watermark is shaped into the homocentric circle, and how the additive watermarking method inserts the watermark in the spatial domain. Our scheme achieves high PSNR values because images are only partly modified. When the image is well textured, the patches are scattered all over it, which allows the watermark to be inserted over the whole image. However, if the texture of an image is simple, for example the water area in "Milk" or the sky area in "Lake," the watermark is concentrated in the area near the object.

Fig. 6 (a) Original image, (b) watermarked image, and (c) residual for the "Lake," "Baboon," "Milk," and "Bridge" images.

Drawbacks of the proposed watermarking scheme are related to its vulnerability to large distortions of the aspect ratio. In addition, due to the computation time for the SIFT descriptor and for the compensation of alignment errors, our scheme cannot be used effectively in real-time applications. Future work will focus on eliminating these drawbacks.

Our major contribution is that we have proposed a robust watermarking scheme that uses local invariant features. In order to resist geometric distortions, we extracted circular patches using the SIFT descriptor, which is invariant to translation and scaling distortion. These patches were watermarked additively in the spatial domain. Rotation invariance was achieved using the translation property of the polar-mapped circular patches. We performed an intensive simulation, and the results showed that our method is robust against geometric distortion attacks as well as signal-processing attacks. We believe that the consideration of local features is important for the design of robust watermarking schemes, and our method is a solution that uses such features.

Acknowledgments

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center (AITrc) and by the Ministry of Education and Human Resources Development (MOE), the Ministry of Commerce, Industry, and Energy (MOCIE), and the Ministry of Labor (MOLAB) through the fostering project of the Lab of Excellency.

References

1. F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Attacks on copyright marking systems," in Proc. Int. Workshop on Information Hiding, pp. 218–238, Springer-Verlag (1998).
2. M. Kutter, "Watermarking resisting to translation, rotation and scaling," Proc. SPIE 3528, 423–431 (1998).
3. S. Pereira and T. Pun, "Robust template matching for affine resistant image watermark," IEEE Trans. Image Process. 9(6), 1123–1129 (2000).
4. C.-Y. Lin and I. J. Cox, "Rotation, scale and translation resilient watermarking for images," IEEE Trans. Image Process. 10(5), 767–782 (2001).
5. J. J. K. O Ruanaidh and T. Pun, "Rotation, scale and translation invariant spread spectrum digital image watermarking," Signal Process. 66(3), 303–317 (1998).
6. P. Bas, J.-M. Chassery, and B. Macq, "Geometrically invariant watermarking using feature points," IEEE Trans. Image Process. 11(9), 1014–1028 (2002).
7. M. Kutter, S. K. Bhattacharjee, and T. Ebrahimi, "Toward second generation watermarking schemes," in IEEE Int. Conf. on Image Processing, Vol. 1, pp. 320–323 (1999).
8. A. Nikolaidis and I. Pitas, "Region-based image watermarking," IEEE Trans. Image Process. 10(11), 1726–1740 (2001).
9. C.-W. Tang and H.-M. Hang, "A feature-based robust digital image watermarking scheme," IEEE Trans. Signal Process. 51(4), 950–959 (2003).
10. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis. 60(2), 91–110 (2004).
11. K. Mikolajczyk and C. Schmid, "Scale and affine invariant interest point detectors," Int. J. Comput. Vis. 60(1), 63–86 (2004).
12. T. Tuytelaars and L. V. Gool, "Matching widely separated views based on affine invariant regions," Int. J. Comput. Vis. 59(1), 61–85 (2004).
13. S. Voloshynovskiy, A. Herrigel, N. Baumgartner, and T. Pun, "A stochastic approach to content adaptive digital image watermarking," in Proc. Int. Workshop on Information Hiding, pp. 212–236, Springer-Verlag (1999).
14. I. J. Cox, M. L. Miller, and J. A. Bloom, Digital Watermarking, Chap. 5, Morgan Kaufmann, San Francisco (2002).

Hae-Yeoun Lee received his BS degree in information engineering from Sung Kyun Kwan University, Seoul, Korea, in 1993, and his MS degree in computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1997. From 1997 to 2001, he was with the Satellite Technology Research Center (SaTReC), KAIST, as a research student. From 2001 to 2005, he was with Satrec Initiative, a company in Korea, as a senior researcher. He is currently pursuing a PhD degree in computer science at KAIST. His research interests include digital watermarking, multimedia processing, image processing, remote sensing, and digital rights management.

Hyungshin Kim received his BS degree in computer science from Korea Advanced Institute of Science and Technology (KAIST) in 1990, and his MS degree in satellite communication engineering from the University of Surrey, UK, in 1990. From 1992 to 2001, he was with the Satellite Technology Research Center (SaTReC), KAIST, as a senior researcher. Since joining the real-time computing laboratory at KAIST in 1994, his research has focused on problems in multimedia signal processing and digital rights management. He is currently a professor at Chungnam National University, Korea. His research interests include digital watermarking, multimedia signal processing, embedded systems, and digital rights management.

Heung-Kyu Lee received a BS degree in electronics engineering from Seoul National University, Seoul, Korea, in 1978, and MS and PhD degrees in computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 1981 and 1984, respectively. Since 1986 he has been a professor in the Department of Computer Science, KAIST. He has authored or coauthored more than 100 international journal and conference papers. He has been a reviewer for many international journals, including the Journal of Electronic Imaging, Real-Time Imaging, and IEEE Transactions on Circuits and Systems for Video Technology. He was also a program chairman of many international conferences, including the International Workshop on Digital Watermarking (IWDW) in 2004. He is now a director of the Korean DRM Forum. His major interests are digital watermarking, digital fingerprinting, and digital rights management.