Content based segmentation of patterned wafers

Pierrick Bourgeat
Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee 37831-6011
and University of Burgundy, Le2i Laboratory, 12 rue de la fonderie, 71200 Le Creusot, France

Fabrice Meriaudeau
University of Burgundy, Le2i Laboratory, 12 rue de la fonderie, 71200 Le Creusot, France

Kenneth W. Tobin
Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee 37831-6011

Patrick Gorria
University of Burgundy, Le2i Laboratory, 12 rue de la fonderie, 71200 Le Creusot, France

Journal of Electronic Imaging 13(3), 428–435 (July 2004). [DOI: 10.1117/1.1762518]

Abstract. We extend our previous work on the image segmentation of electronic structures on patterned wafers to improve the defect detection process on optical inspection tools. Die-to-die wafer inspection is based on the comparison of the same area on two neighboring dies. The dissimilarities between the images are a result of defects in this area of one of the dies. The noise level can vary from one structure to the other within the same image. Therefore, segmentation is required to create a mask and apply an optimal threshold in each region. Contrast variation on the texture can affect the response of the parameters used for the segmentation. We show a method to anticipate these variations with a limited number of training samples, and modify the classifier accordingly to improve the segmentation results. © 2004 SPIE and IS&T.

1 Introduction

As semiconductor device density and wafer area continue to increase, faster and more sensitive automatic inspection tools are required. The size of the defects is becoming smaller, and harder to detect.1–4 This paper introduces an improvement of our previous work5,6 on the image segmentation of electronic structures on patterned wafers to improve the defect detection process on optical inspection tools.
Die-to-die wafer inspection is based on the comparison of the same area on two neighboring dies, using the assumption that they are identical except for the defects. The dissimilarities between the images are a result of defects in this area on one of the dies. The two images are subtracted, and a threshold level is selected to locate any abnormality. This threshold is established on the noise level in the difference image, to improve the SNR. The noise level can vary from one structure to another within the same image, since multiple structures coexist in the field of view. Therefore, the measure of noise within the whole image is not relevant for each individual type of structure. Segmentation is required to create a mask of these different regions. This mask is then used to produce a measure of noise for each structure in the difference image, and we apply an individual threshold in each region.

For this work, segmentation is performed using the discrete wavelet transform7 and the 'à trous' algorithm.8–10 This algorithm is well adapted to discriminate local frequencies of the repetitive pattern, and it is restricted to principal directions that correspond to the geometric patterns found on integrated circuits. The weakness of this method is its sensitivity to contrast variation and small texture variation. In our previous work,5 a local correction was applied to remove the nonuniformities. This is sufficient in the case of small variations. However, in some cases where the variations become very important, such as with large process variation or bad focus selection, the classifier must be trained with many different samples that cover all the variations contained within the die.

Paper ORNL-015 received Aug. 6, 2003; accepted for publication Feb. 27, 2004. 1017-9909/2004/$15.00 © 2004 SPIE and IS&T.

Fig. 1 Image correction with (a) the raw image and (b) the corrected image.

The usual way to train a classifier on this type of data is an empirical approach. The classifier is trained with randomly selected samples, and then is tested over the whole set of data. The areas where the classifier performs poorly are used to extract new training samples. These new samples are added to the original set to retrain the classifier until the best performances are obtained. This method is not realistic in the in-line inspection process when dealing with a huge amount of data. It would require storing all the images of a die to process them off-line, and therefore a huge amount of memory would be necessary. Meanwhile, it is time consuming for an operator to go through the iterative cycle of training the classifier and testing its performances until it becomes acceptable. This paper introduces an original method to anticipate the impact of the variation on each feature, and to modify the classifier consequently to accommodate these variations. First, we discuss the correction applied to the images to remove the nonuniformities induced by the imaging system and the semiconductor process variations, followed by feature selections using the wavelet transform and the 'à trous' algorithm. Next, the stress-polytopes classifier11,12 is described, as well as the modifications introduced to correct the variations.

2 Image Correction and Feature Extraction

The selection of good features is one of the most important parts of the segmentation work. The wavelet transform produces good discrimination between the different structures, but is also extremely sensitive to the variations induced by the imaging system and the process variations. Therefore, it is very important to correct the images before the feature extraction, to remove these variations as much as possible.


2.1 Image Preprocessing

Bright field illumination used in the imaging system produces a Gaussian-type illumination, where the image is brighter in its center and darker near the borders. This nonuniform illumination creates contrast variations over the image that can change the wavelet response. The texture of the electronic structures is also affected by focus changes and process variation over the wafer. When small enough, they produce a contrast variation that can be partially corrected.

The normalization is performed using the statistics in the neighborhood of each pixel. The mean value is used to correct the illumination, and the standard deviation is used to partially correct the contrast variation. Each pixel is divided by the sum of the mean and the standard deviation values in a 5×5 sliding window. The illumination variation, as well as some of the contrast variation, is removed (Fig. 1).
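The normalization step above can be sketched as follows. Only the 5×5 window and the division by (local mean + local standard deviation) come from the text; the function name, the edge padding at the image borders, and the epsilon guard are illustrative assumptions.

```python
import numpy as np

def local_normalize(image, win=5):
    """Divide each pixel by the local mean plus local standard deviation.

    Sketch of the preprocessing of Sec. 2.1: the local mean compensates
    the Gaussian-type illumination, and the local standard deviation
    partially compensates the contrast variation.
    """
    img = np.asarray(image, dtype=np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")  # border handling: an assumption
    h, w = img.shape
    # Stack every window offset so the mean/std can be taken per pixel.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(win) for dx in range(win)])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    eps = 1e-12  # avoids division by zero on perfectly flat regions
    return img / (mean + std + eps)
```

On a perfectly uniform image the local standard deviation is zero, so every pixel normalizes to 1; on real images, bright well-focused regions and dim defocused ones are brought into a comparable range before feature extraction.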

2.2 Wavelet Transform

Once the images are normalized, the features can be extracted. The wavelet transform7 is based on horizontal and vertical filtering that is well designed for the geometric structures found on semiconductor devices. It enables an efficient discrimination of the horizontal and vertical local frequencies.

The 'à trous' algorithm is used to process a fast wavelet transform that is translation invariant.8–10 Unlike the classical wavelet decomposition, where the image is subsampled at each decomposition level, the 'à trous' algorithm works with a constant image size, and the filters' kernel is up-sampled by adding zeros between the coefficients. It creates an overcomplete decomposition that leads to some redundancy in the information, but also an invariance in translation, which is a requirement in this application. The image size is kept constant during the whole decomposition, and there is a good spatial localization of local frequencies.

Two filters are necessary to perform the wavelet decomposition: a low-pass filter (LP) and a high-pass filter (HP).



Coifman's orthonormal filters are used.13 Figure 2 shows one level of the wavelet decomposition.

Between each decomposition level, the filters are up-sampled to match different frequencies. Practically, at each level, 2^level − 1 zeros are inserted between each coefficient (Fig. 3). Thus, the number of meaningful coefficients required to convolve with the image is kept constant during the whole decomposition, so the processing time is almost the same for each level, enabling fast wavelet decomposition.
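The filter up-sampling between levels can be sketched as below. The helper name is hypothetical, and levels are assumed to be counted so that level 0 is the original kernel.

```python
def upsample_kernel(taps, level):
    """Insert 2**level - 1 zeros between filter taps ('a trous' holes).

    The count of nonzero (meaningful) coefficients never changes, which
    is why each decomposition level costs roughly the same number of
    multiplications.
    """
    if level == 0:
        return list(taps)
    holes = 2 ** level - 1
    out = []
    for i, t in enumerate(taps):
        out.append(t)
        if i < len(taps) - 1:  # no trailing zeros after the last tap
            out.extend([0.0] * holes)
    return out
```

For a 3-tap kernel, level 1 gives [h0, 0, h1, 0, h2] and level 2 gives [h0, 0, 0, 0, h1, 0, 0, 0, h2]; the three meaningful taps are preserved at every level.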

Semiconductor images are mostly a composite of geometric structures with horizontal or vertical orientation. For that reason, only horizontal and vertical details are kept for the segmentation. Diagonal details are ignored, because they are not good discriminating features, and they carry most of the noise. Three levels of decomposition are used to produce six different features.

Because of their waveform, the wavelet coefficients cannot be used directly in a classifier. That is why the local estimate of the wavelet standard deviation in a 17×17 neighborhood is used as a texture feature. This is a good way to smooth the wavelet response and obtain a uniform response when a frequency is matched, which is more suitable for the classifier.

3 Classification

The classification is performed using the stress-polytopes classifier.11,12 This is a statistical classifier designed for high-data-flow segmentation. The parameter space is clustered in a small set of hypercubes called stress-polytopes, where each hypercube contains samples of a single class. Thus, the classification of a new point is performed by comparing its features with the boundaries of the hypercubes to verify a membership relation. When the set of hypercubes is small, this is a very fast way to perform the classification.

3.1 Training

Fig. 2 Wavelet decomposition.

Fig. 3 Wavelet kernels for three-level decomposition.

Once the features are extracted, each training point is associated with a 6-D vector. These vectors form the input to the classifier for the training. The 6-D parameter space is sliced in hypercubes corresponding to the different classes. A hypercube is created around each training point, so that it only includes points of the same class. This is performed using the distance to the closest neighbor of dissimilar class on each side of the hypercube. The distance l(X,Y) between two vectors X(x_1,...,x_N) and Y(y_1,...,y_N) is measured using the L-infinite norm, also known as maximum distance:

l(X,Y) = ||X − Y||_∞ = max_{1≤p≤N} |x_p − y_p|.  (1)

For each training point, the distance l is measured with every other point of dissimilar class. In each direction, the shortest distances set the boundaries of the hypercube. In N-D space, 2N boundaries are enough to define a hypercube. To prevent any overlap between hypercubes of different classes, these distances are multiplied by a coefficient R that must satisfy the condition

0 < R < 1/2.  (2)

The hypercubes are fully described once there are two boundaries in each dimension, defined by a neighbor in each direction. This means that with N parameters, there are 2N directions in which to look for a neighbor. If no neighbor is found in one direction but there is a closest neighbor in the opposite direction, then this distance is used in both directions. Otherwise, if there are no neighbors in one dimension, then the shortest distance in every other dimension is used; as a result, limits cannot go to the infinite and saturate the
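A minimal sketch of this training step, under one plausible reading of the text (per-axis distances to the closest dissimilar-class neighbor on each side, scaled by R; the fallback rules for directions with no neighbor are simplified away). All names are illustrative:

```python
def linf(x, y):
    """L-infinite (maximum) distance of Eq. (1)."""
    return max(abs(a - b) for a, b in zip(x, y))

def hypercube(point, dissimilar, R=0.4):
    """Axis-aligned hypercube around one training point.

    For each of the 2N directions, the boundary is placed at R times
    the distance to the closest dissimilar-class neighbor on that side.
    With 0 < R < 1/2 (Eq. (2)), each of two mutually closest points of
    different classes claims less than half of the gap between them
    along an axis, so their boundaries cannot cross on that axis.
    Directions with no neighbor keep the point itself as boundary here
    (the paper's fallback is richer).
    """
    n = len(point)
    lo, hi = list(point), list(point)
    for p in range(n):
        below = [point[p] - y[p] for y in dissimilar if y[p] < point[p]]
        above = [y[p] - point[p] for y in dissimilar if y[p] > point[p]]
        if below:
            lo[p] = point[p] - R * min(below)
        if above:
            hi[p] = point[p] + R * min(above)
    return lo, hi
```

For example, a training point at the origin with dissimilar neighbors at (1, 0) and (0, −2) gets, with R = 0.4, an upper boundary of 0.4 on the first axis and a lower boundary of −0.8 on the second.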

Fig. 4 Representation of one parameter versus the standard devia-tion corresponding to four samples of the same texture taken on fourdifferent images.

Fig. 5 Representation of one parameter versus the standard devia-tion corresponding to four samples of the same texture taken on fourdifferent images.



parameter space. Using Eqs. (1) and (2) guarantees that there is only one class per hypercube, and that there is no overlap between the hypercubes of different classes. This results in clustering the parameter space in hypercubes with fixed boundaries.

Dealing with fixed boundaries in the classifier enables processing the data very quickly, but it removes any flexibility. It is not a real problem when the parameter response is consistent for a given texture, but in most applications, unexpected variations in the parameter response can affect the performance. This is usually resolved by increasing the number of training samples and going through a cycle of retraining and testing the classifier until the expected performance is reached. In the particular application of image segmentation of electronic structures, variation of the parameters can be correlated with the residual contrast variations that have not been completely corrected. In other words, for a given texture, some parameters will follow the contrast variation, but not all of them. It is technically difficult to train the classifier with a very large set of samples, since there is no guarantee that the whole range of contrast variation will be represented. Therefore, a technique was developed to modify the hypercube's boundary according to the anticipated variations of the parameter for each class.

Since the contrast variation is produced by the variations induced by the system, the measure of correlation between the contrast and a particular feature is a good way to determine if the feature will be sensitive to focus and process variations. This measure will then be used to modify the boundaries of the hypercubes. When a parameter has a strong correlation with the standard deviation for a given class, it will be more likely to be subject to variations itself (Fig. 4). That is why the corresponding hypercube needs to be enlarged in the direction corresponding to this parameter, to allow more variation of the parameter. Similarly, if the

Fig. 6 Define the hypercubes using the distance to the neighbor ofa different class.

Fig. 7 Correct the hypercubes using the correlation information.


correlation is weak, the parameter will be less likely to be subject to variations (Fig. 5). Therefore, the corresponding hypercube can be reduced in the corresponding direction. Figures 4 and 5 show the difference of correlation with the standard deviation for two parameters on the same set of four different samples.

This correction can be accomplished by making R a function of the correlation factors, the parameters, and the classes. This must be done carefully to ensure that there is only one class per hypercube, and that no overlap exists between the hypercubes of different classes. Actually, for each parameter of each class, the expansion/retraction must be done according to the correlation coefficients of the neighbors of other classes. For the same parameter, the correlation coefficient can be extremely different from one class to the other one.

During the hypercube creation step, the class of the closest neighbor is recorded. Thus, the class of each neighbor in each dimension is known. The boundary between two neighboring points in one dimension is defined using the centroid of mass between the respective correlation coefficients.

Considering a vector X of class C_X, and a vector Y of class C_Y, Eq. (2) can be rewritten to fulfill the requirement with the corrected coefficient R_C(x_p, y_p):

0 < R_C(x_p, y_p) + R_C(y_p, x_p) < 1, ∀p,  (3)

Fig. 8 Merge the hypercubes within each class.

Fig. 9 DRAM area.


where R_C(x_p, y_p) is the coefficient for the parameter p of class C_X with its closest neighbor of class C_Y.

To perform the correction, a weight W_C(w_{C1},...,w_{CN}) is assigned to each class. This weight is inversely proportional to the correlation with the standard deviation:

w_{Cp} = 1 / |r_{Cp}|,  (4)

where r_{Cp} is the correlation factor between the parameter p of class C and the standard deviation (0 < |r_{Cp}| < 1).

Given a vector X of class C_X and a vector Y of class C_Y, for each parameter p, the distance ratio d(x_p, y_p) between x_p and the centroid of x_p and y_p can be defined as

d(x_p, y_p) = w_{Yp} / (w_{Xp} + w_{Yp}),  (5)

Fig. 10 Logic area.

Fig. 11 Blank area.


⇒ d(x_p, y_p) = |r_{Xp}| / (|r_{Xp}| + |r_{Yp}|).  (6)

It is easy to show that

d(x_p, y_p) + d(y_p, x_p) = 1.  (7)

Requirement (3) can be fulfilled using Eq. (7) and R defined as in Eq. (2). The corrected coefficient R_C(x_p, y_p) is described as

R_C(x_p, y_p) = 2R·d(x_p, y_p),  (8)

R_C(x_p, y_p) = 2R·|r_{Xp}| / (|r_{Xp}| + |r_{Yp}|).  (9)

The coefficient R_C(x_p, y_p) provides a way to control the expansion/retraction of the hypercubes so that two hypercubes of different classes will not overlap. Any expansion in a given direction is followed by a proportional retraction, in the same direction, of the hypercubes corresponding to the neighboring points of another class. Since the corrections affect only the coefficient used to set the boundaries,
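The corrected coefficient can be computed directly from the two correlation factors. A small sketch (illustrative names; R = 0.4 arbitrarily chosen within (0, 1/2)):

```python
def corrected_R(r_x, r_y, R=0.4):
    """Corrected coefficient R_C(x_p, y_p) of Eq. (9).

    r_x, r_y: correlation factors (with the local standard deviation)
    of parameter p for the two neighboring classes. The strongly
    correlated class gets the larger share of the gap, expanding its
    hypercube; since d(x,y) + d(y,x) = 1 (Eq. (7)), the two corrected
    coefficients always sum to 2R < 1, fulfilling Eq. (3).
    """
    d = abs(r_x) / (abs(r_x) + abs(r_y))  # distance ratio, Eq. (6)
    return 2 * R * d
```

With |r_x| = 0.9 (the parameter follows the contrast) and |r_y| = 0.1, the first class gets R_C = 0.72 of the gap and the second only 0.08: an expansion matched by a proportional retraction.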

Fig. 12 Inspection layout.

Fig. 13 Example of the pixelwise segmentation on one image.


Table 1 Misclassification rate on the segmentation on one column and one row.

                        Column                                          Row
                        Training 1   Training 2   Training 1 and 2     Training 1   Training 2   Training 1 and 2
Without correction      2.45%        7.85%        2.29%                3.04%        2.97%        2.95%
With correction         2.32%        2.36%        2.49%                2.94%        2.69%        2.87%
Improvement             0.13%        5.49%        −0.20%               0.09%        0.29%        0.08%


the hypercubes will not be reduced to a point where they would exclude training points of the same class, or expanded to a point where they would include training points of another class.

Once all the hypercubes are created, they are merged inside each class. Two hypercubes are merged together if the newly created hypercube does not overlap with a hypercube of another class. This step reduces the hypercubes to a manageable number.

The full process is illustrated in Figs. 6, 7, and 8 for the two-dimensional, two-class case. First, the hypercubes are defined for both classes using Eq. (1) and Eq. (2), as shown in Fig. 6. Then, in the case where the parameter p1 of the class C1 has a strong correlation with the contrast but, at the same time, the parameter p1 of the class C2 has a weak correlation with the contrast, hypercubes of class C1 can be extended in the p1 direction, while hypercubes of class C2 can be retracted in the same direction (Fig. 7). Thus, the parameters of class C1 can afford more variation in the p1 direction. After this step, the hypercubes can be merged inside each class (Fig. 8).

Fig. 14 Segmentation on one row without correction.


3.2 Segmentation

During the classification process, each point is tested with the hypercubes to verify the membership relation. The hypercubes do not cluster the whole space; therefore, some points do not belong to any hypercube. In that case, an artificial point for each class, which corresponds to the mean of the training points of the same class, is used. The closest neighbor sets the class of the point.
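This decision rule can be sketched as follows; the (lo, hi, label) hypercube representation and the choice of the L-infinite norm for the fallback distance are assumptions for illustration:

```python
def classify(point, cubes, class_means):
    """Classify a feature vector: hypercube membership, then fallback.

    cubes: list of (lo, hi, label) axis-aligned hypercubes from training.
    class_means: label -> mean of the training points of that class
    (the 'artificial point' used when no hypercube contains the point).
    """
    for lo, hi, label in cubes:
        if all(l <= v <= h for v, l, h in zip(point, lo, hi)):
            return label
    # Outside every hypercube: the closest artificial point wins
    # (L-infinite distance assumed, for consistency with Eq. (1)).
    return min(class_means,
               key=lambda c: max(abs(a - b)
                                 for a, b in zip(point, class_means[c])))
```

The membership loop is the fast path; the fallback only runs for the minority of points that no stress-polytope claims.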

4 Results

The segmentation tests were performed on a memory wafer, where three different areas must be segmented:

• the dynamic random access memory (DRAM) area (Fig. 9), which is a fine regular texture that shows a lot of process variations

• the logic area (Fig. 10), which is a composite of coarse textures

• the blank area (Fig. 11), which does not contain any structure

Since the DRAM shows most of the variations, the test is

Fig. 15 Segmentation on one row with correction.


done by comparing the results obtained with the same training sample for the logic and the flat area, but different samples for the DRAM in locations where the contrast is different. In each case, a single sample on the DRAM area is used. Two samples of the logic area are required to include two different types of structures, and one sample of the blank area. Images for the segmentation come from a column of 170 images and a row of 170 images taken across three dies (Fig. 12) to include the maximum contrast variation. Each image is 472×472 pixels, and the segmentation is performed pixelwise (Fig. 13). The results for the

Fig. 16 Segmentation on one column without correction.

Fig. 17 Segmentation on one column with correction.


two training sets with and without correction are presented in Table 1. Training sets 1 and 2 are also combined to provide a comparison.

These results show that, without the hypercube correction and with only one training sample in the DRAM area, the misclassification rate can vary depending on the training sample. The segmentation on the column is really dependent on the selection of the training set, whereas the segmentation on the row is more invariant because the contrast variations are less important. In both cases, the best results are achieved when combining the two training sets.

The correction on the first or the second training set improves the results in both cases, and the segmentation is more invariant with the selection of the training samples. With this segmentation, very good results are achieved (less than 3% of misclassification) with a few training samples. The results with one training set with the correction are as good as using a combination of two training sets without correction. Figures 14 and 15 show the improvement brought about by the correction on the column images using the second training set. Figures 16 and 17 show the slight improvement brought about by the correction on the row images using the second training set.

There are still limitations to the correction technique, especially when the variations are large or when the interclass distance is small. Furthermore, the correlation measure is accurate on uniform fine textures, but less accurate on nonuniform texture, where the correlation value is averaged. We also notice that on the column, the correction used on the combination of the first and second training sets slightly increases the misclassification rate. This is caused by the large difference between the two samples on the DRAM area. For some parameters, the two samples are so different that they do not overlap. Thus, the correlation value is dramatically increased, and the corresponding hypercubes can be overstretched. Nonetheless, misclassification is below 3%, which is adequate for this inspection process.

5 Conclusion

In wafer inspection, the performance of the segmentation is critical, since the misclassification of an area can create a false detection or increase the overall noise level in the area, which would result in a higher threshold with the risk of missing a critical defect. It looks like a trivial problem, since we deal with regular textures, but it is not, since textures are subject to large variations that can be complex to correct. There are also limitations on the system flexibility to train the classifier, since we have to work with a huge data flow. We developed an original method to anticipate the variations with a limited number of training samples, thus increasing the training speed and the segmentation performance.

References

1. K. W. Tobin, "Inspection in semiconductor manufacturing," in Webster's Encyclopedia of Electrical and Electronic Engineering, Vol. 10, pp. 242–262, Wiley & Sons, New York (1999).

2. K. W. Tobin and L. Neiberg, "Metrology data management and information systems," in Handbook of Silicon Semiconductor Metrology, pp. 679–703, Marcel Dekker, New York (2001).

3. "The national technology roadmap for semiconductors: technology requirement," Semiconductor Industry Association (2001).

4. J. Harrigan and M. Stoller, "Automated wafer inspection in the manufacturing line," Solid State Technol. 34(10), 69–72 (1991).


5. P. Bourgeat, F. Meriaudeau, P. Gorria, and K. W. Tobin, "Content based segmentation of patterned wafer for automatic threshold determination," in Machine Vision Applications in Industrial Inspection XI, Proc. SPIE 5011, 183–189 (2003).

6. P. Bourgeat, F. Meriaudeau, K. W. Tobin, and P. Gorria, "Patterned wafer segmentation," in Quality Control by Artificial Vision VI, Proc. SPIE 5132, 36–44 (2003).

7. M. Unser, "Texture classification and segmentation using wavelet frames," IEEE Trans. Image Process. 4(11), 1549–1560 (1995).

8. M. J. Shensa, "The discrete wavelet transform: wedding the à trous and Mallat algorithms," IEEE Trans. Signal Process. 40(10), 2464–2482 (1992).

9. P. Dutilleux, "An implementation of the 'algorithme à trous' to compute the wavelet transform," in Wavelets: Time-Frequency Methods and Phase Space, pp. 298–304, Springer, Berlin (1989).

10. M. Feil and A. Uhl, "Real-time image analysis using MIMD parallel à trous wavelet algorithms," Real-Time Imag. 7(6), 483–493 (2001).

11. J. Miteran, P. Gorria, and M. Robert, "Classification géométrique par polytopes de contraintes. Performances et intégration," Trait. Signal 11(5), 393–408 (1994).

12. J. Miteran, "Performances et intégration d'un algorithme de classification géométrique par apprentissage. Applications en traitement d'image," PhD Thesis, University of Burgundy (1994).

13. J. Tian, "The mathematical theory and applications of biorthogonal Coifman wavelet systems," PhD Thesis, Rice University (1996).

Pierrick Bourgeat received his MS degree in 2001 from the University of Franche-Comté, France, and is currently pursuing a PhD degree in image processing in collaboration with the Le2i Laboratory at the University of Burgundy, France, and the Image Science and Machine Vision Group at the Oak Ridge National Laboratory, U.S. He is interested in image segmentation and industrial inspection.

Fabrice Meriaudeau received his engineering degree in material science, his MS degree in physics in 1994, and his PhD degree in image processing in 1997 from the University of Burgundy. From June 1997 to September 1998, he was a postdoctoral fellow at the Oak Ridge National Laboratory, working in the field of optical sensing and near-field optics. He is an associate professor and heads the research group (Le2i Laboratory, UMR CNRS 5158) at the Technical Institute of Le Creusot, University of Burgundy, and his research interests include image segmentation, pattern recognition, multispectral, and IR imaging. He has authored or coauthored more than 90 papers on optical sensing and image processing. He chairs various conferences dedicated to machine vision.

Kenneth W. Tobin is a corporate research fellow and group leader of the Image Science and Machine Vision Group at the Oak Ridge National Laboratory, Oak Ridge, Tennessee. The group performs applied computer vision research and development in industrial inspection and metrology, biomedical imaging, and national security. He performs research in nondestructive test and analysis, image processing, and image-based metrology for automation and process characterization. He has authored and co-authored over 120 publications, and he currently holds six U.S. patents, with five additional patents pending, in the areas of computer vision, photonics, radiography, and microscopy. Dr. Tobin is a Fellow of SPIE, the International Society for Optical Engineering, and a member of the Institute of Electrical and Electronics Engineers. He has a PhD in nuclear engineering from the University of Virginia, Charlottesville, Virginia, and an MS in nuclear engineering and a BS in physics from Virginia Tech, Blacksburg, Virginia.

Patrick Gorria received his engineering degree from the ENSAM, France, in 1980. From 1980 to 1982 he was in Mexico, and he received his PhD degree from the University of Burgundy, France, in 1984. Since 1993 he has been a professor at the Laboratory Le2i of Image Processing in the Institut Universitaire de Technologie (IUT) le Creusot, where he is engaged in research on and teaching of quality control by artificial vision. He has contributed to several areas of classification, sensors, and other systems and applications using artificial vision systems for quality control. He is currently a principal researcher at Le2i, where he conducts research and development of defect detection for quality control by artificial vision. Gorria holds several patents in the domain of image processing.
