Comparing and Improving Algorithms for Iris Recognition

Deborah Rankin1, Bryan Scotney1, Philip Morrow1, Rod McDowell2, Barbara Pierscionek2 1School of Computing and Information Engineering, 2School of Biomedical Sciences

University of Ulster Coleraine, United Kingdom

[email protected], {bw.scotney; pj.morrow; dr.mcdowell; b.pierscionek}@ulster.ac.uk

Abstract— The iris has been proposed as a reliable means of biometric identification. The importance of the iris as a unique identifier is predicated on the assumption that the iris is stable throughout a person’s life. This does not take into account the fact that the iris changes in response to a number of external factors including medication, disease, surgery and age and is part of a dynamic optical system that alters with light levels and focussing distance. What is required is a means of identifying the features in the iris which do not alter over time from those that change in response to external factors. Iris segmentation issues and the effects of pupil dilation in identification are examined in this study using existing iris recognition algorithms and a series of images captured from three subjects. A technique that enhances segmentation is presented and discussed.

I. INTRODUCTION

The requirement for a reliable biometric for purposes of human identification is increasing as a result of growth in national security and public safety measures that are being implemented. The iris is one means of biometric identification proposed. A biometric is any physical or behavioural characteristic that can be used to uniquely identify an individual. Biometric suitability is measured by the number of degrees-of-freedom, or independent dimensions of variation, that a feature contains. The iris has approximately 266 degrees-of-freedom: the largest among facial features [1].

The iris is deployed as a biometric in recognition systems across a number of international airports. The underlying algorithms in many of these systems are the result of pioneering work by John Daugman [1-3]. Alternative methods have been developed including work by Monro et al [4], Wildes [5], Boles and Boashash [6], Ma et al [7] and Masek [8]. This paper discusses these algorithms and investigates those of Daugman and Masek.

Algorithms developed to date are based on the assumption that the iris is immutable throughout a person’s life. Iris formation is completed during gestation, with pigmentation changes occurring in early life. Thereafter it is considered immutable, although clinical findings suggest otherwise [9-11]. The iris can show change in response to external factors, for example, surgery, medication, disease or age. These changes may render iris recognition a less reliable method of identification than first proposed.

This study investigates existing iris recognition algorithms and proposes a method to enhance segmentation in Daugman’s algorithm. Performance of this method for iris localisation is compared with the methods of Daugman [1] and Masek [8]. The effects of change or potential change in iris features on iris-to-iris matching are investigated using Masek’s complete iris recognition algorithm with the proposed enhanced segmentation technique incorporated. Development of an automated system to extract pertinent iris features from high resolution images is proposed. This will allow features to be analysed over time within iris images that contain a higher level of detail than those collected in studies to date. The features can then be classified as mutable or immutable.

This paper discusses existing iris recognition algorithms and their fundamental steps. Segmentation issues are examined and a method is proposed to enhance segmentation of the iris. The effects of pupil dilation on iris recognition are also investigated.

II. BACKGROUND

Iris recognition algorithms contain a series of fundamental steps, including image pre-processing, feature extraction and matching.

A. Image pre-processing

Initially eye images must be segmented to extract only the iris region by locating the inner (pupil) and outer (limbus) boundaries of the iris (Fig. 1). Occluding features must also be removed and the iris pattern normalised. Segmentation is important, with only accurately segmented images suitable for proceeding to the later stages of iris recognition. A generic segmentation technique has not yet been developed; however, specific methods have been proposed [9]. Existing methods function successfully on specific data sets but not universally across images taken under various conditions. This paper discusses segmentation and in particular examines the methods of Daugman [1] and Masek [8].

2009 13th Irish Machine Vision and Image Processing Conference. 978-0-7695-3796-2/09 © 2009 IEEE. DOI 10.1109/IMVIP.2009.25

Daugman [1] implements integro-differential operators to detect the limbic boundary followed by the pupil boundary. This method computes the integral of the smoothed radial image derivative along concentric circles. The operator performs an exhaustive search across the image with varying circle centres and radii to find the local maxima that correspond to the pupil and limbic boundaries.
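As an illustration, the core of the integro-differential search can be sketched in Python. This is a minimal sketch only: the exhaustive search over candidate centres is reduced to evaluating a single candidate centre, the occluded upper and lower arcs are not excluded, and the smoothing parameter is an assumed value, so this should not be read as Daugman's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circular_mean(img, x0, y0, r, n=64):
    """Mean intensity sampled along a circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, x0, y0, r_min, r_max, sigma=2.0):
    """Evaluate the operator at one candidate centre: return the radius that
    maximises the Gaussian-blurred radial derivative of the circular mean,
    together with the (absolute) operator response at that radius."""
    radii = np.arange(r_min, r_max)
    means = np.array([circular_mean(img, x0, y0, r) for r in radii])
    deriv = gaussian_filter1d(np.gradient(means), sigma)  # smoothed d/dr
    best = np.argmax(np.abs(deriv))
    return radii[best], np.abs(deriv[best])
```

In the full operator the same evaluation is repeated over a grid of candidate centres, and the global maximum of the response identifies the boundary circle.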

An alternative segmentation method, proposed by Wildes [5], implements an edge detection operator and the Hough transform. Masek’s algorithm [8] implements Canny edge detection and a circular Hough transform to segment the iris. This technique generates a gradient edge map of the eye image using the Canny operator. The circular Hough transform uses the edge map to detect circular objects within the image that could be the pupil or iris boundary depending on the stage of the search. The method searches for the iris boundary and then searches within the detected region for the pupil boundary. Further techniques have been developed employing the same approach but with slight variations [4, 7, 12-14]. A major disadvantage of such techniques is the requirement of parameters for hysteresis thresholding that must be modified for use across different image data sets.
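The circular Hough step at the heart of this family of methods can be sketched as follows. The sketch assumes a binary edge map is already available (the Canny stage, and Masek's specific parameter choices, are omitted) and is an illustration rather than Masek's code.

```python
import numpy as np

def circular_hough(edges, r_min, r_max):
    """Minimal circular Hough transform over a binary edge map: each edge
    pixel votes for all centres (cy, cx) that would place it on a circle of
    each candidate radius; the best-supported (cy, cx, r) is returned."""
    h, w = edges.shape
    radii = np.arange(r_min, r_max)
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        for ri, r in enumerate(radii):
            cy = (y - r * np.sin(theta)).astype(int)
            cx = (x - r * np.cos(theta)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            acc[cy[ok], cx[ok], ri] += 1
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]
```

In Masek's two-stage search this would be run once for the iris boundary and then again, restricted to the detected region, for the pupil boundary.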

In contrast Kennell et al [15] proposed a segmentation technique with simple binary thresholding and morphological transformations (erosion and dilation) to detect the pupil. The fourth order statistic of local image kurtosis was effective in finding the iris boundary. Mira and Mayer [16] also implement thresholding and morphological transformations to detect the iris boundaries.
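The fourth-order statistic used by Kennell et al can be sketched as a windowed kurtosis map. The window size is illustrative, and the fourth central moment is approximated with a per-pixel local mean, so this is only in the spirit of their method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_kurtosis(img, size=9):
    """Local kurtosis: fourth central moment over variance squared,
    estimated in a size x size window around each pixel."""
    m1 = uniform_filter(img, size)                # local mean
    m2 = uniform_filter(img ** 2, size)           # local second raw moment
    m4 = uniform_filter((img - m1) ** 4, size)    # approx. fourth central moment
    var = np.maximum(m2 - m1 ** 2, 1e-12)         # local variance (guarded)
    return m4 / var ** 2
```

Regions where the intensity statistics change, such as the iris/sclera transition, stand out in this map.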

Following inner and outer iris boundary detection, segmentation algorithms detect and remove occluding eyelids, eyelashes and specular reflections present in iris images [2, 8, 14, 17].

Iris images are then normalised to ensure the iris region has fixed dimensions to allow accurate comparisons. The rubber sheet model employed by Daugman [1] remaps each point in the iris region to a pair of dimensionless real coordinates. A similar technique, used by Boles and Boashash [6], is deployed at matching and retains only structures visible in both images. Image registration [5] can be deployed at matching where the image to be identified is transformed into spatial alignment with the database image with which it is to be compared.
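The rubber sheet remapping can be sketched as below. For brevity the sketch assumes the pupil and limbic circles are concentric enough to interpolate linearly between them and uses nearest-neighbour sampling; the output resolution values are assumptions, not the values used in any of the cited systems.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, radial_res=20, angular_res=240):
    """Rubber-sheet style normalisation: remap the annular iris region to a
    fixed (radial_res x angular_res) rectangle of dimensionless (rho, theta)
    samples. `pupil` and `iris` are (x, y, r) circles."""
    px, py, pr = pupil
    ix, iy, ir = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    out = np.zeros((radial_res, angular_res))
    for i, rho in enumerate(np.linspace(0, 1, radial_res)):
        # linearly interpolate between pupil and limbic boundary points
        x = (1 - rho) * (px + pr * np.cos(theta)) + rho * (ix + ir * np.cos(theta))
        y = (1 - rho) * (py + pr * np.sin(theta)) + rho * (iy + ir * np.sin(theta))
        out[i] = img[np.clip(y.astype(int), 0, img.shape[0] - 1),
                     np.clip(x.astype(int), 0, img.shape[1] - 1)]
    return out
```

Because the output size is fixed, codes produced from irides with different pupil sizes become directly comparable, which is the point of the normalisation step.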

B. Feature extraction

Information on the most discriminating features of the iris must be extracted to allow accurate identification of a subject during matching. Existing techniques aim to represent the maximum number of features with minimal computational complexity.

Daugman [1] employs wavelet based analysis of iris features using a quadrature pair of 2D Gabor filters. Phase information is extracted to provide the most significant iris feature information that is then used to encode the iris information in a binary bit pattern. Amplitude information is discarded so that the encoded iris is not affected by changing levels of illumination. Masek implements a similar technique but instead uses log Gabor filters [8]. Gabor filters are deficient in encoding natural images as they over-represent low frequency components and under-represent high frequency components [18]. This is overcome by alternatively implementing log Gabor filters that have the advantage of remaining unaffected by background brightness when extracting iris feature phase information. In contrast a technique has been devised that calculates the zero-crossings of the wavelet to extract iris feature information [6]. Zero-crossings represent the most significant iris features that are used for encoding. A variation of this, developed by Monro et al [4], extracts features by calculating zero-crossings obtained by a 1D Discrete Cosine Transform. The DCT is applied to image patch vectors and the difference between the DCT coefficients of adjacent patch vectors is calculated. The zero-crossings are obtained to create a binary representation of the iris.
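A phase-quantising encoder of this kind can be sketched as follows: each row of the normalised pattern is filtered with a 1D log-Gabor filter in the frequency domain, and the signs of the real and imaginary parts of the response give two phase bits per sample. The centre frequency and bandwidth values are illustrative assumptions, not Masek's parameters.

```python
import numpy as np

def log_gabor_encode(pattern, f0=0.1, sigma_ratio=0.5):
    """Encode a normalised iris pattern into a binary phase code using a
    1D log-Gabor filter along each row (the angular direction)."""
    n = pattern.shape[1]
    f = np.fft.fftfreq(n)
    lg = np.zeros(n)
    pos = f > 0
    # log-Gabor transfer function: a Gaussian on a log frequency axis;
    # note the zero DC response, giving invariance to background brightness
    lg[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    code = []
    for row in pattern:
        resp = np.fft.ifft(np.fft.fft(row) * lg)  # complex filter response
        code.append(np.real(resp) > 0)            # in-phase bit
        code.append(np.imag(resp) > 0)            # quadrature bit
    return np.array(code)
```

Discarding the response magnitude and keeping only the phase signs is what makes the code robust to changing illumination levels, as discussed above.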

C. Matching

The final stage in iris recognition is matching, in which two iris images are compared and it is determined whether they belong to the same person. Daugman devised a test of statistical independence between two iris codes [2] and this has been implemented by many other authors including Masek [8] and Monro et al [4]. An extract from a binary iris code is shown in Fig. 2, with all bits set to 0 or 1 and illustrated as black and white respectively. The complete iris code comprises 20x480 bits.

Figure 1. Segmented iris


The Hamming Distance (HD) between the two irides to be compared is calculated. The HD measures the proportion of bits that differ between two binary bit patterns. A decision criterion is determined from the distribution of HDs between irides that are the same and the distribution of HDs between irides that are different; the overlap in these distributions determines the decision criterion. If the calculated HD between two images falls below the decision criterion, the irides are deemed to be from the same person; if the calculated HD is higher, the irides are deemed to be from different people. Alternatively, normalised correlation [5] has been used to capture similarity between corresponding points in two irides to be compared. In contrast, Zhu et al [19] devised a method using Weighted Euclidean Distance to match irides. Like the HD, this gives a measure of similarity of the two iris templates.
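The HD comparison can be sketched in a few lines. The optional occlusion masks are an assumption reflecting the common practice of counting only bits valid in both codes; the 0.4 threshold is the decision criterion adopted later in this study (Section IV).

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions valid in both masks."""
    if mask_a is None:
        mask_a = np.ones_like(code_a, dtype=bool)
    if mask_b is None:
        mask_b = np.ones_like(code_b, dtype=bool)
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

def matches(code_a, code_b, threshold=0.4):
    """Decision rule: declare a match if the HD falls below the criterion."""
    return hamming_distance(code_a, code_b) < threshold
```

An HD of 0 means the codes agree everywhere, while two statistically independent codes are expected to disagree on about half their bits (HD near 0.5).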

III. METHODOLOGY

A. Experimental design

Iris images have been captured from three Caucasian adults aged between 23 and 64 years. 19 images from Subject A were captured over 13 weeks, 57 images were captured from Subject B over 24 weeks and 10 images from Subject C were obtained over 6 weeks. In this longitudinal study images were captured approximately 1-3 times per week for each subject. On two occasions Subject B had their pupil dilated using Tropicamide 0.5% drops. Tropicamide is a short-acting mydriatic (dilates the pupil) and cycloplegic (reduces accommodation). On one occasion Subject B had their pupil constricted by consensual lighting.

B. Image acquisition

The images were captured using a Takagi clinical biomicroscopic slit lamp, model number SM-70. Images were captured at 16x magnification to obtain the complete iris region at a high resolution. Image size was 571x767 pixels at 96x96 dpi. Other image databases are available [22, 23], but have images of lower resolution and so contain less detailed iris feature information than those images collected in this study. The images in these databases also contain a significant amount of redundant facial detail surrounding the eye. The images used in this study contain only the eye. Fig. 3 shows sample images.

Consent for the study was given by the Biomedical Sciences Ethics Filter Committee at the University of Ulster.


The biomicroscopic slit lamp was attached to a desktop computer; specialist software called Anterior Retinal Capture (ARC) was used to acquire, view and store images. A steady primary gaze position was maintained by having the subject focus on a fixed target positioned on the slit lamp to ensure that images at the same gaze position were captured from all subjects on each occasion. Room lights were turned off to minimise spurious illumination and reflections. Slit lamp illumination was set to its lowest level so as to avoid discomfort to the subject and full constriction of their pupil. The slit beam angle was set at 45° and the beam aperture was set at a maximum.

IV. EXPERIMENTAL RESULTS

The robustness of iris segmentation algorithms is examined and an enhanced technique proposed. The effect of pupil dilation on identification is also investigated.

A. Segmentation

The effect of pupil dilation on iris recognition is investigated by employing existing iris recognition techniques. Although the segmentation techniques within the applied recognition algorithms report excellent accuracy on images from the databases used in their development [22, 23], they produced inadequate results when applied to the images used in this study. It was therefore necessary to develop an enhancement to ensure improved segmentation rates and enable experimentation on the effect of pupil dilation. Segmentation accuracy is defined by the difference in pixels between the segmented area of the image and the ground truth area of the iris boundaries. Clearly this difference could be quantified and an appropriate threshold identified; however, in this study the threshold was determined by visual inspection.

Masek’s segmentation technique [8] locates the pupil boundary in 86.4% of images in this study, but the iris boundary is detected in only 33% of images. This indicates that Masek’s algorithm is successful for pupil boundary detection in the images from this study but does not detect the iris boundary sufficiently. Whilst this could be considered a suitable algorithm that requires a few minor modifications to improve iris detection, Masek’s algorithm faces a major disadvantage in that it requires parameters for use in hysteresis thresholding. Daugman’s technique was also investigated as it has the advantage of not requiring image-dependent parameters.

Figure 2. Extract from encoded iris

Figure 3. Sample images: (a) subject A; (b) subject B.

Of the 88 images used in experimentation, both pupil and iris boundary were successfully detected in just 2.3% of the images using Daugman’s technique. The inner boundary was detected in 4.5% and the outer boundary was detected in 15.9% of the images. Results, as illustrated in Table 1, indicate that this method has poor performance on images from the iris database in this study and so cannot function universally across data sets. Daugman’s technique also faces difficulty when attempting to detect the iris boundary if contrast between iris and sclera is low. This occurs more frequently in blue and green eyes and under certain lighting conditions. Previous results have confirmed this [20] as Daugman’s technique performs poorly on images from the CASIA iris database which have poor contrast between iris and sclera [22].

Although Masek’s algorithm has shown superior segmentation results, we propose an enhancement to Daugman’s technique as it has the advantage of retaining full automation throughout the recognition process: it does not require pre-defined parameters for each image dataset. The enhancement involves obtaining a coarse estimate of pupil location before applying the integro-differential operator. The pupil is a prominent feature in an eye image and so could be considered the appropriate choice of primary boundary to detect.

Our proposed technique initially thresholds the greyscale image, with a suitable threshold obtained automatically [21]. The image complement is obtained and the image is transformed morphologically using image opening and closing to smooth the image and remove outlying points. At this point the pupil object is differentiated from all other image objects. The algorithm analyses the image objects and determines the object that is closest to the image centre and which does not touch the edges of the image. This object is then assumed to be the pupil. The centre point and radius of the object provide an estimate of pupil location, which is passed to Daugman’s integro-differential operator; the operator determines the exact location of the pupil by searching within the neighbourhood of the estimated pupil location, and then searches within a 10x10 neighbourhood of the detected pupil centre to detect the iris centre and radius. In contrast, Daugman’s original technique performs a complete image search to initially detect the iris boundary, and using these values then searches for the pupil. Fig. 4 shows an image (a) before segmentation, (b) after proposed thresholding and morphology, and (c) after segmentation. Note that in Fig. 4(b) the outer objects represent the dark outer regions of the image and can be disregarded.
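The coarse pupil-localisation steps described above can be sketched as follows. This is an illustrative sketch only: the automatic threshold selection of [21] is replaced here by a simple mean-intensity heuristic (an assumption), and the morphology parameters are not those of the study.

```python
import numpy as np
from scipy import ndimage

def estimate_pupil(img, thresh=None):
    """Coarse pupil estimate: threshold, morphological opening and closing,
    then pick the blob nearest the image centre that does not touch the
    image border. Returns (x, y, approximate radius) or None."""
    if thresh is None:
        thresh = img.mean() * 0.5          # assumed heuristic, not the method of [21]
    binary = img < thresh                  # pupil is dark -> complemented mask
    binary = ndimage.binary_opening(binary, iterations=2)  # remove outlying points
    binary = ndimage.binary_closing(binary, iterations=2)  # smooth the blob
    labels, n = ndimage.label(binary)
    h, w = img.shape
    best, best_d = None, np.inf
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.min() == 0 or xs.min() == 0 or ys.max() == h - 1 or xs.max() == w - 1:
            continue                       # discard objects touching the image edge
        cy, cx = ys.mean(), xs.mean()
        d = np.hypot(cy - h / 2, cx - w / 2)
        if d < best_d:                     # keep the blob nearest the centre
            best_d, best = d, (cx, cy, np.sqrt(ys.size / np.pi))
    return best
```

The returned centre and radius would then seed the integro-differential search, replacing its exhaustive scan over candidate centres.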


Results show the proposed method accurately detected the pupil in 95.5% of images, an increase of 91 percentage points over Daugman’s method. A major contributor to the initial poor performance of Daugman’s technique is that if the iris boundary is not accurately located initially, there are difficulties with pupil detection.

In summary, results show that Daugman’s method has the worst performance in detecting both pupil and iris boundaries for the images used in this study. Masek’s technique is efficient in pupil detection but requires improvement for iris boundary detection. A method is proposed to improve upon Daugman’s technique by providing it with a better starting point for segmentation by coarsely estimating pupil location. As Table 1 illustrates, the proposed method significantly improves pupil detection, and in addition iris boundary detection is also improved. Whilst the proposed technique enhances segmentation in the images from this study, in the case of CASIA [22] and Bath [23] images, masking of the outer portion of the image is required to avoid segmentation problems that are caused by the presence of significant inclusion of the facial area surrounding the eye in these images.

TABLE I. SUCCESS RATES FOR PUPIL AND IRIS BOUNDARY DETECTION

            Pupil          Iris           Pupil and Iris
Method      No.    %       No.    %       No.    %
Masek       76     86.4    29     33.0    29     33.0
Daugman     4      4.5     14     15.9    2      2.3
Proposed    84     95.5    56     63.6    55     62.5

Figure 4. Segmentation: (a) before segmentation; (b) thresholded image; and (c) segmented image.


B. Pupil dilation

The pupil controls the amount of light that enters the eye. Pupil dilation and constriction are mediated by sympathetic and parasympathetic nerves in the autonomic nervous system (ANS). Pupil size can also be altered by medication, which acts on the nerve endings, overriding the natural ANS signals. Dilation of the pupil can be caused by commonly prescribed medications such as anti-histamines, antidepressants and psychosomatics, for example, Benadryl, Prozac and Morphine respectively, as well as narcotics, for example, marijuana and heroin. To determine the effect pupil dilation has on iris recognition algorithms, controlled experiments can be carried out, for example, by administering Tropicamide 0.5% drops to dilate a subject’s pupil. When the pupil is dilated a large portion of the iris is no longer visible, and so sufficient features may not remain for an iris recognition system to identify a subject.

This study implements Masek’s iris recognition algorithm [8], with the proposed segmentation enhancement incorporated, to determine the effect of pupil dilation on iris identification. Experimentation was carried out on a subsample of 29 accurately segmented images, 5 of Subject A and 24 of Subject B. Pupil dilation occurred in 3 of Subject B’s images. All pair-wise comparisons were carried out for these images by calculating the HD between each pair of images; 435 comparisons were completed. To determine the matching decision criterion the HD distributions of intra-class and inter-class comparisons were obtained, with an HD of 0.4 chosen as the decision criterion for iris matching (Fig. 5).

Images featuring pupil dilation (Fig. 6(a)) were then compared with images from the same person but without dilation (Fig. 6(b)). This consisted of 63 comparisons, 21 for each image.

[Figure 5 appears here: frequencies of intra-class and inter-class comparisons plotted against Hamming distance (0 to 0.96).]


Of the three dilated images, the first is slightly dilated, the second moderately dilated and the third fully dilated. Results have shown that the first image was identified accurately in all cases, as few features were lost. The second image was identified in all but two cases, although a further 9 comparisons yielded an HD above 0.35, which is near the decision criterion. The third image, with a fully dilated pupil, identified the subject in just one comparison; the remaining 20 failed to identify the subject. These results indicate that iris recognition systems may not cope sufficiently well with pupil dilation, which could result in a false reject within the system.

V. CONCLUSIONS

This work has examined issues associated with iris recognition algorithms. High resolution images were collected to study these techniques in detail.

Segmentation has proven to be a problem in the development of such techniques. An enhancement of Daugman’s technique was presented that implements thresholding and morphological operators to obtain an initial estimate of pupil location. This estimate was then passed as a starting point to Daugman’s operator, and the order in which boundaries are detected was changed to search for the pupil boundary first. The proposed algorithm produced a 60.1% improvement in segmentation as well as reducing computational complexity.

The effect of pupil dilation on iris recognition algorithms was investigated and found to produce false rejects at matching. In addition to dilation, many other factors may cause change to the visible iris pattern. These include subjects who have undergone surgery, for example, cataract or laser, subjects who had/have a disease, for example, glaucoma, and subjects who take medication that may cause various changes to the iris. The effects of pupil dilation and cataract surgery have been investigated by Rakshit et al [10] and Roizenblatt [11] on a small scale. The effect these have on iris change on a larger scale will be investigated in the future.

Images from more subjects will be collected on a regular basis. Medical details will be collected from subjects in order to determine physiological or pathological causes of change. Further development of the presented algorithm to extract pertinent iris features from high resolution images is proposed.

Figure 5. Intra-class and inter-class distribution

Figure 6. Pupil dilation: (a) with dilation; (b) without dilation.

REFERENCES

[1] J. Daugman, "How iris recognition works," IEEE Trans. on Circuits and Systems for Video Technology, vol. 14, pp. 21-30, 2004.

[2] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1148-1161, 1993.

[3] J. Daugman, "New Methods in Iris Recognition," IEEE Trans. on Systems, Man, and Cybernetics, Part B, vol. 37, pp. 1167-1175, 2007.

[4] D. M. Monro, S. Rakshit and Dexin Zhang, "DCT-Based Iris Recognition," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, pp. 586-595, 2007.

[5] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proc IEEE, vol. 85, pp. 1348-1363, 1997.

[6] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. on Signal Processing, vol. 46, pp. 1185-1188, 1998.

[7] L. Ma, T. Tan, Y. Wang and D. Zhang, "Personal identification based on iris texture analysis," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1519-1533, 2003.

[8] L. Masek. (2003). "Recognition of human iris patterns for biometric identification," [Online]. Available: http://www.csse.uwa.edu.au/~pk/studentprojects/libor/LiborMasekThesis.pdf

[9] B. Pierscionek, S. Crawford and B. Scotney, "Iris Recognition and Ocular Biometrics - The Salient Features," International Machine Vision and Image Processing Conference, 2008, pp. 170-175, 2008.

[10] S. Rakshit and D. M. Monro, "Medical Conditions: Effect on Iris Recognition," IEEE 9th Workshop on Multimedia Signal Processing 2007, pp. 357-360, 2007.

[11] R. Roizenblatt, P. Schor, F. Dante, J. Roizenblatt and R. Belfort, "Iris recognition as a biometric method after cataract surgery," BioMedical Engineering OnLine, vol. 3, pp. 2, 2004.

[12] J. Cui, Y. Wang, T. Tan, L. Ma, and Z. Sun, "A fast and robust iris localization method based on texture segmentation," in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, pp. 401-408, 2004.

[13] C. Tisse, L. Martin, L. Torres and M. Robert, "Person identification technique using human iris recognition," in Proceedings of Vision Interface, pp. 294-299, 2002.

[14] W. K. Kong and D. Zhang, "Accurate iris segmentation based on novel reflection and eyelash detection model," Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 263-266, 2001.

[15] L. R. Kennell, R. W. Ives and R. M. Gaunt, "Binary Morphology and Local Statistics Applied to Iris Segmentation for Recognition," IEEE International Conference on Image Processing, pp. 293-296, 2006.

[16] J. De Mira Jr. and J. Mayer, "Image feature extraction for application of biometric identification of iris - a morphological approach," XVI Brazilian Symposium on Computer Graphics and Image Processing, 2003, pp. 391-398, 2003.

[17] D. Zhang, D. M. Monro and S. Rakshit, "Eyelash Removal Method for Human Iris Recognition," IEEE International Conference on Image Processing, pp. 285-288, 2006.

[18] D. Field, "Relations between the statistics of natural images and the response properties of cortical cells," J. Opt. Soc. Am., vol. 4, pp. 2379-2394, December 1987.

[19] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," Proceedings 15th International Conference on Pattern Recognition, vol. 2, pp. 801-804 vol.2, 2000.

[20] H. Proença and L. Alexandre, "UBIRIS: A Noisy Iris Image Database," 13th International Conference on Image Analysis and Processing, pp. 970-977, 2005.

[21] A. A. Low, “Automatic selection of grey level for splitting” in Introductory Computer Vision and Image Processing. New York: McGraw-Hill, 1991, pp. 57-58.

[22] Chinese Academy of Sciences Institute of Automation (CASIA) iris database. (2005). [Online] Available at: http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp [Accessed 27 November 2008].

[23] D.M. Monro, S. Rakshit, and D. Zhang. (2007). Smart Sensors Ltd. Iris Image Database [Online] Available at: http://www.irisbase.com/download.htm [Accessed 18 November 2008].
