Improving Iris Recognition Performance using Segmentation, Quality Enhancement, Match Score Fusion and Indexing

Mayank Vatsa, Richa Singh, and Afzel Noore

(M. Vatsa, R. Singh, and A. Noore are with the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA; {mayankv, richas, noore}@csee.wvu.edu.)

Abstract—This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a non-ideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied to the segmented iris image to produce multiple enhanced versions of the iris image. An SVM-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good quality regions to create a single high quality iris image. Two distinct features are extracted from the high quality iris image: the global textural feature is extracted using the 1D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, while an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

Index Terms—Iris Recognition, Mumford-Shah Curve Evolution, Quality Enhancement, Information Fusion, Support Vector Machine, Iris Indexing.

I. INTRODUCTION

Current iris recognition systems claim to perform with very high accuracy. However, these iris images are captured in a controlled environment to ensure high quality. Daugman proposed an iris recognition system representing the iris as a mathematical function [1]-[4]. Wildes [5], Boles [6], and several other researchers proposed different recognition algorithms [7]-[32].

With a sophisticated iris capture setup, users are required to look into the camera from a fixed distance and the image is captured. Iris images captured in an uncontrolled environment are non-ideal, with varying image quality. If the eyes are not opened properly, certain regions of the iris cannot be captured due to occlusion, which further affects the segmentation and consequently the recognition performance. Images may also suffer from motion blur, camera diffusion, presence of eyelids and eyelashes, head rotation, gaze direction, camera angle, reflections, contrast, luminosity, and problems due to contraction and dilation. Fig. 1 from the UBIRIS database [26], [27] shows images with some of the above mentioned problems. These artifacts in iris images increase the false rejection rate (FRR), thus decreasing the performance of the recognition system. Experimental results from the Iris Challenge Evaluation (ICE) 2005 and ICE 2006 [30], [31] also show that most of the recognition algorithms have a high FRR. Table I compares existing iris recognition algorithms with respect to image quality, segmentation, enhancement, feature extraction, and matching techniques. A detailed literature survey of iris recognition algorithms can be found in [28].

This research effort focuses on reducing the false rejection rate through accurate iris detection, quality enhancement, fusion of textural and topological iris features, and iris indexing. For iris detection, some researchers assume that the iris is circular or elliptical. In non-ideal images, such as off-angle, motion-blurred, and noisy images, this assumption is not valid because the iris appears non-circular and non-elliptical.

In this research, we propose a two-level hierarchical iris segmentation algorithm to accurately and efficiently detect iris boundaries from non-ideal iris images. The first level of the segmentation algorithm uses intensity thresholding to detect an approximate elliptical boundary, and the second level applies the Mumford-Shah functional to obtain the accurate iris boundary. We next describe a novel Support Vector Machine (SVM) based iris quality enhancement algorithm [29]. The SVM quality enhancement algorithm identifies good quality regions from different globally enhanced iris images and combines them to generate a single high quality, feature-rich iris image. Textural and topological features [17], [18] are then extracted from the quality enhanced image for matching. Most iris recognition algorithms extract features that provide only global or only local information about the iris pattern. In this paper, the feature extraction algorithm extracts both global texture features and local topological features. The texture features are extracted using the 1D log polar Gabor transform, which is invariant to rotation and translation, and the topological features are extracted using the Euler number, which is invariant under translation, rotation, scaling, and polar transformation. State-of-the-art iris recognition algorithms have a very low false acceptance rate, but reducing the number of false rejections is still a major challenge. In the multibiometric literature [33], [34], [35], [36], it has been suggested that fusing information extracted from different classifiers provides better performance than a single classifier. In this paper, we propose using 2ν-SVM to develop a fusion algorithm that combines the match scores obtained by matching texture and topological features for improved performance. Further,
TABLE I (fragment): Comparison of existing iris recognition algorithms [remaining rows illegible]
Poursaberi and Araabi [25]: morphological operators and thresholds; Wiener 2D filter; Daubechies 2 wavelet; Hamming distance and harmonic mean.
Daugman [32]: active contours and generalized coordinates; gaze deviation correction; Iris Code; second rank in ICE 2006 and low time complexity.
recognition can be divided into verification and identification. The performance of both verification and identification suffers due to non-ideal acquisition issues. However, identification is more difficult than verification because of the problems of high penetration rate and false accept rate. To improve the identification performance, we propose an iris indexing algorithm. In the proposed indexing algorithm, the Euler code is first used to filter possible matches. This subset is further processed using the texture features and 2ν-SVM fusion for accurate identification.
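The two-stage identification scheme described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes Euler codes are stored as short integer vectors and texture features as binary codes, and the function names, the L1 distance, the placeholder Hamming-style score, and the threshold `tau` are all assumptions.

```python
import numpy as np

def index_filter(probe_euler, gallery_euler, tau):
    """First stage: keep gallery entries whose Euler code lies within
    distance tau of the probe's. Euler codes are short vectors, so this
    scan is far cheaper than full texture matching."""
    dists = np.abs(gallery_euler - probe_euler).sum(axis=1)  # L1 distance
    return np.flatnonzero(dists <= tau)

def identify(probe_euler, probe_texture, gallery_euler, gallery_texture, tau):
    """Second stage: run the expensive matcher (here a placeholder
    Hamming-style score) only on the filtered subset."""
    candidates = index_filter(probe_euler, gallery_euler, tau)
    if candidates.size == 0:
        return None
    scores = [(i, float(np.mean(probe_texture != gallery_texture[i])))
              for i in candidates]
    return min(scores, key=lambda s: s[1])[0]  # lowest dissimilarity wins
```

In the paper the second stage additionally fuses texture and topological scores with 2ν-SVM; the sketch keeps only the filtering structure.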
Section II presents the proposed non-ideal iris segmentation algorithm and Section III describes the novel quality enhancement algorithm. Section IV briefly explains the extraction of global features using the 1D log polar Gabor transform and the extraction of local features using the Euler number. Section V describes the intelligent match score fusion algorithm and Section VI presents the indexing algorithm to reduce the average identification time. The details of the iris databases and existing algorithms used for validation of the proposed algorithms are presented in Section VII. Sections VIII and IX summarize the verification and identification performance of the proposed algorithms compared with existing recognition and fusion algorithms.
II. NON-IDEAL IRIS SEGMENTATION ALGORITHM
Processing non-ideal iris images is a challenging task because the iris and pupil are non-circular and their shape varies depending on how the image is captured. The first step in iris segmentation is the detection of the pupil and iris boundaries from the input eye image and unwrapping the extracted iris into a rectangular form. Researchers have proposed different algorithms for iris detection. Daugman [1] applied an integro-differential operator to detect the boundaries of the iris and pupil. The segmented iris is then converted into rectangular form by applying a polar transformation. Wildes [5] used the first derivative of image intensity to find the location of edges corresponding to the iris boundaries. This system explicitly models the upper and lower eyelids with parabolic arcs, whereas Daugman excludes the upper and lower portions of the image. Boles and Boashash [6] localized and normalized the iris using edge detection and other computer vision algorithms. Ma et al. [12], [13] used the Hough transform to detect the iris and pupil boundaries.

Normally, the pupil is dark and the iris is lighter with varying pigmentation. In certain non-ideal conditions, the iris can be dark and the pupil can appear illuminated. For example, because of specular reflections from the cornea or co-axial illumination directly into the eye, light is reflected from the retina back through the pupil, which makes the pupil appear bright. Also, the boundary of a non-ideal iris image is irregular and cannot be considered exactly circular or elliptical. For such non-ideal and irregular iris images, researchers have recently proposed segmentation algorithms that combine conventional intensity techniques with active contours for pupil and iris boundary detection [32], [37]-[39]. These algorithms use intensity-based techniques for center and pupil boundary detection. The pupil boundary is used to initialize the active contour, which evolves to find the outer boundary of the iris. This evolution from the pupil to the outer iris boundary is computationally expensive. We therefore propose a two-stage iris segmentation algorithm in which we first estimate the inner and outer boundaries of the iris using an elliptical model. In the second stage, we apply the modified Mumford-Shah functional [40] in a narrow band over the estimated boundaries to compute the exact inner and outer boundaries of the iris.
Fig. 1. Iris images representing the challenges of iris recognition: (a) iris texture occluded by eyelids and eyelashes, (b) iris images of an individual with different gaze directions, (c) iris images of an individual showing the effects of contraction and dilation, (d) iris images of the same individual at different instances: the first image is of good quality and the second has motion blurriness and limited information present, (e) images of an individual showing the effect of the natural luminosity factor [26].

To identify the approximate boundary of the pupil in non-ideal eye images, an elliptical region with major axis a = 1, minor axis b = 1, and center (x, y) is selected as the center of the eye, and the intensity values are computed for a fixed number of points on the circumference. The parameters of the ellipse (a, b, x, y, θ) are iteratively varied with a step size of two pixels to increase the size of the ellipse, and each time a fixed number of points (in the experiments, 40 points) are randomly chosen on the circumference to calculate the total intensity value. This process is repeated to find the boundary with maximum variation in intensity and the center of the pupil. The approximate outer boundary of the iris is detected in a similar manner. The parameters for the outer boundary a1, b1, x1, y1, and θ1 are varied by setting the initial parameters to the pupil boundary parameters. A fixed number of points (in the experiments, 120 points) are chosen on the circumference and the sum of the intensity values is computed. Values corresponding to the maximum intensity change give the outer boundary of the iris, and the center of this ellipse gives the center of the iris. This method thus provides approximate iris and pupil boundaries, the corresponding centers, and the major and minor axes. Some researchers assume the center of the pupil to be the center of the iris and compute the outer boundary accordingly. While this simplifies the modeling, the assumption is not valid for non-ideal irises. Computing the outer boundary using the proposed algorithm provides accurate segmentation even when the pupil and iris are not concentric. Using these approximate inner and outer boundaries, we now perform curve evolution with the modified Mumford-Shah functional [40], [41] for iris segmentation.
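The iterative boundary search above can be sketched as follows. This is a simplified rendering, not the authors' code: it grows circles of equal axes at a fixed center (the paper varies a, b, center, and θ independently), assumes a grayscale NumPy image, and all function names are illustrative.

```python
import numpy as np

def boundary_intensity(image, x, y, a, b, theta, n_points=40):
    """Sum of intensities at n_points randomly sampled on the ellipse
    circumference (40 points, as in the paper's pupil search)."""
    t = np.random.uniform(0.0, 2 * np.pi, n_points)
    xs = x + a * np.cos(t) * np.cos(theta) - b * np.sin(t) * np.sin(theta)
    ys = y + a * np.cos(t) * np.sin(theta) + b * np.sin(t) * np.cos(theta)
    xs = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
    return float(image[ys, xs].sum())

def approximate_pupil(image, x, y, max_axis, step=2):
    """Grow the axis length in steps of `step` pixels and return the
    length at which the summed boundary intensity changes the most,
    i.e. the approximate pupil boundary."""
    sums = [boundary_intensity(image, x, y, a, a, 0.0)
            for a in range(1, max_axis, step)]
    diffs = np.abs(np.diff(sums))
    best = int(np.argmax(diffs))
    return 1 + step * (best + 1)  # axis length just past the largest jump
```

The outer iris boundary would be found the same way, starting from the pupil parameters and sampling 120 points.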
In the proposed curve evolution method for iris segmentation, the model begins with the following energy functional:

Energy(c) = \alpha \int_{\Omega} \left\| \frac{\partial C}{\partial c} \right\| \phi \, dc + \beta \iint_{in(C)} |I(x, y) - c_1|^2 \, dx \, dy + \lambda \iint_{out(C)} |I(x, y) - c_2|^2 \, dx \, dy \qquad (1)

where C is the evolution curve such that C = {(x, y) : ψ(x, y) = 0}, c is the curve parameter, φ is the weighting function or stopping term, Ω represents the image domain, I(x, y) is the original iris image, c_1 and c_2 are the average values of the pixels inside and outside C respectively, and α, β, and λ are positive constants such that α < β ≤ λ. Parameterizing Equation 1 and deducing the associated Euler-Lagrange equation leads to the following active contour model:

\psi_t = \alpha \phi (\nu + \varepsilon \kappa) |\nabla \psi| + \nabla \phi \cdot \nabla \psi + \beta \delta_\psi (I - c_1)^2 + \lambda \delta_\psi (I - c_2)^2 \qquad (2)

where ν is the advection term, εκ is the curvature-based smoothing term, ∇ is the gradient operator, and δ(x) = 0.5/(π(x^2 + 0.25)) is the regularized delta function (applied as δ_ψ = δ(ψ)). The stopping term φ is defined as

\phi = \frac{1}{1 + (|\nabla I|)^2} \qquad (3)
The active contour ψ is initialized to the approximate pupil boundary, and the exact pupil boundary is computed by evolving the contour in a narrow band [42] of ±5 pixels. Similarly, for computing the exact outer iris boundary, the approximate iris boundary is used as the initial contour ψ and the curve is evolved in a narrow band [42] of ±10 pixels. Using the stopping term φ, the curve evolution stops at the exact outer iris boundary. Since we use the approximate iris boundaries as the initial ψ, the complexity of the curve evolution is reduced, making it suitable for real-time applications. Fig. 2 shows the pupil and iris boundaries extracted using the proposed non-ideal iris segmentation algorithm.
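For concreteness, one explicit narrow-band update following the structure of Equation 2 might look like the sketch below. This is an illustrative discretization under stated assumptions (central differences via np.gradient, explicit Euler time stepping, placeholder parameter values), not the authors' implementation.

```python
import numpy as np

def evolve_step(psi, I, phi, alpha, beta, lam, nu=1.0, eps=0.1, dt=0.1, band=5.0):
    """One explicit update of the level set psi inside a narrow band,
    mirroring the terms of Equation 2: advection/curvature transport,
    the stopping-term gradient, and the two region-intensity forces."""
    gy, gx = np.gradient(psi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    # curvature kappa = div(grad psi / |grad psi|)
    ny, nx = gy / grad_mag, gx / grad_mag
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    py, px = np.gradient(phi)
    # average intensities inside and outside the zero level set (c1, c2)
    c1 = I[psi < 0].mean() if np.any(psi < 0) else I.mean()
    c2 = I[psi >= 0].mean() if np.any(psi >= 0) else I.mean()
    delta = 0.5 / (np.pi * (psi ** 2 + 0.25))  # regularized delta from the text
    dpsi = (alpha * phi * (nu + eps * kappa) * grad_mag
            + px * gx + py * gy
            + beta * delta * (I - c1) ** 2
            + lam * delta * (I - c2) ** 2)
    mask = np.abs(psi) <= band  # update only inside the narrow band [42]
    psi[mask] += dt * dpsi[mask]
    return psi
```

Restricting the update to the narrow band is what keeps the cost low when the initialization is already close to the true boundary.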
In non-ideal cases, eyelids and eyelashes may be present as noise and decrease the recognition performance. Using the technique described in [1], eyelids are isolated by fitting lines to the upper and lower eyelids. A mask based on the detected eyelids and eyelashes is then used to extract the iris without noise.

Fig. 2. Iris detection using the proposed non-ideal iris segmentation algorithm.

Image processing of the iris is computationally intensive because the area of interest is donut-shaped, and grabbing the pixels in this region requires repeated rectangular-to-polar conversion. To simplify this, the detected iris is unwrapped into a rectangular region by converting it into polar coordinates. Let I(x, y) be the segmented iris image and I(r, θ) be the polar representation obtained using Equations 4 and 5.
r = \sqrt{(x - x_c)^2 + (y - y_c)^2}, \quad 0 \le r \le r_{max} \qquad (4)

\theta = \tan^{-1}\left(\frac{y - y_c}{x - x_c}\right) \qquad (5)
r and θ are defined with respect to the center coordinates (x_c, y_c). The center coordinates obtained during approximate elliptical iris boundary fitting are used as the center point for the Cartesian-to-polar transformation. The transformed polar iris image is further used for enhancement, feature extraction, and matching.
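Equations 4 and 5 invert to x = x_c + r cos θ and y = y_c + r sin θ, so the unwrapping can be sketched by sampling the Cartesian image on a polar grid. A minimal sketch with nearest-neighbor sampling; the function name, sampling resolution, and clipping at the image border are assumptions, not the paper's implementation.

```python
import numpy as np

def unwrap_iris(image, xc, yc, r_min, r_max, n_radii=64, n_angles=360):
    """Unwrap the iris annulus between r_min and r_max into a rectangular
    polar image: for each (r, theta), sample the Cartesian pixel
    (xc + r cos theta, yc + r sin theta)."""
    rs = np.linspace(r_min, r_max, n_radii)
    thetas = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    r_grid, t_grid = np.meshgrid(rs, thetas, indexing="ij")
    xs = np.clip(np.round(xc + r_grid * np.cos(t_grid)).astype(int),
                 0, image.shape[1] - 1)
    ys = np.clip(np.round(yc + r_grid * np.sin(t_grid)).astype(int),
                 0, image.shape[0] - 1)
    return image[ys, xs]  # shape (n_radii, n_angles)
```

Each column of the result is one angular direction, so eyelid masks and texture filters can then operate on a plain rectangular array.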
III. GENERATION OF SINGLE HIGH QUALITY IRIS IMAGE USING ν-SUPPORT VECTOR MACHINE
For iris image enhancement, researchers consecutively apply selected enhancement algorithms such as deblurring, denoising, entropy correction, and background subtraction, and use the final enhanced image for further processing. Huang et al. [43] used super-resolution and a Markov network for iris image quality enhancement, but their method does not perform well with unregistered iris images. Ma et al. [12] proposed background-subtraction-based iris enhancement that filters the high frequency noise. Poursaberi and Araabi [25] proposed the use of a low pass Wiener 2D filter for iris image enhancement. However, these filtering techniques are not effective in mitigating the effects of blur, out-of-focus imaging, and entropy-based irregularities. Another challenge with existing enhancement techniques is that they enhance the low quality regions present in the image but are likely to deteriorate the good quality regions and alter the features of the iris image. A non-ideal iris image containing multiple irregularities may require the application of specific algorithms to the local regions that need enhancement. However, identifying and isolating these local regions in an iris image can be tedious, time consuming, and not pragmatic. In this paper, we address this problem by concurrently applying a set of selected enhancement algorithms globally to the original iris image [29]. Each resulting image thus contains enhanced local regions. These enhanced local regions are identified from each of the transformed images using a support vector machine [44] based learning algorithm and then synergistically combined to generate a single high quality iris image.
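The assembly step can be illustrated as follows. In this sketch a simple local-variance score stands in for the trained ν-SVM quality classifier, and the function names, block size, and the enhancement versions themselves are illustrative assumptions.

```python
import numpy as np

def local_quality(block):
    """Placeholder quality measure (local variance); in the paper a
    trained nu-SVM classifies the quality of each local region."""
    return float(block.var())

def fuse_enhanced(versions, block=8):
    """Assemble a single image by choosing, for each block, the globally
    enhanced version (e.g. denoised, deblurred, histogram-equalized)
    whose corresponding block scores highest on the quality measure."""
    h, w = versions[0].shape
    out = np.zeros((h, w), dtype=versions[0].dtype)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blocks = [v[i:i + block, j:j + block] for v in versions]
            out[i:i + block, j:j + block] = max(blocks, key=local_quality)
    return out
```

The point of the design is that each enhancement is applied globally (cheap and simple), while the selection of which enhancement "won" is made locally, block by block.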
Let I be the original iris image. For every iris image in the
training database, a set of transformed images is generated by
applying standard enhancement algorithms for noise removal,
amount of time to perform identification. We have proposed an iris indexing algorithm using local and global features to reduce the identification time without compromising the identification accuracy. The performance is evaluated using three non-homogeneous databases with varying characteristics, and the proposed algorithms are compared with existing algorithms. It is shown that the cumulative effect of accurate segmentation, high quality iris enhancement, and intelligent fusion of match scores obtained using global and local features reduces the false rejection rate for verification. Moreover, the proposed indexing algorithm significantly reduces the computational time without affecting the identification accuracy.
ACKNOWLEDGMENT
The authors would like to thank Dr. Patrick Flynn, CASIA (China), and U.B.I. (Portugal) for providing the iris databases used in this research. The authors also acknowledge the reviewers and editors for providing constructive and helpful comments.
REFERENCES
[1] J.G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993.
[2] J.G. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recognition, vol. 36, no. 2, pp. 279-291, 2003.
[3] J.G. Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," Journal of the Optical Society of America A, vol. 2, no. 7, pp. 1160-1169, 1985.
[4] J.G. Daugman, "Biometric personal identification system based on iris analysis," US Patent Number US5291560, 1994.
[5] R.P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
[6] W.W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.
[7] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," Proceedings of the IEEE International Conference on Pattern Recognition, pp. 2801-2804, 2000.
[8] C.L. Tisse, L. Martin, L. Torres, and M. Robert, "Iris recognition system for person identification," Proceedings of the Second International Workshop on Pattern Recognition in Information Systems, pp. 186-199, 2002.
[9] C.L. Tisse, L. Torres, and R. Michel, "Person identification technique using human iris recognition," Proceedings of the 15th International Conference on Vision Interface, pp. 294-299, 2002.
[10] W.-S. Chen and S.-Y. Yuan, "A novel personal biometric authentication technique using human iris based on fractal dimension features," Proceedings of the International Conference on Acoustics, Speech and Signal Processing, vol. 3, pp. 201-204, 2003.
[11] L. Masek and P. Kovesi, "MATLAB source code for a biometric identification system based on iris patterns," The School of Computer Science and Software Engineering, The University of Western Australia, 2003 (http://www.csse.uwa.edu.au/~pk/studentprojects/libor/sourcecode.html).
[12] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533, 2003.
TABLE VII
Iris identification performance with and without the proposed iris indexing algorithm. Accuracy is reported for rank-1 identification using a database of 2085 classes with 26,881 iris images. [Remaining rows illegible.]
Algorithm | Local feature | Global feature | Fusion | Identification accuracy (%) | Time (seconds)
Proposed algorithm (Case 2) | Euler code | Texture | 2ν-SVM | 97.21 | 1.82
[13] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, 2004.
[14] B.R. Meena, M. Vatsa, R. Singh, and P. Gupta, "Iris based human verification algorithms," Proceedings of the International Conference on Biometric Authentication, pp. 458-466, 2004.
[15] M. Vatsa, R. Singh, and P. Gupta, "Comparison of iris recognition algorithms," Proceedings of the International Conference on Intelligent Sensing and Information Processing, pp. 354-358, 2004.
[16] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation," Pattern Recognition, vol. 38, no. 2, pp. 231-240, 2005.
[17] M. Vatsa, "Reducing false rejection rate in iris recognition by quality enhancement and information fusion," Master's Thesis, West Virginia University, 2005.
[18] M. Vatsa, R. Singh, and A. Noore, "Reducing the false rejection rate of iris recognition using textural and topological features," International Journal of Signal Processing, vol. 2, no. 1, pp. 66-72, 2005.
[19] L. Yu, D. Zhang, K. Wang, and W. Yang, "Coarse iris classification using box-counting to estimate fractal dimensions," Pattern Recognition, vol. 38, pp. 1791-1798, 2005.
[20] B. Ganeshan, D. Theckedath, R. Young, and C. Chatwin, "Biometric iris recognition system using a fast and robust iris localization and alignment procedure," Optics and Lasers in Engineering, vol. 44, no. 1, pp. 1-24, 2006.
[21] N.D. Kalka, J. Zuo, V. Dorairaj, N.A. Schmid, and B. Cukic, "Image quality assessment for iris biometric," Proceedings of the SPIE Conference on Biometric Technology for Human Identification III, vol. 6202, pp. 61020D-1-61020D-11, 2006.
[22] H. Proenca and L.A. Alexandre, "Toward noncooperative iris recognition: a classification approach using multiple signatures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 607-612, 2007.
[23] J. Thornton, M. Savvides, and B.V.K. Vijaya Kumar, "A Bayesian approach to deformed pattern matching of iris images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 596-606, 2007.
[24] D.M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586-596, 2007.
[25] A. Poursaberi and B.N. Araabi, "Iris recognition for partially occluded images: methodology and sensitivity analysis," EURASIP Journal on Advances in Signal Processing, vol. 2007, Article ID 36751, 12 pages, 2007.
[26] H. Proenca and L.A. Alexandre, "UBIRIS: a noisy iris image database," Proceedings of the 13th International Conference on Image Analysis and Processing, vol. 1, pp. 970-977, 2005.
[27] http://iris.di.ubi.pt/.
[28] K.W. Bowyer, K. Hollingsworth, and P.J. Flynn, "Image understanding for iris biometrics: a survey," Computer Vision and Image Understanding, doi:10.1016/j.cviu.2007.08.005, 2008 (to appear).
[29] R. Singh, M. Vatsa, and A. Noore, "Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model," Signal Processing, vol. 87, no. 11, pp. 2746-2764, 2007.
[30] X. Liu, K.W. Bowyer, and P.J. Flynn, "Experiments with an improved iris segmentation algorithm," Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 118-123, 2005.
[31] http://iris.nist.gov/ice/ICE Home.htm.
[32] J. Daugman, "New methods in iris recognition," IEEE Transactions on Systems, Man and Cybernetics - B, vol. 37, no. 5, pp. 1168-1176, 2007.
[33] J. Kittler, M. Hatef, R.P. Duin, and J.G. Matas, "On combining classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226-239, 1998.
[34] A. Ross and A.K. Jain, "Information fusion in biometrics," Pattern Recognition Letters, vol. 24, no. 13, pp. 2115-2125, 2003.
[35] J.F. Aguilar, J.O. Garcia, J.G. Rodriguez, and J. Bigun, "Kernel-based multimodal biometric verification using quality signals," Proceedings of the SPIE Biometric Technology for Human Identification, vol. 5404, pp. 544-554, 2004.
[36] B. Duc, G. Maitre, S. Fischer, and J. Bigun, "Person authentication by fusing face and speech information," Proceedings of the First International Conference on Audio and Video based Biometric Person Authentication, pp. 311-318, 1997.
[37] A. Ross and S. Shah, "Segmenting non-ideal irises using geodesic active contours," Proceedings of the Biometric Consortium Conference, 2006.
[38] E.M. Arvacheh and H.R. Tizhoosh, "Iris segmentation: detecting pupil, limbus and eyelids," Proceedings of the IEEE International Conference on Image Processing, pp. 2453-2456, 2006.
[39] X. Liu, "Optimizations in iris recognition," Ph.D. Dissertation, University of Notre Dame, 2006.
[40] A. Tsai, A. Yezzi, Jr., and A. Willsky, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1169-1186, 2001.
[41] T. Chan and L. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266-277, 2001.
[42] R. Malladi, J. Sethian, and B. Vemuri, "Shape modeling with front propagation: a level set approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 2, pp. 158-175, 1995.
[43] J.Z. Huang, L. Ma, T.N. Tan, and Y.H. Wang, "Learning-based enhancement model of iris," Proceedings of the British Machine Vision Conference, pp. 153-162, 2003.
[44] V.N. Vapnik, The Nature of Statistical Learning Theory, 2nd Edition, Springer, 1999.
[45] P.-H. Chen, C.-J. Lin, and B. Schölkopf, "A tutorial on ν-support vector machines," Applied Stochastic Models in Business and Industry, vol. 21, pp. 111-136, 2005.
[46] C.C. Chang and C.J. Lin, "LIBSVM: a library for support vector machines," 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[47] R. Singh, M. Vatsa, and A. Noore, "SVM based adaptive biometric image enhancement using quality assessment," in B. Prasad and S.R.M. Prasanna (Eds.), Speech, Audio, Image and Biomedical Signal Processing using Neural Networks, Springer Verlag, Chapter 16, pp. 351-372, 2008.
[48] D.J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," Journal of the Optical Society of America, vol. 4, pp. 2379-2394, 1987.
[49] A. Bishnu, B.B. Bhattacharya, M.K. Kundu, C.A. Murthy, and T. Acharya, "Euler vector for search and retrieval of graytone images," IEEE Transactions on Systems, Man and Cybernetics - B, vol. 35, no. 4, pp. 801-812, 2005.
[50] C. Palm and T.M. Lehmann, "Classification of color textures by Gabor filtering," Machine Graphics and Vision, vol. 11, no. 2/3, pp. 195-219, 2002.
[51] D.J. Field, "What is the goal of sensory coding?" Neural Computation, vol. 6, pp. 559-601, 1994.
[52] Y. Wang, T. Tan, and A.K. Jain, "Combining face and iris biometrics for identity verification," Proceedings of the Fourth International Conference on Audio and Video Based Biometric Person Authentication, pp. 805-813, 2003.
[53] H.G. Chew, C.C. Lim, and R.E. Bogner, "An implementation of training dual-nu support vector machines," in L. Qi, K.L. Teo and X. Yang (Eds.), Optimization and Control with Applications, Kluwer, 2005.
[54] R. Mukherjee, "Indexing techniques for fingerprint and iris databases,"
Mayank Vatsa is a graduate research assistant in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. He is currently pursuing his Doctoral degree in Computer Science. He was actively involved in the development of a multimodal biometric system which includes face, fingerprint, signature, and iris recognition at the Indian Institute of Technology Kanpur, India, from July 2002 to July 2004. His current areas of interest are pattern recognition, image processing, uncertainty principles, biometric authentication, watermarking, and information fusion. Mayank has more than 65 publications in refereed journals, book chapters, and conferences. He has received four best paper awards. He is a member of the IEEE, Computer Society, and ACM. He is also a member of the Phi Kappa Phi, Tau Beta Pi, Sigma Xi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies.

Richa Singh is a graduate research assistant in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. She is currently pursuing her Doctoral degree in Computer Science. She was actively involved in the development of a multimodal biometric system which includes face, fingerprint, signature, and iris recognition at the Indian Institute of Technology Kanpur, India, from July 2002 to July 2004. Her current areas of interest are pattern recognition, image processing, machine learning, granular computing, biometric authentication, and data fusion. Richa has more than 65 publications in refereed journals, book chapters, and conferences, and has received four best paper awards. She is a member of the IEEE, Computer Society, and ACM. She is also a member of the Phi Kappa Phi, Tau Beta Pi, Upsilon Pi Epsilon, and Eta Kappa Nu honor societies.

Afzel Noore received his Ph.D. in Electrical Engineering from West Virginia University. He worked as a digital design engineer at Philips India. From 1996 to 2003, Dr. Noore served as the Associate Dean for Academic Affairs and Special Assistant to the Dean in the College of Engineering and Mineral Resources at West Virginia University. He is a Professor in the Lane Department of Computer Science and Electrical Engineering. His research interests include computational intelligence, biometrics, software reliability modeling, machine learning, hardware description languages, and quantum computing. His research has been funded by NASA, NSF, Westinghouse, GE, the Electric Power Research Institute, the US Department of Energy, and the US Department of Justice. Dr. Noore has over 85 publications in refereed journals, book chapters, and conferences. He has received four best paper awards. Dr. Noore is a member of the IEEE and serves on the editorial boards of Recent Patents on Engineering and the Open Nanoscience Journal. He is a member of the Phi Kappa Phi, Sigma Xi, Eta Kappa Nu, and Tau Beta Pi honor societies.