
Application of Pyramidal Directional Filters for Biometric Identification using Conjunctival Vasculature Patterns

Sriram Pavan Tankasala, Plamen Doynov, and Reza Derakhshani
Department of Computer Science and Electrical Engineering, University of Missouri – Kansas City
454 Robert H. Flarsheim Hall, 5110 Rockhill Road, Kansas City, MO 64110-2499, US

Abstract— Directional pyramidal filter banks as feature extractors for ocular vascular biometrics are proposed. Apart from the red, green, and blue (RGB) format, we analyze the significance of using HSV, YCbCr, and the layer combinations (R+Cr)/2, (G+Cr)/2, (B+Cr)/2. For classification, Linear Discriminant Analysis (LDA) is used. We outline the advantages of a Contourlet transform implementation for eye vein biometrics based on vascular patterns seen on the white of the eye. The performance of the proposed algorithm is evaluated using Receiver Operating Characteristic (ROC) curves. Area under the curve (AUC), equal error rate (EER), and decidability values are used as performance metrics. The dataset consists of more than 1600 still images and video frames acquired in two separate sessions from 40 subjects. All images were captured from a distance of 5 feet using a DSLR camera with an attached white LED light source. We evaluate and discuss the results of cross matching features extracted from still images and video recordings of conjunctival vasculature patterns. The best AUC value of 0.9999, with an EER of 0.064%, resulted from using the Cb layer in the YCbCr color space. The best (lowest) EER of 0.032% was obtained with an AUC value of 0.9998 using the green layer of the RGB images.

Keywords— Pyramidal directional filter banks; Contourlet transform; conjunctival vasculature patterns; biometrics; vein recognition

I. INTRODUCTION

Within the past decade, biometric-based personal authentication technologies have found many applications [1, 2]. The historical dominance of face and fingerprint modalities is being challenged by other biometric modalities such as iris, vein patterns, palm prints, hand geometry, voice, and DNA short tandem repeats [3]. The success of ocular biometrics is based on its inherent advantages [4] and recent progress in related supporting technologies and processing algorithms [5-11]. However, many challenges remain, especially with respect to variable image acquisition conditions and the required degree of user cooperation. Retina imaging techniques require specialized devices, very close proximity, and user cooperation [12-14]. Iris recognition is relatively well established and accepted; however, the iris modality requires near-infrared imaging for the majority of dark, pigment-rich eyes.

Recently, personal recognition using ocular imaging in the visible spectrum has received increased attention. This is especially true for the vasculature seen on the white of the eye. Images of the eye in the visible spectrum reveal the vascularity of the outer coatings of the eye (mostly due to the conjunctival and episcleral layers, Figure 1). The conjunctiva and its underlying episclera are anterior segment structures of the human eye, exposed to the naked eye and easy to capture with regular RGB cameras. The covering mucous membrane is clear and facilitates imaging of the vasculature on the outer surface of the bulbar layer. In the interest of brevity, we will henceforth refer to the plurality of conjunctival and episcleral vasculature seen on the white of the eye as conjunctival vasculature (CV). Conjunctival tissue lines the inside of the eyelids and spreads over the anterior sclera (the white part of the eye) up to the scleral-corneal limbus. The diversity of CV pattern formation provides an immense amount of unique textural information, which can be used as biometric tokens.

Figure 1: Wavelet vs. Contourlet approach across vascular patterns

The conjunctival vasculature can be used as a separate modality or to complement the iris modality, compensating for iris images with off-angle gaze (especially images captured at an extreme gaze in the left or right direction). Previous work on textural classification of conjunctival vasculature has demonstrated high accuracies that support its practical use as a biometric modality [7-11, 15-17]. In general, biometrics is a pattern recognition problem, and thus depends heavily on two stages: feature extraction and classification (matching). The performance of any biometric system depends on reliable and robust feature extraction. Previous work on conjunctival vasculature recognition shows the importance of various feature extraction methods for obtaining higher accuracies [7, 11, 15-17]. In this paper, we propose and evaluate the performance of methods that use HSV and YCbCr color representations before feature extraction, thus avoiding the non-uniformity of the RGB color space. The features are extracted locally and globally from color and texture information. We examine feature extraction using different color models and demonstrate the differences. We used a pyramidal directional filtering approach (Contourlets) for feature extraction. Contourlets overcome the limitations of traditional wavelets [18, 19]. As an extension of wavelets with the added property of multi-directionality, the Contourlet transform can extract edge information as well as smooth contour information. Contourlets are the discrete version of Curvelets, with the added benefits of multi-resolution and multi-directional functionality. Similar to 2D Curvelets, Contourlets can process edges as curves and derive reliable information from image patterns. Encouraged by the successful use of Contourlets in various image processing applications [18], we propose algorithms based on Contourlets for feature extraction in ocular biometrics.

The rest of this paper is organized as follows. Section II describes the data collection protocol, the image preprocessing, and the segmentation of scleral regions of interest (ROI). Section III provides background on Contourlets, with some details and formulas for the pyramidal directional approach in Contourlets, as well as the classification methods used. Section IV presents the implementation details. The results are presented in Section V and are followed by a discussion in Section VI. The final Section VII provides a summary of the main results along with conclusions and directions for future work.

II. DATA COLLECTION PROTOCOL, SEGMENTATION, AND PREPROCESSING

A. Data collection protocol

Data was collected from 40 volunteers (IRB protocol 11-57e). For image acquisition, we used a Canon T2i DSLR camera with an attached Digi-Slave Flex 6400 white LED illumination source, a macro ring light with two extended lateral lighting pads. Eye images were captured in still and video modes. In still mode, the camera operated at its native burst speed of 4 frames per second (fps). In 1080 progressive-scan video mode, the capture rate was 30 fps. Data was collected in two consecutive sessions separated by a 30-minute rest interval. Data from session I were used for training and those of session II were used for testing. Subjects were asked to avert their gaze to the left and right for maximum exposure of the sclera and CV (Figure 8). The average distance from the camera to the subject was kept at 5 feet. The camera was equipped with a Canon EF 70mm zoom lens and was operated at f-stop 5.6, an exposure time of 1/125 sec, and ISO 800. For each subject, five samples (from the multiple still images) and five frames (from the 1080p video) were randomly chosen from sessions I and II for further processing.

B. Segmentation of sclera

Segmentation of scleral regions of RGB ocular images was performed using k-means clustering with a Euclidean distance metric and k=3 [11]. Each pixel is denoted as a 3-dimensional vector representing its RGB intensities. The pixels pertaining to the scleral region were determined as the cluster with the largest Euclidean distance from the origin of the coordinate system to its centroid. The pixels belonging to the iris region (the iris mask) were determined as the cluster with the smallest Euclidean distance from the origin of the coordinate system to its centroid (see Figure 2). The largest connected region was selected for the scleral and iridial masks. Due to the presence of artifacts such as specular reflections and glare, some pixels within the sclera region were not assigned to the proper cluster, thereby appearing as holes in the sclera mask. To smooth the contour of the scleral mask and to fill these voids, a convex hull operation is applied to the clustering-derived scleral mask. The convex hull operation may incorrectly include in the scleral mask some pixels pertaining to the iris cluster and adjacent to the limbic boundary. To address this problem, we remove the convex hull of the iridial mask from the convex hull of the scleral mask. In the end, for each eye image, a maximum-area rectangle inscribed in the scleral mask is extracted from the scleral region, as shown in Figure 2, and is designated as the region of interest (ROI) for the forthcoming operations.

Figure 2: Segmentation of sclera using the k-means clustering algorithm. a) Original image, b) resultant sclera mask, c) sclera mask imposed on the original image, d) inscribed max-area rectangle on the segmented sclera.
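A minimal sketch of this clustering step follows, assuming NumPy, scikit-learn, and scikit-image (the library choice and names are ours, not the paper's; the largest-connected-region selection and max-area rectangle extraction are omitted for brevity):

    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.morphology import convex_hull_image

    def segment_sclera(rgb):
        # rgb: H x W x 3 uint8 image; each pixel treated as a 3-D RGB vector.
        pixels = rgb.reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=3, n_init=10).fit(pixels)
        labels = km.labels_.reshape(rgb.shape[:2])
        # Distance of each cluster centroid from the RGB origin (black).
        dist = np.linalg.norm(km.cluster_centers_, axis=1)
        sclera = labels == np.argmax(dist)   # farthest centroid -> sclera
        iris = labels == np.argmin(dist)     # nearest centroid -> iris
        # Convex hulls smooth the mask contour and fill glare-induced holes;
        # subtracting the iris hull removes pixels wrongly absorbed at the limbus.
        return convex_hull_image(sclera) & ~convex_hull_image(iris)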

C. Preprocessing

A Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm was used to enhance the vascular patterns of the ROI. The CLAHE algorithm is performed on non-overlapping partitions (tiles) of an image (8 × 8 tiles per ROI in this study). The contrast of each tile is enhanced in such a way that its histogram matches a specified histogram shape (a flat histogram in this study). Bilinear interpolation is applied to eliminate edge effects at the boundaries of the tiles at the time of reconstruction [20].

RGB, YCbCr, and HSV color spaces and the exponential power density function: HSV presents color and intensity values in a more intuitive and perceptually relevant fashion in comparison to the Cartesian (cube) representation. The hue (H) and saturation (S) components of the HSV color space are intimately related to the way the human eye perceives colors. The value (V) relates to the intensity of the colors. Because HSV is a simple transformation of the device-dependent RGB model, its defined physical colors depend on the RGB primaries of the device (or on the particular RGB profile it represents), including gamma corrections. Therefore, each unique, device-dependent RGB profile has a unique HSV space.

YCbCr is another color representation used for color processing and perceptual uniformity. Its luminance (Y) is very similar to the grayscale version of the original image. The Chrominance-Blue (Cb) channel is strong in parts of the image containing blue color. Similarly, Chrominance-Red (Cr) is strong where reddish colors occur, and both Cb and Cr are weak in green regions. For the color space transformations and for the computations presented in this paper, we used MATLAB 2012b (MathWorks, MA) along with its Image Processing and Parallel Computing toolboxes.

An exponential power density function applies an exponential distribution to an intensity image and is described as

f(x) = \frac{1}{\mu} e^{-x/\mu}    (1)

The above formula, in which \mu is the mean parameter, was applied to each pixel of the ROI. As a result, higher intensities are attenuated, accentuating the lower intensities of the ROI vascular patterns (Figure 4).

III. THEORY ON CONTOURLETS AND CLASSIFICATION

A. Introduction to Contourlets

Contourlets apply a two-step filter bank to extract information from the contour-rich patterns of an image. The Contourlet transform was first introduced by Do and Vetterli in 2001 [19]. Contourlets were developed to extract reliable information from the contour pattern segments of an image and to overcome the limitations of traditional wavelets in this regard [19]. The Laplacian pyramid, in conjunction with a directional filter bank, is used as the two-step filter bank.

B. Laplacian pyramids

The Laplacian pyramid decomposes an image into low-pass and band-pass images (Figure 3). The process generates a low-pass version of the original image (LPout) and a band-pass filtered image (BPout). BPout is the difference between the original image and the synthesis-filtered image (SF) [21, 22].

Figure 3: Laplacian pyramid decomposition for input image (I): low-pass filtering (LP), down-sampling module (DS), up-sampling module (US), synthesis filtering (SF), low-pass output (LPout), band-pass output (BPout).
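One level of this decomposition can be sketched as follows, using a Gaussian kernel as a stand-in for the paper's analysis/synthesis filter pair (a simplification, not the authors' filter choice):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def laplacian_level(img, sigma=1.0):
        # img: 2-D float image.
        # Low-pass filter (LP) then down-sample (DS): the low-pass output.
        lp_out = gaussian_filter(img, sigma)[::2, ::2]
        # Up-sample (US) by zero insertion, then synthesis-filter (SF).
        up = np.zeros_like(img)
        up[::2, ::2] = lp_out
        sf = 4.0 * gaussian_filter(up, sigma)  # gain of 4 compensates the inserted zeros
        # Band-pass output: difference between the original and the SF image.
        return lp_out, img - sf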

C. Directional filtering

A directional filter bank was efficiently implemented using an l-level binary tree decomposition, which results in 2^l sub-bands with wedge-shaped frequency partitioning. The decomposition tree expansion includes two building blocks. The first block is a two-channel Quincunx filter bank that divides the 2D spectrum into horizontal and vertical directions. Quincunx filter banks are two-dimensional, non-separable filter banks and are widely used in many signal processing applications [23]. The second block is a shear operator [19]. From the above Laplacian pyramid and directional filter banks, the Contourlet transform is given as

\rho_{j,k,n}(t) = \sum_{m \in \mathbb{Z}^2} d_k^{(l)}\left[m - S_k^{(l)} n\right] \mu_{j,m}(t)    (2)

In the above equation, d_k^{(l)} is the directional filter bank basis, \mu_{j,m}(t) is the Laplacian pyramid basis, l is the level of decomposition, S_k^{(l)} is the sampling matrix, and the Laplacian pyramid basis itself is constructed from the pyramid's synthesis filter.

D. Classification

Linear Discriminant Analysis (LDA) is a supervised linear classification and dimensionality reduction method that casts multi-dimensional features into a single dimension such that the projected data points of the original classes are maximally separable. In this study, Fisher's LDA was used [24].
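A sketch of the one-versus-rest arrangement used later for verification, assuming scikit-learn (variable and function names are illustrative, not from the paper's MATLAB implementation):

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_one_vs_rest_ldas(X, y, subjects):
        # One Fisher LDA per enrolled subject: that subject vs. everyone else.
        models = {}
        for s in subjects:
            models[s] = LinearDiscriminantAnalysis().fit(X, (y == s).astype(int))
        return models

    def match_score(models, claimed_id, feature_vec):
        # The 1-D LDA projection of the probe serves as the match score.
        return float(models[claimed_id].decision_function(feature_vec.reshape(1, -1))[0])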

E. Performance metrics

Receiver operating characteristic (ROC) analysis was used to test the performance of the classifier [25]. We used the area under the ROC curve (AUC), equal error rate (EER), and decidability distance as performance metrics. The decidability index (d') was introduced by Daugman for evaluating the quality of typical dichotomies (two-class separability).
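Daugman's decidability for two score distributions is d' = |\mu_1 - \mu_2| / \sqrt{(\sigma_1^2 + \sigma_2^2)/2}. A sketch of all three metrics, assuming genuine and impostor match scores collected as NumPy arrays:

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def roc_metrics(genuine, impostor):
        labels = np.r_[np.ones(genuine.size), np.zeros(impostor.size)]
        fpr, tpr, _ = roc_curve(labels, np.r_[genuine, impostor])
        area = auc(fpr, tpr)
        # EER: operating point where false accepts equal false rejects.
        eer = fpr[np.nanargmin(np.abs(fpr - (1.0 - tpr)))]
        # Daugman's decidability index d'.
        d_prime = abs(genuine.mean() - impostor.mean()) / np.sqrt(
            (genuine.var() + impostor.var()) / 2.0)
        return area, eer, d_prime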

F. Match-score level fusion

Various fusion techniques can be applied to combine information for better classification. In general, match score level fusion is a good choice for improving classifiability and lowering errors when multiple biometric cues are available. Typical techniques used in match score level fusion are the simple (yet effective) sum rule, the min rule, and the max rule [26].
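The three rules amount to element-wise reductions over per-cue score vectors; a minimal sketch with one row per cue (e.g., per ROI):

    import numpy as np

    def fuse(scores, rule="sum"):
        # scores: n_cues x n_comparisons array of (ideally normalized) match scores.
        ops = {"sum": np.sum, "min": np.min, "max": np.max}
        return ops[rule](scores, axis=0)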

IV. EXPERIMENTAL PROCEDURE

A. Segmentation of sclera and preprocessing

To verify proper segmentation, results were visually inspected and corrected when necessary. After k-means segmentation of the sclera, the average initial size of the ROIs (max-area rectangles inscribed in the segmented sclera) for still images was close to 256×256 pixels. The average initial size of the ROIs obtained from video-frame images was close to 128×128 pixels. Thus, the images were resized to the aforesaid powers of two. In order to compare the influence of image resolution on the results, both downsampled and non-downsampled versions were processed.


B. Preprocessing

CLAHE was applied to the ROIs with the following parameters: 8 × 8 tiles, a contrast enhancement limit of 0.01, and a uniform histogram. Subsequently, the images were further processed using an exponential power density function for vessel enhancement with a mean value of 0.25, as shown in Figure 4.

Figure 4: Original green color layer of the max-area rectangle ROI (left), the same image after CLAHE application (center), and after enhancement with the exponential power density function (right).
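These two steps can be sketched with scikit-image, whose equalize_adapthist implements CLAHE (kernel_size and clip_limit are that library's parameters; the exponential mapping follows Eq. (1) with \mu = 0.25, and the final rescaling is our addition):

    import numpy as np
    from skimage import exposure

    def enhance_roi(roi):
        # roi: 2-D float image in [0, 1], e.g. the green layer of the ROI.
        tiles = (roi.shape[0] // 8, roi.shape[1] // 8)   # 8 x 8 tiles per ROI
        clahe = exposure.equalize_adapthist(roi, kernel_size=tiles, clip_limit=0.01)
        # Exponential power density mapping (Eq. 1): attenuates high intensities.
        mu = 0.25
        out = (1.0 / mu) * np.exp(-clahe / mu)
        return out / out.max()                           # rescale to [0, 1]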

C. Feature extraction

Feature extraction was performed using the Contourlet transform [19]. Preprocessed ROIs were used as input to the Contourlet transform with a level of decomposition l = 4. For each sub-band of the decomposed image, a feature vector was formed using the calculated mean, median, variance, and entropy. For the current study, separate layers from different color models, as well as layer combinations, were used. In this paper, we present the results from using separate layers from the RGB, HSV, and YCbCr color spaces and the layer combinations (R+Cr)/2, (G+Cr)/2, (B+Cr)/2.
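Given the sub-band coefficient arrays of the l = 4 contourlet decomposition (assumed supplied by some contourlet implementation; no particular Python package is implied), the per-sub-band statistics assemble into the feature vector as follows:

    import numpy as np
    from scipy.stats import entropy

    def subband_features(subbands):
        # subbands: list of 2-D coefficient arrays from the decomposition.
        feats = []
        for sb in subbands:
            c = sb.ravel()
            hist, _ = np.histogram(c, bins=64)
            # Shannon entropy of the coefficient histogram (scipy normalizes pk).
            feats += [c.mean(), np.median(c), c.var(), entropy(hist + 1e-12)]
        return np.asarray(feats)  # mean, median, variance, entropy per sub-band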

D. Classification

One-versus-rest Fisher's LDAs were trained with Contourlet features from still-mode and video-mode images in session I (one-to-one biometric verification analysis). We tested the trained LDAs with images from session II. Cross matching between the still-mode and video-mode images, and vice versa, was also performed. This was repeated for the four possible image capturing scenarios: two gaze directions (left and right) for each eye (left and right). We use the abbreviated notations left eye looking left (LLL) and left eye looking right (LLR) for the left eye, and similarly RLL and RLR for the right eye. The cross matching was performed for all possible capturing scenarios using the statistical parameters (mean, median, variance, and entropy) of the Contourlet transform coefficients of each sub-band. We performed match score fusion using three techniques (sum rule, min rule, and max rule) over the four ROIs (LLL, LLR, RLL, and RLR) across all image sizes of each ROI. ROC analysis was performed to evaluate the performance of the LDA classifier [25]. AUC and EER of the ROCs and d' values were reported for individual layers from the RGB, HSV, and YCbCr color spaces, and for the (R+Cr)/2, (G+Cr)/2, (B+Cr)/2 combinations.
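Putting the pieces together, one cross-matching scenario (enrollment on session-I stills, verification on session-II video frames) might look like the sketch below; the data containers X_enroll, X_probe, y_enroll, and claims are hypothetical bookkeeping, and the helpers are the earlier sketches:

    import numpy as np

    def cross_match(X_enroll, y_enroll, X_probe, claims, subjects,
                    rois=("LLL", "LLR", "RLL", "RLR"), rule="min"):
        # X_enroll / X_probe: dicts mapping ROI name -> feature matrix (row per sample).
        models = {r: train_one_vs_rest_ldas(X_enroll[r], y_enroll, subjects)
                  for r in rois}
        per_roi = np.stack([
            np.array([match_score(models[r], c, x)
                      for c, x in zip(claims, X_probe[r])])
            for r in rois])
        return fuse(per_roi, rule=rule)   # the min rule gave the lowest EERs here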

V. RESULTS

A. Results using Fisher’s LDA with still images

We calculated the ROC AUCs, EERs, and d' values for each color space using Contourlet features for still images. The best AUC value of 0.9999, with an EER of 0.064% and a d' value of 3.728, resulted from using the min rule for the Cb layer in the YCbCr color space. The best (lowest) EER of 0.032% was obtained with an AUC value of 0.9998 and a d' value of 4.197 using the min rule and the green layer of the original RGB images. The best (maximum) d' value of 4.572, with an associated AUC of 0.9930 and EER of 2.4%, was obtained using the green layer of the RGB images and the simple sum rule for match score level fusion.

B. Results using Fisher’s LDA with video frames

AUCs, EERs, and d' values for each color space were calculated using video frames. The highest AUC value of 0.9970, with an EER of 2.5% and a d' value of 3.156, was obtained using the min rule with the green layer of the RGB image.

C. Results using Fisher's LDA with still images for training and video frames for testing

The highest AUC value of 0.9990, with an EER of 0.16%, resulted from using the min rule with the green layer of the RGB image. The green + Cr color combination placed second, with an EER of 0.67% and an AUC of 0.9982. The highest d' value of 3.636 (high inter-class separability) in the still images/video frames cross matching was obtained with the blue layer of the RGB image using the simple sum rule for match score level fusion.

D. Results using Fisher's LDA with video frames for training and still images for testing

In this scenario, video frames were used for training the LDA (enrollment) and still frames were used for testing. The best AUC value of 0.9927, with an EER value of 2.6% and a d' value of 2.673, was obtained using the min rule with V (the value layer) in the HSV color space. The green + Cr combination performed slightly worse, with an AUC of 0.9900 and an EER value of 4.8%, but with a higher d' value of 3.410. Figures 5 and 6 show the EER and d' results for all color-transformed still images, video frames, still images vs. video frames, and video frames vs. still images.

VI. DISCUSSION

The CLAHE method was used in most of the previous work on vessel enhancement for CV recognition [7, 10, 11]. In this study, in conjunction with CLAHE, we used an exponential power density function, as shown in Figure 4. The visibility of the texture is better compared to enhancement with CLAHE only. The AUC, EER, and d' values for the images enhanced with the exponential power density function are also better when compared to CLAHE preprocessing only. The summary of results for all three performance factors is shown in Table 1. As can be seen, the AUC increases from 0.9277 to 0.9938, the EER decreases from 3.685% to 2.40%, and d' increases from 2.7247 to 4.5697, supporting the advantage of the exponential enhancement.


TABLE 1. AUC, EER, AND D' VALUES FOR CLAHE AND CLAHE + EXPONENTIAL POWER DENSITY FUNCTION ENHANCEMENT OF THE GREEN LAYER OF THE RGB IMAGE, USING STILL IMAGES WITH SIMPLE SUM RULE MATCH SCORE FUSION.

Enhancement                        AUC      EER [%]   d'
CLAHE                              0.9277   3.6859    2.7247
CLAHE + Exponential enhancement    0.9938   2.4038    4.5697

Cross matching between still images and videos was performed to investigate the reliability and robustness of the Contourlet features, with various color-transformed images, across different image capturing methods for CV recognition. Though acquired with the same camera, the still images and the video frames differ in resolution and compression. The results indicate that, for a practical implementation, still images should be used for enrollment and video frames can be used for verification (Figure 7). In the current study, images were randomly chosen from each capture stack (a sequence of frames taken in a short period of time) for further processing and matching. To achieve better performance, the random selection of video frames should be replaced with frame selection criteria based on image quality.

Previous work on CV mostly used the green layer of the RGB images [7-11]. In this study, we additionally used HSV and YCbCr color-transformed images. We also investigated the red and blue layers. The results clearly show that the green layer performed better in most of the cases. Within the confines of our dataset, the value layer (V) of HSV worked better than the green layer, with an EER of 2.6%, for videos vs. still images. The Cb layer from YCbCr yielded an AUC of 0.9999 with an EER of 0.064% for still images. The lowest EER of 0.032% was obtained for RGB stills using their green layer. In summary, RGB's green layer performed the best overall. The V layer from HSV, Cr and Cb from YCbCr, blue from RGB, and the green + Cr layer combination may also be used to obtain texture information. The aforesaid can be visually inferred from the results presented in Figures 5 and 6.

Figure 5: EERs (lower is better) for stills, video frames, stills vs. video frames, and video frames vs. stills, using LDA with min rule match score level fusion.

Figure 6: d' (higher is better) for stills, video frames, stills vs. video frames, and video frames vs. stills, using LDA with min rule match score level fusion.

Figure 7: ROC curves for the best results using still images, video frames, stills vs. video, and video vs. stills.

Figure 8: Sample eye images from the a) still and b) video frame databases used in the study.


VII. CONCLUSION AND FUTURE WORK

Contourlet-based CV biometric recognition was successfully performed, with a best EER of 0.032% for still images using the green layer of RGB captures. Image enhancement for better texture representation using CLAHE + an exponential power density function was successfully implemented. From the results we conclude that, apart from the green layer of the RGB color space, the HSV and YCbCr color spaces can also be used for CV biometric recognition.

For future work, we would like to investigate the performance of the proposed method on larger datasets. We would also like to study different fusion techniques using wavelets, Contourlets, and gray level co-occurrence matrices to build a more robust multi-algorithmic CV biometric recognition system.

ACKNOWLEDGEMENT

Research was sponsored by the Leonard Wood Institute in cooperation with the U.S. Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-07-2-0062. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Leonard Wood Institute, the Army Research Laboratory, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. The authors thank Mr. Sashi Saripalle for his help in data collection. The authors would also like to thank Dr. Simona Crihalmeanu and Dr. Arun Ross from Michigan State University for providing the scleral segmentation algorithm and for their guidance.

REFERENCES

[1] A. K. Jain and A. Kumar, "Biometrics of Next Generation: An Overview," in Second Generation Biometrics, Springer, 2010.
[2] D. Bhattacharyya, R. Ranjan, P. Das, K. Tai-hoon, and S. K. Bandyopadhyay, "Biometric Authentication Techniques and its Future Possibilities," in Second International Conference on Computer and Electrical Engineering (ICCEE '09), 2009, pp. 652-655.
[3] L. Nadel and T. Cushing, "Eval-Ware: Biometrics Resources [Best of the Web]," IEEE Signal Processing Magazine, vol. 24, pp. 136-139, 2007.
[4] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 4-20, 2004.
[5] K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: A survey," Computer Vision and Image Understanding, vol. 110, pp. 281-307, 2008.
[6] J. Daugman, "New Methods in Iris Recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, pp. 1167-1175, 2007.
[7] R. Derakhshani and A. Ross, "A Texture-Based Neural Network Classifier for Biometric Identification using Ocular Surface Vasculature," in International Joint Conference on Neural Networks, 2007, pp. 2982-2987.
[8] V. Gottemukkula, S. K. Saripalle, S. P. Tankasala, R. Derakhshani, R. Pasula, and A. Ross, "Fusing iris and conjunctival vasculature: Ocular biometrics in the visible spectrum," in IEEE Conference on Technologies for Homeland Security (HST), 2012, pp. 150-155.
[9] R. Derakhshani, A. Ross, and S. Crihalmeanu, "A New Biometric Modality Based On Conjunctival Vasculature," in Proc. of Artificial Neural Networks in Engineering (ANNIE), St. Louis, USA, November 2006.
[10] S. P. Tankasala, P. Doynov, R. R. Derakhshani, A. Ross, and S. Crihalmeanu, "Biometric recognition of conjunctival vasculature using GLCM features," in International Conference on Image Information Processing (ICIIP), 2011, pp. 1-6.
[11] S. Crihalmeanu, A. Ross, and R. Derakhshani, "Enhancement and Registration Schemes for Matching Conjunctival Vasculature," in Advances in Biometrics (M. Tistarelli and M. Nixon, eds.), vol. 5558, Springer Berlin/Heidelberg, 2009, pp. 1240-1249.
[12] C. Fancourt, L. Bogoni, K. Hanna, Y. Guo, R. Wildes, N. Takahashi, and U. Jain, "Iris Recognition at a Distance," in Audio- and Video-Based Biometric Person Authentication, vol. 3546, Springer Berlin Heidelberg, 2005, pp. 1-13.
[13] M. Savvides, Unconstrained Biometric Identification: Emerging Technologies.
[14] A. Ross, "Recent Progress in Ocular and Face Biometrics: A CITeR Perspective," 2010.
[15] K. Oh and K.-A. Toh, "Extracting sclera features for cancelable identity verification," in 5th IAPR International Conference on Biometrics (ICB), 2012, pp. 245-250.
[16] N. L. Thomas, Y. Du, and Z. Zhou, "A new approach for sclera vein recognition," Proc. SPIE, vol. 7708, 770805, 2010.
[17] Z. Zhou, Y. Du, N. L. Thomas, and E. J. Delp, "Multimodal eye recognition," Proc. SPIE, vol. 7708, 770806, 2010.
[18] A. Azizi and H. Pourreza, "A Novel Method Using Contourlet to Extract Features for Iris Recognition System," in Emerging Intelligent Computing Technology and Applications, vol. 5754, Springer Berlin Heidelberg, 2009, pp. 544-554.
[19] M. N. Do and M. Vetterli, "The contourlet transform: an efficient directional multiresolution image representation," IEEE Transactions on Image Processing, vol. 14, pp. 2091-2106, 2005.
[20] K. Zuiderveld, "Contrast limited adaptive histogram equalization," in Graphics Gems IV, Academic Press Professional, Inc., 1994, pp. 474-485.
[21] M. N. Do and M. Vetterli, "Framing pyramids," IEEE Transactions on Signal Processing, vol. 51, pp. 2329-2342, 2003.
[22] P. J. Burt and E. H. Adelson, "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. 31, pp. 532-540, 1983.
[23] M. Vetterli, "Multi-dimensional sub-band coding: Some theory and algorithms," Signal Processing, vol. 6, pp. 97-112, 1984.
[24] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2nd ed., Wiley-Interscience, 2000.
[25] T. Fawcett, "An introduction to ROC analysis," Pattern Recognition Letters, vol. 27, pp. 861-874, 2006.
[26] A. Ross and A. Jain, "Information fusion in biometrics," Pattern Recognition Letters, vol. 24, pp. 2115-2125, 2003.
