Page 1: IEEE CVPR Biometrics 2009


Face Recognition by Fusion of Local and Global Matching Scores using DS Theory:
An Evaluation with Uni-classifier and Multi-classifier Paradigm

Authors: D. R. Kisku, M. Tistarelli, J. K. Sing and P. Gupta

Presented by: Dr. Linda Brodo (Uniss, Italy)

IEEE Computer Society Workshop on Biometrics
In Association with CVPR 2009

Page 2: IEEE CVPR Biometrics 2009


Agenda of Discussion

► Local and global feature-based face recognition
► Challenges of face recognition
► SIFT: feature extraction
► Why are dynamic and static salient facial parts considered for face recognition?
► Local and global matching strategy
► Fusion of local and global face matching
► Experimental evaluation and results
► Conclusion
► References

Page 3: IEEE CVPR Biometrics 2009


Local and global feature-based face recognition

► Human faces can be characterized on the basis of both local and global features.

► Global features are easier to capture and are less discriminative than localized features, but they are also less sensitive to localized changes in the face due to the partial deformability of the facial structure.

► On the other hand, local features can be highly discriminative, but they suffer more from local changes in the facial appearance or partial face occlusion.

► The optimal face representation should therefore allow matching of localized facial features while also providing a global similarity measure for the face.

Page 4: IEEE CVPR Biometrics 2009


Challenges of face recognition

► Intra-class and inter-class variations of faces can be affected by factors such as:

- Ill-posed
- Pose variations
- Facial expressions
- Age
- Race
- Facial part localization

Page 5: IEEE CVPR Biometrics 2009


SIFT: feature extraction

► SIFT (Scale Invariant Feature Transform), proposed by David Lowe, is invariant to image rotation, scaling, partial illumination changes and 3D projection.

► The basic idea of the SIFT descriptor is to detect feature points efficiently through a staged filtering approach that identifies stable points in scale-space.

► Each SIFT feature point consists of: spatial location, scale, orientation and a keypoint descriptor.

Page 6: IEEE CVPR Biometrics 2009


SIFT: steps

Feature points are extracted through the following steps:

- select candidates for feature points by searching for peaks in the scale-space of a Difference of Gaussians (DoG) function;
- localize the feature points by measuring their stability;
- assign orientations based on local image properties;
- compute the feature descriptors, which represent local shape distortions and illumination changes.
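As a concrete illustration of these steps, below is a minimal sketch using OpenCV's SIFT implementation (an assumption for illustration; the slides do not state which implementation was used). The file name is a placeholder.

```python
# Minimal sketch of SIFT keypoint extraction with OpenCV (assumes OpenCV >= 4.4,
# where SIFT is available in the main module). Not the authors' original code.
import cv2

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)    # placeholder file name

sift = cv2.SIFT_create()                               # DoG scale-space detector + descriptor
keypoints, descriptors = sift.detectAndCompute(img, None)

for kp, desc in zip(keypoints, descriptors):
    x, y = kp.pt        # spatial location
    scale = kp.size     # detection scale
    angle = kp.angle    # dominant orientation (degrees)
    # desc is the 128-dimensional keypoint descriptor
```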

[Figure: Invariant SIFT feature extraction shown on a pair of face images.]

Page 7: IEEE CVPR Biometrics 2009


Why are dynamic and static salient facial parts considered for face recognition?

► Faces are deformable objects which are generally difficult to characterize with a rigid representation.

► Different facial regions convey different information about the subject's identity, but suffer from different time variability, due either to motion or to illumination changes.

E.g. a talking face: while the eyes can be almost invariant over time, the mouth moves, changing its appearance over time. As a consequence, the features extracted from the mouth area cannot be directly matched with the corresponding features from a static template.

Page 8: IEEE CVPR Biometrics 2009


Why are dynamic and static salient facial parts considered for face recognition?

Moreover, single facial features may be occluded, making the corresponding image area unusable for identification.

► For local matching, each face area is handled independently;

► for global matching, all features are grouped together: in particular, the features extracted from the image areas corresponding to the localized facial landmarks are grouped together.

Page 9: IEEE CVPR Biometrics 2009


Local and global matching strategy: Local matching

The aim is to correlate the extracted SIFT features with independent facial landmarks:

► The eye and mouth positions are automatically located by applying the technique proposed in [Smeraldi et al., 1999].

► The position of the nostrils is automatically located by applying the technique proposed in [Gourier et al., 2004].

A circular region of interest (ROI), centered at each extracted facial landmark location, is defined to determine the SIFT features to be considered as belonging to each face area.

[Figure: Example of matching static facial features.]
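A minimal sketch of this grouping step is given below (not the authors' code): a keypoint is assigned to a landmark if it falls inside a circular ROI centered on it. The landmark coordinates, the ROI radius and the helper name are hypothetical.

```python
# Sketch: group SIFT keypoints by circular ROIs centred on detected facial
# landmarks. In the paper the landmark positions come from the eye/mouth and
# nostril detectors cited above; here they are placeholder values.
import numpy as np

def group_keypoints_by_landmark(keypoints, descriptors, landmarks, radius=30):
    """Return {landmark_name: descriptor array} for keypoints inside each ROI."""
    groups = {}
    for name, (lx, ly) in landmarks.items():
        idx = [i for i, kp in enumerate(keypoints)
               if (kp.pt[0] - lx) ** 2 + (kp.pt[1] - ly) ** 2 <= radius ** 2]
        groups[name] = descriptors[idx] if idx else np.empty((0, 128))
    return groups

# Hypothetical landmark coordinates (pixels) for a downscaled face image.
landmarks = {"left_eye": (35, 45), "right_eye": (65, 45),
             "nose": (50, 70), "mouth": (50, 95)}
# groups = group_keypoints_by_landmark(keypoints, descriptors, landmarks)
```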

Page 10: IEEE CVPR Biometrics 2009


Local matching

The SIFT descriptors are grouped together at locations corresponding to static (eyes, nose) and dynamic (mouth) facial positions.

Below, an example showing the independent matching of facial features from local areas.

[Figure: Example of independent matching of static and dynamic facial features.]

Page 11: IEEE CVPR Biometrics 2009


Local matching

Finally, the fused matching score is computed by combining these four individual matching scores using the sum rule [Snelick et al., 2005]:

$FD_{Local}(I_{test}, I_{gallery}) = \mathrm{sum}\big(D_{leftEye}(I_{test}, I_{gallery}),\; D_{rightEye}(I_{test}, I_{gallery}),\; D_{nose}(I_{test}, I_{gallery}),\; D_{mouth}(I_{test}, I_{gallery})\big)$
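A minimal sketch of this sum-rule combination is shown below, assuming the four per-landmark scores have already been computed (and, if needed, normalised to a common range beforehand, which is an assumption rather than something stated on the slide).

```python
# Sketch of the sum-rule fusion of the four local matching scores
# [Snelick et al., 2005]. Any score normalisation is assumed to have been
# applied before this step.
def fuse_local_scores(d_left_eye, d_right_eye, d_nose, d_mouth):
    """FD_Local(test, gallery) = sum of the four per-landmark matching scores."""
    return d_left_eye + d_right_eye + d_nose + d_mouth

# Example with illustrative scores:
fd_local = fuse_local_scores(0.42, 0.47, 0.61, 0.55)
```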

Page 12: IEEE CVPR Biometrics 2009


Local and global matching strategy: Global matching

► Before performing the face matching, a one-to-one correspondence is established for each pair of facial landmarks.

► The SIFT features extracted from the left-eye, right-eye, nose and mouth areas are grouped together to form an augmented vector by concatenation.

► The actual matching is performed by comparing the global feature vectors of a pair of face images.

► The final matching score is computed by first determining all the minimum pair distances and then computing the mean of all the minimum pair distances, as follows:

Page 13: IEEE CVPR Biometrics 2009


Global matching

► The final distances are determined by the Hausdorff distance metric, which generates a vector of 128 elements:

$FD_{Global}(I_{probe}, I_{gallery}) = \min_{i \in M}\big\{\min_{j \in N}\big\{ d\big(k_i^{gallery}, k_j^{probe}\big)\big\}\big\}$
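Below is a hedged sketch of this global score: the per-landmark descriptors are stacked into one augmented set per face, all pairwise descriptor distances are computed (Euclidean distance assumed), and the score is derived from the minimum pair distances, following the formula above and the mean-of-minima description on the previous slide.

```python
# Sketch of the global matching score (not the authors' code). probe_desc and
# gallery_desc are the concatenated 128-D SIFT descriptors of the two faces.
import numpy as np

def global_match(probe_desc, gallery_desc):
    """Return (min of the minimum pair distances, mean of the minimum pair distances)."""
    # Pairwise Euclidean distances between every probe and gallery descriptor.
    d = np.linalg.norm(probe_desc[:, None, :] - gallery_desc[None, :, :], axis=2)
    min_per_probe = d.min(axis=1)   # closest gallery descriptor for each probe descriptor
    return min_per_probe.min(), min_per_probe.mean()
```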

Page 14: IEEE CVPR Biometrics 2009


Fusion of local and global face matching

► In the proposed classifier fusion, Dempster-Shafer decision theory is applied to combine the matching scores obtained from the local and global matching strategies.

► The Dempster-Shafer theory is based on combining the evidence obtained from different sources to compute the probability of an event. This is obtained by combining three elements: the basic probability assignment function (bpa), the belief function (bf) and the plausibility function (pf).

Page 15: IEEE CVPR Biometrics 2009


Fusion of local and global face matching

Fusion of local and global face matching based on Dempster-Shafer theory:

$FD_{Local}$ and $FD_{Global}$ are the two sets of matching scores. Let $\Theta_{Local}$ and $\Theta_{Global}$ be the corresponding power sets of $FD_{Local}$ and $FD_{Global}$, and let $C$ be a set of sets in $\Theta_{Local}$ and $\Theta_{Global}$, with $C \neq \emptyset$. The combined basic probability assignment is

$m(C) = (m_{Local} \oplus m_{Global})(C) = \dfrac{\sum_{A \cap B = C} m_{Local}(A)\, m_{Global}(B)}{1 - \sum_{A \cap B = \emptyset} m_{Local}(A)\, m_{Global}(B)}, \qquad C \neq \emptyset.$

Page 16: IEEE CVPR Biometrics 2009


Fusion of local and global face matching

$m_{final} = m_{Local} \oplus m_{Global}$

The denominator is a normalizing factor which reveals how much the probability assignments on local and global feature matching are conflicting.

Page 17: IEEE CVPR Biometrics 2009


Fusion of local and global face matching

$decision = \begin{cases} \text{accept}, & \text{if } m_{final} \geq \text{threshold} \\ \text{reject}, & \text{otherwise} \end{cases}$
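The sketch below illustrates the Dempster-Shafer combination and decision steps over the two hypotheses "genuine" (G) and "impostor" (I). How the local and global matching scores are converted into basic probability assignments, and the value of the acceptance threshold, are assumptions for illustration only.

```python
# Two-source Dempster-Shafer combination over the frame {G (genuine), I (impostor)}.
def dempster_combine(m_local, m_global):
    """m_*: basic probability assignments as {frozenset(hypotheses): mass}."""
    combined, conflict = {}, 0.0
    for a, ma in m_local.items():
        for b, mb in m_global.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                conflict += ma * mb               # mass falling on the empty set
    # Normalise by 1 - conflict (the denominator of the combination rule).
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

G, I, GI = frozenset("G"), frozenset("I"), frozenset("GI")
m_local  = {G: 0.70, I: 0.20, GI: 0.10}   # bpa derived from the local score (assumed mapping)
m_global = {G: 0.60, I: 0.30, GI: 0.10}   # bpa derived from the global score (assumed mapping)

m_final = dempster_combine(m_local, m_global)
decision = "accept" if m_final.get(G, 0.0) >= 0.8 else "reject"   # illustrative threshold
```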

Page 18: IEEE CVPR Biometrics 2009


Experimental evaluation and results

► We carried out extensive experiments on the IITK and the ORL face databases.

► The local and global matching strategies were evaluated independently on both databases.

► The matching scores obtained from the local and global matching are fused together to improve the recognition performance.

Page 19: IEEE CVPR Biometrics 2009


Evaluation on the IITK database

► 800 face images (200×4).
► ±20 degrees rotation in head position.
► Controlled illumination.
► Downscaled to 140×100 pixels.

[Figure: ROC curves (False Acceptance Rate vs. False Rejection Rate) on the IITK face database for both the local and global matching strategies.]

Matching strategy    FRR (%)   FAR (%)   EER (%)   Recognition rate (%)
Local matching       6.29      2.19      4.24      95.76
Global matching      9.87      3.61      6.79      93.21
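For reference, the sketch below shows the standard way FAR, FRR and the EER reported in the table are computed from sets of genuine and impostor matching scores (higher score = better match assumed); this is the textbook definition, not necessarily the exact evaluation protocol used in the paper.

```python
# Standard FAR/FRR/EER computation from genuine and impostor score arrays.
import numpy as np

def far_frr_eer(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])   # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])     # genuine users rejected
    i = np.argmin(np.abs(far - frr))        # operating point where FAR is closest to FRR
    eer = (far[i] + frr[i]) / 2.0
    return far, frr, eer
```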

Page 20: IEEE CVPR Biometrics 2009


Evaluation on the ORL database

► 400 face images of 40 persons.
► We used 200 face images, with 5 samples per user.
► ±20 to ±30 degrees orientation changes are considered.
► The face images show variations in pose and facial expression (smile/no smile, open/closed eyes).
► The original face images were downscaled to 140×100 pixels.

[Figure: ROC curves (False Acceptance Rate vs. False Rejection Rate) on the ORL face database for the local and global matching strategies.]

Matching strategy    FRR (%)   FAR (%)   EER (%)   Recognition rate (%)
Local matching       3.77      1.45      2.61      97.39
Global matching      5.86      2.48      4.17      95.83

Page 21: IEEE CVPR Biometrics 2009


Fusion of local and global matching scores

► We applied the Dempster-Shafer theory for fusion.

► The fusion method has been applied to the IITK, ORL and Yale face databases. (To limit the page length, the partial results for the Yale database have not been included in the paper.)

Page 22: IEEE CVPR Biometrics 2009


Fusion of local and global matching scores

[Figure: ROC curves obtained with the fusion approach on three face databases: IITK, ORL and Yale.]

Page 23: IEEE CVPR Biometrics 2009


Conclusion

► Human faces can be characterized on the basis of both local and global features. While global features are easier to capture, they are generally less discriminative than localized features; they are, however, less sensitive to localized changes in the face due to the partial deformability of the facial structure.

► The optimal face representation should therefore allow matching of localized facial features while also providing a global similarity measure for the face.

Page 24: IEEE CVPR Biometrics 2009


References

► G. Shakhnarovich and B. Moghaddam. Face recognition in subspaces. In S. Li and A. Jain, editors, Handbook of Face Recognition, pages 141–168. Springer Verlag, 2004.

► G. Shakhnarovich, J. W. Fisher, and T. Darrell. Face recognition from long-term observations. In European Conference on Computer Vision, pages 851–865, 2002.

► L. Wiskott, J. Fellous, N. Kruger, and C. von der Malsburg. Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:775–779, 1997.

► J. Bigun. Retinal vision applied to facial features detection and face authentication. Pattern Recognition Letters, 23(4):463–475, 1997.

► G. Zhang, X. Huang, S. Li, Y. Wang, and X. Wu. Boosting local binary pattern (LBP)-based face recognition. In SINOBIOMETRICS 2004, LNCS 3338, pages 179–186. Springer Verlag, 2004.

► G. Heusch, Y. Rodriguez, and S. Marcel. Local binary patterns as an image preprocessing for face authentication. IDIAP-RR 76, IDIAP, 2005.

► D. R. Kisku, A. Rattani, E. Grosso, and M. Tistarelli. Face identification by SIFT-based complete graph topology. In IEEE Workshop on Automatic Identification Advanced Technologies, pages 63–68, 2007.

► D. Lowe. Object recognition from local scale-invariant features. In Int. Conf. on Computer Vision, pages 1150–1157, 1999.

Page 25: IEEE CVPR Biometrics 2009


References (contd.)

► D. Lowe. Distinctive image features from scale-invariant keypoints. Int. Journal of Computer Vision, 60(2):91–110, 2004.

► U. Park, S. Pankanti, and A. K. Jain. Fingerprint verification using SIFT features. In Biometric Technology for Human Identification V, edited by B. V. K. Vijaya Kumar, S. Prabhakar, and A. A. Ross, Proceedings of the SPIE, 6944:69440K, 2008.

► F. Smeraldi, N. Capdevielle, and J. Bigün. Facial features detection by saccadic exploration of the Gabor decomposition and support vector machines. In 11th Scandinavian Conference on Image Analysis, 1:39–44, 1999.

► N. Gourier, D. Hall, and J. L. Crowley. Estimating face orientation from robust detection of salient facial structures. FG Net Workshop on Visual Observation of Deictic Gestures (POINTING), 2004.

► R. Snelick, U. Uludag, A. Mink, M. Indovina, and A. Jain. Large scale evaluation of multimodal biometric authentication using state-of-the-art systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3):450–455, 2005.

► B. Heisele, P. Ho, J. Wu, and T. Poggio. Face recognition: component-based versus global approaches. Computer Vision and Image Understanding, 91(1-2):6–21, 2003.

► N. Wilson. Algorithms for Dempster-Shafer theory. Oxford Brookes University.

► J. A. Barnett. Computational methods for a mathematical theory of evidence. In IJCAI, pages 868–875, 1981.

Page 26: IEEE CVPR Biometrics 2009


Thank you!

Page 27: IEEE CVPR Biometrics 2009


For contacts: [email protected]