
Final Report

Jan 15, 2016

Page 1: Final Report

A Project Report on

Face Recognition using Principal Component Analysis Algorithm

Submitted By

Priyanshu Chaurasia

Vinay Kr. Singh

Abhishek Bajpayee

Kuldeep Kumar

Santosh Yadav

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING

VIDYA BHAVAN COLLEGE FOR ENGINEERING TECHNOLOGY, KANPUR

UTTAR PRADESH TECHNICAL UNIVERSITY, LUCKNOW

Academic Session - 2015

A Project Report on

Face Recognition using Principal Component Analysis Algorithm

SUBMITTED BY

Priyanshu Chaurasia

Vinay Kr. Singh

Abhishek Bajpayee

Kuldeep Kumar

Santosh Yadav

submitted in partial fulfillment of the requirements for the award of the degree

of

Bachelor of Technology

IN

ELECTRONICS & COMMUNICATION ENGINEERING

VIDYA BHAVAN COLLEGE FOR ENGINEERING TECHNOLOGY, KANPUR

UTTAR PRADESH TECHNICAL UNIVERSITY, LUCKNOW

Academic Session - 2015

CERTIFICATE

I hereby certify that the work which is being presented in the B.Tech. Major Project Report

entitled “Face Recognition using Principal Component Analysis Algorithm”, in partial

fulfillment of the requirements for the award of the Bachelor of Technology in Electronics

& Communication Engineering and submitted to the Department of Electronics &

Communication Engineering of Vidya Bhavan College of Engineering, Kanpur is an

authentic record of my own work carried out during a period from January 2015 to June

2015 under the supervision of Mr. Uttam Yadav, Head of the E&C Department.

The matter presented in this thesis has not been submitted by me for the award of any

other degree elsewhere.

This is to certify that the above statement made by the candidate is correct to the best

of my knowledge.

Signature of Supervisor(s)

DEPARTMENT OF ELECTRONICS AND COMMUNICATION,

VIDYA BHAVAN COLLEGE FOR ENGINEERING AND TECHNOLOGY, KANPUR

Candidate’s Declaration

I hereby declare that the work presented in this dissertation entitled “Face

Recognition Using Principal Component Analysis Algorithm” in partial

fulfillment for the award of degree of Bachelor of Technology submitted in the

Department of Electronics and Communication Engineering, Vidya Bhavan College for Engineering Technology, Kanpur (affiliated to Uttar Pradesh Technical University, Lucknow), is my own work, carried out under the guidance of Mr. Uttam Yadav,

Professor in the Department of Electronics and Communication Engineering, Vidya

Bhavan College For Engineering and Technology, Kanpur, U.P.

Date:

Priyanshu Chaurasia (1141831042)

Vinay Kr. Singh (1141831054)

Abhishek Bajpayee (1141831001)

Kuldeep Kumar (1141831028)

Santosh Yadav (1041831021)

This is to certify that the above statements made by the candidate are correct to the

best of my knowledge.

Date:

(Mr. Uttam Yadav)
Reader & Head
Department of E&C Engineering
V.B.C.E.T., Kanpur (U.P.)

(Mr. Uttam Yadav)
Professor
Department of E&C Engineering
V.B.C.E.T., Kanpur (U.P.)

ACKNOWLEDGEMENT

I would like to extend my gratitude and my sincere thanks to my honorable, esteemed supervisor, Mr. Uttam Yadav, Professor, Department of Electronics and Communication Engineering, V.B.C.E.T., Kanpur, for his immeasurable guidance and the valuable time that he devoted to this project. I sincerely thank him for his exemplary guidance and encouragement. His trust and support inspired me at the most important moments in making the right decisions, and I am glad to have worked with him.

I would also like to thank Mr. Uttam Yadav, Head, Department of Electronics and Communication Engineering, V.B.C.E.T., Kanpur, and faculty members Shivam Singh, Divya Pandey and Ritesh Yadav for their guidance and help. I am also thankful to the scores of books and technical papers from which various ideas were drawn, but whose origin is now obscure.

NAMES OF STUDENTS

Priyanshu Chaurasia (1141831042)

Vinay Kr. Singh (1141831054)

Abhishek Bajpayee (1141831001)

Kuldeep Kumar (1141831028)

Santosh Kumar (1041831021)

TABLE OF CONTENTS

1. Chapter 1 - Introduction
   1.1 Use the Face for Recognition
   1.2 Applications
   1.3 General Difficulties

2. Chapter 2 - Literature Survey of Face Recognition Techniques
   2.0 Face Recognition Techniques
   2.1 Face Recognition from Intensity Images
       2.1.1 Feature-based
       2.1.2 Advantages and Disadvantages
       2.1.3 Holistic
           2.1.3.1 Statistical
   2.2 Motivation
   2.3 Approach
   2.4 Ethics and Societal Implications
   2.5 Face Recognition
   2.6 Eigenface-based Recognition
   2.7 How Humans Perform Face Recognition
   2.8 Face Recognition from a Law Enforcement Perspective
   2.9 Current Uses of Face Recognition
   2.10 Face Recognition Techniques and Results
   2.11 Face Recognition

3. Chapter 3
   3.1 Introduction of PCA Technology
   3.2 Face Recognition by Using Some Other Algorithms
   3.3 Principal Components Analysis
   3.4 Theory of PCA
   3.5 Principal Component Analysis Using Matrices
   3.6 Limitations of PCA

4. RESULTS

5. APPENDIX-A

6. REFERENCES

LIST OF FIGURES

1. Chapter 1
   1.1 Example of face recognition
   1.2 Biometric Market Technology

2. Chapter 2
   2.1.1 Geometrical features (white) used in the face recognition experiments
   2.1.2 35 manually identified facial features
   2.1.3 Grids for face recognition
   2.1.4 a) The twelve fiducial points of interest for face recognition; b) Feature vector has 21 components: ten distances D1-D10 (normalized with /(D4+D5)) and eleven profile arcs A1-A11 (normalized with /(A5+A6))
   2.1.5 Overall Project Structure
   2.3 Face Identification System
   2.7.1 Staring at the faces in the green circles will cause one to misidentify the central face with the faces circled in red. This is an example of face aftereffects
   2.7.2 Photograph during the recording of “We Are the World”
   2.8 Figure depicts increasingly controlled environments from left to right

3. Chapter 3
   3.1 The flow diagram for face recognition

4. Chapter 4
   Case 1
   4.1 Training Set (AT&T)
   4.2 Normalized Training Set (AT&T)
   4.3 Mean image
   4.4 Eigenfaces
   4.5 Input and re-constructed images
   4.6 Weight of input image and Euclidean distance of input image
   Case 2
   4.7 Input and re-constructed image with input image absent in training database
   Case 3
   4.8 Training database images
   4.9 Normalized training database images
   4.10 The mean image
   4.11 Eigenface images
   4.12 Input and re-constructed images
   4.13 Weight of input image and Euclidean distance of input image
   Case 4
   4.14 Training database images
   4.15 Normalized training database images
   4.16 The mean image
   4.17 Input and re-constructed images
   4.18 Input and re-constructed images

CHAPTER-1

1. INTRODUCTION

“Biometrics consist of technologies that support automatic identification or verification of

identity based on behavioral or physical traits” [1]. Biometrics validates identities by

measuring unique individual characteristics. The most predominant areas of biometrics involve fingerprints, iris and facial characteristics, hand geometry, retina, voice and touch. Over the last twenty years or so there has not been a significant market for biometric technologies; they were consigned to infrequent use in films and in some high-security government or military installations. Nowadays, biometrics is strengthening its hold on many aspects of both public and private life. For example, in some cases in the computer industry, biometrics is replacing the more conventional personal identification numbers (PINs) and passwords. Although password/PIN systems and token systems are still the most common person verification and identification methods, trouble with forgery, theft and lapses in users' memory poses a very real threat to high-security environments, which are now turning to biometric technologies to alleviate this threat. The area in which biometrics is gaining the most support is the protection of restricted areas, both commercial and domestic.

This project is based on the face recognition method of Biometric authentication. Although

other Biometric methods such as fingerprints and iris scans are more accurate methods of

identification, facial recognition provides an ingrained human backup because we

instinctively recognize one another. Face recognition is the process in which the facial

features of someone are recognized and then matched to one of the many faces in a database.

“The intuitive way to do face recognition is to look at the major features of the face and

compare them to the same features on other faces.”[2].

The history of face recognition dates back to the 1960s, when a semi-automated method was

used to compare facial features. First the key features in the photograph were marked by

hand; key features included eyes, ears, nose and mouth.

Then the distances and ratios between these marks and a common reference point were

computed and these values were then compared to reference data of other faces. In the early

1970s, Goldstein, Harmon and Lesk [2] created a face recognition system using 21 particular markers, e.g. hair color and lip thickness. This method was less computerized than the previous one because many of the measurements had to be made entirely by hand.

The next step in face recognition was taken by Fischler and Elschlager [2] in the early 1970s.

They measured the key features in a face using templates of the features of the different parts

of the face.

They then plotted all the pieces onto a general template. Even though this method was more automated than the previous one, it proved inconclusive, as the features used did not include enough distinctive data to represent a face.

Kirby and Sirovich pioneered the eigenface approach in 1988 at Brown University. It was the

first genuinely successful system for automatic recognition of human faces. The system

functions by projecting face images onto a feature space that spans the significant

variations among known face images. These significant features are known as “Eigenfaces”.

This eigenface approach is used in this project.

Fingerprints, hand geometry, iris scans, DNA analysis and to some extent personal signatures

are all biometric identifiers. But the only one that does not delay or interfere with access is

face recognition.

Humans identify others by their face and voice and therefore are likely to be more

comfortable with a system that uses face and voice recognition.

Figure 1.1 : Example of face recognition [4]

This makes face recognition ideal for high-traffic areas which are open to the general public, e.g. airports and railway stations, ATMs, public transportation and businesses of all

kinds. Face recognition gives a record of who was there. Since the record is stored in a

database, known persons can be detected automatically and unknown persons checked

quickly.

Concerns following recent terror attacks and constant threats to safety have created a

pressing need for advanced security. As a result, merchants of sophisticated face recognition

biometrics solutions have found their products very much in demand.

According to the World Face Recognition Biometrics Market, the face recognition market

earned revenues of $186 million in 2005 and is likely to grow at a compound annual growth

rate (CAGR) of 27.5 percent to reach $1021.1 million in 2012 [6].
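The projected growth is a plain compound-interest calculation and can be checked against the quoted figures (the small gap is rounding in the reported CAGR):

```python
# Check of the quoted projection: $186M (2005) compounded at a
# 27.5% CAGR over the seven years to 2012.
revenue_2005 = 186.0                      # USD millions, as quoted
cagr = 0.275
years = 2012 - 2005                       # 7 compounding periods
revenue_2012 = revenue_2005 * (1 + cagr) ** years
print(round(revenue_2012, 1))             # ~1018.8, close to the quoted $1021.1M
```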

Figure 1.2 : Biometric Market Technology [7]

However, to compete effectively with alternative biometric technologies, price reduction of

face biometric solutions is critical. A major competitor that matches the face biometric

system in terms of accuracy and performance is the non-automated fingerprint identification

system (non-AFIS). A reluctance to lower prices may hamper the adoption of face

recognition biometrics.

The purpose of this project is to create a vision-based biometric authentication system for PCs. Nowadays, most PCs use a password-based authentication system to determine access rights. The reason for creating a vision-based authentication system is that the password-based system is:

Less secure – anybody may enter anybody else's password

Hard on memory – remembering passwords becomes a problem when a user is accessing a number of different systems

Tedious – a user has to enter his/her password every time the workstation needs to be locked

Advances in image processing techniques, particularly in the areas of face detection and face recognition, coupled with the low cost of digital imaging hardware, make a vision-based authentication system quite practical.

1.1 USE THE FACE FOR RECOGNITION

Biometric-based techniques have emerged as the most promising option for recognizing

individuals in recent years since, instead of authenticating people and granting them access to

physical and virtual domains based on passwords, PINs, smart cards, plastic cards, tokens,

keys and so forth, these methods examine an individual's physiological and/or behavioural

characteristics in order to determine and/or ascertain his identity. Passwords and PINs are

hard to remember and can be stolen or guessed; cards, tokens, keys and the like can be

misplaced, forgotten, purloined or duplicated; magnetic cards can become corrupted and

unreadable. However, an individual's biological traits cannot be misplaced, forgotten, stolen

or forged.

Biometric-based technologies include identification based on physiological characteristics

(such as face, fingerprints, finger geometry, hand geometry, hand veins, palm, iris, retina, ear

and voice) and behavioural traits (such as gait, signature and keystroke dynamics) [1]. Face

recognition appears to offer several advantages over other biometric methods, a few of which

are outlined here:

Almost all these technologies require some voluntary action by the user, i.e., the user needs

to place his hand on a hand-rest for fingerprinting or hand geometry detection and has to

stand in a fixed position in front of a camera for iris or retina identification. However, face

recognition can be done passively without any explicit action or participation on the part of

the user since face images can be acquired from a distance by a camera. This is particularly

beneficial for security and surveillance purposes. Furthermore, data acquisition in general is

fraught with problems for other biometrics: techniques that rely on hands and fingers can be

rendered useless if the epidermis tissue is damaged in some way (i.e., bruised or cracked).

Iris and retina identification require expensive equipment and are much too sensitive to any

body motion. Voice recognition is susceptible to background noises in public places and

auditory fluctuations on a phone line or tape recording.

Signatures can be modified or forged. However, facial images can be easily obtained with a

couple of inexpensive fixed cameras. Good face recognition algorithms and appropriate pre-processing of the images can compensate for noise and slight variations in orientation, scale

and illumination. Finally, technologies that require multiple individuals to use the same

equipment to capture their biological characteristics potentially expose the user to the

transmission of germs and impurities from other users. However, face recognition is totally

non-intrusive and does not carry any such health risks.

1.2 APPLICATIONS

Face recognition is used for two primary tasks:

1. Verification (one-to-one matching): When presented with a face image of an unknown

individual along with a claim of identity, ascertaining whether the individual is who he/she

claims to be.

2. Identification (one-to-many matching): Given an image of an unknown individual,

determining that person's identity by comparing (possibly after encoding) that image with a

database of (possibly encoded) images of known individuals.
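The distinction can be made concrete with a toy sketch (a hypothetical distance-based matcher on pre-computed feature vectors, not the system described in this report): verification compares against the single template for the claimed identity, identification against the whole database.

```python
import numpy as np

def verify(probe, claimed_template, threshold=0.5):
    """One-to-one: accept the identity claim iff the probe is close
    enough to the template stored for the claimed identity."""
    return np.linalg.norm(probe - claimed_template) <= threshold

def identify(probe, gallery):
    """One-to-many: return the name of the closest gallery template."""
    names = list(gallery)
    dists = [np.linalg.norm(probe - gallery[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy 2-D feature vectors standing in for encoded face images.
gallery = {"alice": np.array([0.0, 0.0]), "bob": np.array([1.0, 1.0])}
probe = np.array([0.1, 0.0])
print(verify(probe, gallery["alice"]))   # True: the claim "alice" is accepted
print(identify(probe, gallery))          # alice
```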

There are numerous application areas in which face recognition can be exploited for these

two purposes, a few of which are outlined below.

• Security (access control to buildings, airports/seaports, ATM machines and border

checkpoints [2, 3]; computer/ network security [4]; email authentication on multimedia

workstations).

• Surveillance (a large number of CCTVs can be monitored to look for known criminals,

drug offenders, etc. and authorities can be notified when one is located; for example, this

procedure was used at the Super Bowl 2001 game at Tampa, Florida [5]; in another instance,

according to a CNN report, two cameras linked to state and national databases of sex

offenders, missing children and alleged abductors have been installed recently at Royal Palm

Middle School in Phoenix, Arizona [6]).

• General identity verification (electoral registration, banking, electronic commerce,

identifying newborns, national IDs, passports, drivers' licenses, employee IDs).

• Criminal justice systems (mug-shot/booking systems, post-event analysis, forensics).

• Image database investigations (searching image databases of licensed drivers, benefit

recipients, missing children, immigrants and police bookings).

• “Smart Card” applications (in lieu of maintaining a database of facial images, the face-print

can be stored in a smart card, bar code or magnetic stripe, authentication of which is

performed by matching the live image and the stored template) [7].

• Multi-media environments with adaptive human computer interfaces (part of ubiquitous or

context aware systems, behaviour monitoring at childcare or old people's centres,

recognizing a customer and assessing his needs) [8, 9].

• Video indexing (labelling faces in video) [10, 11].

• Witness faces reconstruction [12].

In addition to these applications, the underlying techniques in the current face recognition

technology have also been modified and used for related applications such as gender

classification [13-15], expression recognition [16, 17] and facial feature recognition and

tracking [18]; each of these has its utility in various domains: for instance, expression

recognition can be utilized in the field of medicine for intensive care monitoring [19] while

facial feature recognition and detection can be exploited for tracking a vehicle driver's eyes

and thus monitoring his fatigue [20], as well as for stress detection [21].

Face recognition is also being used in conjunction with other biometrics such as speech, iris,

fingerprint, ear and gait recognition in order to enhance the recognition performance of these

methods [8, 22-34].

1.3 GENERAL DIFFICULTIES

Face recognition is a specific and hard case of object recognition. The difficulty of this

problem stems from the fact that in their most common form (i.e., the frontal view) faces

appear to be roughly alike and the differences between them are quite subtle. Consequently,

frontal face images form a very dense cluster in image space which makes it virtually

impossible for traditional pattern recognition techniques to accurately discriminate among

them with a high degree of success [35].

Furthermore, the human face is not a unique, rigid object. Indeed, there are numerous factors

that cause the appearance of the face to vary.

The sources of variation in the facial appearance can be categorized into two groups: intrinsic

factors and extrinsic ones [36].

A) Intrinsic factors are due purely to the physical nature of the face and are independent of

the observer. These factors can be further divided into two classes: intrapersonal and

interpersonal

[37]. Intrapersonal factors are responsible for varying the facial appearance of the same

person, some examples being age, facial expression and facial paraphernalia (facial hair,

glasses, cosmetics, etc.). Interpersonal factors, however, are responsible for the differences in

the facial appearance of different people, some examples being ethnicity and gender.

B) Extrinsic factors cause the appearance of the face to alter via the interaction of light with

the face and the observer. These factors include illumination, pose, scale and imaging

parameters (e.g., resolution, focus, imaging, noise, etc.).

Evaluations of state-of-the-art recognition techniques conducted during the past several

years, such as the FERET evaluations [7, 38], FRVT 2000 [39], FRVT 2002 [40] and the

FAT 2004 [41], have confirmed that age variations, illumination variations and pose

variations are three major problems plaguing current face recognition systems [42].

Although most current face recognition systems work well under constrained conditions (i.e.,

scenarios in which at least a few of the factors contributing to the variability between face

images are controlled), the performance of most of these systems degrades rapidly when they

are put to work under conditions where none of these factors are regulated [43].

CHAPTER-2

Literature Survey of Face Recognition Techniques

2. Face Recognition Techniques

The method for acquiring face images depends upon the underlying application. For instance,

surveillance applications may best be served by capturing face images by means of a video

camera while image database investigations may require static intensity images taken by a

standard camera.

Some other applications, such as access to top security domains, may even necessitate the

forgoing of the nonintrusive quality of face recognition by requiring the user to stand in front

of a 3D scanner or an infra-red sensor.

Therefore, depending on the face data acquisition methodology, face recognition techniques

can be broadly divided into three categories: methods that operate on intensity images, those

that deal with video sequences, and those that require other sensory data such as 3D

information or infra-red imagery. The following discussion sheds some light on the methods

in each category and attempts to give an idea of some of the benefits and drawbacks of the

schemes mentioned therein in general (for detailed surveys, please see [44, 45]).

2.1- Face Recognition from Intensity Images

Face recognition methods for intensity images fall into two main categories: feature-based

and holistic [46-48]. An overview of some of the well-known methods in these categories is

given below.

2.1.1- Feature-based

Feature-based approaches first process the input image to identify and extract (and measure)

distinctive facial features such as the eyes, mouth, nose, etc., as well as other fiducial marks,

and then compute the geometric relationships among those facial points, thus reducing the

input facial image to a vector of geometric features. Standard statistical pattern recognition

techniques are then employed to match faces using these measurements.

Early work carried out on automated face recognition was mostly based on these techniques.

One of the earliest such attempts was by Kanade [49], who employed simple image

processing methods to extract a vector of 16 facial parameters - which were ratios of

distances, areas and angles (to compensate for the varying size of the pictures) - and used a

simple Euclidean distance measure for matching to achieve a peak performance of 75% on a

database of 20 different people using 2 images per person (one for reference and one for

testing).
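The flavor of this approach can be sketched as follows (illustrative ratios, not Kanade's actual 16 parameters): because every entry is a ratio of distances, scaling the whole picture leaves the vector unchanged, and faces can then be matched with a plain Euclidean distance.

```python
import numpy as np

def geometric_features(landmarks):
    """Reduce a face to ratios of inter-landmark distances so the vector
    is insensitive to picture size. `landmarks` maps a feature name to
    its (x, y) pixel coordinates."""
    d = lambda a, b: np.linalg.norm(np.subtract(landmarks[a], landmarks[b]))
    eye_dist = d("left_eye", "right_eye")   # normalizing length
    return np.array([
        d("nose", "mouth") / eye_dist,
        d("left_eye", "nose") / eye_dist,
        d("right_eye", "mouth") / eye_dist,
    ])

# Doubling every coordinate (a bigger photograph of the same face)
# yields exactly the same feature vector.
face = {"left_eye": (30, 30), "right_eye": (70, 30),
        "nose": (50, 55), "mouth": (50, 75)}
bigger = {name: (2 * x, 2 * y) for name, (x, y) in face.items()}
assert np.allclose(geometric_features(face), geometric_features(bigger))
```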

Brunelli and Poggio [46], building upon Kanade's approach, computed a vector of 35

geometric features (Fig. 2.1.1) from a database of 47 people (4 images per person) and reported a

90% recognition rate. However, they also reported 100% recognition accuracy for the same

database using a simple template-matching approach. More sophisticated feature extraction

techniques involve deformable templates ([50], [51], [52]), Hough transform methods [53],

Reisfeld's symmetry operator [54] and Graf's filtering and morphological operations [55].

However, all of these techniques rely heavily on heuristics such as restricting the search

subspace with geometrical constraints [56]. Furthermore, a certain tolerance must be given

to the models since they can never perfectly fit the structures in the image. However, the use

of a large tolerance value tends to destroy the precision required to recognize individuals on

the basis of the model's final best-fit parameters and makes these techniques insensitive to

the minute variations needed for recognition [37]. More recently, Cox et al. [57] reported a

recognition performance of 95% on a database of 685 images (a single image for each

individual) using a 30-dimensional feature vector derived from 35 facial features (Fig. 2.1.2).

However, the facial features were manually extracted, so it is reasonable to assume that the

recognition performance would have been much lower if an automated, and hence less

precise, feature extraction method had been adopted. In general, current algorithms for

automatic feature extraction do not provide a high degree of accuracy and require

considerable computational capacity [58].

Fig. 2.1.1: Geometrical features (white) used in the face recognition experiments [46].

Fig. 2.1.2: 35 manually identified facial features [57].

Another well-known feature-based approach is the elastic bunch graph matching method

proposed by Wiskott et al. [59]. This technique is based on Dynamic Link Structures [60]. A

graph for an individual face is generated as follows: a set of fiducial points on the face are

chosen.

Each fiducial point is a node of a fully connected graph, and is labeled with the Gabor filters'

responses applied to a window around the fiducial point. Each arc is labeled with the

distance between the correspondent fiducial points. A representative set of such graphs is

combined into a stack-like structure, called a face bunch graph. Once the system has a face

bunch graph, graphs for new face images can then be generated automatically by Elastic

Bunch Graph Matching. Recognition of a new face image is performed by comparing its

image graph to those of all the known face images and picking the one with the highest

similarity value. Using this architecture, the recognition rate can reach 98% for the first rank

and 99% for the first 10 ranks using a gallery of 250 individuals. The system has been

enhanced to allow it to deal with different poses (Fig. 2.1.3) [61] but the recognition

performance on faces of the same orientation remains the same. Though this method was

among the best performing ones in the most recent FERET evaluation [62, 63],

it does suffer from the serious drawback of requiring the graph placement for the first 70

faces to be done manually before the elastic graph matching becomes adequately dependable

[64].
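The node labels ("jets" of Gabor filter responses) and the similarity used to compare them can be sketched in isolation (a simplified illustration with four orientations at a single scale; the full method also matches the graph geometry):

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=4.0):
    """A single complex Gabor kernel: a plane wave under a Gaussian."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def jet(image, cx, cy, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Magnitudes of Gabor responses in a window around the fiducial
    point (cx, cy) - the label attached to one graph node."""
    half = 7                                 # matches the 15x15 kernels
    patch = image[cy-half:cy+half+1, cx-half:cx+half+1]
    return np.array([np.abs(np.sum(patch * gabor_kernel(theta=t)))
                     for t in thetas])

def jet_similarity(j1, j2):
    """Normalized dot product (cosine similarity) of two jets."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))
```

Recognition then amounts to summing such node similarities over the whole graph and picking the stored face with the highest total.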

Campadelli and Lanzarotti [65] have recently experimented with this technique, where they

have eliminated the need to do the graph placement manually by using parametric models,

based on the deformable templates proposed in [50], to automatically locate fiducial points.

They claim to have obtained the same performances as the elastic bunch graph employed in

[59]. Other recent variations of this approach replace the Gabor features by a graph matching

strategy [66] and HOGs (Histograms of Oriented Gradients) [67].

Fig. 2.1.3: Grids for face recognition [61].

Considerable effort has also been devoted to recognizing faces from their profiles [68-72]

since, in this case, feature extraction becomes a somewhat simpler one-dimensional problem

[57, 71]. Kaufman and Breeding [70] reported a recognition rate of 90% using face profiles;

however, they used a database of only 10 individuals. Harmon et al. [68] obtained

recognition accuracies of 96% on a database of 112 individuals, using a 17-dimensional

feature vector to describe face profiles and utilizing a Euclidean distance measure for

matching. More recently, Liposcak and Loncaric [71] reported a 90% accuracy rate on a

database of 30 individuals, using subspace filtering to derive a 21-dimensional feature vector

to describe the face profiles and employing a Euclidean distance measure to match them (Fig.

2.1.4).

Fig. 2.1.4: a) The twelve fiducial points of interest for face recognition; b) Feature vector has

21 components; ten distances D1-D10 (normalized with /(D4+D5)) and eleven profile arcs

A1-A11 (normalized with /(A5+A6)) [71].

2.1.2- Advantages and Disadvantages

The main advantage offered by the feature-based techniques is that since the extraction of

the feature points precedes the analysis done for matching the image to that of a known

individual, such methods are relatively robust to position variations in the input image [37].

In principle, feature-based schemes can be made invariant to size, orientation and/or lighting

[57]. Other benefits of these schemes include the compactness of representation of the face

images and high speed matching [73].

The major disadvantage of these approaches is the difficulty of automatic feature detection

(as discussed above) and the fact that the implementer of any of these techniques has to make

arbitrary decisions about which features are important [74]. After all, if the feature set lacks

discrimination ability, no amount of subsequent processing can compensate for that intrinsic

deficiency [57].

2.1.3- Holistic

Holistic approaches attempt to identify faces using global representations, i.e., descriptions

based on the entire image rather than on local features of the face. These schemes can be

subdivided into two groups: statistical and AI approaches. An overview of some of the

methods in these categories follows.

2.1.3.1- Statistical

In the simplest version of the holistic approaches, the image is represented as a 2D array of

intensity values and recognition is performed by direct correlation comparisons between the

input face and all the other faces in the database. Though this approach has been shown to

work [75] under limited circumstances (i.e., equal illumination, scale, pose, etc.), it is

computationally very expensive and suffers from the usual shortcomings of straightforward

correlation-based approaches, such as sensitivity to face orientation, size, variable lighting

conditions, background clutter, and noise [76]. The major hindrance to the direct-matching

methods' recognition performance is that they attempt to perform classification in a space of

very high dimensionality [76]. To counter this curse of dimensionality, several other schemes

have been proposed that employ statistical dimensionality reduction methods to obtain and

retain the most meaningful feature dimensions before performing recognition. A few of these

are mentioned below.

Sirovich and Kirby [77] were the first to utilize Principal Components Analysis (PCA) [78,

79] to economically represent face images. They demonstrated that any particular face can be

efficiently represented along the eigenpictures coordinate space, and that any face can be

approximately reconstructed by using just a small collection of eigenpictures and the

corresponding projections ('coefficients') along each eigenpicture.
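This idea can be illustrated with a short NumPy sketch (hypothetical code, not from the report: random vectors stand in for flattened face images, and the sizes are arbitrary):

```python
import numpy as np

# Hypothetical stand-in data: 20 "images" of 64 pixels each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))
mean = X.mean(axis=0)
W = X - mean                          # mean-centred images

# Eigenpictures: an orthonormal basis from the SVD of the centred data.
_, _, Vt = np.linalg.svd(W, full_matrices=False)

# Represent one image by its projections onto the first k eigenpictures,
# then reconstruct it approximately from those coefficients alone.
k = 5
coeffs = Vt[:k] @ (X[0] - mean)       # the 'coefficients'
approx = mean + coeffs @ Vt[:k]

err_k = np.linalg.norm(X[0] - approx)          # truncated basis: some error
full = mean + (Vt @ (X[0] - mean)) @ Vt
err_full = np.linalg.norm(X[0] - full)         # full basis: essentially exact
```

With real face images the leading eigenpictures capture most of the variance, so the truncated error is small even for small k; with random data it merely shrinks as k grows.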


Turk and Pentland [80, 81] realized, based on Sirovich and Kirby's findings, that projections

along eigenpictures could be used as classification features to recognize faces.

They employed this reasoning to develop a face recognition system that builds eigenfaces,

which correspond to the eigenvectors associated with the dominant eigenvalues of the known

face (patterns) covariance matrix, and then recognizes particular faces by comparing their

projections along the eigenfaces to those of the face images of the known individuals. The

eigenfaces define a feature space that drastically reduces the dimensionality of the original

space, and face identification is carried out in this reduced space. An example training set,

the average face, and the top seven eigenfaces derived from the training images are shown in
the figure. The method was tested using a database of 2,500 images of 16 people

under all combinations of 3 head orientations, 3 head sizes or scales, and 3 lighting

conditions and various resolutions.

Recognition rates of 96%, 85%, and 64% were reported for lighting, orientation, and scale
variation, respectively. Though the method appears to be fairly robust to lighting variations, its

performance degrades with scale changes.

The aim of this project is to build a face recognition authentication system that:

Retrieves images from a camera in real-time.

Detects the presence of a face in the image.

Identifies the face against some enrolled images.

Through integration with the PC's authentication system, logs on the user

corresponding to the identified face.


Fig 2.1.5: Overall Project Structure (flow diagram: the camera captures images and transfers
them to the PC; face detection finds the face in the image; face recognition analyses and
recognises the face; authentication logs on the user whose face is in the image)

The initial plan for this project was:

Get a face detection program up and running.

Implement that program with the authentication system to create an initial working

demo.

Get a face recognition program working.

Detect a user's face with the face recognition program and log the user on.

Add a utility to add new users to the database of authorized faces.

Integrate the above programs into a fully functional face recognition authentication

system.

2.2 Motivation

Face detection plays an important role in today's world. It has many real-world applications
like human/computer interfaces, surveillance, authentication, and video indexing. However,
research in this field is still young. Face recognition depends heavily on the particular choice


of features used by the classifier. One usually starts with a given set of features and then

attempts to derive an optimal subset (under some criteria) of features leading to high

classification performance with the expectation that similar performance can also be

displayed on future trials using novel (unseen) test data.

Interactive Face Recognition (IFR) can benefit the areas of: Law Enforcement, Airport

Security, Access Control, Driver's Licenses & Passports, Homeland Defense, Customs &

Immigration and Scene Analysis. The following paragraphs detail each of these topics in
turn.

Law Enforcement: Today's law enforcement agencies are looking for innovative

technologies to help them stay one step ahead of the world's ever-advancing terrorists.

Airport Security: IFR can enhance security efforts already underway at most airports and

other major transportation hubs (seaports, train stations, etc.). This includes the identification

of known terrorists before they get onto an airplane or into a secure location.

Access Control: IFR can enhance security efforts considerably. Biometric identification

ensures that a person is who they claim to be, eliminating any worry of someone using

illicitly obtained keys or access cards.

Driver's Licenses & Passports: IFR can leverage the existing identification infrastructure.

This includes using existing photo databases and the existing enrollment technology (e.g.
cameras and capture stations), and integrating with terrorist watch lists, including regional,

national, and international "most-wanted" databases.

Homeland Defense: IFR can help in the war on terrorism, enhancing security efforts. This

includes scanning passengers at ports of entry; integrating with CCTV cameras for "out-of-

the-ordinary" surveillance of buildings and facilities; and more.

Customs & Immigration: New laws require advanced submission of manifests from planes

and ships arriving from abroad; this should enable the system to assist in identification of

individuals who should, and should not, be there.


2.3 Approach

The basic algorithm starts with a pre-processing step, consisting of digitization and

segmentation. The next step is called face segmentation. We define the face segmentation

problem as: given a scene that may contain one or more faces, create sub-images that crop

out individual faces. After face segmentation, the device enters into the face identification

mode, as shown.

Fig 2.3: Face Identification System (flow diagram: face segmentation feeds a feature-extraction
classifier, which matches against the suspect face database and feature database; a GUI
displays possible candidates for selection)

Human skin is relatively easy to detect in controlled environments, but detection in

uncontrolled settings is still an open problem [6]. Many approaches to face detection are

only applicable to static images assumed to contain a single face in a particular part of the

image. Additional assumptions are placed on pose, lighting, and facial expression. When

confronted with a scene containing an unknown number of faces, at unknown locations, they

are prone to high false detection rates and computational inefficiency. Real-world images

have many sources of corruption (noise, background activity, and lighting variation) where

objects of interest, such as people, may only appear at low resolution. The problem of

reliably and efficiently detecting human faces is attracting considerable interest. An earlier

generation of such a system has already been used for the purpose of flower identification by

[7, 8].

2.4- Ethics and Societal Implications

Face detection is the fastest growing biometric technology today [2]. Despite lingering
questions regarding the practical usefulness of facial identification technology,


law enforcement and military facial identification systems have been in place for several

years without arousing too much controversy. According to industry insiders, this is because

these applications have proven quite successful in carrying out specific objectives and the

public is often unaware of these uses. After September 11, many of the face recognition
companies redoubled their efforts to create reliable facial recognition equipment. According to
one study, the industry still has a lot of work to do. Even though face recognition systems
have many advantages, some people still feel that they invade the privacy of citizens.

Also the accuracy of the systems is of concern. Even if a subject's face is stored in the

database, a disguise or even a minor change in appearance, like wearing sunglasses or
growing a mustache, can often fool the system. Even an unusual facial expression

can confuse the software. Facial identifiers often cannot distinguish twins. Other factors

affecting the reliability of the images are changes in the lighting and the angle at which the

photos are taken. The systems often have difficulty recognizing the effects of aging [35].

2.5- Face Recognition

Face recognition is a biometric which uses computer software to determine the identity of the

individual. Face recognition falls into the category of biometrics which is “the automatic

recognition of a person using distinguishing traits” [6]. Other types of biometrics include

fingerprinting, retina scans, and iris scans.

2.6 Eigenface-based Recognition

2D face recognition using eigenfaces is one of the oldest types of face recognition. Turk and

Pentland published the groundbreaking “Face Recognition Using Eigenfaces” in 1991. The

method works by analyzing face images and computing eigenfaces which are faces

composed of eigenvectors. The comparison of eigenfaces is used to identify the presence of a

face and its identity.

There is a five-step process involved with the system developed by Turk and Pentland. First,
the system needs to be initialized by feeding it a set of training images of faces. These are
used to define the face space, which is the set of images that are face-like. Next, when a face is


encountered it calculates an eigenface for it. By comparing it with known faces and using

some statistical analysis it can be determined whether the image presented is a face at all.

Then, if an image is determined to be a face the system will determine whether it knows the

identity of it or not. The optional final step is that if an unknown face is seen repeatedly, the

system can learn to recognize it.

The eigenface technique is simple, efficient, and yields generally good results in controlled

circumstances [1]. The system was even tested to track faces on film. There are also some

limitations of eigenfaces. There is limited robustness to changes in lighting, angle, and

distance [6]. 2D recognition systems do not capture the actual size of the face, which is a

fundamental problem [4]. These limits affect the technique's application with security

cameras because frontal shots and consistent lighting cannot be relied upon.

2.7 How Humans Perform Face Recognition

It is important for researchers to know the results of studies on human face recognition [8].

Knowing these results may help them develop ground breaking new methods. After all,

rivaling and surpassing the ability of humans is the key goal of computer face recognition

research.

The key results of a 2006 paper “Face Recognition by Humans: Nineteen Results All

Computer Vision Researchers Should Know About” are as follows:

1. Humans can recognize familiar faces in very low-resolution images.

2. The ability to tolerate degradations increases with familiarity.

3. High-frequency information by itself is insufficient for good face recognition

performance.

4. Facial features are processed holistically.

5. Of the different facial features, eyebrows are among the most important for

recognition.

6. The important configural relationships appear to be independent across the width and

height dimensions.


7. Face-shape appears to be encoded in a slightly caricatured manner.

8. Prolonged face viewing can lead to high level aftereffects, which suggest prototype-

based encoding.

Fig. 2.7.1 :- Staring at the faces in the green circles will cause one to misidentify the

central face with the faces circled in red. This is an example of face aftereffects [8].

9. Pigmentation cues are at least as important as shape cues.

10. Color cues play a significant role, especially when shape cues are degraded.

11. Contrast polarity inversion dramatically impairs recognition performance, possibly due

to compromised ability to use pigmentation cues.

Fig.2.7.2 :- Photograph during the recording of “We Are the World.” This figure

demonstrates how polarity inversion affects face recognition in humans.


Several famous artists are in the picture including Ray Charles, Lionel Richie, Stevie
Wonder, Michael Jackson, Tina Turner, Bruce Springsteen, and Billy Joel though they

are very difficult to identify.

12. Illumination changes influence generalization.

13. View-generalization appears to be mediated by temporal association.

14. Motion of faces appears to facilitate subsequent recognition.

15. The visual system starts with a rudimentary preference for face-like patterns.

16. The visual system progresses from a piecemeal to a holistic strategy over the first

several years of life.

17. The human visual system appears to devote specialized neural resources for face

perception.

18. Latency of responses to faces in inferotemporal (IT) cortex is about 120 ms, suggesting
a largely feed-forward computation.

19. Facial identity and expression might be processed by separate systems.

2.8 Face Recognition From a Law Enforcement Perspective

Facial recognition is attractive for law enforcement. It can be used in conjunction with
existing surveillance camera infrastructure to hunt for known criminals. Face recognition is
covert and non-intrusive, as opposed to other biometrics such as fingerprints, retina scans, and

iris scans [6]. This is especially important in conjunction with the law because faces are

considered public. Comprehensive photo databases from mug shots or driver's licenses

already exist.

Because of difficulties face recognition has with respect to lighting, angle, and other

factors, it is advantageous to capture images that are as high-quality as possible with regard to these

factors. Facetraps are a concept where cameras are strategically placed in order to obtain

relatively controlled photographs [6]. Examples are placing cameras facing doorways, at

airport check-ins, or near objects people are likely to stare at.


These traps would aid face recognition software by helping to capture a straight frontal
image, which allows for higher accuracy of the system. Despite their potential benefit, there

appears to be very little research done on facetraps.

Fig.2.8 :- Figure depicts increasingly controlled environments from left to right. From left to

right: suspect on a plane (no control), subject at a check-in counter, subject on an escalator

staring at a flashing red bulb, subject passing through a doorway, subject sitting in front of a

camera (perfect control) [6].

Some have questioned the legality of face scanning and have argued that such systems

which are used to hunt for criminals in public places are an invasion of privacy. From a legal

perspective, in the United States, one does not have a right to privacy for things shown in

public [6]. “What a person knowingly exposes to the public. . . is not a subject of Fourth

Amendment protection,” United States v. Miller, 425 U.S. 435 (1976). “No person can have

a reasonable expectation that others will not know the sound of his voice, any more than he

can reasonably expect that his face will be a mystery to the world,” United States v. Dionisio,

410 U.S. 1 (1973). These excerpts from Supreme Court decisions help to establish that face

recognition is constitutional. Face recognition must be improved further before it becomes


a useful tool for law enforcement. It remains to be seen what the right balance is, socially

speaking, between maximizing public safety and respecting individual rights.

2.9 Current Uses of Face Recognition

Face recognition systems were tied to surveillance cameras in Tampa, Florida and Newham,
Great Britain [2]. Trials of the systems yielded poor results. The Newham system didn't
result in a single arrest being made in three years. Logan Airport, in Boston, performed two
trials of face recognition systems. The system achieved only 61.7% accuracy [5].

Australian customs recently rolled out its Smart Gate system to automate checking faces

with passport photos. Google is testing face recognition using a hidden feature in its image

searching website [7]. Google purchased computer vision company Neven Vision in 2006

and plans to implement its technology into its Picasa photo software.

2.10 Face recognition techniques and results

Use of biometrics has increased over the last few years due to its inherent advantages over
customary identification tools such as token cards and passwords. In biometrics, after the
fingerprint, face recognition is the second most preferred method, with reasonably good accuracy.
In some applications, like CCTV cameras where the face of a person is available for processing,
face recognition techniques can be very useful.

A biometric system which relies on a single biometric identifier is often unable to
meet the desired requirements in making a personal identification and verification. This
happens due to the algorithm's limitations. Nowadays various biometric identifiers, like face,
fingerprint, voice, palm, retina, and handwriting, are used. However, each of these methods
has its own advantages and disadvantages. This work proposes a basic face recognition based
system which is a fuzzy fusion of the principal component analysis (PCA), independent
component analysis (ICA), and linear discriminant analysis (LDA) algorithms. In this
chapter, the face identification techniques PCA, ICA, and LDA are discussed. The basic
structure of the algorithms is detailed. Simulation results for the above-mentioned algorithms
are also presented.


2.11 Face Recognition

It is quite easy to obtain facial images with a couple of inexpensive fixed cameras. Good face

recognition algorithms and appropriate preprocessing of the images can compensate for noise

and slight variations in orientation, scale and illumination [3].

Face recognition is used for two primary purposes:

1. Verification (one-to-one matching): When presented with a face image of an unknown

individual along with a claim of identity, making sure whether the individual is who he/she

claims to be.

2. Identification (one-to-many matching): Given an image of an unknown individual,

determining the identity of that person by comparing (possibly after encoding) that image

with a database of (possibly encoded) images of known individuals [3].
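The distinction can be sketched in a few lines of Python (a toy example: the identities, three-number 'feature vectors', and threshold are all made up):

```python
import math

# Made-up gallery of enrolled identities and their "feature vectors".
gallery = {"alice": [0.1, 0.9, 0.2], "bob": [0.8, 0.1, 0.5]}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, claimed_id, threshold=0.3):
    """One-to-one: does the probe match the claimed identity?"""
    return dist(probe, gallery[claimed_id]) < threshold

def identify(probe):
    """One-to-many: which enrolled identity is closest to the probe?"""
    return min(gallery, key=lambda name: dist(probe, gallery[name]))

probe = [0.15, 0.85, 0.25]
# verify(probe, "alice") checks a single claim; identify(probe) searches all.
```

Verification only compares against the claimed template, while identification must search the whole database, which is why it is the computationally harder task.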


Chapter- 3

PCA ANALYSIS

Computational models of faces have been an active area of research since the late 1980s, for they

can contribute not only to theoretical insights but also to practical applications, such as

criminal identification, security systems, image and film processing, and human-computer

interaction, etc. However, developing a computational model of face recognition is quite

difficult, because faces are complex, multidimensional, and subject to change over time.

3.1 Introduction

The flow diagram for the face recognition techniques is shown in figure 3.1 below:

Fig.3.1 :- The Flow diagram for the face recognition

Figure 3.1 defines all the required steps in the face recognition system. The most important
step is feature extraction, which is used for dimensionality reduction as well as for extracting
features from the input to the system. The extracted features are passed to the last phase,
classification, where the identification or verification rate is calculated.


The most popular methods used in face recognition are:

1. Principal Component Analysis

2. Independent Component Analysis

3. Linear Discriminant Analysis

3.2- Face recognition

The process of face recognition involves the examination of facial features in an image,

recognizing those features and matching them to one of the many faces in the database.

There are many algorithms capable of performing face recognition, such as:

Principal Component Analysis

Discrete Cosine Transform

3D recognition methods

Gabor Wavelets method

Hidden Markov Models

Kernel methods

The Principal Component Analysis (PCA) method of face recognition is used in this

biometric authentication system. In this chapter we will discuss why the PCA method was

chosen, the theory of face recognition using the PCA method and how it is used in this

system.

3.3 Principal Components Analysis

Why use PCA?

There were many issues to consider when choosing a face recognition method. The key ones
were:

1) Accuracy

2) Time limitations

3) Processing speed

4) Availability


With these in mind the PCA method of face recognition was selected for this project

because:

Simplest and easiest method to implement – due to project deadlines this method

seemed the most practical.

Very fast computation time.

Accurate – this method is definitely not the most accurate of face recognition

algorithms but, considering the requirements of this project, it was judged to be accurate
enough.

PCA is supported within the OpenCV library – this was key because it made

integration with the face detection program very easy

There are disadvantages with the PCA method but they were deemed to be inconsequential.

PCA is:

Translation variant – if the image is shifted or tilted then it will not recognise the

face. This flaw doesn't affect this system as the camera will always be located upright

and in front of the user.

Scale variant – scaling the images will affect the performance of face recognition.

This was not a big problem in this project as new users are added using the camera at

the PC so the face to be recognised and the faces in the database are captured in the

same way, making them very similar in size. Also, the user will always be relatively close
to the camera, eliminating the possibility of a blurry, out-of-focus face.

Background variant – recognising a face with a different background is difficult. The

face detection program is altered so it not only detects a face but extracts it to a new

image, thus cutting out almost all the background. The residual sections of background

are insignificant since the camera's resolution will only keep the face in focus because

of the distance.

Lighting variant – if the light intensity changes then the accuracy of face recognition

drops. This authentication system involves capturing images in front of a PC and

usually the room is lit when a user is using a PC.


Taking all this into consideration, the PCA method of face recognition was deemed to be the

most practical for this project.

3.4 Theory of PCA

Principal Component Analysis is a process that extracts the most relevant information

contained in a face and then tries to build a computational model that best describes it.

The basic theory of PCA can be described in the following steps:

1. Eigenvectors, or eigenfaces, of the covariance matrix are found. This is done using a
training set of face images.

2. These eigenvectors form the eigenspace (a multi-dimensional subspace spanned by
the eigenvectors) onto which every face is projected.

3. Recognition is performed by comparing the location of a face in the eigenspace with

the location of known users; in other words, by calculating the Euclidean distance.
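The three steps can be sketched with NumPy (an illustrative sketch only: random vectors stand in for flattened face images, and all sizes are arbitrary):

```python
import numpy as np

# Hypothetical training data: 8 "faces" of 100 pixels each.
rng = np.random.default_rng(1)
train = rng.normal(size=(8, 100))
mean = train.mean(axis=0)
W = (train - mean).T                  # columns are mean-centred faces

# Step 1: eigenfaces = leading eigenvectors of the covariance (via SVD).
U, _, _ = np.linalg.svd(W, full_matrices=False)
eigenfaces = U[:, :5]                 # keep the top 5

# Step 2: project every known face into the eigenspace.
known = (train - mean) @ eigenfaces   # one weight vector per face

# Step 3: recognise a probe by the smallest Euclidean distance.
probe = train[3] + rng.normal(scale=0.01, size=100)   # noisy copy of face 3
w = (probe - mean) @ eigenfaces
match = int(np.argmin(np.linalg.norm(known - w, axis=1)))
```

Here `match` comes out as the index of the face the probe was derived from, since its projection lands closest to that face's weight vector.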

This is only the basic theory of PCA. In order to fully understand PCA it needs to be

explained mathematically, which is done below.

3.5 Principal Component Analysis

A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column)
into a long thin vector [4, 5]. Let's suppose we have M vectors of size N (= rows of image ×
columns of image) that represent a set of sampled images, where p_j represents the pixel values:

x_i = [p_1, p_2, ..., p_N]^T,   i = 1, ..., M    (3.1)

The images are mean centered by the subtraction of the mean image from each image vector.
Let m = (1/M) * sum_{i=1}^{M} x_i represent the mean image, and let w_i be defined as the
mean centered image

w_i = x_i - m    (3.2)


Our goal is to find a set of e_i's which have the largest possible projection onto each of the
w_i's. We wish to find a set of M orthonormal vectors e_i for which the quantity

lambda_i = (1/M) * sum_{n=1}^{M} (e_i^T w_n)^2

is maximized, subject to the orthonormality constraint

e_l^T e_k = delta_lk    (3.3)

It has been shown that the e_i's and lambda_i's are given by the eigenvectors and eigenvalues of
the covariance matrix C = W W^T, where W is the matrix composed of the column vectors w_i
placed side by side [4]. The size of C is N × N, which could be enormous. For example, images
of size 64×64 create a covariance matrix of size 4096×4096. It is not practical to solve for the
eigenvectors of C directly. According to a common theorem in linear algebra, the vectors e_i
and scalars lambda_i can be obtained by solving for the eigenvectors and eigenvalues of the
M×M matrix W^T W. Let d_i and mu_i be the eigenvectors and eigenvalues of W^T W,
respectively:

W^T W d_i = mu_i d_i    (3.4)

Multiplying both sides on the left by W gives

W W^T (W d_i) = mu_i (W d_i)    (3.5)

which means that the first M-1 eigenvectors e_i and eigenvalues lambda_i of W W^T are given
by W d_i and mu_i respectively. W d_i needs to be normalized in order to be equal to e_i. Since
we only sum up a finite number of image vectors, M, the rank of the covariance matrix cannot
exceed M-1 (the -1 comes from the subtraction of the mean vector m).

The eigenvectors corresponding to nonzero eigenvalues of the covariance matrix produce an
orthonormal basis for the subspace within which most image data can be represented with a
small amount of error.
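A NumPy sketch of this trick (hypothetical: random data stands in for the mean-centred image matrix W, and the dimensions are only illustrative):

```python
import numpy as np

N, M = 4096, 10                       # N pixels per image, M images
rng = np.random.default_rng(2)
W = rng.normal(size=(N, M))           # columns play the role of the w_i

small = W.T @ W                       # M x M instead of N x N
mu, d = np.linalg.eigh(small)         # eigenpairs of W^T W (ascending order)
e = W @ d                             # W d_i are eigenvectors of W W^T ...
e /= np.linalg.norm(e, axis=0)        # ... once normalised

# Check the claim for the largest eigenpair: W W^T e = mu e,
# without ever forming the N x N matrix W W^T.
lhs = W @ (W.T @ e[:, -1])
rhs = mu[-1] * e[:, -1]
ok = bool(np.allclose(lhs, rhs))
```

The expensive object, the N×N covariance matrix, is never built; only M×M and N×M products are needed.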

The sorting of eigenvectors is done according to their corresponding eigenvalues from high

to low. The eigenvector associated with the largest eigenvalue is one that reflects the greatest

variance in the image [5]. That is, the smallest eigenvalue is associated with the eigenvector

that finds the least variance.


They decrease in exponential fashion, meaning that roughly 90% of the total variance is
contained in the first 5% to 10% of the dimensions. A facial image can be projected onto
M' (<< M) dimensions by computing

Omega = [v_1, v_2, ..., v_M']^T    (3.6)

where v_i = e_i^T w. Here v_i is the i-th coordinate of the facial image in the new space, which
came to be the principal component. The vectors e_i are also images, so-called eigenimages, or
eigenfaces in our case, as first named by [5-9]. They can be viewed as images and indeed look
like faces. The simplest method for determining which face class provides the best description
of an input facial image is to find the face class k that minimizes the Euclidean distance

epsilon_k = || Omega - Omega_k ||    (3.7)

where Omega_k is a vector describing the k-th face class. If epsilon_k is less than some predefined
threshold, a face is classified as belonging to the class k.
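Equations (3.6) and (3.7) can be sketched as follows (hypothetical code: the eigenfaces, the mean, and the stored per-class weight vectors are random stand-ins, and the threshold value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
eigenfaces = np.linalg.qr(rng.normal(size=(50, 4)))[0]   # orthonormal e_i
mean = rng.normal(size=50)
classes = {0: rng.normal(size=4), 1: rng.normal(size=4)} # weight vector per class

def classify(face, threshold=1.0):
    omega = eigenfaces.T @ (face - mean)      # projection, eq. (3.6)
    k = min(classes, key=lambda c: np.linalg.norm(omega - classes[c]))
    eps = np.linalg.norm(omega - classes[k])  # distance, eq. (3.7)
    return k if eps < threshold else None     # None = unknown face

# A face synthesised to lie exactly on class 0 is accepted as class 0;
# one far from every stored class is rejected as unknown.
hit = classify(mean + eigenfaces @ classes[0])
miss = classify(mean + eigenfaces @ (classes[0] + 10.0))
```

The threshold is what lets the system reject impostors instead of always returning the nearest class; choosing it well is, as the limitations section notes, the hard part.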

3.6 Limitations of PCA

The main limitations of the PCA are as follows:

1. The face image should be normalized and frontal-view.

2. The system is an auto-associative memory system and is prone to over-fitting.

3. Training is very computationally intensive.

4. It is hard to decide suitable thresholds; doing so is something of an art.

5. The suggested methods for dealing with unknown faces and non-faces are not good enough to
differentiate them from known faces.


Chapter- 4

Results

Face images for the test are taken from the AT&T database. The database consists of 430
images. We have selected 12 images for the demonstration of the algorithm. The files are in
PGM format. Each image is 92×112 pixels, with 256 grey levels per pixel. The images are
arranged in 12 directories (one for each 'subject'), which have names of the form sX, where X
indicates the subject number (between 1 and 25).

Case 1

In the first case, 12 images are taken as the training set, each with mean 100 and standard
deviation 80. In the second step, the mean and standard deviation of all images are changed
for normalization. This is done to reduce the error due to lighting conditions and background.

Fig.4.1:- Training Set (AT&T)
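The normalization step can be sketched as follows (a random array stands in for a 92×112 PGM image; the target mean 100 and standard deviation 80 are the values used above):

```python
import numpy as np

um, ustd = 100.0, 80.0                # target mean and standard deviation
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(112, 92)).astype(float)  # stand-in image

# Shift and scale so the image has mean um and standard deviation ustd.
norm = (img - img.mean()) * ustd / img.std() + um
```

After this step every training image shares the same first- and second-order statistics, which is what reduces the influence of global lighting differences.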


The normalized images are shown in figure 4.2; these images are very similar to the images in
figure 4.1. However, when the background changes abruptly, the normalization is very
effective.

Fig.4.2:- Normalized Training Set (AT&T)

In the next step, the mean image is generated, as shown in figure 4.3. The pixel values of the
images range from 0 to 255.

Fig.4.3:- Mean image


In the next step, the covariance matrix is created; thereafter the eigenvalues are obtained,
the eigenvalues close to zero are dropped, and eigenvectors are obtained for the remaining
eigenvalues. Finally, after normalization of the eigenvectors, the eigenfaces are calculated
(figure 4.4).

Fig.4.4:- Eigenfaces

In the case of user authentication, template matching is done. In figure 4.5, the input image
and the reconstructed image are shown. The reconstructed image is very similar to the input
image.

Fig.4.5:- Input and Reconstructed images


The pictures deviate around the forehead portion, as the hairstyles of the training images are
distinct. Therefore, as only principal components are considered in PCA, it may not be
possible to reconstruct the input image exactly. In figure 4.6, the weight of the input image and
the Euclidean distance of the input image are shown. The minimum Euclidean distance is
11965 and the maximum Euclidean distance is 15547.

Fig.4.6:- Weight of input image and Euclidean distance of input image

Image | Minimum Value | Maximum Value
1     | 11990         | 14421
2     | 11978         | 15032
3     | 11979         | 14939
4     | 11987         | 13907
5     | 11988         | 14780
6     | 11965         | 15547
7     | 11994         | 14305
8     | 11986         | 14307
9     | 11980         | 14432
10    | 11989         | 13453
11    | 11993         | 14204
12    | 11993         | 14386

It is observed from the table that the minimum Euclidean distance is fairly constant and
remains around 11990. However, the maximum Euclidean distance varies from 13453 to 15547.
Moreover, the Euclidean distance is larger for images which contain more information (like
spectacles).


Case 2

Consider the case where an input image is to be matched with a template which is not present
in the training database. It is observed from the figure that the reconstructed image is
different from the input image. This is actually a desired feature, because if an image is not
present in the database then template matching is not possible.

Fig.4.7:- Input and Reconstructed image with input image absent from training database

The minimum Euclidean distance is 12487 and the maximum is 13927.


Case 3

In the third case, the image dataset of a single person is taken with various facial expressions.
In the dataset, 10 images are considered, as shown in figure 4.8. The corresponding
normalized images are shown in figure 4.9.

Fig.4.8:- Training database images

Fig.4.9:- Normalized Training database images


Fig.4.10:- The Mean image

The mean image is shown in figure 4.10 and the Eigen-face image is shown in figure 4.11.

Fig.4.11:- Eigenface images


Fig.4.12:- Input and Reconstructed images

Fig.4.13:- Weight of input image and Euclidean distance of input image

The input and reconstructed images are shown in figure 4.12. The Euclidean distance is
shown in figure 4.13, with maximum and minimum values of 14439 and 12551, respectively.
It is clear from the figure that the facial expressions are also captured by the PCA algorithm.


Case 4

In this case, a total of 10 images of two persons are considered, with 6 images of one person
and 4 images of the other. However, we have a total of 20 images, 10 of each person, in our
master database. In this experiment, we will try to reconstruct an image which is not present in
the training dataset. In the reconstruction phase, the input images will be the left-over images
of these two persons.

Fig.4.14:- Training database images

In the dataset, 10 images (6+4) are considered, as shown in figure 4.14, and the normalized images are shown in figure 4.15. The mean image is shown in figure 4.16, while the two reconstructed images are shown in figures 4.17 and 4.18 respectively.

Fig.4.15:- Normalized Training database images


Fig. 4.16:- The Mean image

Fig.4.17:- Input and Re-constructed images

From figure 4.17, it is clear that the reconstructed image is not very similar to the input image, although the two images share some similarity. As the input image is not in the training dataset, PCA is not able to recognize the person; however, if the test image is chosen from the training set, the reconstruction is good, as shown in figure 4.18.


Fig.4.18:- Input and Re-constructed images

(2) Regarding the pattern vector representing a face class, each face class can be made to consist of several pattern vectors, each constructed from a face image of the same individual under a certain condition, rather than taking the average of these vectors to represent the face class.
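The difference between the two representations can be sketched as follows (a hypothetical Python/NumPy illustration; the weight values and class labels are made up):

```python
import numpy as np

# Weight vectors of training images (columns) and their person labels
omega = np.array([[0.0, 1.0, 10.0, 11.0],
                  [0.0, 1.0, 10.0, 11.0]])
labels = np.array([0, 0, 1, 1])

def nearest_class_mean(w):
    # One averaged pattern vector per face class
    means = np.stack([omega[:, labels == c].mean(axis=1)
                      for c in np.unique(labels)], axis=1)
    return int(np.argmin(np.linalg.norm(means - w[:, None], axis=0)))

def nearest_pattern_vector(w):
    # Keep every pattern vector; the class of the closest one wins
    e = np.linalg.norm(omega - w[:, None], axis=0)
    return int(labels[np.argmin(e)])

w = np.array([1.2, 1.2])
print(nearest_class_mean(w), nearest_pattern_vector(w))
```

Keeping every pattern vector lets a class match under the specific condition (lighting, expression) closest to the probe, at the cost of more stored templates and more distance computations.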


Appendix-A

MATLAB Code

% Face Recognition Using PCA (eigenfaces)
clear all
close all
clc

M = 10;      % number of training images
um = 100;    % desired mean used for brightness normalization
ustd = 80;   % desired standard deviation

% Read the training images and stack each one as a column of S
S = [];
figure(1);
for i = 1:M
    str = strcat(int2str(i), '.pgm');
    img = imread(str);
    subplot(ceil(sqrt(M)), ceil(sqrt(M)), i)
    imshow(img)
    if i == 3
        title('Training set', 'fontsize', 18)
    end
    drawnow;
    [irow, icol] = size(img);
    temp = reshape(img', irow*icol, 1);   % image as a column vector
    S = [S temp];
end

% Normalize every image to mean um and standard deviation ustd
for i = 1:size(S, 2)
    temp = double(S(:, i));
    m = mean(temp);
    st = std(temp);
    S(:, i) = (temp - m)*ustd/st + um;
end

% Write back and display the normalized training set
figure(2);
for i = 1:M
    str = strcat(int2str(i), '.pgm');
    img = reshape(S(:, i), icol, irow);
    img = img';
    imwrite(uint8(img), str);   % convert to uint8 before writing/display
    subplot(ceil(sqrt(M)), ceil(sqrt(M)), i)
    imshow(uint8(img))
    drawnow;
    if i == 3
        title('Normalized Training Set', 'fontsize', 18)
    end
end

% Mean image
m = mean(S, 2);
tmimg = uint8(m);
img = reshape(tmimg, icol, irow);
img = img';
figure(3);
imshow(img);
title('Mean Image', 'fontsize', 18)

% Build the data matrix and the small M-by-M surrogate L = A*A'
% (instead of the huge pixel-by-pixel covariance matrix)
dbx = [];
for i = 1:M
    temp = double(S(:, i));
    dbx = [dbx temp];
end
A = dbx';
L = A*A';
[vv, dd] = eig(L);

% Keep only eigenvectors with non-negligible eigenvalues
v = [];
d = [];
for i = 1:size(vv, 2)
    if dd(i, i) > 1e-4
        v = [v vv(:, i)];
        d = [d dd(i, i)];
    end
end

% Sort the eigenvalues (and eigenvectors) in descending order
[B, index] = sort(d);
ind = zeros(size(index));
dtemp = zeros(size(index));
vtemp = zeros(size(v));
len = length(index);
for i = 1:len
    dtemp(i) = B(len+1-i);
    ind(i) = len+1-index(i);
    vtemp(:, ind(i)) = v(:, i);
end
d = dtemp;
v = vtemp;

% Normalize the eigenvectors of L
for i = 1:size(v, 2)
    kk = v(:, i);
    temp = sqrt(sum(kk.^2));
    v(:, i) = v(:, i)./temp;
end

% Map the eigenvectors of L back to image space: the eigenfaces
u = [];
for i = 1:size(v, 2)
    temp = sqrt(d(i));
    u = [u (dbx*v(:, i))./temp];
end

% Normalize the eigenfaces
for i = 1:size(u, 2)
    kk = u(:, i);
    temp = sqrt(sum(kk.^2));
    u(:, i) = u(:, i)./temp;
end

% Display the eigenfaces
figure(4);
for i = 1:size(u, 2)
    img = reshape(u(:, i), icol, irow);
    img = img';
    img = histeq(img, 255);
    subplot(ceil(sqrt(M)), ceil(sqrt(M)), i)
    imshow(img)
    drawnow;
    if i == 3
        title('Eigenfaces', 'fontsize', 18)
    end
end

% Project each training image onto the eigenfaces to get its weight vector
omega = [];
for h = 1:size(dbx, 2)
    WW = [];
    for i = 1:size(u, 2)
        t = u(:, i)';
        WeightOfImage = dot(t, dbx(:, h)');
        WW = [WW; WeightOfImage];
    end
    omega = [omega WW];
end

% Read and normalize the input (test) image
InputImage = imread('6.pgm');
figure(5)
subplot(1, 2, 1)
imshow(InputImage); colormap('gray'); title('Input image', 'fontsize', 18)
InImage = reshape(double(InputImage)', irow*icol, 1);
temp = InImage;
me = mean(temp);
st = std(temp);
temp = (temp-me)*ustd/st + um;
NormImage = temp;
Difference = temp - m;

% Reconstruct the input image from the eigenfaces
p = [];
aa = size(u, 2);
for i = 1:aa
    pare = dot(NormImage, u(:, i));
    p = [p; pare];
end
ReshapedImage = m + u(:, 1:aa)*p;
ReshapedImage = reshape(ReshapedImage, icol, irow);
ReshapedImage = ReshapedImage';
subplot(1, 2, 2)
imagesc(ReshapedImage); colormap('gray');
title('Reconstructed image', 'fontsize', 18)

% Weight vector of the input image
InImWeight = [];
for i = 1:size(u, 2)
    t = u(:, i)';
    WeightOfInputImage = dot(t, Difference');
    InImWeight = [InImWeight; WeightOfInputImage];
end
ll = 1:M;
figure(6)
subplot(1, 2, 1)
stem(ll, InImWeight)
title('Weight of Input Face', 'fontsize', 14)

% Euclidean distance between the input weights and each stored weight vector
e = [];
for i = 1:size(omega, 2)
    q = omega(:, i);
    DiffWeight = InImWeight - q;
    mag = norm(DiffWeight);
    e = [e mag];
end
kk = 1:size(e, 2);
subplot(1, 2, 2)
stem(kk, e)
title('Euclidean distance of input image', 'fontsize', 14)
MaximumValue = max(e)   % left without semicolons so the values are printed
MinimumValue = min(e)
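The numerically interesting step in the code above is that it diagonalizes the small M-by-M matrix L = A*A' rather than the enormous pixel-by-pixel covariance matrix, then maps the eigenvectors back through the data. A Python/NumPy sketch of the same trick, with random data standing in for the normalized face images:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 10                   # pixels per image, number of images
dbx = rng.normal(size=(N, M))    # normalized images as columns

# Small M x M surrogate: with A = dbx', L = A*A' equals dbx' * dbx
L = dbx.T @ dbx
eigvals, V = np.linalg.eigh(L)

# Map eigenvectors of L back to pixel space: u_i = dbx v_i / sqrt(lambda_i)
keep = eigvals > 1e-4
u = (dbx @ V[:, keep]) / np.sqrt(eigvals[keep])

# Each column of u is a unit-norm eigenvector of the large N x N matrix
print(np.linalg.norm(u, axis=0))
```

This works because if L v = lambda v, then (dbx dbx') (dbx v) = lambda (dbx v), so at most M eigenfaces are obtained from an M-by-M eigenproblem instead of an N-by-N one.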


References

[1] A. K. Jain, R. Bolle, and S. Pankanti, Eds., Biometrics: Personal Identification in Networked Society. Kluwer Academic Publishers, 1999.

[2] K. Kim, "Intelligent Immigration Control System by Using Passport Recognition and Face Verification," in International Symposium on Neural Networks, Chongqing, China, 2005, pp. 147-156.

[3] J. N. K. Liu, M. Wang, and B. Feng, "iBotGuard: an Internet-based intelligent robot security system using invariant face recognition against intruder," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 35, pp. 97-105, 2005.

[4] H. Moon, "Biometrics Person Authentication Using Projection-Based Face Recognition System in Verification Scenario," in International Conference on Bioinformatics and its Applications, Hong Kong, China, 2004, pp. 207-213.

[5] D. McCullagh, "Call It Super Bowl Face Scan 1," Wired Magazine, 2001.

[6] CNN, "Education School face scanner to search for sex offenders," Phoenix, Arizona: The Associated Press, 2003.

[7] P. J. Phillips, H. Moon, P. J. Rauss, and S. A. Rizvi, "The FERET Evaluation Methodology for Face Recognition Algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, pp. 1090-1104, 2000.

[8] T. Choudhry, B. Clarkson, T. Jebara, and A. Pentland, "Multimodal person recognition using unconstrained audio and video," in Proceedings, International Conference on Audio- and Video-Based Person Authentication, 1999, pp. 176-181.

[9] S. L. Wijaya, M. Savvides, and B. V. K. V. Kumar, "Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices," Applied Optics, Vol. 44, pp. 655-665, 2005.

[10] E. Acosta, L. Torres, A. Albiol, and E. J. Delp, "An automatic face detection and recognition system for video indexing applications," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 4, Orlando, Florida, 2002, pp. 3644-3647.

[11] J.-H. Lee and W.-Y. Kim, "Video Summarization and Retrieval System Using Face Recognition and MPEG-7 Descriptors," in Image and Video Retrieval, Vol. 3115, Lecture Notes in Computer Science, Springer Berlin/Heidelberg, 2004, pp. 179-188.

[12] C. G. Tredoux, Y. Rosenthal, L. d. Costa, and D. Nunez, "Face reconstruction using a configural, eigenface-based composite system," in 3rd Biennial Meeting of the Society for Applied Research in Memory and Cognition (SARMAC), Boulder, Colorado, USA, 1999.

[13] K. Balci and V. Atalay, "PCA for Gender Estimation: Which Eigenvectors Contribute?" in Proceedings of the Sixteenth International Conference on Pattern Recognition, Vol. 3, Quebec City, Canada, 2002, pp. 363-366.

[14] B. Moghaddam and M. H. Yang, "Learning Gender with Support Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, pp. 707-711, 2002.

[15] R. Brunelli and T. Poggio, "HyperBF Networks for Gender Classification," in Proceedings of the DARPA Image Understanding Workshop, 1992, pp. 311-314.

[16] A. Colmenarez, B. J. Frey, and T. S. Huang, "A probabilistic framework for embedded face and facial expression recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, Ft. Collins, CO, USA, 1999, pp. 1592-1597.

[17] Y. Shinohara and N. Otsu, "Facial Expression Recognition Using Fisher Weight Maps," in Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Vol. 100, 2004, pp. 499-504.

[18] F. Bourel, C. C. Chibelushi, and A. A. Low, "Robust Facial Feature Tracking," in British Machine Vision Conference, Bristol, 2000, pp. 232-241.

[19] K. Morik, P. Brockhausen, and T. Joachims, "Combining statistical learning with a knowledge-based approach -- A case study in intensive care monitoring," in 16th International Conference on Machine Learning (ICML-99), San Francisco, CA, USA: Morgan Kaufmann, 1999, pp. 268-277.

[20] S. Singh and N. Papanikolopoulos, "Vision-based detection of driver fatigue," Department of Computer Science, University of Minnesota, Technical Report, 1997.

[21] D. N. Metaxas, S. Venkataraman, and C. Vogler, "Image-Based Stress Recognition Using a Model-Based Dynamic Face Tracking System," in International Conference on Computational Science, 2004, pp. 813-821.

[22] M. M. Rahman, R. Hartley, and S. Ishikawa, "A Passive and Multimodal Biometric System for Personal Identification," in International Conference on Visualization, Imaging and Image Processing, Spain, 2005, pp. 89-92.

[23] R. Brunelli and D. Falavigna, "Person identification using multiple cues," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, pp. 955-966, 1995.

[24] M. Viswanathan, H. S. M. Beigi, A. Tritschler, and F. Maali, "Information access using speech, speaker and face recognition," in IEEE International Conference on Multimedia and Expo, Vol. 1, 2000, pp. 493-496.

[25] A. K. Jain, K. Nandakumar, X. Lu, and U. Park, "Integrating Faces, Fingerprints, and Soft Biometric Traits for User Recognition," in Proceedings of the Biometric Authentication Workshop, in conjunction with ECCV 2004, LNCS 3087, 2004, pp. 259-269.

[26] P. Melin and O. Castillo, "Human Recognition using Face, Fingerprint and Voice," in Hybrid Intelligent Systems for Pattern Recognition Using Soft Computing, Vol. 172, Studies in Fuzziness and Soft Computing, Springer Berlin/Heidelberg, 2005, pp. 241-256.

[27] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, pp. 1160-1165, 2003.

[28] R. Chellappa, A. Roy-Chowdhury, and S. Zhou, "Human Identification Using Gait and Face," in The Electrical Engineering Handbook, 3rd ed., CRC Press, 2004.

[29] S. Ben-Yacoub, J. Luttin, K. Jonsson, J. Matas, and J. Kittler, "Audio-visual person verification," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, Fort Collins, CO, USA, 1999, pp. 580-585.

[30] X. Zhou and B. Bhanu, "Feature fusion of side face and gait for video-based human identification," Pattern Recognition, Vol. 41, pp. 778-795, 2008.

[31] D. Bouchaffra and A. Amira, "Structural hidden Markov models for biometrics: Fusion of face and fingerprint," Pattern Recognition, Vol. 41, pp. 852-867, 2008.

[32] H. Vajaria, T. Islam, P. Mohanty, S. Sarkar, R. Sankar, and R. Kasturi, "Evaluation and analysis of a face and voice outdoor multi-biometric system," Pattern Recognition Letters, Vol. 28, pp. 1572-1580, 2007.

[33] Y.-F. Yao, X.-Y. Jing, and H.-S. Wong, "Face and palmprint feature level fusion for single sample biometrics recognition," Neurocomputing, Vol. 70, pp. 1582-1586, 2007.

[34] J. Zhou, G. Su, C. Jiang, Y. Deng, and C. Li, "A face and fingerprint identity authentication system based on multi-route detection," Neurocomputing, Vol. 70, pp. 922-931, 2007.

[35] C. Nastar and M. Mitschke, "Real time face recognition using feature combination," in Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, 1998, pp. 312-317.

[36] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision: From Images to Face Recognition. Imperial College Press (World Scientific Publishing Company), 2000.

[37] T. Jebara, "3D Pose Estimation and Normalization for Face Recognition," Center for Intelligent Machines, McGill University, Undergraduate Thesis, May 1996.

[38] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, "The FERET database and evaluation procedure for face-recognition algorithms," Image and Vision Computing, Vol. 16, pp. 295-306, 1998.

[39] D. Blackburn, J. Bone, and P. J. Phillips, "Face recognition vendor test 2000," Defense Advanced Research Projects Agency, Arlington, VA, Technical Report A269514, February 16, 2001.

[40] P. J. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and J. M. Bone, "Face Recognition Vendor Test (FRVT 2002)," National Institute of Standards and Technology, Evaluation Report IR 6965, March 2003.

[41] K. Messer, J. Kittler, M. Sadeghi, M. Hamouz, A. Kostin, F. Cardinaux, S. Marcel, S. Bengio, C. Sanderson, J. Czyz, L. Vandendorpe, C. McCool, S. Lowther, S. Sridharan, V. Chandran, R. P. Palacios, E. Vidal, L. Bai, L. Shen, Y. Wang, Y.-H. Chiang, H.-C. Liu, Y.-P. Hung, A. Heinrichs, M. Müller, A. Tewes, C. v. d. Malsburg, R. P. Würtz, Z. Wang, F. Xue, Y. Ma, Q. Yang, C. Fang, X. Ding, S. Lucey, R. Goss, H. Schneiderman, N. Poh, and Y. Rodriguez, "Face Authentication Test on the BANCA Database," in 17th International Conference on Pattern Recognition, Vol. 4, Cambridge, UK, 2004, pp. 523-532.

[42] X. Q. Ding and C. Fang, "Discussions on some problems in face recognition," in Advances in Biometric Person Authentication, Proceedings, Vol. 3338, Lecture Notes in Computer Science, Springer Berlin/Heidelberg, 2004, pp. 47-56.

[43] J. Yang, X. Chen, and W. Kunz, "A PDA-based face recognition system," in Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision, Orlando, Florida, 2002, pp. 19-23.

[44] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld, "Face Recognition: A Literature Survey," ACM Computing Surveys, Vol. 35, pp. 399-458, 2003.

[45] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: A survey," Pattern Recognition Letters, Vol. 28, pp. 1885-1906, 2007.

[46] R. Brunelli and T. Poggio, "Face recognition: features versus templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, pp. 1042-1052, 1993.

[47] M. A. Grudin, "On internal representations in face recognition systems," Pattern Recognition, Vol. 33, pp. 1161-1177, 2000.

[48] B. Heisele, P. Ho, J. Wu, and T. Poggio, "Face recognition: component-based versus global approaches," Computer Vision and Image Understanding, Vol. 91, pp. 6-21, 2003.

[49] T. Kanade, "Picture Processing System by Computer Complex and Recognition of Human Faces," Ph.D. Thesis, Kyoto University, Japan, 1973.

[50] A. Yuille, D. Cohen, and P. Hallinan, "Feature extraction from faces using deformable templates," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 1989, pp. 104-109.

[51] N. Roeder and X. Li, "Experiments in analyzing the accuracy of facial feature detection," Vision Interface '95, 1995, pp. 8-16.

[52] C. Colombo, A. D. Bimbo, and S. D. Magistris, "Human-computer interaction based on eye movement tracking," Computer Architectures for Machine Perception, 1995, pp. 258-263.

[53] M. Nixon, "Eye spacing measurement for facial recognition," in SPIE Proceedings, 1985, pp. 279-285.

[54] D. Reisfeld, "Generalized symmetry transforms: attentional mechanisms and face recognition," Ph.D. Thesis, Tel-Aviv University, Technical Report, 1994.

[55] H. P. Graf, T. Chen, E. Petajan, and E. Cosatto, "Locating faces and facial parts," in International Workshop on Automatic Face- and Gesture-Recognition, 1995, pp. 41-46.

[56] I. Craw, D. Tock, and A. Bennett, "Finding face features," in Second European Conference on Computer Vision, 1992, pp. 92-96.

[57] I. J. Cox, J. Ghosn, and P. N. Yianilos, "Feature-based face recognition using mixture-distance," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1996, pp. 209-216.

[58] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, "Face Recognition: A Convolutional Neural Network Approach," IEEE Transactions on Neural Networks, Special Issue on Neural Networks and Pattern Recognition, pp. 1-24, 1997.

[59] L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg, "Face Recognition by Elastic Bunch Graph Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 775-779, 1997.

[60] M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. v. d. Malsburg, R. P. Würtz, and W. Konen, "Distortion invariant object recognition in the dynamic link architecture," IEEE Transactions on Computers, Vol. 42, pp. 300-311, 1993.

[61] L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg, "Face Recognition by Elastic Bunch Graph Matching," in Intelligent Biometric Techniques in Fingerprint and Face Recognition, L. C. Jain, U. Halici, I. Hayashi, S. B. Lee, and Jae-Ho, Eds., CRC Press, 1999, pp. 355-396.

[62] P. J. Phillips, P. Rauss, and S. Der, "FERET (FacE REcognition Technology) Recognition Algorithm Development and Test Report," U.S. Army Research Laboratory, ARL-TR-995, 1996.

[63] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET Evaluation Methodology for Face-recognition Algorithms," in Proceedings, IEEE Conference on Computer Vision and Pattern Recognition, 1997, pp. 137-143.

[64] G. Sukthankar, "Face recognition: a critical look at biologically-inspired approaches," Carnegie Mellon University, Pittsburgh, PA, Technical Report CMU-RI-TR-00-04, 2000.

[65] P. Campadelli and R. Lanzarotti, "A Face Recognition System Based on Local Feature Characterization," in Advanced Studies in Biometrics, Vol. 3161, Lecture Notes in Computer Science, M. Tistarelli, J. Bigun, and E. Grosso, Eds., Berlin: Springer, 2005, pp. 147-152.

[66] H. Shin, S. D. Kim, and H. C. Choi, "Generalized elastic graph matching for face recognition," Pattern Recognition Letters, Vol. 28, pp. 1077-1082, 2007.

[67] A. Albiol, D. Monzo, A. Martin, J. Sastre, and A. Albiol, "Face recognition using HOG-EBGM," Pattern Recognition Letters, Vol. 29, pp. 1537-1543, 2008.

[68] L. D. Harmon, M. K. Khan, R. Lasch, and P. F. Raming, "Machine Identification of human faces," Pattern Recognition, Vol. 13, pp. 97-110, 1981.

[69] L. D. Harmon, S. C. Kuo, P. F. Raming, and U. Raudkivi, "Identification of human face profiles by computers," Pattern Recognition, Vol. 10, pp. 301-312, 1978.

[70] G. J. Kaufman and K. J. Breeding, "Automatic recognition of human faces from profile silhouettes," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 6, pp. 113-121, 1976.

[71] Z. Liposcak and S. Loncaric, "A scale-space approach to face recognition from profiles," in Proceedings of the 8th International Conference on Computer Analysis of Images and Patterns, Vol. 1689, Lecture Notes in Computer Science, London, UK: Springer-Verlag, 1999, pp. 243-250.

[72] Z. Liposcak and S. Loncaric, "Face recognition from profiles using morphological signature transform," in Proceedings of the 21st International Conference on Information Technology Interfaces, Pula, Croatia, 1999, pp. 93-98.

[73] R. Brunelli and T. Poggio, "Face Recognition Through Geometrical Features," in Proceedings of the Second European Conference on Computer Vision, Vol. 588, Lecture Notes in Computer Science, G. Sandini, Ed., London, UK: Springer-Verlag, 1992, pp. 782-800.

[74] R. Cendrillon and B. C. Lowell, "Real-Time Face Recognition using Eigenfaces," in Proceedings of the SPIE International Conference on Visual Communications and Image Processing, Vol. 4067, 2000, pp. 269-276.

[75] R. J. Baron, "Mechanisms of Human Facial Recognition," International Journal of Man-Machine Studies, Vol. 15, pp. 137-178, 1981.

[76] R.-J. J. Huang, "Detection Strategies for face recognition using learning and evolution," Ph.D. Dissertation, George Mason University, Fairfax, Virginia, 1998.

[77] L. Sirovich and M. Kirby, "Low-dimensional Procedure for the Characterization of Human Faces," Journal of the Optical Society of America A: Optics, Image Science, and Vision, Vol. 4, pp. 519-524, 1987.

[78] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. New Jersey: Prentice-Hall, 1988.

[79] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. Boston, MA: Academic Press, 1990.

[80] M. Turk and A. Pentland, "Face Recognition Using Eigenfaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.

[81] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86, 1991.

[82] A. Pentland, B. Moghaddam, and T. Starner, "View-based and modular eigenspaces for face recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 1994, pp. 84-90.

[83] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711-720, 1997.

[84] Y. Moses, Y. Adini, and S. Ullman, "Face recognition: the problem of compensating for changes in illumination direction," in European Conference on Computer Vision, 1994, pp. 286-296.

[85] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, Vol. 7, pp. 179-188, 1936.