
TEXTURE ANALYSIS FOR FACE RECOGNITION

J. V. Gorabal

Associate Professor, CSE SCEM, Mangalore

Manjaiah D. H.

Professor, Department of Computer Science, Mangalore University, Mangalore

ABSTRACT

A new approach for face recognition using wavelet features is presented. Initially, the given image is divided into 12 blocks, each of size 50*60 pixels. Then, the discrete wavelet transform is applied to each block and the energy features (means) of the horizontal and vertical coefficients are determined. The features extracted from the training samples are used to train a neural network. Finally, a test face image is processed to obtain its wavelet energy features and is recognized using the neural network classifier.

Keywords: Wavelet energy features, Neural Network, Face Recognition

1. INTRODUCTION

Pattern recognition is a present-day machine intelligence problem with numerous applications in a wide range of fields, including face recognition, character recognition and speech recognition, as well as other types of object recognition. The field of pattern recognition is still very much in its infancy, although in recent years some of the barriers that hampered such automated pattern recognition systems have been lifted, thanks to advances in computer hardware providing machines capable of faster and more complex computation.

Humans perform face recognition on a regular basis, so naturally and effortlessly that we rarely think about what exactly we look at in a face. A face is a three-dimensional object that is subject to varying illumination, pose, expression and so on in its two-dimensional image. Hence, face recognition is an intricate visual pattern recognition problem which can be operated in the following modes:


• Face Verification (or Authentication): compares a query face image against a template face image whose identity is being claimed (i.e., one-to-one matching).

• Face Identification (or Recognition): compares a query face image against all the template images in the database to identify the query face (one-to-many matching).

• Watch List: compares a query face image only against a list of suspects.

Authentication plays a very critical role in security-related applications such as e-commerce. Previously used authentication techniques such as security passes or passwords do not provide absolute confidence, as passes can be stolen and passwords are sometimes (unwisely) written down. To overcome this drawback, biometric security systems are used.

The primary benefit of using face recognition is that facial features are distinctive from one person to another, and these features have scored the highest compatibility in Machine Readable Travel Documents (MRTD); hence face recognition is adopted here.

The applications of face recognition include:

• Identity verification for physical access control in buildings or security areas is one of

the most common face recognition applications.

• To allow secure transactions through the Internet, face verification may be used

instead of electronic means like passwords or PIN numbers, which can be easily stolen

or forgotten.

• Face identification has also been used in forensic applications for criminal

identification (mug-shot matching) and surveillance of public places to detect the

presence of criminals or terrorists (for example in airports or in border control).

• It is also used for government applications like national ID, driver's license, passport and border control, immigration, etc.

The rest of the paper is organized as follows: a detailed survey of related work on face recognition is presented in Section 2. The proposed method is presented in Section 3. The experimental results and discussions are given in Section 4. Section 5 concludes the work and lists future directions.

2. RELATED WORKS

Face recognition is one of the most difficult tasks, and many approaches have been proposed for feature extraction [1]. That work explores the use of morphological operators for feature extraction in range images and curvature maps of connected parts. The authors describe two general procedures. The first is the identification of connected part boundaries for convex structures, which is used to extract the nose outline and the eye socket outlines of the face. The part boundaries are defined locally based on minima of minimum principal curvature on the surface. The locus of these points suggests boundary lines which surround most convex regions on the surface. However, most of these boundaries are not completely connected. To remedy this problem, they developed a general two-step connection procedure: the partial boundaries are first dilated in such a way that the gaps between them are filled. Second, the resulting dilated outlines are skeletonized with the constraint that the pixels belonging to the original boundary parts cannot be removed.


Feature extraction based on descriptive statistics is presented in [2]. This paper proposes a new

method of feature extraction for face recognition based on descriptive statistics of a face

image. This method works by first converting the face image with all the corresponding face

components such as eyes, nose and mouth to grayscale images. The features are then

extracted from the grayscale image, based on descriptive statistics of the image and its

corresponding face components. The edges of a face image and its corresponding face

components are detected by using the canny algorithm. In the recognition step, different

classifiers such as Multi Layer Perceptron (MLP), Support Vector Machine (SVM), k-Nearest Neighbors (k-NN) and Pair Opposite Class-Nearest Neighbor (POC-NN) can be used for face recognition. They evaluated this method against the more conventional eigenface method on the AT&T and Yale face databases. The evaluation clearly confirms that for both databases the proposed method yields a higher recognition rate and requires less computational time than the eigenface method.

A method to extract facial features using improved deformable templates is described in [3]. This method includes two steps: first, locating features using rectangle templates designed by the authors; then, extracting features using deformable templates. In the first step, they obtain a rectangle block containing the facial features from the facial image; this rectangle block is the template used to locate the features. In the second step, extracting features, they describe the features of interest by a parameterized template and design an energy function linked with edges, weighted gradients, weighted variance, etc.; when the energy function reaches its minimum, the parameter values give a good description of the facial feature. The experimental results show that this algorithm can extract facial features better and more quickly. A novel face recognition method based on Gabor wavelets and linear discriminant analysis (LDA) is

proposed in [4]. Given training face images, discriminant vectors are computed using LDA.

The function of the discriminant vectors is two-fold. First, discriminant vectors are used as a

transform matrix, and LDA features are extracted by projecting original intensity images on

to the discriminant vectors. Second, discriminant vectors are used to select discriminant pixels,

the number of which is much less than that of a whole image. Gabor features are extracted

only on these discriminant pixels. Then, applying LDA on the Gabor features, one can obtain

the Gabor-LDA features. Finally, a combined classifier is formed based on these two types of

LDA features.

The Hidden Markov Model (HMM) is a promising method [5] that works well for images with variations in lighting, facial expression, and orientation. Face recognition draws attention as a complex task due to the noticeable changes in appearance produced by illumination, facial expression, size, orientation and other external factors. To process images using an HMM, temporal or spatial sequences have to be considered. In simple terms, an HMM can be defined as a set of finite states with associated probability distributions. Only the outcome is visible to the external user, not the underlying states, hence the name Hidden Markov Model. The work in [5] deals with various techniques and methodologies used for resolving the problem.

A face recognition system for personal identification and verification using a Genetic Algorithm and a Back-propagation Neural Network is proposed in [6]. The system consists of three steps. At the very outset, pre-processing is applied to the input image. Secondly, face features are extracted, which are taken as the input of the Back-propagation Neural Network (BPN) and Genetic Algorithm (GA) in the third step, where classification is carried out using the BPN and GA. The proposed approaches are tested on a number of face images. Experimental results demonstrate the high performance of this algorithm.


Biometric technology has been frequently utilized by researchers in identifying and recognizing human features [7]. This technology identifies a human's unique and static body parts, such as fingerprints, eyes, and face. The identification and recognition of a human face use the processing and analysis of face features. This consists of determining the face components' regions and their characteristics, which establishes the role of each individual component in face recognition. This research develops a system that separates face features into face components and extracts the eyes, nose, mouth and face boundary. This process is conducted on a single frontal still image. Distances between components are measured, and then combined with other features to construct face semantics.

Distances between features are determined by going through the process of face

detection based on skin color, cropping to normalize face region, and extraction of eyes, nose

and mouth features. This research shows that the determination of face features and distances between face components can be used to identify a face as a subsystem of a face recognition system.

In this face recognition research [8], the head is fixed when a photograph is taken.

The infrared diodes provide the only illumination. In front of the CCD camera, a light filter

lens is used to filter all other light. After the photograph is taken, the eyebrows, eyes, lips,

and contour are extracted separately. The shape, size, object-to-object distance, center and

orientation are found for each extracted object. The techniques to solve the object shifting

and rotating problem are investigated. Image subtraction is used to examine the geometric

difference between two different faces. The classification data obtained in this research can accurately classify different people's faces.

A fast and improved facial feature extraction technique for embedded face-recognition applications is proposed in [9]. First, the authors introduce local texture attributes into a statistical face model. A texture attribute characterizes the 2-D local feature structures and is used to guide the model deformation. This provides more robustness and faster convergence than the conventional ASM (Active Shape Model). Second, the local texture attributes are modeled by Haar wavelets, yielding faster processing and more robustness with respect to low-quality images. Third, a gradient-based method is used for model initialization, which improves the convergence. Good results are obtained on test faces that are quite dissimilar to the faces used for statistical training. The convergence area of the proposed method almost quadruples compared to ASM. The Haar wavelet transform successfully compensates for the additional cost of using 2-D texture features. The algorithm has also been tested in practice with a webcam, giving (near) real-time performance and good extraction results.

The extraction of required features from the facial image is an important primitive

task for face recognition. The paper [10] evaluates different nonlinear feature extraction

approaches, namely wavelet transform, radon transform and cellular neural networks (CNN).

The scalability of the linear subspace techniques is limited as the computational load and

memory requirements increase dramatically with the large database. In this work, the

combination of radon and wavelet transform based approach is used to extract the multi-

resolution features, which are invariant to facial expression and illumination conditions. The

efficiency of the stated wavelet and radon based nonlinear approaches over the databases is

demonstrated with the simulation results performed over the FERET database. This paper

also presents the use of CNN in extracting the nonlinear facial features. The detailed

description of the proposed methodology is given in the next section.


3. PROPOSED METHODOLOGY FOR FACE RECOGNITION

The proposed method uses a feed-forward back-propagation neural network classifier for classification. The methodology is shown in Fig. 1. The method involves two phases, namely a training phase and a testing phase. A detailed description of each phase is given in the following subsections.

3.1 Training

This involves processing images of different persons with different expressions, extracting their features, and finally developing suitable neural network models which recognize the different persons. The classification makes use of features extracted from the face image samples using the discrete wavelet transform approach. The original images are converted into gray-scale images. Each image is divided into 12 blocks of size 50*60. For each block the Discrete Wavelet Transform is applied, and 24 features, 2 from each block, are extracted.

The neural network architecture most commonly used with the back-propagation algorithm is the multilayer feed-forward network. In the training phase, the artificial neural network is trained using a back-propagation feed-forward neural model. A pair of files, "input" and "output", is generated. This pair of files is then given to the neural network, which trains itself accordingly. The training takes place such that the neural network learns that each entry in the input file has a corresponding entry in the output file.
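As an illustration only, the following MATLAB sketch (assuming the Neural Network Toolbox) shows how such a feed-forward back-propagation network could be created and trained from the generated "input" and "output" matrices. The hidden-layer size and the training function are assumptions; the paper specifies only a back-propagation feed-forward model with 24 input features and 20 output nodes (see Section 3.5).

% inputs  : 24 x N matrix, one column of wavelet energy features per training image
% targets : 20 x N matrix, one binary output pattern per training image (see Table 1)
net = patternnet(30);              % feed-forward network with one hidden layer (size assumed)
net.trainFcn = 'traingdx';         % gradient-descent back-propagation with momentum (assumed)
[net, tr] = train(net, inputs, targets);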

Fig. 1. Proposed Block Diagram for Recognition of Face

3.2 Testing

In testing, an input image from the testing set is selected, its features are extracted and given to the trained model, and the trained ANN model classifies the given sample as the corresponding person.
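For illustration, a minimal MATLAB sketch of this step is given below; extract_wavelet_features is a hypothetical helper (outlined in Section 3.4) and the file name is only an example.

testFeatures = extract_wavelet_features(imread('test_face.jpg'));  % 1 x 24 feature vector (example file name)
outputs = net(testFeatures');        % 20 x 1 vector of network outputs
[~, personId] = max(outputs);        % index of the winning output node = recognized person
fprintf('Recognized as person %d\n', personId);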

3.3 Database

Face images of 20 different persons with 20 different expressions are collected. Each image is of size 200*180 in .jpg format. The database consists of frontal face images with the same background. Sample images are shown in Fig. 2.


Fig. 2. Sample Images

3.4 Feature Extraction

A pattern is a quantitative or structural description of an object or an entity of interest in an image. One or more descriptors of an object or an entity in an image form the pattern. In other words, a pattern is an arrangement of descriptors. The descriptors are also called features in the pattern recognition literature. The features are necessary for differentiating one class of objects from another. A method must be used for describing the objects so that the features of interest are highlighted. The description step is concerned with extracting features from the object or entity in an image.

Algorithm for feature extraction

Input: sample image
Output: array containing the extracted features

Step 1: Convert the RGB image into a gray-scale image.
Step 2: Divide the image into 12 blocks of size 50*60.
Step 3: for i = 1 to 12
            Apply dwt2 to block i.
            Calculate the energy feature (mean) of the horizontal and vertical coefficients.
        end
Step 4: Store these coefficients (features) in an array.

Each image is of size 200*180. The original image is converted into a gray-scale image and divided into 12 blocks, each of size 50*60. For each block, the Discrete Wavelet Transform is applied; it computes the approximation coefficients matrix and the detail coefficients matrices (horizontal, vertical, and diagonal, respectively) of each block of the image. Fig. 4 shows the 12 blocks with the first-level decomposition.

The discrete wavelet transform is applied using the function dwt2:

[a, h, v, d] = dwt2(m, 'haar');

where m is the block image, a the approximation coefficients, h the horizontal coefficients, v the vertical coefficients and d the diagonal coefficients.

The energy features (means) of the horizontal and vertical coefficients are calculated using equations (1) and (2):

Eh = (1 / (M*N)) * Σi Σj h(i, j)        (1)

Ev = (1 / (M*N)) * Σi Σj v(i, j)        (2)

where h(i, j) and v(i, j) are the horizontal and vertical detail coefficients of a block, M*N is the size of each detail coefficient matrix, and the sums run over i = 1..M and j = 1..N.
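A minimal MATLAB sketch of the complete feature extraction step described above is given here for illustration, assuming the Image Processing and Wavelet Toolboxes and a 200-row by 180-column image split into 50-by-60 blocks. The function name extract_wavelet_features is hypothetical, and taking the mean of the raw detail coefficients follows the literal reading of equations (1) and (2).

% Hypothetical helper implementing Steps 1-4 and equations (1)-(2).
function features = extract_wavelet_features(rgbImage)
    gray = double(rgb2gray(rgbImage));      % Step 1: 200 x 180 gray-scale image
    features = zeros(1, 24);                % 12 blocks x 2 features per block
    idx = 1;
    for r = 0:3                             % 4 rows of blocks, 50 pixels high
        for c = 0:2                         % 3 columns of blocks, 60 pixels wide
            block = gray(r*50+1:(r+1)*50, c*60+1:(c+1)*60);   % Step 2: one 50 x 60 block
            [a, h, v, d] = dwt2(block, 'haar');               % Step 3: first-level DWT
            features(idx)   = mean(h(:));   % Eh: mean of horizontal coefficients, eq. (1)
            features(idx+1) = mean(v(:));   % Ev: mean of vertical coefficients, eq. (2)
            idx = idx + 2;
        end
    end
end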

3.5. Classification Model

The features are stored for 15 images with different expressions of each of the 20 different persons. The classification is carried out using only one type of feature set, which consists of all 24 features, i.e., 2 features from each of the 12 blocks of the image. The output layer consists of 20 nodes represented in binary digits. The output patterns used for recognizing a face are given in Table 1; a sketch of building these patterns follows the table.

Table 1. Output Pattern for Recognition

Person      Output Pattern                              Person       Output Pattern

Person 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 11 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0

Person 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 12 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0

Person 3 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 13 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0

Person 4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 14 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0

Person 5 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0

Person 6 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 16 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0

Person 7 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Person 17 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0

Person 8 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 Person 18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0

Person 9 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Person 19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0

Person 10 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 Person 20 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
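As a minimal sketch (pure MATLAB, no toolbox needed), the binary output patterns of Table 1 can be assembled into a 20 x N target matrix as follows; labels is an assumed 1 x N vector holding the person index (1..20) of each training image.

I = eye(20);                 % 20 x 20 identity matrix; column k is the pattern of person k
targets = I(:, labels);      % 20 x N matrix: column j is the output pattern of training image j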

4. EXPERIMENTAL RESULTS AND ANALYSIS

Face images of 20 different people with 20 different expressions are collected. The database consists of frontal face images with the same background.

4.1. An Experimental Analysis for a Sample Face Image

Each image is of size 200*180. The original image is converted into a gray-scale image as shown in Fig. 3.

Fig. 3. a) A Sample Face Test Image b) Gray Image

Each image is divided into 12 blocks, each of size 50*60. For each block, the Discrete Wavelet Transform is applied; it computes the approximation coefficients matrix and the detail coefficients matrices (horizontal, vertical, and diagonal, respectively) of each block of the image. Fig. 4 shows the 12 blocks with the first-level decomposition.


Fig. 4. The 12 blocks with first-level decomposition (columns: block, first-level decomposition of dwt2)


Table 2 shows the recorded results of 5 persons with 5 different expressions.

TABLE 2. The recorded results of 5 persons with 5 different expressions


4.2. An Experimental Analysis dealing with various issues

Our database consists of 400 images in total; of these, 300 images have been used to train the neural network and 100 images have been used for testing against the trained images, and the following analysis has been obtained. Of the 300 training images, 297 are correctly matched; for only 3 images no match is found.

Of the 100 testing images, 90 are perfectly matched, i.e., for 10 images no match is found. Out of 400 images in total, 13 are mismatched, giving an accuracy of (400 - 13) / 400 = 96.75%. The overall performance of the system after conducting the experimentation on the dataset is reported in Table 3.

TABLE 3. Overall System Performance

Classifier Total images Misclassification Accuracy (in %)

Neural Network 400 13 96.75

5. CONCLUSION

In this work we have designed an effective approach to recognize faces. The method uses the wavelet transform for extracting feature vectors. From the experimental results, it is seen that this method achieves better results than other face recognition methods that are considered among the most successful ones. The technique is not only computationally less expensive than other recognition techniques, but also provides a recognition accuracy of 96.75% on images with various expressions such as sad, happy, sleepy, surprised, open/closed eyes, smiling and non-smiling faces. Several open questions still remain in our face recognition system. The robustness to image variations such as rotation and illumination must be improved. Here, we evaluated the recognition performance only on a small database; from the perspective of security systems, such simple evaluations are of limited value. Hence, an evaluation of robustness on larger data sets is necessary for practical use.

REFERENCES

[1] Gordon, Gaile G. and Vincent, Luc M., "Application of Morphology to Feature Extraction for Face Recognition", Proc. SPIE Vol. 1658, Nonlinear Image Processing III (Edward R. Dougherty, Jaakko T. Astola, Charles G. Boncelet, Eds.), pp. 151-164.

[2] Rojana Kam art and Thanpant Raicharoen, “Facial recognition using feature

extraction based on descriptive statistics of face image” 2009.

[3] Zhang Baizhen Ruan Qiuqi, “Facial recognition using improved deformable

templates” Vol.4 issue date 16-20 2006.

[4] Y. Pang et al., “Gabor based region covariance matrices for face recognition” IEEE

Transaction on circuit systems for video technology, vol. 18, no, 7, 2008.

[5] S. Sharavanan and M. Azath, “LDA Based face recognition by using hidden markov

model in current trends” International journal of engineering and technology Vol.

1(2), 2009, 77-85.

[6] Sarawat Anam, Md, Shjohidul Islam, M. A. Kashem, M.N.Islam, M.R. Islam,

M.S.Islam “Facial recognition using Genetic Algorithm and Back Propagation Neural

Network” Proceeding of the International Multi Conference of Engineers and

computer Scientists 2009 Vol I IMECS 2009, March 18-20, 2009, Hong Kong.


[7] Dewi Agushinta R., Adang Suhendra, Yuhilza Hanum, "Facial Features Distance Extraction as a Facial Recognition System Component".

[8] Ching liang su “Facial recognition using Feature Orientation and Feature Geometry

Matching “ Vol. 28, issue 1-2 ISSN: 0921-0296, Issue date 30 June 2004.

[9] Fei Zuo, "Fast Facial Feature Extraction Using a Deformable Shape Model with Haar-Wavelet Based Local Texture Attributes", Vol. 2.

[10] Hima Deepthi Vankayalapati and Kyandogere Kyamakya “Nonlinear Feature

Extraction Approaches with Application to Face Recognition over Large Databases”.

[11] Gilbert Strang and Truong Nguyen, “Wavelets and Filter Banks” Wellesley-

Cambridge Press 1996.

[12] Burrus C.S. Gopinath R.A., Guo H, “Introduction to Wavelets and Wavelets

Transforms” A Primer. Prentice-Hall 1998.

[13] Charles K. Chui, “An Introduction to Wavelets” Academics Press 1992.

[14] Chan A.K. and Liu S.J., Wavelet Tool ware: Software for Wavelet Training.

Academics Press 1998.

[15] U.K. Jaliya and J.M. Rathod, “A Survey on Human Face Recognition Invariant to

Illumination”, International Journal of Computer Engineering & Technology (IJCET),

Volume 4, Issue 2, 2013, pp. 517 - 525, ISSN Print: 0976 – 6367, ISSN Online:

0976 – 6375.

[16] Sambhunath Biswas and Amrita Biswas, “Fourier Mellin Transform Based Face

Recognition”, International Journal of Computer Engineering & Technology (IJCET),

Volume 4, Issue 1, 2013, pp. 8 - 15, ISSN Print: 0976 – 6367, ISSN Online:

0976 – 6375.

[17] A.Hemlata and Mahesh Motwani, “Single Frontal Face Detection by Finding Dark

Pixel Group and Comparing XY-Value of Facial Features”, International Journal of

Computer Engineering & Technology (IJCET), Volume 4, Issue 2, 2013,

pp. 471 - 481, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.

[18] Prof. B.S Patil and Prof. A.R Yardi, “Real Time Face Recognition System using

Eigen Faces”, International Journal of Electronics and Communication Engineering &

Technology (IJECET), Volume 4, Issue 2, 2013, pp. 72 - 79, ISSN Print: 0976- 6464,

ISSN Online: 0976 –6472.

[19] Archana H. Sable and Dr. Girish V. Chowdhary, “A Two Phase Algorithm for Face

Recognition in Frequency Domain”, International Journal of Computer Engineering &

Technology (IJCET), Volume 4, Issue 6, 2013, pp. 127 - 135, ISSN Print:

0976 – 6367, ISSN Online: 0976 – 6375.

[20] Jyoti Verma and Vineet Richariya, “Face Detection and Recognition Model Based

on Skin Colour and Edge Information for Frontal Face Images”, International

Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 3, 2012,

pp. 384 - 393, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.