FACE DETECTION USING LAPLACIAN FACE APPROACH
GUIDE: Mrs. R. L. Leeja
GROUP MEMBERS: A. P. Athirson, J. Sunderji, S. Syed Zia-ur Rahman
BATCH NO: T15
Nov 27, 2014
Introduction
IMAGE
An image is an artifact, usually two-dimensional, that has a similar appearance to some subject, usually a physical object or a person.
Images may be two-dimensional, such as a photograph or screen display, or three-dimensional, such as a statue.
They may be captured by optical devices such as cameras, mirrors, lenses, telescopes, and microscopes, or by natural objects and phenomena, such as the human eye or water surfaces.
The elements of a digital image are called pixels (picture elements).
IMAGE PROCESSING
Image processing is any form of signal processing for which the input is an image, such as a photograph or a frame of video.
The output of image processing can be either an image or a set of characteristics or parameters related to the image.
Most image-processing techniques treat the image as a two-dimensional signal and apply standard signal-processing techniques to it.
FACE RECOGNITION
One of the most important building blocks of smart environments is a person identification system, and face recognition devices are ideal for such systems.
Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them.
Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm.
A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source.
One of the ways to do this is by comparing selected facial features from the image with a facial database.
It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems.
Usual Face Recognition
Why face recognition?
Verification of credit cards, personal IDs, and passports
Bank or store security
Crowd surveillance
Access control
Human-computer interaction
WORKING PROCESS
The programs take a facial image, measure characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw, and create a unique file called a "template."
Using templates, the software then compares that image with another image and produces a score that measures how similar the images are to each other.
Typical sources of images for use in facial recognition include video camera signals and pre-existing photos such as those in driver's license databases.
The first step for a facial recognition system is to recognize a human face and extract it from the rest of the scene.
Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones and other distinguishable features.
These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match.
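The measure-and-compare step above can be sketched as a distance between feature vectors. The feature names, values, and the score formula below are illustrative assumptions, not the measurements of any particular commercial system:

```python
# Minimal sketch of template comparison, assuming numpy.
import numpy as np

def similarity_score(template_a, template_b):
    """Return a similarity in (0, 1]; 1 means identical templates."""
    a, b = np.asarray(template_a, float), np.asarray(template_b, float)
    dist = np.linalg.norm(a - b)      # Euclidean distance between feature vectors
    return 1.0 / (1.0 + dist)         # map distance to a bounded score

# Hypothetical templates: [eye distance, nose length, jaw angle], arbitrary units
probe   = [62.0, 48.5, 118.0]
gallery = [61.5, 49.0, 117.2]
score = similarity_score(probe, gallery)
```

A real system would compute such a score against every template in the database and report the best match.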
Techniques Involved
Some facial recognition algorithms identify faces by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features.
Other algorithms normalize a gallery of face images and then compress the face data, saving only the data in the image that is useful for face detection. A probe image is then compared with the face data.
One of the earliest successful systems is based on template-matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.
Popular recognition algorithms include eigenface, fisherface, the hidden Markov model, and neuronal-motivated dynamic link matching.
ABSTRACT
This work is based on three main concepts: Locality Preserving Projections (LPP), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA).
LPP is a new algorithm for learning a locality preserving subspace and a general method for manifold learning; it helps in calculating the pixel values between subspaces that could not be calculated using PCA.
PCA helps only in calculating the pixel values that are close within a subspace and does not help in calculating the values of pixels that are far away.
PCA and LDA aim to discover the global structure of the manifold, whereas LPP aims to discover its local structure.
Here the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced.
EXISTING SYSTEM
The existing system uses the Principal Component Analysis and Linear Discriminant Analysis concepts.
The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically.
The jobs which PCA can do are prediction, redundancy removal, feature extraction, data compression, etc.
LDA searches for the projection axes on which the data points of different classes are far from each other, while requiring data points of the same class to be close to each other.
But most of these algorithms consider only somewhat global data patterns during the recognition process, which does not yield an accurate recognition system.
Disadvantages
Less accurate
Does not deal with manifold structure
Does not deal with biometric characteristics
PROPOSED SYSTEM
It uses the Principal Component Analysis method along with the Linear Discriminant Analysis and Locality Preserving Projections methods.
It includes three concepts:
PCA
LDA
LPP
PCA
It is a simple, non-parametric method of extracting relevant information from confusing datasets.
With minimal additional effort, PCA provides a roadmap for how to reduce a complex data set to a lower dimension to reveal the sometimes hidden, simplified dynamics that often underlie it.
Principal component analysis (PCA) involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components.
An important consideration for using PCA as preprocessing is noise reduction.
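The transformation just described can be sketched in a few lines of numpy on synthetic data; this is a generic textbook PCA, not the project's own implementation:

```python
# PCA via eigen-decomposition of the sample covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))             # 100 samples of 5 possibly correlated variables
Xc = X - X.mean(axis=0)                   # center each variable
cov = Xc.T @ Xc / (len(Xc) - 1)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]         # largest variance first
eigvecs = eigvecs[:, order]
Z = Xc @ eigvecs[:, :2]                   # top-2 uncorrelated principal components
```

The columns of Z are uncorrelated by construction, which is exactly the "uncorrelated variables" property stated above.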
LDA
Linear Discriminant Analysis aims to preserve the global structure. LDA explicitly attempts to model the difference between the classes of data; PCA, on the other hand, does not take into account any difference in class.
LDA is a supervised learning algorithm. It searches for the projection axes on which the data points of different classes are far from each other, while requiring data points of the same class to be close to each other.
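The far-between-classes, close-within-class criterion above is the Fisher criterion; a generic from-scratch sketch (not the project's code), with a hypothetical two-class example:

```python
# Fisher LDA: maximize between-class scatter against within-class scatter.
import numpy as np

def lda_axes(X, y, n_axes=1):
    X = np.asarray(X, float); y = np.asarray(y)
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * d @ d.T
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]    # most discriminative axes first
    return eigvecs[:, order[:n_axes]].real

# Hypothetical example: two well-separated 2-D classes
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
               rng.normal([3, 0], 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
z = X @ lda_axes(X, y)        # 1-D projection separating the classes
```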
LPP
Locality Preserving Projection (LPP) is a new algorithm for learning a locality preserving subspace. LPP is a general method for manifold learning, and it aims to preserve the local structure.
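Assuming a data matrix X and an affinity matrix W are already built (the modules below describe how), LPP reduces to a generalized eigenproblem on the graph Laplacian. This numpy sketch is a generic rendering of that step, not the project's code:

```python
# LPP: find axes that keep strongly connected points (high W_ij) close.
import numpy as np

def lpp_projection(X, W, n_dims=2):
    """X: (n, d) data matrix; W: (n, n) symmetric affinity matrix."""
    D = np.diag(W.sum(axis=1))                # degree matrix
    L = D - W                                 # graph Laplacian
    A = X.T @ L @ X                           # penalizes separating neighbors
    B = X.T @ D @ X                           # scale constraint
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(B) @ A)
    order = np.argsort(eigvals.real)          # smallest eigenvalues first
    return eigvecs[:, order[:n_dims]].real

# Toy usage on random data with heat-kernel affinities
rng = np.random.default_rng(3)
X = rng.normal(size=(15, 4))
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
P = lpp_projection(X, W, n_dims=2)            # projection matrix, shape (4, 2)
```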
Advantages
A high recognition rate is achieved
The error rate is low compared to the existing system
MODULES
Pre-processing
PCA Projection
Constructing the Nearest Neighbor Graph
Choosing the Weights of Neighboring Pixels
Recognizing the Image
MODULE DESCRIPTION
PRE-PROCESSING
In preprocessing, take a single gray image in 10 different directions, eliminate the background, and measure the points in 28 dimensions of each gray image, giving the original face image and the cropped image.
PCA PROJECTION
We project the image set into the PCA subspace by throwing away the smallest principal components.
In our experiments, we kept 98 percent of the information in the sense of reconstruction error.
For the sake of simplicity, we still use x to denote the images in the PCA subspace in the following steps.
We denote by WPCA the transformation matrix of PCA.
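The 98-percent rule above amounts to cutting the eigenvalue spectrum at the smallest k whose cumulative variance ratio reaches 0.98; a sketch on synthetic stand-in data (not real face vectors):

```python
# Keep enough principal components to retain 98% of the variance.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))                        # stand-in for face vectors
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]    # descending variance
ratio = np.cumsum(eigvals) / eigvals.sum()            # cumulative variance ratio
k = int(np.searchsorted(ratio, 0.98)) + 1             # smallest k reaching 98%
W_pca = eigvecs[:, :k]                                # PCA transformation matrix
X_pca = Xc @ W_pca                                    # images in the PCA subspace
```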
CONSTRUCTING THE NEAREST NEIGHBOR GRAPH
Let G denote a graph with n nodes. The ith node corresponds to the face image xi.
We put an edge between nodes i and j if xi and xj are "close," i.e., xj is among the k nearest neighbors of xi, or xi is among the k nearest neighbors of xj.
The constructed nearest neighbor graph is an approximation of the local manifold structure.
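The symmetric k-nearest-neighbor rule above ("xj among the k nearest of xi, or vice versa") can be sketched as:

```python
# Symmetric k-NN adjacency matrix, assuming numpy.
import numpy as np

def knn_graph(X, k):
    """Edge i-j if i is among j's k nearest neighbors or vice versa."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                 # no self edges
    nn = np.argsort(d, axis=1)[:, :k]                           # k nearest per node
    A = np.zeros_like(d, dtype=bool)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = True
    return A | A.T                                              # symmetrize

A = knn_graph(np.arange(10, dtype=float).reshape(-1, 1), k=2)
```

Note that after symmetrization a node can have more than k neighbors, which is exactly what the "or" in the rule allows.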
CHOOSING THE WEIGHTS OF NEIGHBORING PIXELS
Here we compare the images that have the nearest neighboring pixel values.
The images in the test folder are compared with the images in the train folder.
Find the locations of the eyes, nose, and mouth, and extract the pixel points.
Use the width of the head, the distances between eye corners, the angles between eye corners, etc.
Calculate the pixel values.
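One common choice for the edge weights on such a graph, consistent with the LPP literature, is the heat kernel W_ij = exp(-||xi - xj||^2 / t) on connected pairs; this sketch assumes an adjacency matrix like the one from the previous module:

```python
# Heat-kernel weights on the edges of a neighbor graph.
import numpy as np

def heat_kernel_weights(X, A, t=1.0):
    """W_ij = exp(-||xi - xj||^2 / t) where A says nodes are connected, else 0."""
    X = np.asarray(X, float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-sq / t)
    return np.where(A, W, 0.0)    # keep weights only on graph edges

# Tiny example: a 3-node path graph on 1-D points
X = np.array([[0.0], [1.0], [3.0]])
A = np.array([[False, True, False],
              [True, False, True],
              [False, True, False]])
W = heat_kernel_weights(X, A, t=1.0)
```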
RECOGNIZING THE IMAGE
Measure the value from the test set, which contains more gray images.
If it matches any gray image, the system recognizes it and shows the image; otherwise it is not recognized.
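The match-or-reject decision above amounts to a nearest-neighbor search with a distance threshold; the labels and threshold value below are hypothetical and left to tuning:

```python
# Nearest-neighbor recognition with a rejection threshold.
import numpy as np

def recognize(probe, train_features, train_labels, threshold):
    """Return the matching label, or None if no training image is close enough."""
    d = np.linalg.norm(train_features - probe, axis=1)  # distance to each face
    best = int(np.argmin(d))
    return train_labels[best] if d[best] <= threshold else None

# Hypothetical two-person gallery in a 2-D feature space
gallery = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["alice", "bob"]
match = recognize(np.array([0.1, 0.0]), gallery, labels, threshold=1.0)
```

A returned label corresponds to showing the matched image; None corresponds to the text message for an unrecognized face.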
Comparison with databases
Here we compare the three methods (Eigenface, Fisherface, and Laplacianface) on the following three data sets:
the YALE face database, constructed at the Yale Center for Computational Vision and Control;
the PIE database (Pose, Illumination, and Expression);
the MSRA database.
Result
Three experiments on three databases have been systematically performed.
These experiments reveal a number of interesting points:
In all three experiments, Laplacianfaces consistently performs better than Eigenfaces and Fisherfaces.
In particular, it significantly outperforms Fisherfaces and Eigenfaces on the Yale database and the MSRA database.
These experiments also show that our algorithm is especially suitable for frontal face images.
Moreover, our algorithm takes advantage of more training samples, which is important for real-world face recognition systems.
Conclusion
Our system is proposed to use Locality Preserving Projections in face recognition, which eliminates the flaws in the existing system.
This system reduces the faces to lower dimensions, and the LPP algorithm is performed for recognition.
The application is developed successfully and implemented as mentioned above.
This system seems to be working fine and successfully. It can provide the proper training set of data and test input for recognition.
If the faces match, the result is given as a picture image; in case of any difference, a text message is shown.
COMPILATION
BROWSING THE IMAGE
MATCHING THE IMAGE
FINAL RESULT