An Effective Face Detection Algorithm based on Skin Color Information

Alaa Sagheer, IEEE Member
Center for Artificial Intelligence and Robotics (CAIRO), Aswan University, Aswan, Egypt
E-mail: [email protected]

Saleh Aly
Center for Artificial Intelligence and Robotics (CAIRO), Aswan University, Aswan, Egypt
E-mail: [email protected]

Abstract— This paper presents a face detection approach that combines skin color detection with a neural network. Our first motivation is to decide which color space is best suited to building an efficient skin color detector that can be embedded in the overall face detection system. The proposed skin detection approach uses a chrominance distribution model of the skin-color information in the input image to detect skin pixels over the entire image. Next, morphological operations are used to smooth the detected skin regions and generate face candidates for face-based applications. Finally, a neural network is used to verify these face candidates. Experiments using color images gathered from the Internet and from our own database give encouraging results. We expect to combine the proposed face detector with a face recognition approach so that it can later be embedded in human-computer interaction applications.

Keywords: skin color detection; color spaces; morphological operations; self-organizing maps

I. INTRODUCTION Face detection is required in an increasing number of today's applications, driven in part by the development of security systems in response to acts of terrorism. It is the stepping stone to all facial analysis algorithms, including face alignment, face modeling, face recognition, face identification, head tracking, facial expression analysis, and many more applications. However, most existing systems use a sliding-window technique to search for faces in an image, so many existing face detection techniques are time consuming [1].

In this paper, we propose a face detection approach that combines skin color detection with a neural network. Skin detection plays an important role in a wide range of image processing applications, from face detection up to various human-computer interaction domains [2]. In the literature there are many methods for building a skin color detection module, but each is applied over a particular color space. Many color spaces have been used by researchers, with different ways of labeling pixels as skin or non-skin. However, there is no consensus on which color space is the best choice for skin detection [3-5].

In addition, there are challenges that may prevent the skin detector from achieving face detection. One of these challenges is that parts of the subject's face may be covered by the traditional scarf that Muslim women usually wear. Therefore, the first motivation of this paper is to decide which color space is the best choice for building an overall skin detector. We studied five of the most common color spaces in order to choose the optimum one, comparing them on the same database and under the same circumstances. Namely, we studied the following color spaces: RGB, NRGB, HSV, YCbCr and YIQ [6].

For generalization, the experiments in this paper show the skin pixel distribution of the database in each color space under both normal lighting and different lighting conditions. As discussed in this paper, we found experimentally that YCbCr is the optimum color space, giving the best representation of the input data. Therefore, the first step of the proposed approach is to convert the input images from the RGB format into the YCbCr format.

In the second step, working in the YCbCr color space, the skin detector uses a chrominance distribution model of the skin-color information in each input image to detect the skin pixels over the entire image and then generates skin candidates based on a learned threshold. Here, we drop the luminance channel and employ only the two chrominance channels in order to reduce the effect of lighting changes. The next step applies post-processing operations to smooth the image after skin detection and remove noise; two morphological operations (dilation and erosion) are used to refine the extracted skin regions.

Finally, a face verification step is performed using a neural network, namely a self-organizing map (SOM) [7]. In this step, facial features are detected in order to verify the detected face candidates. Specifically, the system classifies each face candidate as a face or a non-face. If it is a face, the system detects the required facial features in the subject's face; otherwise the candidate is rejected as a non-face.

The paper is organized as follows: Section II presents the motivation for our work. An overview of the color spaces is given in Section III. A comparison among the spaces and the selection of the best one are given in Section IV. The proposed skin detector in the chosen color space is described in Section V. The morphological operations used in this paper are described in Section VI. The face verification step is described in Section VII. Experimental results are shown in Section VIII. Section IX concludes the paper.



II. MOTIVATION Using the raw input image in the RGB color space is not a suitable basis for skin detection, because the RGB color space is highly sensitive to intensity differences [6]. In addition, skin information alone is not sufficient to achieve face detection. In other words, we need two consecutive steps:

1) Adopting a suitable color space to replace the RGB space, and

2) Verifying the user's facial features.

Figure 1 shows traditional skin color detection in the RGB color space. It is clear that some non-skin parts are included in the bounding box, such as the user's scarf and shirt. Certainly, this may cause confusion if further face processing steps are needed. These preliminary results motivated us to investigate the most suitable color space for our database.

Figure 1. Traditional skin detection module: (left) input image, (right) skin detection result.

III. OVERVIEW OF THE COLOR SPACES To prepare for the proposed skin detector, we studied five of the most common color spaces in order to choose the best one, comparing them on the same database and under the same circumstances. Namely, we studied RGB and NRGB as representatives of the basic color spaces, HSV as a representative of the perceptual color spaces and, finally, YCbCr and YIQ as representatives of the orthogonal color spaces. Before presenting the comparison, we first give an overview of each color space, following [3][6].

1. RGB color space

Colors are specified in terms of the three primary colors red (R), green (G) and blue (B). The RGB model is the most widespread color format because it is the one used in display technology; however, it is not very robust when lighting conditions change [3][6]. All colors are created by combining these primary colors in various ways. For example, red (255, 0, 0) and green (0, 255, 0) combined in equal amounts create yellow (255, 255, 0). The amount of each primary color gives its intensity; if all components are at their highest intensity, the result is white [3].

2. NRGB color space

NRGB stands for normalized RGB. It is used to reduce the distortions caused by lighting and shadows in an RGB image, and normalizing the RGB values of an image can be an effective way of achieving this. For normalization, we divide each pixel's value by the sum of that pixel's values over all channels: a pixel with intensities R, G, and B has normalized values R/S, G/S and B/S, where S = R + G + B.
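For illustration only, the following sketch (our own, not part of the original system) performs this normalization on an RGB image stored as a NumPy array with channel order R, G, B:

    import numpy as np

    def normalize_rgb(image):
        """Convert an RGB image (uint8) to normalized RGB (NRGB)."""
        rgb = image.astype(np.float64)
        s = rgb.sum(axis=2, keepdims=True)    # S = R + G + B for every pixel
        s[s == 0] = 1.0                       # avoid division by zero on pure black pixels
        return rgb / s                        # channels become R/S, G/S, B/S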

3. HSV model

The HSV color space represents colors in terms of Hue (color depth), Saturation (color purity) and Value (color brightness). Hue refers to the color type, such as red, blue or yellow, and takes values from 0 to 360 (normalized to 0-100% in some applications). Saturation refers to the vibrancy or purity of the color and takes values from 0 to 100%; the lower the saturation, the more gray and faded the color appears. Finally, the value component refers to the brightness of the color and takes the same range as the saturation [3].

4. YIQ color space

This color space includes one channel (Y) that represents the luminance information, while I (in-phase) and Q (quadrature) represent the chrominance information. The I and Q axes can be thought of as a second pair of axes on the same plane, rotated by 33°, so IQ represents a different coordinate system on the same plane. I spans the orange-blue range and Q spans the purple-green range. Transformations between the RGB and YIQ spaces are linear [3][6].
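As a hedged illustration of this linear relation, the sketch below applies the commonly quoted (approximate) NTSC RGB-to-YIQ matrix; the exact coefficients vary slightly between sources and are not taken from this paper:

    import numpy as np

    # Approximate NTSC RGB -> YIQ coefficients (values differ slightly across references).
    RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])

    def rgb_to_yiq(image):
        """Convert an RGB image (uint8) to YIQ via the linear transform."""
        rgb = image.astype(np.float64) / 255.0
        return rgb @ RGB2YIQ.T                # result has channels Y, I, Q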

5. YCbCr color space

YCbCr is a family of color spaces used as part of the color image pipeline in video and digital photography systems, and is therefore widely used for digital video encoding. In this model, colors are specified in terms of luminance (the Y channel) and chrominance (the Cb and Cr channels). The Cb channel represents the blue-difference component (i.e., B - Y) and the Cr channel represents the red-difference component (i.e., R - Y). Note that the Y image is essentially a grayscale copy of the original color image [6].
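A minimal sketch of this conversion using OpenCV is shown below (an assumption on our side; the paper does not specify an implementation). Note that OpenCV expects BGR input and orders the output channels as Y, Cr, Cb:

    import cv2

    def to_cbcr(bgr_image):
        """Convert a BGR image to YCbCr and keep only the chrominance channels."""
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)   # channels: Y, Cr, Cb
        y, cr, cb = cv2.split(ycrcb)
        return cb, cr                                          # the Y channel is dropped (Section V)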

IV. COMPARISON AND SELECTION OF THE COLOR SPACE There are many studies in the literature that describe and use different color spaces to detect human skin color [8-10]. Each study usually focuses on a specific color space and reports the results obtained with it. The approach presented in this paper is different: we carry out a detailed study of how well different color spaces represent skin pixels under the same conditions (i.e., the same images and the same method for finding skin color). To conduct the comparison among the chosen color spaces, we used a sufficient number of images, each containing patches of skin. Figure 2 shows samples of such input images, each containing a skin type under different lighting conditions. We cropped these skin patches from different locations on the subjects' faces under both normal and different lighting conditions.



Figure 2. Samples of skin images under (a) normal lighting conditions and (b) different lighting conditions.

From a practical point of view, it has been demonstrated that skin colors differ more in intensity than in chrominance. Hence, it has become common practice to drop the luminance component for skin classification [2]. Therefore, in our experiments we plot the skin pixels over the two chrominance channels and neglect the luminance channel in all color spaces. In addition, to ensure generalization, we show the pixel distribution of the skin database in each color space for two different cases:

1) Normal lighting conditions (normal light)
2) Different lighting conditions (weak light)

Note that the total number of pixels collected from all the images of the first case (normal light) is 698,329, whereas the total number collected from the images of the second case (different light) is 1,052,646. Figure 3 shows the skin pixel distribution along the two chrominance channels of each color space in both cases. We chose only two of the three channels because they are the channels that carry the main skin color information. For example, in the RGB color space we chose the red and green channels, because red is the most relevant color in skin tone: the R component of skin color is usually between 120 and 255, the G component between 90 and 180, and the B component between 70 and 140. The B (blue) channel therefore gives the worst result in the RGB model, since blue is the least predominant channel in skin color.
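As a simple illustration of these quoted ranges only (not the detector actually used in this paper), a rule-based RGB skin check could look like this:

    import numpy as np

    def rgb_skin_mask(rgb_image):
        """Boolean skin mask from the per-channel ranges quoted above (uint8 RGB image)."""
        r = rgb_image[..., 0].astype(int)
        g = rgb_image[..., 1].astype(int)
        b = rgb_image[..., 2].astype(int)
        return ((r >= 120) & (r <= 255) &
                (g >= 90)  & (g <= 180) &
                (b >= 70)  & (b <= 140))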

It is easy to notice in Figure 3 that the first three distributions are stretched or scattered over a wide range (left panels), and the situation becomes worse under different lighting conditions (right panels). This occurs for the RGB, NRGB and HSV color spaces. The distributions of YIQ and YCbCr appear to be the most appropriate for finding skin color in the database used in this paper, with a slight further improvement in the case of YCbCr. Hence, we adopted YCbCr as the optimum color space for our experiments.

V. EXPLICIT SKIN COLOR DETECTION VIA YCBCR SPACE From a classification point of view, skin color detection can be viewed as a two-class problem: skin-pixel vs. non-skin-pixel classification. It is known that the skin colors of different individuals cluster in a small region of a color space, provided that the images are taken in illumination-controlled environments [2]. Toward this goal, in this paper we define the skin color cluster decision boundaries for the YCbCr color space channels.

Figure 3. Skin color distribution under (left) normal lighting conditions (right) different lighting conditions

In order to reduce the effect of lighting changes, the Y channel is dropped and only the Cb and Cr channels are employed to represent skin chrominance. Single or multiple ranges of threshold values are defined for each channel, and image pixels whose values fall within these predefined ranges for the chosen color channels are labeled as skin pixels. Namely, we chose the ranges Cb = [110, 120] and Cr = [135, 150] for images with normal lighting, and Cb = [110, 160] and Cr = [120, 150] for images with different lighting conditions.
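A minimal sketch of this thresholding step, assuming OpenCV and the normal-lighting ranges quoted above, is given below; the helper name and defaults are ours:

    import cv2
    import numpy as np

    def skin_mask_cbcr(bgr_image, cb_range=(110, 120), cr_range=(135, 150)):
        """Binary skin mask from fixed Cb/Cr ranges (normal-lighting values from the text)."""
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)                # channels: Y, Cr, Cb
        lower = np.array([0,   cr_range[0], cb_range[0]], dtype=np.uint8)   # Y is ignored (full range)
        upper = np.array([255, cr_range[1], cb_range[1]], dtype=np.uint8)
        return cv2.inRange(ycrcb, lower, upper)                             # 255 = skin pixel, 0 = non-skin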



To evaluate these decision boundaries of the YCbCr color space, we used many test images gathered from the Internet, each containing several subjects. Figure 4 shows two examples, where it is easy to see that the proposed approach effectively detects all the skin candidate regions in the input image, even hands and necks, by drawing red rectangles around them.

As depicted in Figure 4, when skin regions are connected, i.e., two or three faces are near each other, they are detected as one skin region rather than separate regions. A connected-components labeling technique is therefore used to scan the image and group pixels into components based on pixel connectivity [11].

Figure 4. Skin detection for an image with multiple subjects: (left) input image, (right) output image.

In other words, all pixels in a connected component share similar intensity values and are in some way connected with each other. After all groups have been determined, each pixel is labeled according to the component it was assigned to. Once each component is determined, the system plots a red rectangle around it.
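A possible sketch of this grouping step uses OpenCV's connected-components routine on the binary skin mask; the minimum-area filter below is our own assumption, not taken from the paper:

    import cv2

    def skin_regions(mask, min_area=500):
        """Label connected components in a binary skin mask and return their bounding boxes."""
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
        boxes = []
        for i in range(1, n):                      # label 0 is the background
            x, y, w, h, area = stats[i]
            if area >= min_area:                   # skip tiny noise components
                boxes.append((x, y, w, h))
        return boxes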

For further validation of the proposed approach, we tried a different kind of image, containing only one subject. This kind of image is our target in this paper, since the proposed approach is the first step of an overall approach for face detection and recognition in color images. These sample images are taken from our own AVAS database [12]. As depicted in Figure 5, the CbCr version of each raw input image is shown next to it. Applying morphological operations to the CbCr version is much simpler and more effective than using the full YCbCr version.

VI. MORPHOLOGICAL OPERATIONS After the skin region is detected, the next step in our approach is to apply morphological operations in order to smooth the detected skin region. In other words, skin color detection may produce a noisy image, and morphological operations remove such noise.

Morphological operations apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison between the corresponding pixel in the input image and its neighbors. By choosing the size and shape of the neighborhood, we can construct a morphological operation that is sensitive to specific shapes in the input image [13].

Figure 5. Conversion from the RGB space into the YCbCr space: (left) original image, (right) CbCr version of the original image.

In our experiments, we used two consecutive morphological operations, dilation and erosion. Dilation adds pixels to the boundaries of the face in an image, while erosion removes pixels from the face boundaries; dilation followed by erosion is known as a closing operation. The number of pixels added to or removed from the face depends on the size and shape of the structuring element used to process the image. In this way, the black pixels in the output image are removed and, thereby, the detected skin, or face candidate, region is enlarged. Note that if these black pixels are not removed effectively, the classification step will not perform well, since these pixels will be treated as noise.
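The following is a minimal sketch of these two operations with OpenCV, under the assumption of an elliptical structuring element of our own choosing (the paper does not give the kernel size or shape):

    import cv2

    def smooth_skin_mask(mask, kernel_size=5):
        """Dilate then erode the binary skin mask (together, a morphological closing)."""
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        dilated = cv2.dilate(mask, kernel, iterations=1)   # grow skin regions, fill small holes
        return cv2.erode(dilated, kernel, iterations=1)    # restore boundaries, remove speckles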

On the other hand, applying these morphological operations to the CbCr version of the image, rather than to the raw image, has a great impact on the result. Figure 6 below shows the effect of applying the morphological operations, erosion and dilation, to a raw image directly (part a) and to its CbCr version (part b). It is clear that dropping the Y channel and adopting the CbCr version enhances the skin detection process. Notice the difference between the second and third panels in part (a) and their counterparts, the third and fourth panels, in part (b).

Figure 6. Skin detection after morphological operations: (a) raw input image and the effect of erosion and dilation on it; (b) raw input image, its CbCr version, and the effect of erosion and dilation on the CbCr version.



In this case, the detector detects the skin regions and neglects the rest of the information, i.e., it ignores regions that do not represent skin or that are determined to be non-skin pixels. It is easy to notice the effectiveness of the proposed detector, especially in the case of the female subject.

VII. FACE VERIFICATION We now proceed to the face verification element of the proposed system. In this element, facial features are detected in order to verify the detected face candidates. Specifically, the system classifies each face candidate as a face or a non-face. If it is a face, the system detects the facial features in the subject's face; otherwise the candidate is rejected as a non-face. This verification is achieved using a neural network, namely the self-organizing map (SOM) [7].

A well-trained SOM deals directly with the output image shown in the fourth panel of Figure 6(b), and it is expected to detect the facial features contained in this image. In our experiments, the SOM achieves this task according to its training phase, which is performed before the testing phase. The target of the SOM training phase is to build the face codebook that is later used for verifying the detected objects. The feature map, or codebook, of the SOM includes all the important and general features of the input data. To build this map we use a two-dimensional SOM of 10x10 neurons, where each neuron has a size of 19x19 pixels. To conduct the training experiments, we randomly chose 10 samples from each of 10 subjects of our database, so the total number of images used in the training phase is 10 (subjects) x 10 (samples) = 100 images.
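The paper does not give its SOM implementation; as one hedged possibility, the codebook training could be sketched with the third-party MiniSom library, using the 10x10 grid and 19x19-pixel (361-dimensional) neurons described above:

    import numpy as np
    from minisom import MiniSom   # third-party SOM library, used here only for illustration

    def train_codebook(train_faces, iterations=10000):
        """Train a 10x10 SOM on flattened 19x19 face patches scaled to [0, 1]."""
        som = MiniSom(10, 10, 361, sigma=1.5, learning_rate=0.5, random_seed=0)
        som.random_weights_init(train_faces)       # train_faces: array of shape (100, 361)
        som.train_random(train_faces, iterations)
        return som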

The SOM testing phase uses the face feature map produced in the training phase. To conduct the testing experiments, we used one image for each of the 15 subjects in our database. Each test image passes through all the steps preceding the SOM, as shown in Figure 6. In the same way as in the training phase, the SOM extracts and detects the facial features from the test image.

The SOM decision on whether the current object is a face or a non-face is based on the quantization error. To find the optimum threshold, the quantization errors of both face images and non-face images are calculated during training, and the histograms of these quantization errors are analyzed to find the threshold value that decides whether a detected object is a face or a non-face. The choice of the quantization error threshold is illustrated in Figure 7.
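A short sketch of this decision rule using the trained SOM's quantization error follows; the helper name and the reshaping of the patch are our own assumptions:

    def is_face(som, patch, threshold):
        """Classify a flattened 19x19 candidate patch by its SOM quantization error."""
        q_err = som.quantization_error(patch.reshape(1, -1))   # distance to the best matching unit
        return q_err <= threshold                              # below the learned threshold -> face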

Figure 7. Quantization error during SOM experiments: histograms of the quantization error for face images and non-face images, separated by the decision threshold.

VIII. EXPERIMENTAL RESULTS Before presenting the experimental results, let us summarize the workflow of the proposed approach. When an input image is introduced to the system, the following steps are run in order:

Step 1: Convert the input image from the RGB space into the YCbCr space.
Step 2: Achieve face localization using skin color detection in the YCbCr color space, dropping the Y channel to reduce the effect of illumination changes.
Step 3: Perform morphological operations (dilation and erosion) in order to remove noise from the output image.
Step 4: Verify the user's face using the SOM neural network.
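Putting the earlier sketches together, these four steps could be wired up roughly as follows (all helper names, the patch handling, and the grayscale conversion of candidates are our own assumptions, not the authors' code):

    import cv2

    def detect_faces(bgr_image, som, threshold):
        """Steps 1-4: YCbCr conversion, Cb/Cr thresholding, morphology, SOM verification."""
        mask = skin_mask_cbcr(bgr_image)                   # Steps 1-2: skin localization in CbCr
        mask = smooth_skin_mask(mask)                      # Step 3: dilation followed by erosion
        faces = []
        for (x, y, w, h) in skin_regions(mask):            # candidate regions from the mask
            gray = cv2.cvtColor(bgr_image[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
            patch = cv2.resize(gray, (19, 19)).astype(float).flatten() / 255.0
            if is_face(som, patch, threshold):             # Step 4: SOM-based verification
                faces.append((x, y, w, h))
        return faces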

Figure 8, at the end of this paper, shows the results of skin and face detection using our own AVAS database [12]. As the figure shows, skin and face detection is performed successfully at a rate of 100%; in other words, the system encloses the face candidate entirely inside the blue square in each case.

IX. CONCLUSION AND FUTURE WORKS This paper presented an automatic skin detection approach that can serve as the first step in human-computer interaction applications. The paper started by studying and deciding which color space is best for human skin-based applications and for the database at hand; we found that the YCbCr space is the optimum choice under both normal and different lighting conditions. Using learned range boundaries for the Cb and Cr channels, image pixels whose values fall within these learned ranges are defined as skin pixels, while pixels outside these ranges are defined as non-skin pixels.

The proposed skin detection approach consists of two consecutive steps. In the first step, a chrominance distribution model of the skin-color information is used to detect skin pixels over the entire image; in this way, the system successfully detects the skin region and neglects all regions determined to be non-skin. Next, two morphological operations (dilation and erosion) are used to smooth the detected skin region and then generate face candidate regions that include only facial features, for use in further applications.

Finally, a face verification process is applied to the skin candidate regions in order to judge whether the detected skin is a face or a non-face. Experimental results using our own database are highly encouraging. In future work, we plan to integrate the proposed face detection approach with a user identification element to form a new face recognition system. This system may also be combined with a speech recognizer to form a novel audio-visual speech system for the Arabic language.

REFERENCES
[1] C. Zhang and Z. Zhang, "A Survey of Recent Advances in Face Detection," Microsoft Technical Report MSR-TR-2010-66, 2010.
[2] P. Kakumanu, S. Makrogiannis and N. Bourbakis, "A survey of skin-color modeling and detection methods," Pattern Recognition, vol. 40, no. 3, pp. 1106-1122, 2007.
[3] J. M. Chaves-González, M. A. Vega-Rodríguez, J. A. Gómez-Pulido and J. M. Sánchez-Pérez, "Detecting skin in face recognition systems: A colour spaces study," Digital Signal Processing, vol. 20, pp. 806-823, 2010.
[4] J. C. Terrillon, M. David and S. Akamatsu, "Automatic Detection of Human Faces in Natural Scene Images by Use of a Skin Color Model and of Invariant Moments," IEEE International Conference on Automatic Face and Gesture Recognition (AFGR98), pp. 112-117, 1998.
[5] H. Stokman and T. Gevers, "Selection and Fusion of Color Models for Image Feature Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 3, pp. 371-381, 2007.
[6] M. Tkalcic and J. F. Tasic, "Colour spaces: Perceptual, historical and applicational background," IEEE Region 8 Eurocon Conference, Computer as a Tool, vol. 1, pp. 304-308, 2003.
[7] A. Albiol, L. Torres and E. J. Delp, "Optimum color spaces for skin detection," Proceedings of the IEEE International Conference on Image Processing (ICIP01), vol. 1, pp. 122-124, 2001.
[8] T. Kohonen, Self-Organizing Maps, 3rd Edition, Springer, 2001.
[9] S. Jayaram, S. Schmugge, M. C. Shin and L. V. Tsap, "Effect of color space transformation, the illuminance component, and color modeling on skin detection," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 813-818, 2004.
[10] S. L. Phung, A. Bouzerdoum and D. Chai, "Skin segmentation using color pixel classification: Analysis and comparison," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 1, pp. 148-154, 2005.
[11] K. Suzuki, I. Horiba and N. Sugie, "Linear-time connected-component labeling based on sequential local operations," Computer Vision and Image Understanding, vol. 89, no. 1, pp. 1-23, 2003.
[12] A. Sagheer and A. Saleh, "AVAS: Audio Visual Arabic Speech database for Multimodal Speech Recognition," unpublished.
[13] W. K. Pratt, Digital Image Processing, 3rd edition, John Wiley & Sons, 2001.


Figure 8. Face detection in the proposed system: (a) raw input image, (b) CbCr image, (c) image after dilation, (d) image after erosion, (e) image after facial feature detection, with a bounding box around the face.
