ISSN (Print): 2320 – 3765
ISSN (Online): 2278 – 8875
International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering
(An ISO 3297: 2007 Certified Organization)
Vol. 4, Issue 4, April 2015
Copyright to IJAREEIE 10.15662/ijareeie.2015.0404017

Emotion Recognition - A Selected Review

Manoj Prabhakaran K 1, Manoj Kumar Rajagopal 2
Research Associate, School of Electronic Engineering, VIT University, Chennai Campus, Tamil Nadu, India 1
Associate Professor, School of Electronic Engineering, VIT University, Chennai Campus, Tamil Nadu, India 2

ABSTRACT: In recent years, the automated analysis of human affective behaviour has attracted increasing attention in psychology, biomedicine, human-computer interaction, linguistics, neuroscience, computer science, and related disciplines. Since emotions play an important role in daily human life, the need for automatic emotion recognition has grown with the increasing role of human-computer interaction applications. Emotion plays a major role in brain cognition, and emotion recognition is central to applications such as medically monitoring patients' emotional states or stress levels, creating animated movies, monitoring automobile drivers' alertness or drowsiness, human-robot interaction, advanced gaming that enables very young children to interact with computers, and designing techniques for forensic identification. This paper surveys work published from 2013 to date. It covers the six prototypical human expressions and the various modes of extracting emotion, and details face detection and tracking, feature-extraction mechanisms, and the state-of-the-art classification techniques used for expression recognition through face modelling. The paper also discusses multimodal expression classification of humans, i.e. from the body and the face.
The survey additionally presents sensor-based human tracking, and ends with the challenges and future directions, with the intention of helping students and researchers who are new to this field.

KEYWORDS: Modes of extracting emotion, expression recognition, face modelling, state-of-the-art, basic six emotions, feature tracking, body tracking, and Microsoft Kinect sensor.

I. INTRODUCTION

The study of physiognomy and facial expression dates back to the Aristotelian era (4th century BC) [1]. Physiognomy is the assessment of a person's personality or character from their outer appearance, especially the face. Over the years, interest in physiognomy has faded, but facial expression has remained a continuously active topic. The foundational studies that form the basis of today's research on facial expression can be traced to the 17th century. In 1649, John Bulwer, in his book "Pathomyotomia", detailed the muscle movements of the head and the various expressions of humans [2]. In 1667, Le Brun lectured on physiognomy; his lectures were later reproduced as a book in 1734 [3]. In the 18th century, actors and artists referred to his book to achieve "the perfect imitation of 'genuine' facial expression". In the 19th century, the direct relationship between facial expression analysis and what would become automatic face analysis and facial expression recognition was brought about by Darwin. In "The Expression of the Emotions in Man and Animals" [4], Darwin stated the principles of expression in both humans and animals, grouped the various kinds of expressions, and catalogued the corresponding facial deformations. In 1884, William James proposed the "James-Lange theory", which holds that all emotions are derived from the presence of a stimulus in the body that evokes a physiological response [5]. Another important study of facial expression analysis and human emotion was carried out by Paul Ekman, starting in 1967 [6]. He analysed the muscular movements underlying facial expressions and showed, through photographic stimuli, that they reveal different human emotions.
Wallbott and Scherer [7] determined emotion from body movement and speech. In 1978, Suwa presented an early system for automatic facial expression recognition, analysing facial expressions from a sequence of images using tracking points. This line of work received little attention until the 1990s. In 1992, Samal and Iyengar presented facial feature and expression analysis based on tracking points in movie frames, and noted that automatic facial expression recognition requires robust face detection and face tracking. The early and late 1990s saw the development of robust face detection, face tracking, and facial expression analysis, and at the same time fields such as Human-Computer Interaction (HCI) and Affective Computing became very popular. Since the 1990s, interest in automatic facial expression recognition has grown very active, and researchers have mostly concentrated on automatic facial expression recognition and human emotion using various modes of extraction.
REFERENCES

[3] J. Montagu, "The Expression of the Passions: The Origin and Influence of Charles Le Brun's 'Conférence sur l'expression générale et particulière'," Yale University Press, 1994.
[4] Charles Darwin, "The Expression of the Emotions in Man and Animals," 2nd ed., J. Murray, London, 1904.
[5] William James, "What is an Emotion?," Mind, vol. 9, no. 34, pp. 188-205, 1884.
[6] Paul Ekman and Wallace V. Friesen, "Pan-Cultural Elements in Facial Displays of Emotion," Science, pp. 86-88, 1969.
[7] Harald G. Wallbott and Klaus R. Scherer, "Cues and channels in emotion recognition," Journal of Personality and Social Psychology, pp. 69-70, 1986.
[8] F. Alonso-Martín, M. Malfaz, J. Sequeira, J. F. Gorostiza, and M. A. Salichs, "A Multimodal Emotion Detection System during Human-Robot Interaction," Sensors, vol. 13, no. 11, p. 15549, 2013.
[9] Stefanos Kollias and Kostas Karpouzis, "Multimodal Emotion Recognition and Expressivity Analysis," IEEE International Conference on Multimedia and Expo, pp. 779-783, 2005.
[10] Tin Lay New, Say Wei Foo, and Liyanage C. De Silva, "Speech emotion recognition using hidden Markov models," Speech Communication, vol. 41, no. 4, pp. 603-623, 2003.
[11] J. Nicholson, K. Takahashi, and R. Nakatsu, "Emotion Recognition in Speech Using Neural Networks," Neural Computing & Applications, vol. 9, no. 4, pp. 290-296, 2000.
[12] Ze-Jing Chuang and Chung-Hsien Wu, "Multi-Modal Emotion Recognition from Speech and Text," International Journal of Computational Linguistics and Chinese Language Processing, vol. 9, no. 2, pp. 1-18, 2004.
[13] Michael Garber-Barron and Mei Si, "Using body movement and posture for emotion detection in non-acted scenarios," 2012.
[14] I. Kotsia and I. Pitas, "Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines," IEEE Trans. Image Processing, vol. 16, no. 1, pp. 172-187, 2007.
[15] George Caridakis, "Multimodal emotion recognition from expressive faces, body gestures and speech," Artificial Intelligence and Innovations 2007: From Theory to Applications, Springer US, pp. 375-388, 2007.
[16] Teresa K. W. Wong, Peter C. W. Fung, Siew E. Chua, and Grainne M. McAlonan, "Abnormal spatiotemporal processing of emotional facial expressions in childhood autism: dipole source analysis of event-related potentials," European Journal of Neuroscience, vol. 28, no. 2, pp. 407-416, 2008.
[17] Panagiotis C. Petrantonakis and Leontios J. Hadjileontiadis, "Emotion Recognition from Brain Signals Using Hybrid Adaptive Filtering and Higher Order Crossings Analysis," IEEE Transactions on Affective Computing, 2010.
[18] Marini Othman and Abdul Wahab, "Affective face processing analysis in autism using electroencephalogram," in Information and Communication Technology for the Muslim World (ICT4M), 2010 International Conference on, IEEE, 2010.
[19] Hanan Salam, "Multi-Object Modeling of the Face," PhD thesis, Université Rennes 1, Signal and Image Processing, 2013.
[20] Alan L. Yuille, Peter W. Hallinan, and David S. Cohen, "Feature extraction from faces using deformable templates," International Journal of Computer Vision, vol. 8, no. 2, pp. 99-111, 1992.
[21] Baizhen Zhang and Qiuqi Ruan, "Facial feature extraction using improved deformable templates," in Signal Processing, 2006 8th International Conference on, vol. 4, 2006.
[22] Marius Malciu and Françoise J. Preteux, "Tracking facial features in video sequences using a deformable-model-based approach," in International Symposium on Optical Science and Technology, International Society for Optics and Photonics, 2000.
[23] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, 1988.
[24] Petia Radeva and Eric Martí, "Facial features segmentation by model-based snakes," in International Conference on Computing Analysis and Image Processing, Prague, 1995.
[25] Kap-Ho Seo, Won Kim, Changmok Oh, and Ju-Jang Lee, "Face detection and facial feature extraction using color snake," in Industrial Electronics, 2002. ISIE 2002. Proceedings of the 2002 IEEE International Symposium, 2002.
[26] Rein-Lien Hsu and Anil K. Jain, "Generating discriminating cartoon faces using interacting snakes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 11, pp. 1388-1398, 2003.
[27] T. F. Cootes, D. H. Cooper, C. J. Taylor, and J. Graham, "Trainable method of parametric shape description," Image and Vision Computing, vol. 10, no. 5, pp. 289-294, 1992.
[28] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, 1995.
[29] Yuanzhong Li and Wataru Ito, "Shape parameter optimization for AdaBoosted active shape model," in Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, vol. 1, 2005.
[30] David Cristinacce and Timothy F. Cootes, "Boosted Regression Active Shape Models," British Machine Vision Conference, 2007.
[31] David Cristinacce and Timothy F. Cootes, "Feature Detection and Tracking with Constrained Local Models," British Machine Vision Conference, vol. 2, no. 5, 2006.
[32] David Cristinacce and Timothy F. Cootes, "Facial feature detection and tracking with automatic template selection," in Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on, IEEE, 2006.
[33] D. Cristinacce and T. F. Cootes, "A comparison of shape constrained facial feature detectors," in Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, pp. 375-380, 2004.
[34] P. F. Felzenszwalb and D. P. Huttenlocher, "Pictorial structures for object recognition," International Journal of Computer Vision, vol. 61, no. 1, pp. 55-79, 2005.
[35] Vahid Kazemi and Josephine Sullivan, "Face alignment with part-based modeling," in Proceedings of the British Machine Vision Conference, BMVA Press, 2011.
[36] T. Maurer and C. von der Malsburg, "Tracking and learning graphs and pose on image sequences of faces," in Automatic Face and Gesture Recognition, 1996. Proceedings of the Second International Conference on, 1996, pp. 176-181.
[37] Laurenz Wiskott, Jean-Marc Fellous, and Norbert Krüger, "Face recognition by elastic bunch graph matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.
[38] Ilse Ravyse and Hichem Sahli, "A biomechanical model for image-based estimation of 3D face deformations," in Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, IEEE, 2008.
[39] Yunshu Hou, Ping Fan, Ilse Ravyse, Valentin Enescu, Rongchun Zhao, and Hichem Sahli, "Smooth adaptive fitting of 3D face model for the estimation of rigid and nonrigid facial motion in video sequences," Signal Processing: Image Communication, vol. 26, no. 8, pp. 550-566, 2011.
[40] P. Joshi, W. C. Tien, M. Desbrun, and F. Pighin, "Learning controls for blend shape based realistic facial animation," in ACM SIGGRAPH 2005 Courses, p. 8, 2005.
[41] I. Baran and J. Popović, "Automatic rigging and animation of 3D characters," ACM Transactions on Graphics (TOG), vol. 26, no. 3, p. 72, 2007.
[42] Frederic I. Parke, "A Parametric Model for Human Faces," PhD thesis, University of Utah, 1974.
[43] Frederic I. Parke, "Parameterized models for facial animation," IEEE Computer Graphics and Applications, vol. 2, no. 9, pp. 61-68, 1982.
[44] Paul Ekman and W. V. Friesen, "Facial Action Coding System," Consulting Psychologists Press, Palo Alto, 1977.
[45] M. Rydfalk, "CANDIDE, a parameterized face," Technical report LiTH-ISY-I-866, Dept. of Electrical Engineering, Linköping University, Sweden, 1987.
[46] J. Ahlberg, "CANDIDE-3 - an updated parameterized face," Technical report LiTH-ISY, Dept. of Electrical Engineering, Linköping University, Sweden, 2001.
[47] J. Ahlberg, "An active model for facial feature tracking," EURASIP Journal on Applied Signal Processing, pp. 566-571, 2002.
[48] MPEG Video and SNHC, "Text of ISO/IEC FDIS 14496-3: Audio," Doc. ISO/MPEG N2503, MPEG Meeting, Atlantic City, 1998.
[49] Nadia Magnenat-Thalmann, E. Primeau, and Daniel Thalmann, "Abstract muscle action procedures for human face animation," The Visual Computer, vol. 3, no. 5, pp. 290-297, 1988.
[50] P. Kalra, "An Interactive Multimodal Facial Animation System," PhD thesis, École Polytechnique Fédérale de Lausanne, Switzerland, 1993.
[51] N. Stoiber, "Modeling Emotional Facial Expressions and their Dynamics for Realistic Interactive Facial Animation on Virtual Characters," PhD thesis, Université de Rennes, 2010.
[52] S. M. Platt and N. I. Badler, "Animating facial expressions," ACM SIGGRAPH Computer Graphics, vol. 15, no. 3, pp. 245-252, 1981.
[53] Matthew A. Turk and Alex P. Pentland, "Face recognition using eigenfaces," in Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91, IEEE Computer Society Conference on, 1991, pp. 586-591.
[54] S. Sclaroff and J. Isidoro, "Active blobs," in Computer Vision, 1998. Sixth International Conference on, pp. 1146-1153, 1998.
[55] V. Blanz and T. Vetter, "A morphable model for the synthesis of 3D faces," in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 1999, pp. 187-194.
[56] T. F. Cootes, G. Edwards, and C. J. Taylor, "A comparative evaluation of active appearance model algorithms," in British Machine Vision Conference, vol. 2, 1998, pp. 680-689.
[57] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," in IEEE European Conference on Computer Vision (ECCV '98), 1998, p. 484.
[58] Abhishek Kar, Amitabha Mukherjee, and Prithwijit Guha, "Skeleton tracking using Microsoft Kinect," Springer, pp. 1-11, 2011.
[59] David Antonio Gómez Jáuregui, Patrick Horain, Manoj Kumar Rajagopal, and Senanayak Sesh Kumar Karri, "Real-Time Particle Filtering with Heuristics for 3D Motion Capture by Monocular Vision," Multimedia Signal Processing (MMSP), IEEE International Workshop, pp. 139-144, 2010.
[60] Raquel Urtasun, David J. Fleet, and Pascal Fua, "Monocular tracking of the golf swing," Computer Vision and Pattern Recognition, CVPR 2005. IEEE Computer Society Conference on, pp. 932-938, 2005.
[61] Jungong Han, Ling Shao, Dong Xu, and Jamie Shotton, "Enhanced Computer Vision with Microsoft Kinect Sensor: A Review," IEEE Transactions on Cybernetics, vol. 43, no. 5, 2013.
[62] David Catuhe, "Programming with the Kinect for Windows Software Development Kit," Microsoft Press, 2012.
[63] Daniel Roetenberg, Henk Luinge, and Per Slycke, "Xsens MVN: Full 6DOF Human Motion Tracking," technical report, Xsens Motion Technologies BV, 2009.
[64] Abhijit Jana, "Kinect for Windows SDK Programming Guide," 2012.