Research Article

Minimum AUs for Real-Time Facial Expression Recognition in Frame Sequence

by Saghir Ahmed Alfasly, Suresha M.
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 145 - Number 5
Year of Publication: 2016
Authors: Saghir Ahmed Alfasly, Suresha M.
10.5120/ijca2016910630

Saghir Ahmed Alfasly, Suresha M. Minimum AUs for Real-Time Facial Expression Recognition in Frame Sequence. International Journal of Computer Applications. 145, 5 (Jul 2016), 34-38. DOI=10.5120/ijca2016910630

@article{ 10.5120/ijca2016910630,
author = { Saghir Ahmed Alfasly, Suresha M. },
title = { Minimum AUs for Real-Time Facial Expression Recognition in Frame Sequence },
journal = { International Journal of Computer Applications },
issue_date = { Jul 2016 },
volume = { 145 },
number = { 5 },
month = { Jul },
year = { 2016 },
issn = { 0975-8887 },
pages = { 34-38 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume145/number5/25277-2016910630/ },
doi = { 10.5120/ijca2016910630 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Saghir Ahmed Alfasly
%A Suresha M.
%T Minimum AUs for Real-Time Facial Expression Recognition in Frame Sequence
%J International Journal of Computer Applications
%@ 0975-8887
%V 145
%N 5
%P 34-38
%D 2016
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Automatic emotion recognition is one of the most challenging tasks in computer vision and robotics. Although verbal communication is an essential element of information exchange, communication becomes more effective and efficient when non-verbal cues, including facial expressions, are taken into account. Many approaches have been proposed for face segmentation, facial feature extraction, and emotion classification. This article discusses a model for Facial Expression Recognition that recognizes emotions from human facial expressions either in live video acquired with a web camera or in recorded videos. It describes a fully automated system for recognizing six emotions (anger, disgust, fear, happiness, sadness, and surprise) plus the neutral state, comprising image acquisition, preprocessing, face detection, segmentation, feature extraction, AU encoding, and finally classification. The system classifies emotions from a minimum set of AUs using a Rule-Based Classifier.
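The final stage of the pipeline described above, a rule-based classifier over detected Action Units, can be sketched as follows. This is a minimal illustration only: the AU combinations below are commonly cited prototypical patterns after Ekman and Friesen's FACS, not the paper's actual minimum-AU rules, and the function names are hypothetical.

```python
# Illustrative rule-based emotion classifier over facial Action Units (AUs).
# The AU sets are prototypical FACS combinations from the literature; the
# "minimum AU" rules used in the paper itself may differ.
EMOTION_RULES = {
    "happiness": {6, 12},           # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},     # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},        # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},     # brow lowerer + lid raiser/tightener + lip tightener
    "fear":      {1, 2, 4, 5, 20},  # raised/lowered brows + lid raiser + lip stretcher
    "disgust":   {9, 15},           # nose wrinkler + lip corner depressor
}

def classify_emotion(active_aus):
    """Return the emotion whose AU rule is fully contained in the
    detected AUs, preferring the most specific (largest) matching rule;
    fall back to 'neutral' when no rule fires."""
    matches = [(len(rule), emotion)
               for emotion, rule in EMOTION_RULES.items()
               if rule <= set(active_aus)]
    if not matches:
        return "neutral"
    return max(matches)[1]
```

For each frame in the sequence, the detected AU set would be fed to `classify_emotion`; preferring the largest matching rule keeps a frame showing, say, AUs {1, 2, 4, 5, 20} from being labeled as surprise when the fuller fear pattern is present.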

References
  1. E. Boyle, A.H. Anderson, A. Newlands, The effects of visibility on dialogue and performance in a co-operative problem solving task, Language and Speech 37 (1) (1994) 1–20.
  2. G.M. Stephenson, K. Ayling, D.R. Rutter, The role of visual communication in social exchange, British Journal of Social and Clinical Psychology 15 (1976) 113–120.
  3. S. Roweis, L. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (2000) 2323–2326.
  4. B. Fasel, J. Luettin, Automatic facial expression analysis: a survey, Pattern Recognition 36 (1) (2003) 259–275.
  5. Y.L. Tian, T. Kanade, J. Cohn, Evaluation of Gabor wavelet-based facial action unit recognition in image sequences of increasing complexity, in: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 229–234.
  6. G. Littlewort, M. Bartlett, I. Fasel, J. Susskind, J. Movellan, Dynamics of facial expression extracted automatically from video, in: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Face Processing in Video, 2004.
  7. Ekman, P., 1993. Facial expression and emotion. Am. Psychol., 48(4):384-392.
  8. Zeng, Z., Pantic, M., Roisman, G.I., et al., 2009. A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Patt. Anal. Mach. Intell., 31(1):39-58. [doi:10.1109/TPAMI.2008.52]
  9. Vinciarelli, A., Pantic, M., Heylen, D., et al., 2012. Bridging the gap between social animal and unsocial machine: a survey of social signal processing. IEEE Trans. Affect. Comput., 3(1):69-87.
  10. P. Lucey, J.F. Cohn, I. Matthews, S. Lucey, S. Sridharan, J. Howlett, K.M. Prkachin, Automatically detecting pain in video through facial action units, IEEE Trans. Syst. Man. Cybern. B Cybern. 41 (2011) 664–674.
  11. T. Ahonen, A. Hadid, M. Pietikainen, Face description with local binary patterns: Application to face recognition, Pattern Anal. Mach. Intell., IEEE Trans. 28 (2006a) 2037–2041.
  12. N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, IEEE, 2005, pp. 886–893.
  13. M.S. Bartlett, G. Littlewort, I. Fasel, J.R. Movellan, Real time face detection and facial expression recognition: development and applications to human computer interaction, IEEE Computer Vision and Pattern Recognition Workshop, 2003. CVPRW’03., 2003, p. 53.
  14. Y.l. Tian, T. Kanade, J.F. Cohn, Recognizing action units for facial expression analysis, Pattern Anal. Mach. Intell., IEEE Trans. 23 (2001) 97–115.
  15. M.F. Valstar, I. Patras, M. Pantic, Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2005. CVPR Workshops, IEEE, 2005, p. 76.
  16. B. Jiang, M.F. Valstar, M. Pantic, Action unit detection using sparse appearance descriptors in space-time video volumes, Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, IEEE, 2011, pp. 314–321.
  17. G. Zhao, M. Pietikainen, Dynamic texture recognition using local binary patterns with an application to facial expressions, IEEE Trans. Pattern Anal. Mach. Intell. 29 (2007) 915–928.
  18. Y.l. Tian, T. Kanade, J.F. Cohn, Evaluation of Gabor-wavelet-based facial action unit recognition in image sequences of increasing complexity, Proceedings Automatic Face and Gesture Recognition, 2002., 2002, pp. 229–234.
  19. T.R. Almaev, M.F. Valstar, Local Gabor binary patterns from three orthogonal planes for automatic facial expression recognition, Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on, IEEE, 2013, pp. 356–361.
  20. P. Ekman, W.V. Friesen, Constants across cultures in the face and emotion, J. Pers. Soc. Psycho. 17 (1971) 124.
  21. D. Sanchez-Mendoza et al. Emotion recognition from mid-level features, Pattern Recognition letters. 67 (2015) 66–74.
  22. Z. Zhang, M. Lyons, M. Schuster, S. Akamatsu, Comparison between geometry-based and gabor-wavelets-based facial expression recognition using multi-layer perceptron, Automatic Face and Gesture Recognition, 1998. Proceedings. Third IEEE International Conference on, IEEE, 1998, pp. 454–459.
  23. L.A. Jeni, D. Takacs, A. Lorincz, High quality facial expression recognition in video streams using shape related information only, Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, IEEE, 2011, pp. 2168–2174.
  24. I. Kotsia, I. Pitas, Facial expression recognition in image sequences using geometric deformation features and support vector machines, Image Process., IEEE Trans. 16 (2007) 172–187.
  25. C. Shan, S. Gong, P.W. McOwan, Robust facial expression recognition using local binary patterns, IEEE International Conference on Image Processing, 2005. ICIP 2005, IEEE, 2005, pp. II–370.
  26. M. Pantic, L.J. Rothkrantz, Expert system for automatic analysis of facial expressions, Image Vis. Comput. 18 (2000).
  27. S.A. Alfasly, M. Suresha, A simple approach for facial features detection, International Journal of Advanced Research in Computer and Communication Engineering 5 (6) (2016) 154–158.
Index Terms

Computer Science
Information Sciences

Keywords

Facial Expression Recognition, Facial Action Units, Rule-Based Classifier