Research Article

A Survey paper on Facial Expression Synthesis using Artificial Neural Network

Published in July 2015 by Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar
National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015)
Foundation of Computer Science USA
NCKITE2015 - Number 1
July 2015
Authors: Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar

Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar. A Survey paper on Facial Expression Synthesis using Artificial Neural Network. National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015). NCKITE2015, 1 (July 2015), 19-26.

@article{ chandra2015survey,
author = { Deepti Chandra, Rajendra Hegadi, Sanjeev Karmakar },
title = { A Survey paper on Facial Expression Synthesis using Artificial Neural Network },
journal = { National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015) },
issue_date = { July 2015 },
volume = { NCKITE2015 },
number = { 1 },
month = { July },
year = { 2015 },
issn = { 0975-8887 },
pages = { 19-26 },
numpages = { 8 },
url = { /proceedings/nckite2015/number1/21478-2646/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015)
%A Deepti Chandra
%A Rajendra Hegadi
%A Sanjeev Karmakar
%T A Survey paper on Facial Expression Synthesis using Artificial Neural Network
%J National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015)
%@ 0975-8887
%V NCKITE2015
%N 1
%P 19-26
%D 2015
%I International Journal of Computer Applications
Abstract

Facial expressions are a form of nonverbal communication: they convey a person's emotional state. Facial expression plays an important role in face-to-face human-computer communication, and automatic facial expression synthesis has become a popular research area. It can be applied in many fields, such as physiology, education, and criminal investigation, including the analysis of criminal tendencies, to gain clues about a person's mental state. Although considerable effort has been made to enable computers to speak like human beings, how to express rich semantic information through facial expression remains a challenging problem. This paper presents a novel approach using artificial neural networks. It proposes two different approaches, with different methods, for facial expression synthesis based on artificial neural networks (ANNs): first, modeling using Hidden Markov Models; second, modeling using Recurrent Neural Networks.
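As a rough illustration of the second approach mentioned above, the sketch below shows a minimal Elman-style recurrent network, written in plain NumPy, that unrolls a one-hot emotion label into a short sequence of facial animation parameters (e.g., MPEG-4 FAP-like values). The layer sizes, parameter names (n_emotions, n_faps, seq_len), and the untrained random weights are illustrative assumptions for this survey's context, not the architecture used in the paper.

```python
# Minimal sketch (assumption: not the paper's actual model) of an Elman-style
# recurrent network that unrolls an emotion label into a sequence of facial
# animation parameters. Weights are random, so the output is only structural.
import numpy as np

rng = np.random.default_rng(0)

n_emotions = 6      # e.g., the six basic emotions
n_hidden   = 32     # recurrent hidden units
n_faps     = 10     # facial animation parameters per frame (illustrative)
seq_len    = 25     # number of frames to generate

# Randomly initialised weights; a real system would learn these from data.
W_in  = rng.normal(scale=0.1, size=(n_hidden, n_emotions))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_faps, n_hidden))
b_h   = np.zeros(n_hidden)
b_out = np.zeros(n_faps)

def synthesize(emotion_id: int) -> np.ndarray:
    """Unroll the recurrent net for seq_len steps and return a
    (seq_len, n_faps) array of facial animation parameters."""
    x = np.zeros(n_emotions)
    x[emotion_id] = 1.0            # one-hot emotion input, held constant
    h = np.zeros(n_hidden)         # recurrent hidden state
    frames = []
    for _ in range(seq_len):
        h = np.tanh(W_in @ x + W_rec @ h + b_h)   # recurrent update
        frames.append(W_out @ h + b_out)          # per-frame parameters
    return np.stack(frames)

fap_sequence = synthesize(emotion_id=3)
print(fap_sequence.shape)          # (25, 10)
```

The Hidden Markov Model variant discussed in the paper would instead model the frame sequence as emissions from a small set of hidden expression states; the recurrent formulation above simply replaces that discrete state with a continuous hidden vector.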

Index Terms

Computer Science
Information Sciences

Keywords

Emotional Facial Expression Modeling, Face Synthesis, Facial Animation, Hidden Markov Models, Recurrent Neural Networks