
A Survey paper on Facial Expression Synthesis using Artificial Neural Network

IJCA Proceedings on National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015)
© 2015 by IJCA Journal
NCKITE 2015 - Number 1
Year of Publication: 2015
Authors:
Deepti Chandra
Rajendra Hegadi
Sanjeev Karmakar

Deepti Chandra, Rajendra Hegadi and Sanjeev Karmakar. Article: A Survey paper on Facial Expression Synthesis using Artificial Neural Network. IJCA Proceedings on National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015) NCKITE 2015(1):19-26, July 2015. Full text available. BibTeX

@article{key:article,
	author = {Deepti Chandra and Rajendra Hegadi and Sanjeev Karmakar},
	title = {Article: A Survey paper on Facial Expression Synthesis using Artificial Neural Network},
	journal = {IJCA Proceedings on National Conference on Knowledge, Innovation in Technology and Engineering (NCKITE 2015)},
	year = {2015},
	volume = {NCKITE 2015},
	number = {1},
	pages = {19-26},
	month = {July},
	note = {Full text available}
}

Abstract

Facial expressions are a form of nonverbal communication: they convey a person's emotional state. Facial expression plays an important role in face-to-face human-computer communication, and automatic facial expression synthesis has become a popular research area. It can be applied in many fields, such as physiology, education, and criminal investigation, where analysis of a tendency toward crime can give a clue about a person's mental state. Although considerable effort has been made to enable computers to speak like human beings, expressing rich semantic information through facial expression remains a challenging problem. This paper presents a novel approach using artificial neural networks (ANNs) and proposes two different methods for facial expression synthesis: first, modeling using Hidden Markov Models; second, modeling using Recurrent Neural Networks.
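The abstract names a recurrent-network approach but gives no architecture details. As a purely illustrative sketch (not the paper's method), the core idea of recurrent expression synthesis can be shown with a minimal Elman-style cell that maps an emotion code and the previous hidden state to a vector of facial animation parameters; all dimensions, weights, and the parameter encoding below are hypothetical:

```python
import math

def tanh_vec(v):
    # Element-wise tanh nonlinearity for the hidden state.
    return [math.tanh(x) for x in v]

def matvec(W, v):
    # Plain matrix-vector product over nested lists.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rnn_step(x, h_prev, Wxh, Whh, Who):
    """One recurrence: h_t = tanh(Wxh x + Whh h_{t-1}); y_t = Who h_t.

    x is an emotion code, y_t a vector of synthesized expression
    parameters (e.g. FAP-like values) for one animation frame.
    """
    pre = [a + b for a, b in zip(matvec(Wxh, x), matvec(Whh, h_prev))]
    h = tanh_vec(pre)
    y = matvec(Who, h)
    return h, y

# Toy run with hypothetical weights: 2-dim emotion input,
# 3-dim hidden state, 2 output parameters per frame.
Wxh = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]]
Whh = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]]
Who = [[1.0, 0.5, -0.5], [0.2, -0.1, 0.3]]

h = [0.0, 0.0, 0.0]
emotion = [1.0, 0.0]      # one-hot code for a single emotion class
for _ in range(3):        # unroll a short frame sequence
    h, params = rnn_step(emotion, h, Wxh, Whh, Who)
```

The recurrence lets each frame's parameters depend on the preceding frames, which is what makes recurrent models a natural fit for temporally smooth expression trajectories.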

References

  • B. Abboud, F. Davoine, and M. Dang. Expressive Face Recognition and Synthesis. In Proceedings of IEEE CVPR Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction, Madison, U.S.A., 2003.
  • J. Ahlberg. Extracting MPEG-4 FAPS from Video. In I. S. Pandzic, and R. Forchheimer, editors, MPEG-4 Facial Animation – the Standard, Implementation and Applications,John Wiley & Sons, 2002.
  • E. André, T. Rist, S. van Mulken, M. Klesen, and S. Baldes. The Automated Design of Believable Dialogues for Animated Presentation Teams. In S. Prevost J. Cassell, J. Sullivan and E. Churchill, editors, Embodied Conversational Characters. MIT press, Cambridge, MA, 2000.
  • G. Ball and J. Breese. Emotion and Personality in a Conversational Agent. In S. Prevost, J. Cassell, J. Sullivan, and E. Churchill, editors, Embodied Conversational Characters. MIT Press, Cambridge, MA, 2000.
  • K. Balci. Xface: Open Source Toolkit for Creating 3D Faces of an Embodied Conversational Agent. In Proceedings of Smart Graphics, 2005.
  • K. Balci. Xface: MPEG-4 based Open Source Toolkit for 3D Facial Animation. In Proceedings of Advance Visual Interfaces, Bari, Italy, 2004.
  • J. Beskow, L. Cerrato, P. Cosi, E. Costantini, M. Nordstrand, F. Pianesi, M. Prete, and G. Svanfeldt. Preliminary Cross-cultural Evaluation of Expressiveness in Synthetic Faces. In E. Andrè, L. Dybkiaer, W. Minker, and P. Heisterkamp, editors, Affective Dialogue Systems ADS'04, Springer Verlag, 2004.
  • E. Bevacqua, M. Mancini, and C. Pelachaud. Speaking with Emotions. AISB 2004 Convention: Motion, Emotion and Cognition. Leeds, United Kingdom, 2004.
  • T. Bickmore and J. Cassell. Social Dialogue with Embodied Conversational Agents. In J. van Kuppevelt, L. Dybkjaer, and N. Bernsen, editors, Advances in Natural, Multimodal Dialogue Systems. Kluwer Academic Publishers, 2005.
  • V. Blanz, C. Basso, T. Poggio, and T. Vetter. Reanimating Faces in Images and Video. In Proceedings of EuroGraphics'03, 2003.
  • M. Brand. Voice Puppetry. In Proceedings of ACM SIGGRAPH'99, 1999.
  • T. D. Bui, D. Heylen, M. Poel, and A. Nijholt. Generation of Facial Expressions from Emotion Using a Fuzzy Rule Based System. In Proceedings of the 14th Australian Joint Conference on Artificial Intelligence (AI 2001), Adelaide, Australia, 2001.
  • J. Cassell, H. Vilhjalmsson, and T. Bickmore. BEAT: the Behavior Expression Animation Toolkit. In Proceedings of SIGGRAPH'01, 2001.
  • J. Cassell, T. Bickmore, L. Cambell, H. Vilhjalmsson, and H. Yan. Human Conversation as a System Framework: Designing Embodied Conversational Agents. In J. Cassel, J. Sullivan, S. Prevost, and E. Churchill, editors, Embodied Conversational Agents, MIT Press, 2000.
  • J. Cassell, M. Stone and H. Yan. Coordination and Context dependence in the Generation of Embodied Conversation. In Proceedings of First International Conference on Natural Language Generation, 2000.
  • J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, editors,Embodied Conversational Agents. MIT Press, 2000.
  • E. S. Chuang, H. Deshpande, and C. Bregler. Facial Expression Space Learning. In Proceedings of Pacific Graphics'02, 2002.
  • I. Cohen, A. Garg, and T. Huang. Emotion Recognition from Facial Expressions using Multilevel HMM. 2000.
  • M. M. Cohen, and D. W. Massaro. Modelling Coarticulation in Synthetic Visual Speech. In N. Magnenat-Thalmann, and D. Thalmann, editors, Models and Techniques in Computer Animation, Springer-Verlag, 1993.
  • M. M. Cohen, D. W. Massaro, and R. Clark. Training a Talking Head. In Proceedings of the 4th IEEE International Conference on Multimodal Interfaces (ICMI'02), Pittsburgh, PA, 2002.
  • J. Cohn, K. Schmidt, R. Gross, and P. Ekman. Individual Differences in Facial Expression: Stability over Time, Relation to Self-reported Emotion, and Ability to Inform Person Identification. In Proceedings of the International Conference on Multimodal User Interfaces (ICMI 2002), 2002.
  • T. F. Cootes and C. J. Taylor. Statistical Models of Appearance for Computer Vision. Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering, University of Manchester, Manchester M13 9PT, U.K., 2001.
  • R. R. Cornelious. Theoretical Approaches to Emotion. In Proceeding of ISCA Workshop on Speech and Emotion, Belfast, 2000.
  • E. Cosatto, J. Ostermann, H. P. Graf, and J. Schroeter. Lifelike Talking Faces for Interactive Services. In Proceedings of the IEEE, volume 91, number 9, 2003.
  • P. Cosi, A. Fusaro, D. Grigoletto, and G. Tisato. Data-Driven Tools for Designing Talking Heads Exploiting Emotional Attitudes. In Proceedings of Tutorial and Research Workshop "Affective Dialogue Systems", Germany, 2004.
  • P. Cosi, A. Fusaro, and G. Tisato. LUCIA: a New Italian Talking-Head Based on a Modified Cohen-Massaro's Labial Coarticulation Model. In Proceedings of Eurospeech 2003, Geneva, Switzerland, 2003.
  • E. Costantini, F. Pianesi, and P. Cosi. Evaluation of Synthetic Faces: Human Recognition of Emotional Facial Displays. In E. Andrè, L. Dybkiaer, W. Minker, and P. Heisterkamp, editors, Affective Dialogue Systems ADS '04, Springer-Verlag, 2004.
  • M. D'Amico, and G. Ferrigno. A Technique for the Evaluation of Derivatives from Noisy Biomechanical Data by a Model-Based Bandwidth-Selection Procedure. In Med. & Biol. Eng. & Comp. , 28, 1990.
  • X.-B. Gao, B. Xiao, et al. A Comparative Study of Three Graph Edit Distance Algorithms. In A. Abraham, A.-E. Hassanien, et al., editors, Foundations on Computational Intelligence, ISBN 978-3-642-01535-9, Springer, Vol. 5, SCI 205, pp. 223-242, 2009.
  • X.-B. Gao, W. Lu, et al. Image Quality Assessment: A Multiscale Geometric Analysis Based Framework and Examples. In G. Rozenberg, T. H. W. Back, and J. N. Kok, editors, Handbook of Natural Computing, ISBN 978-3-540-92911-6, Springer-Verlag, 2011.
  • C. Deng, X. -B. Gao et al. Robust Image Watermarking Based on Feature Regions. Multimedia Analysis, Processing and Communications(Edited by W. Lin, D. Tao, J. Kacprzyk, Z. Li, E. Izquierdo and H. Wang), ISBN: 978-3-642-19550-1, Springer-Verlag Berlin Heidelberg 2011, SCI 346, pp. 111-137.
  • B. Xiao, X. -B. Gao et al. Recognition of Sketches in Photos. Multimedia Analysis, Processing and Communications (Edited by W. Lin, D. Tao, J. Kacprzyk, Z. Li, E. Izquierdo and H. Wang), ISBN: 978-3-642-19550-1, Springer-Verlag Berlin Heidelberg 2011, SCI 346, pp. 239-262.
  • Salvador E. Ayala-Raggi, Leopoldo Altamirano-Robles and Janeth Cruz-Enriquez, "Face Image Synthesis and Interpretation Using 3D Illumination-Based AAM Models", New Approaches to Characterization and Recognition of Faces, InTech, 2011.
  • Mingli Song, Dacheng Tao, Shengpeng Sun, Chun Chen, and Jiajun Bu. Joint Sparse Learning for 3-D Facial Expression Generation. IEEE Transactions on Image Processing, 2013.
  • Qiming Hou and Kun Zhou. Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation. ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH 2014.
  • Jun Li, Weiwei Xu, and Zhiquan Cheng. Lightweight Wrinkle Synthesis for 3D Facial Modeling and Animation. Elsevier, pp. 117-122, 2015.