Survey of Approaches for Building Emotion Detection Applications using a Multi-modal Approach

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2018
Authors:
Siddhesh More, Shruti Subramanyam, Kalyani Reddy, Komal Patil, Archana Chaugule
DOI: 10.5120/ijca2018916849

Siddhesh More, Shruti Subramanyam, Kalyani Reddy, Komal Patil and Archana Chaugule. Survey of Approaches for Building Emotion Detection Applications using a Multi-modal Approach. International Journal of Computer Applications 179(37):18-20, April 2018.

BibTeX

@article{10.5120/ijca2018916849,
	author = {Siddhesh More and Shruti Subramanyam and Kalyani Reddy and Komal Patil and Archana Chaugule},
	title = {Survey of Approaches for Building Emotion Detection Applications using a Multi-modal Approach},
	journal = {International Journal of Computer Applications},
	issue_date = {April 2018},
	volume = {179},
	number = {37},
	month = {Apr},
	year = {2018},
	issn = {0975-8887},
	pages = {18-20},
	numpages = {3},
	url = {http://www.ijcaonline.org/archives/volume179/number37/29282-2018916849},
	doi = {10.5120/ijca2018916849},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

Emotion detection is a challenging and exciting field in which users' data is analyzed to recognize emotions such as happiness, sadness, and anger. This data may come in one or more formats, such as audio, video, text, or still images. Relevant features are extracted from each format and fused together to produce an emotion label. Fusing data from two or more sources (modalities) is a further challenge; either feature-level or decision-level fusion is employed. This paper inspects and studies the various approaches to multi-modal extraction of emotions.
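To illustrate the distinction between the two fusion strategies mentioned above, the following sketch contrasts feature-level fusion (concatenating per-modality feature vectors before classification) with decision-level fusion (combining per-modality classifier outputs). This is a minimal illustrative example, not code from any of the surveyed systems; the emotion labels, modality names, and weighting scheme are assumptions.

```python
# Illustrative sketch of the two fusion strategies for two assumed
# modalities (audio and text). Not from the surveyed papers.

EMOTIONS = ["happy", "sad", "angry"]

def feature_level_fusion(audio_features, text_features):
    """Feature-level fusion: concatenate per-modality feature
    vectors into one vector for a single downstream classifier."""
    return audio_features + text_features

def decision_level_fusion(audio_probs, text_probs, audio_weight=0.5):
    """Decision-level fusion: combine each modality's class
    probabilities (here, a weighted average) and pick the
    highest-scoring emotion label."""
    fused = [
        audio_weight * a + (1 - audio_weight) * t
        for a, t in zip(audio_probs, text_probs)
    ]
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the audio classifier leans toward "angry" while the
# text classifier leans toward "sad"; equal weighting yields "sad".
label = decision_level_fusion([0.2, 0.3, 0.5], [0.1, 0.6, 0.3])
```

In practice the combination rule at the decision level may be a weighted sum, majority vote, or a meta-classifier trained on the per-modality outputs; the weighted average above is just one common choice.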

References

  1. R. Zatarain-Cabada, M. L. Barrón-Estrada, J. García-Lizárraga, G. Muñoz-Sandoval, “Java Tutoring System with Facial and Text Emotion Recognition”, Instituto Tecnológico de Culiacán, pp. 49–58, 2015.
  2. C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. Min Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan, “Analysis of Emotion Recognition Using Facial Expressions, Speech and Multimodal Information”, Proceedings of the 6th International Conference on Multimodal Interfaces, pp. 205–211, 2004.
  3. S. Emerich, E. Lupu, A. Apatean, “Emotions recognition by speech and facial expression analysis”, 17th European Signal Processing Conference, 2009.
  4. S. Poria, A. Hussain, E. Cambria, “Text-based sentiment analysis: Towards multimodal systems”, Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 169–176, 2011.
  5. L. Tian, D. Zheng, C. Zhu, “Image Classification Based on the Combination of Text Features and Visual Features”, ISI Journal, Volume 28, pp. 242–256, 2013.
  6. L.-P. Morency, R. Mihalcea, P. Doshi, “Multimodal Sentiment Analysis: Harvesting Opinions from the Web”, ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 169–176, 2011.
  7. P. Khorrami, T. Le Paine, K. Brady, C. Dagli, T. S. Huang, “How Deep Neural Networks Can Improve Emotion Recognition on Video Data”, 2016 IEEE International Conference on Image Processing (ICIP), 2016.
  8. R. Gupta, N. Malandrakis, and B. Xiao, “Multimodal prediction of affective dimensions and depression in human-computer interactions”, AVEC '14: Proceedings of the 4th International Workshop on Audio/Visual Emotion Challenge, pp. 33–40, 2014.

Keywords

Sentiment analysis, Emotion detection, Multi-modal approach, Decision-level fusion, Text mining, Image classification