
An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions

International Journal of Computer Applications
© 2014 by IJCA Journal
Volume 100 - Number 9
Year of Publication: 2014
Authors: Anukriti Dureha
DOI: 10.5120/17557-8163

Anukriti Dureha. Article: An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions. International Journal of Computer Applications 100(9):33-39, August 2014.

@article{key:article,
	author = {Anukriti Dureha},
	title = {Article: An Accurate Algorithm for Generating a Music Playlist based on Facial Expressions},
	journal = {International Journal of Computer Applications},
	year = {2014},
	volume = {100},
	number = {9},
	pages = {33-39},
	month = {August},
	note = {Full text available}
}

Abstract

Manually segregating a playlist and annotating songs according to a user's current emotional state is labor-intensive and time-consuming. Numerous algorithms have been proposed to automate this process, but existing approaches are slow, raise the overall system cost by requiring additional hardware (e.g., EEG systems and sensors), and achieve lower accuracy. This paper presents an algorithm that automates the generation of an audio playlist based on a user's facial expressions, saving the time and labor of performing the process manually. The proposed algorithm aims to reduce the overall computational time and cost of the designed system while increasing its accuracy. The facial expression recognition module of the proposed algorithm is validated by testing the system against user-dependent and user-independent datasets. Experimental results show 100% accuracy on the user-dependent dataset; on the user-independent dataset, joy and surprise are recognized with 100% accuracy, while sadness, anger and fear are recognized with 84.3%, 80% and 66% accuracy respectively, giving an overall emotion recognition accuracy of 86%. For audio, 100% recognition rates are obtained for sad, sad-anger and joy-anger, while joy and anger are recognized at 95.4% and 90% respectively, giving an overall audio emotion recognition accuracy of 98%. Implementation and testing of the proposed algorithm are carried out using only an inbuilt camera, so the proposed algorithm successfully reduces the overall cost of the system. On average, the proposed algorithm takes 1.10 s to generate a playlist from a facial expression, yielding better performance in terms of computational time than algorithms in the existing literature.
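The core idea described in the abstract — selecting songs whose audio-emotion class matches a detected facial expression — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the emotion labels follow those reported in the abstract, while the song library, its tags, the expression-to-audio mapping for surprise and fear, and the `generate_playlist` helper are all assumptions made for the example.

```python
# Minimal sketch of emotion-driven playlist generation (hypothetical
# example, not the paper's algorithm). Songs are assumed to be
# pre-annotated with the audio-emotion classes reported in the abstract.
SONG_LIBRARY = {
    "joy":       ["upbeat_track_1.mp3", "upbeat_track_2.mp3"],
    "sad":       ["slow_track_1.mp3", "slow_track_2.mp3"],
    "anger":     ["intense_track_1.mp3"],
    "joy-anger": ["energetic_track_1.mp3"],
    "sad-anger": ["brooding_track_1.mp3"],
}

# Facial-expression classes (from the abstract) mapped to audio-emotion
# classes; the surprise and fear mappings are illustrative assumptions.
EXPRESSION_TO_AUDIO = {
    "joy": "joy",
    "sad": "sad",
    "anger": "anger",
    "surprise": "joy",
    "fear": "sad",
}

def generate_playlist(expression: str) -> list:
    """Return songs whose audio-emotion tag matches the detected facial
    expression; returns an empty list for an unrecognized expression."""
    audio_class = EXPRESSION_TO_AUDIO.get(expression)
    return list(SONG_LIBRARY.get(audio_class, []))
```

In the full system, `expression` would come from the facial expression recognition module run on camera frames (here it is simply a string), and the library would be the user's own annotated collection rather than a hard-coded dictionary.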
