Research Article

Automatic Music Mood Recognition using Support Vector Regression

by Manisha Sarode, D. G. Bhalke
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 163 - Number 5
Year of Publication: 2017
Authors: Manisha Sarode, D. G. Bhalke
10.5120/ijca2017913533

Manisha Sarode, D. G. Bhalke. Automatic Music Mood Recognition using Support Vector Regression. International Journal of Computer Applications 163, 5 (Apr 2017), 32-35. DOI=10.5120/ijca2017913533

@article{ 10.5120/ijca2017913533,
author = { Manisha Sarode, D. G. Bhalke },
title = { Automatic Music Mood Recognition using Support Vector Regression },
journal = { International Journal of Computer Applications },
issue_date = { Apr 2017 },
volume = { 163 },
number = { 5 },
month = { Apr },
year = { 2017 },
issn = { 0975-8887 },
pages = { 32-35 },
numpages = {4},
url = { https://ijcaonline.org/archives/volume163/number5/27394-2017913533/ },
doi = { 10.5120/ijca2017913533 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Manisha Sarode
%A D. G. Bhalke
%T Automatic Music Mood Recognition using Support Vector Regression
%J International Journal of Computer Applications
%@ 0975-8887
%V 163
%N 5
%P 32-35
%D 2017
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Music is a language of emotions, and hence music emotion could be useful in music understanding, recommendation, retrieval, and other music-related applications. Many issues in music emotion recognition have been addressed by different disciplines, such as physiology, psychology, cognitive science, and musicology. Music emotion regression is considered more appropriate than classification for music emotion retrieval, since it resolves some of the ambiguities of emotion classes. We present a music emotion recognition system based on the support vector regression (SVR) method. The recognition process consists of three steps: (i) several music features are extracted from the music signal; (ii) those features are mapped onto emotion categories of Thayer’s two-dimensional emotion model; (iii) two regression functions are trained using SVR, and arousal and valence values are then predicted.
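The three-step process described above can be illustrated with a short sketch. The code below is not the authors' implementation; it assumes scikit-learn's SVR for the two regression functions and substitutes randomly generated feature vectors and arousal/valence annotations for the real extracted music features and labels.

# Minimal sketch (assumed setup, not the authors' code): two support vector
# regressors, one per emotion dimension, trained on per-clip feature vectors.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: one row of acoustic features per music clip, plus
# ground-truth arousal/valence annotations in [-1, 1] on Thayer's plane.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # e.g. MFCC / spectral statistics
y_arousal = rng.uniform(-1, 1, size=200)
y_valence = rng.uniform(-1, 1, size=200)

# One RBF-kernel SVR per dimension (kernel and hyperparameters are assumptions).
arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
arousal_model.fit(X, y_arousal)
valence_model.fit(X, y_valence)

# A new clip is mapped to a point (valence, arousal) in the 2-D emotion plane.
new_clip = rng.normal(size=(1, 20))
print(valence_model.predict(new_clip), arousal_model.predict(new_clip))

Training a separate regressor per dimension keeps the arousal and valence predictions independent, so a new clip is placed as a continuous point on Thayer's valence-arousal plane rather than forced into a discrete mood class.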

References
  1. Y. Feng, Y. Zhuang, and Y. Pan, “Popular music retrieval by detecting mood,” Proc. ACM SIGIR, pp. 375–376, 2003.
  2. T. Li and M. Ogihara, “Content-based music similarity search and emotion detection,” in Proc. Int. Conf. Acoust., Speech, Signal Process., Toulouse, France, 2006, pp. 17–21.
  3. L. Lu, D. Liu, and H.-J. Zhang, “Automatic mood detection and tracking of music audio signals,” IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 1, pp. 5–18, Jan. 2006.
  4. Y.-H. Yang, Y.-C. Lin, Y.-F. Su, and H. H. Chen, “A regression approach to music emotion recognition,” IEEE Trans. Audio, Speech, Lang. Process., vol. 16, no. 2, pp. 448–457, Feb. 2008.
  5. T.-L. Wu and S.-K. Jeng, “Extraction of segments of significant emotional expressions in music,” in Proc. Int. Workshop Comput. Music Audio Technol., 2006, pp. 76–80.
  6. M.-Y. Wang, N.-Y. Zhang, and H.-C. Zhu, “User-adaptive music emotion recognition,” in Proc. Int. Conf. Sig. Process., 2004, pp. 1352–1355.
  7. Y.-H. Yang, C.-C. Liu, and H. H. Chen, “Music emotion classification: A fuzzy approach,” in Proc. ACM Multimedia, Santa Barbara, CA, 2006, pp. 81–84.
  8. A. Hanjalic and L.-Q. Xu, “Affective video content representation and modeling,” IEEE Trans. Multimedia, vol. 7, no. 1, pp. 143–154, Feb. 2005.
  9. M. D. Korhonen, D. A. Clausi, and M. E. Jernigan, “Modeling emotional content of music using system identification,” IEEE Trans. Syst., Man, Cybern., vol. 36, no. 3, pp. 588–599, Jun. 2006.
  10. E. Schubert, “Measurement and time series analysis of emotion in music,” Ph.D. dissertation, School of Music and Music Education, Univ. New South Wales, Sydney, NSW, Australia, 1999.
  11. R. E. Thayer, The Biopsychology of Mood and Arousal. New York: Oxford Univ. Press, 1989.
  12. A. J. Smola and B. Schölkopf, “A tutorial on support vector regression,” Statist. Comput., pp. 199–222, 2004.
  13. G. Tzanetakis and P. Cook, “Musical genre classification of audio signals,” IEEE Trans. Speech Audio Process., vol. 10, no. 5, pp. 293–302, Jul. 2002.
  14. A. Meng, P. Ahrendt, J. Larsen, and L. K. Hansen, “Temporal feature integration for music genre classification,” IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 5, pp. 1654–1663, Jul. 2007.
  15. C.-W. Hsu and C.-J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Trans. Neural Netw., vol. 13, no. 2, pp. 415–425, 2002.
  16. A. Sen and M. Srivastava, Regression Analysis: Theory, Methods, And Applications. New York: Springer, 1990.
  17. N. C. Maddage, C. Xu, M. S. Kankanhalli, and X. Shao, “Content-based music structure analysis with applications to music semantics understanding,” in Proc. ACM Multimedia, 2004, pp. 112–119.
Index Terms

Computer Science
Information Sciences

Keywords

Arousal, valence, music emotion recognition (MER), support vector regression.