Research Article

Combining Speech and Gender Classification for Effective Emotion Recognition

by Rahul Vivek Purohit, Syed A Imam
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 64 - Number 4
Year of Publication: 2013
DOI: 10.5120/10621-5341

Rahul Vivek Purohit, Syed A Imam. Combining Speech and Gender Classification for Effective Emotion Recognition. International Journal of Computer Applications 64, 4 (February 2013), 15-18. DOI=10.5120/10621-5341

@article{10.5120/10621-5341,
  author     = {Rahul Vivek Purohit and Syed A Imam},
  title      = {Combining Speech and Gender Classification for Effective Emotion Recognition},
  journal    = {International Journal of Computer Applications},
  issue_date = {February 2013},
  volume     = {64},
  number     = {4},
  month      = {February},
  year       = {2013},
  issn       = {0975-8887},
  pages      = {15-18},
  numpages   = {4},
  url        = {https://ijcaonline.org/archives/volume64/number4/10621-5341/},
  doi        = {10.5120/10621-5341},
  publisher  = {Foundation of Computer Science (FCS), NY, USA},
  address    = {New York, USA}
}
%0 Journal Article
%A Rahul Vivek Purohit
%A Syed A Imam
%T Combining Speech and Gender Classification for Effective Emotion Recognition
%J International Journal of Computer Applications
%@ 0975-8887
%V 64
%N 4
%P 15-18
%D 2013
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Applications of emotion recognition in consumer electronics are increasing day by day. However, the accuracy and stability of the decisions such appliances make depend largely on how reliably emotions are recognized, and performance can degrade drastically in the presence of interfering noise. This paper proposes a method that combines speech and gender classification and may improve recognition accuracy significantly. Results confirm that the proposed system can help improve recognition performance.
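The paper's pipeline is not reproduced on this page, but the title and keywords imply a two-stage design: estimate the speaker's gender first (typically from pitch) and then score the utterance against a gender-specific emotion model. Below is a minimal sketch of that idea, assuming a hypothetical pitch front end and placeholder model objects (estimate_pitch, female_model, and male_model are illustrative names, not the authors' code):

```python
import numpy as np

def estimate_pitch(frame: np.ndarray, sr: int) -> float:
    """Crude autocorrelation pitch estimate (stand-in for the paper's
    pitch front end); returns F0 in Hz. Assumes the frame is longer
    than one period of the lowest F0 searched (60 Hz)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search for the strongest peak in a plausible F0 range (60-400 Hz).
    lo, hi = sr // 400, sr // 60
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def recognize_emotion(frame: np.ndarray, sr: int,
                      female_model, male_model) -> str:
    """Gate the emotion classifier on gender before scoring.

    A mean F0 above ~165 Hz is treated as female, below as male --
    a common rule of thumb, not necessarily the paper's threshold.
    """
    f0 = estimate_pitch(frame, sr)
    model = female_model if f0 > 165.0 else male_model
    return model.predict(frame)
```

Gating on gender before emotion scoring shrinks the within-class variability each emotion model has to absorb, which is the intuition behind combining the two classifiers.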

Index Terms

Computer Science
Information Sciences

Keywords

Speech recognition, gender classification, subharmonic-to-harmonic ratio
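The subharmonic-to-harmonic ratio named in the keywords refers to Sun's pitch-determination idea: compare spectral energy at integer multiples of a candidate F0 with energy at half-integer multiples, and favor candidates whose harmonics dominate. The sketch below is a simplified illustration of that principle; the FFT size, harmonic count, and search range are assumptions, and the published algorithm works on a log-frequency spectrum with spectral-shift logic rather than this direct difference score.

```python
import numpy as np

def shr_pitch(frame: np.ndarray, sr: int,
              f0_min: float = 60.0, f0_max: float = 400.0) -> float:
    """Pick the candidate F0 whose harmonics dominate its subharmonics."""
    n_fft = 4096
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))

    def band_energy(f0: float, multiples) -> float:
        # Sum spectral magnitude at the given multiples of f0.
        total = 0.0
        for k in multiples:
            idx = int(round(k * f0 * n_fft / sr))
            if idx < len(spectrum):
                total += spectrum[idx]
        return total

    best_f0, best_score = f0_min, -np.inf
    for f0 in np.arange(f0_min, f0_max, 1.0):
        harmonic = band_energy(f0, range(1, 6))                  # f0, 2*f0, ..., 5*f0
        subharmonic = band_energy(f0, [k + 0.5 for k in range(1, 6)])  # 1.5*f0, 2.5*f0, ...
        score = harmonic - subharmonic   # high when f0 (not f0/2) is the true pitch
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0
```

Because female and male pitch ranges overlap only narrowly, a robust F0 estimate of this kind is what makes the gender gate in the recognition pipeline workable under noise.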