
Audio-Visual Speech Recognition for People with Speech Disorders

International Journal of Computer Applications
© 2014 by IJCA Journal
Volume 96 - Number 2
Year of Publication: 2014
Authors:
Elham S. Salama
Reda A. El-khoribi
Mahmoud E. Shoman
DOI: 10.5120/16770-6337

Elham S. Salama, Reda A. El-khoribi and Mahmoud E. Shoman. Audio-Visual Speech Recognition for People with Speech Disorders. International Journal of Computer Applications 96(2):51-56, June 2014. BibTeX:

@article{salama2014audiovisual,
	author = {Elham S. Salama and Reda A. El-khoribi and Mahmoud E. Shoman},
	title = {Audio-Visual Speech Recognition for People with Speech Disorders},
	journal = {International Journal of Computer Applications},
	year = {2014},
	volume = {96},
	number = {2},
	pages = {51-56},
	month = {June}
}

Abstract

Speech recognition for people with speech disorders is a difficult task due to their reduced motor control of the speech articulators. Multimodal speech recognition can be used to enhance the robustness of disordered speech recognition. This paper introduces an automatic speech recognition system for people with the dysarthria speech disorder, based on both acoustic and visual components. Mel-Frequency Cepstral Coefficients (MFCCs) are used as features representing the acoustic speech signal. For the visual counterpart, Discrete Cosine Transform (DCT) coefficients are extracted from the speaker's mouth region. Face and mouth regions are detected using the Viola-Jones algorithm. The acoustic and visual input features are then concatenated into one feature vector, and a Hidden Markov Model (HMM) classifier is applied to the combined acoustic-visual feature vector. The system is tested on isolated English words spoken by dysarthric speakers from the UA-Speech database. Results of the proposed system indicate that visual features are highly effective and can improve accuracy by 7.91% for speaker-dependent experiments and 3% for speaker-independent experiments.
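The feature-level (early) fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of 13 MFCCs per frame and 35 retained DCT coefficients, and the helper names `mouth_dct_features` and `fuse`, are assumptions for the example; the paper's exact dimensions and zig-zag truncation may differ.

```python
import numpy as np
from scipy.fftpack import dct

def mouth_dct_features(mouth_roi, n_coeffs=35):
    """2-D DCT of a grayscale mouth region (e.g. located by Viola-Jones);
    keep the low-frequency coefficients in zig-zag order as the visual
    feature vector. n_coeffs=35 is an illustrative choice."""
    d = dct(dct(mouth_roi.astype(float), axis=0, norm='ortho'),
            axis=1, norm='ortho')
    # Zig-zag scan: order coefficients by anti-diagonal (i + j),
    # lowest spatial frequencies first.
    idx = sorted(((i, j) for i in range(d.shape[0]) for j in range(d.shape[1])),
                 key=lambda p: (p[0] + p[1], p[0]))
    return np.array([d[i, j] for i, j in idx[:n_coeffs]])

def fuse(audio_feats, visual_feats):
    """Early fusion: concatenate the per-frame acoustic and visual
    vectors into one feature vector for the HMM classifier."""
    return np.concatenate([audio_feats, visual_feats])

# Toy frame: a 16x16 "mouth" patch and a 13-dim MFCC vector (placeholder).
mouth = np.random.default_rng(0).random((16, 16))
mfcc = np.zeros(13)
combined = fuse(mfcc, mouth_dct_features(mouth))
print(combined.shape)  # (48,)
```

Each per-frame fused vector would then be fed to an HMM (e.g. via HTK, as the paper's tooling suggests) for isolated-word classification.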

References

  • A. N. Mishra, Mahesh Chandra, Astik Biswas, and S. N. Sharan. 2013. Hindi phoneme-viseme recognition from continuous speech. International Journal of Signal and Imaging Systems Engineering (IJSISE).
  • Virginia Estellers and Jean-Philippe Thiran. 2012. Multi-pose lipreading and audio-visual speech recognition. EURASIP Journal on Advances in Signal Processing.
  • H. Kim, M. Hasegawa-Johnson, A. Perlman, J. Gunderson, T. Huang, K. Watkin, and S. Frame. 2008. Dysarthric speech database for universal access research. In Proceedings of Interspeech, Brisbane, Australia.
  • H. V. Sharma and M. Hasegawa-Johnson. 2010. State transition interpolation and MAP adaptation for HMM-based dysarthric speech recognition. NAACL HLT Workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
  • G. Jayaram and K. Abdelhamied. 1995. Experiments in dysarthric speech recognition using artificial neural networks. Journal of Rehabilitation Research and Development.
  • Heidi Christensen, Stuart Cunningham, Charles Fox, Phil Green, and Thomas Hain. 2012. A comparative study of adaptive, automatic recognition of disordered speech. In Proceedings of Interspeech. ISCA.
  • F. Rudzicz. 2011. Production knowledge in the recognition of dysarthric speech. Ph.D. thesis, Department of Computer Science, University of Toronto.
  • G. Potamianos, C. Neti, J. Luettin, and I. Matthews. 2004. Audio-visual automatic speech recognition: an overview. In Issues in Visual and Audio-Visual Speech Processing. MIT Press, Cambridge, MA.
  • H. McGurk and J. W. MacDonald. 1976. Hearing lips and seeing voices. Nature.
  • A. Q. Summerfield. 1987. Some preliminaries to a comprehensive account of audio-visual speech perception. In Hearing by Eye: The Psychology of Lip-Reading.
  • D. W. Massaro and D. G. Stork. 1998. Speech recognition and sensory integration. American Scientist.
  • Chikoto Miyamoto, Yuto Komai, Tetsuya Takiguchi, Yasuo Ariki, and Ichao Li. 2010. Multimodal speech recognition of a person with articulation disorders using AAM and MAF. In Proceedings of Multimedia Signal Processing (MMSP).
  • Ahmed Farag, Mohamed El Adawy, and Ahmed Ismail. 2013. A robust speech disorders correction system for Arabic language using visual speech recognition. Biomedical Research.
  • S. B. Davis and P. Mermelstein. 1980. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing (ASSP).
  • Paul Viola and Michael J. Jones. 2001. Rapid object detection using a boosted cascade of simple features. IEEE CVPR.
  • G. Potamianos, C. Neti, G. Gravier, A. Garg, and A. W. Senior. 2003. Recent advances in the automatic recognition of audiovisual speech. Proceedings of the IEEE.
  • G. Potamianos, H. P. Graf, and E. Cosatto. 1998. An image transform approach for HMM based automatic lipreading. In IEEE International Conference on Image Processing.
  • P. Scanlon, D. Ellis, and R. Reilly. 2003. Using mutual information to design class specific phone recognizers. In Proceedings of Eurospeech.
  • P. Scanlon and G. Potamianos. 2005. Exploiting lower face symmetry in appearance-based automatic speechreading. In Proceedings of the Workshop on Audio-Visual Speech Processing (AVSP).
  • L. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE.
  • Y. Tian, J. L. Zhou, H. Lin, and H. Jiang. 2006. Tree-based covariance modeling of hidden Markov models. IEEE Transactions on Audio, Speech and Language Processing.
  • S. Young, D. Kershaw, J. Odell, V. Valtchev, and P. Woodland. 2006. The HTK Book, Version 3.4. Cambridge University Press.
  • 2013. The OpenCV Reference Manual, Release 2.4.6.0. [Online]. Available: http://docs.opencv.org/opencv2refman.pdf.