Voice Recognition In Automobiles

International Journal of Computer Applications
© 2010 by IJCA Journal
Number 6 - Article 2
Year of Publication: 2010
Authors:
Sarbjeet Singh
Sukhvinder Singh
Mandeep Kour
Sonia Manhas
DOI: 10.5120/1084-1414

Sarbjeet Singh, Sukhvinder Singh, Mandeep Kour and Sonia Manhas. Voice Recognition In Automobiles. International Journal of Computer Applications 6(6):7–11, September 2010. Published by Foundation of Computer Science. BibTeX

@article{singh2010voice,
	author = {Sarbjeet Singh and Sukhvinder Singh and Mandeep Kour and Sonia Manhas},
	title = {Voice Recognition In Automobiles},
	journal = {International Journal of Computer Applications},
	year = {2010},
	volume = {6},
	number = {6},
	pages = {7--11},
	month = {September},
	note = {Published By Foundation of Computer Science}
}

Abstract

Creating a car controlled by the human voice is an innovative concept. In this paper we use a speech recognition algorithm together with algorithms that act on the users' commands. A switching concept is used: the remote control is provided with a button, and the speech recognition process starts only after that button is pressed. When the user then gives a command such as opening a window, the speech recognition system processes it and the respective window opens. The other commands are processed in the same way.
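
As a rough illustration of the push-to-talk flow described above, the Python sketch below shows one way the command dispatch could be structured. It is not the authors' implementation: recognize_speech is a stand-in for the speech recognition engine, and open_window/close_window are hypothetical placeholders for the vehicle's window actuators.

# Minimal sketch of the push-to-talk command flow (hypothetical, not the
# system described in the paper). recognize_speech stands in for the real
# speech recognition engine; here it simply passes a transcript through so
# the sketch runs stand-alone.

def recognize_speech(audio):
    return audio

def open_window():
    print("window opening")      # placeholder for the window actuator

def close_window():
    print("window closing")

# Map recognized command phrases to vehicle actions.
COMMANDS = {
    "open window": open_window,
    "close window": close_window,
}

def on_button_press(audio):
    # Called only after the remote's push-to-talk button is pressed;
    # until then, no speech is processed.
    text = recognize_speech(audio).strip().lower()
    action = COMMANDS.get(text)
    if action is not None:
        action()                 # e.g. the respective window opens
    # an unrecognized command is ignored until the next button press

if __name__ == "__main__":
    on_button_press("Open Window")   # simulated spoken command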
