Research Article

Voice Recognition in Automobiles

by Sarbjeet Singh, Sukhvinder Singh, Mandeep Kour, Sonia Manhas
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 6 - Number 6
Year of Publication: 2010
DOI: 10.5120/1084-1414

Sarbjeet Singh, Sukhvinder Singh, Mandeep Kour, and Sonia Manhas. Voice Recognition in Automobiles. International Journal of Computer Applications 6, 6 (September 2010), 7-11. DOI=10.5120/1084-1414

@article{ 10.5120/1084-1414,
author = { Sarbjeet Singh, Sukhvinder Singh, Mandeep Kour, Sonia Manhas },
title = { Voice Recognition in Automobiles },
journal = { International Journal of Computer Applications },
issue_date = { September 2010 },
volume = { 6 },
number = { 6 },
month = { September },
year = { 2010 },
issn = { 0975-8887 },
pages = { 7-11 },
numpages = {5},
url = { https://ijcaonline.org/archives/volume6/number6/1084-1414/ },
doi = { 10.5120/1084-1414 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Sarbjeet Singh
%A Sukhvinder Singh
%A Mandeep Kour
%A Sonia Manhas
%T Voice Recognition in Automobiles
%J International Journal of Computer Applications
%@ 0975-8887
%V 6
%N 6
%P 7-11
%D 2010
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Controlling a car with the human voice is an innovative concept. In this paper we use speech recognition algorithms that act on users' commands. A switching concept is used initially: the remote is provided with a button, and when that button is pressed the speech recognition process starts. When the user then commands a window to open, the speech recognition system processes the command and the respective window opens. Other commands are handled in the same way.
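
To make the flow concrete (button press, then recognition, then actuation), here is a minimal Python sketch of such a push-to-talk command dispatcher. It is only an illustration under stated assumptions: recognize_speech is a stand-in for any speaker-independent recognition engine, and the window-control functions and COMMANDS table are hypothetical placeholders, not part of the paper or of any real vehicle API.

# Minimal sketch of a push-to-talk voice command dispatcher.
# All names here are hypothetical placeholders for illustration only.

def recognize_speech() -> str:
    """Placeholder for a speech recognition engine: capture audio and
    return the recognized phrase. Simulated here with keyboard input."""
    return input("Command> ").strip().lower()

def open_window() -> None:
    # Placeholder for the actuator call that opens the window.
    print("Window opening.")

def close_window() -> None:
    # Placeholder for the actuator call that closes the window.
    print("Window closing.")

# Map recognized phrases to actions; anything else is ignored.
COMMANDS = {
    "open window": open_window,
    "close window": close_window,
}

def on_button_press() -> None:
    """Recognition starts only after the remote's button is pressed,
    mirroring the switching concept described in the abstract."""
    phrase = recognize_speech()
    action = COMMANDS.get(phrase)
    if action:
        action()
    else:
        print(f"Unrecognized command: {phrase!r}")

if __name__ == "__main__":
    on_button_press()

Gating recognition behind the button press, as the abstract describes, avoids the recognizer reacting to ordinary in-cabin conversation.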

Index Terms

Computer Science
Information Sciences

Keywords

Speaker Independent Speech Recognition, Dragon NaturallySpeaking Software