
Multimodal Interpretation Gesture Recognition System: A Review

IJCA Proceedings on National Conference on Recent Trends in Computing
© 2012 by IJCA Journal
NCRTC - Number 7
Year of Publication: 2012
Authors:
S. A. Chhabria
R. V. Dharaskar
V. M. Thakre

S A Chhabria, R V Dharaskar and V M Thakre. Article: Multimodal Interpretation Gesture Recognition System: A Review. IJCA Proceedings on National Conference on Recent Trends in Computing NCRTC(7):35-39, May 2012. Full text available. BibTeX

@article{key:article,
	author = {S. A. Chhabria and R. V. Dharaskar and V. M. Thakre},
	title = {Article: Multimodal Interpretation Gesture Recognition System: A Review},
	journal = {IJCA Proceedings on National Conference on Recent Trends in Computing},
	year = {2012},
	volume = {NCRTC},
	number = {7},
	pages = {35-39},
	month = {May},
	note = {Full text available}
}

Abstract

Multimodal systems allow humans to interact with machines through multiple modalities such as speech, gesture, and gaze. This paper discusses the multimodal systems developed so far, including Put-That-There, CUBRICON, XTRA, QuickSet, and RIA with MIND. The growing interest in multimodal interface design is inspired in large part by the goal of supporting more transparent, flexible, efficient, and powerfully expressive means of human–computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the population, and function more reliably under realistic and challenging usage conditions. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual-reality systems for simulation and training, to field-medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among them are the development of cognitive theories to guide multimodal system design, and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that function more robustly and adaptively, with support for collaborative multi-person use. Gesture interpretation can be seen as a way for computers to begin to understand human body language, building a richer bridge between machines and humans than primitive text user interfaces or even GUIs, which still limit the majority of input to keyboard and mouse.
It has also become increasingly evident that the difficulties encountered in analyzing and interpreting individual sensing modalities may be overcome by integrating them into a multimodal human–computer interface. This research can benefit from many disparate fields of study that increase our understanding of the different human communication modalities and their potential roles in human–computer interfaces, which can help, for example, handicapped persons control their wheelchairs, experts perform computer-assisted surgery, and workers in domains such as mining.
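The integration idea described above is often realized as late fusion: each modality is recognized independently, and the results are combined by aligning them in time, in the spirit of Put-That-There pairing a spoken "put that" with a pointing gesture. The sketch below is not from the paper; it is a minimal, hypothetical illustration in which event names, the `fuse` function, and the one-second alignment window are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # e.g. "speech" or "gesture"
    value: str      # recognized token, e.g. "put that" or "point@(3,4)"
    time: float     # timestamp in seconds

def fuse(events, window=1.0):
    """Pair each speech event with the nearest gesture event occurring
    within `window` seconds (a simple time-alignment late-fusion rule)."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in speech:
        near = [g for g in gestures if abs(g.time - s.time) <= window]
        if near:
            g = min(near, key=lambda g: abs(g.time - s.time))
            commands.append((s.value, g.value))
    return commands

events = [
    InputEvent("speech", "put that", 0.2),
    InputEvent("gesture", "point@(3,4)", 0.5),
    InputEvent("speech", "there", 2.0),
    InputEvent("gesture", "point@(7,1)", 2.3),
]
print(fuse(events))  # [('put that', 'point@(3,4)'), ('there', 'point@(7,1)')]
```

Real systems such as QuickSet use far richer unification over typed feature structures, but the core step shown here, resolving a deictic spoken reference against a temporally nearby gesture, is the same.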

References

  • Christopher A. Robbins, "Speech and Gesture Based Multimodal Interface Design", 2004.
  • Joyce Y. Chai, "MIND: A Context-Based Multimodal Interpretation Framework in Conversational Systems".
  • Marcelo Worsley and Michael Johnston, "Multimodal Interactive Spaces: MagicTV and MagicMap", IEEE, 2010.
  • Meng, Yuyang Zhang, and Yaochu Jin, "Autonomous Self-Reconfiguration of Modular Robots by Evolving a Hierarchical Model", 2011.
  • Boucher, R. Canal, T.-Q. Chu, A. Drogoul, B. Gaudou, V. T. Le, V. Moraru, N. Van Nguyen, Q. A. N. Vu, P. Taillandier, F. Sempe, and S. Stinckwich, "A Real-Time Hand Gesture System based on Evolutionary Search", in Safety, Security and Rescue Robotics (SSRR), 2011 IEEE International Workshop on, pages 16, 2011.
  • Rami Abielmona, Emil M. Petriu, Moufid Harb, and Slawo Wesolkowski, "Mission-Driven Robotics for Territorial Security", IEEE Computational Intelligence Magazine, pp. 55–67, Feb 2011.
  • Surakka, Martti Juhola, and Jukka Lekkala, "A Wearable, Wireless Gaze Tracker with Integrated Selection Command Source for Human–Computer Interaction", IEEE, 2011.
  • Sharon Oviatt, Phil Cohen, Lizhong Wu, John Vergo, Lisbeth Duncan, Bernhard Suhm, Josh Bers, Thomas Holzman, Terry Winograd, James Landay, Jim Larson, and David Ferro, "Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions", Human-Computer Interaction, vol. 15, pp. 263–322, 2000.
  • Joyce Chai, Shimei Pan, Michelle X. Zhou, and Keith Houck, "Context-based Multimodal Input Understanding in Conversational Systems", Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI'02), © 2002 IEEE.