Research Article

Multimodal Interpretation Gesture Recognition System: A Review

Published in May 2012 by S. A. Chhabria, R. V. Dharaskar, V. M. Thakre
National Conference on Recent Trends in Computing
Foundation of Computer Science USA
NCRTC - Number 7
May 2012
Authors: S. A. Chhabria, R. V. Dharaskar, V. M. Thakre

S. A. Chhabria, R. V. Dharaskar, V. M. Thakre. Multimodal Interpretation Gesture Recognition System: A Review. National Conference on Recent Trends in Computing. NCRTC, 7 (May 2012), 35-39.

@article{chhabria2012multimodal,
author = { S. A. Chhabria, R. V. Dharaskar, V. M. Thakre },
title = { Multimodal Interpretation Gesture Recognition System: A Review },
journal = { National Conference on Recent Trends in Computing },
issue_date = { May 2012 },
volume = { NCRTC },
number = { 7 },
month = { May },
year = { 2012 },
issn = { 0975-8887 },
pages = { 35-39 },
numpages = { 5 },
url = { /proceedings/ncrtc/number7/6568-1058/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 National Conference on Recent Trends in Computing
%A S. A. Chhabria
%A R. V. Dharaskar
%A V. M. Thakre
%T Multimodal Interpretation Gesture Recognition System: A Review
%J National Conference on Recent Trends in Computing
%@ 0975-8887
%V NCRTC
%N 7
%P 35-39
%D 2012
%I International Journal of Computer Applications
Abstract

Multimodal systems allow humans to interact with machines through multiple modalities such as speech, gesture, and gaze. This paper discusses the different multimodal systems that have been developed so far, including Put-That-There, CUBRICON, XTRA, QuickSet, and RIA with MIND. The growing interest in multimodal interface design is inspired in large part by the goal of supporting more transparent, flexible, efficient, and powerfully expressive means of human–computer interaction than in the past. Multimodal interfaces are expected to support a wider range of diverse applications, be usable by a broader spectrum of the average population, and function more reliably under realistic and challenging usage conditions. We also describe a diverse collection of state-of-the-art multimodal systems that process users' spoken and gestural input. These applications range from map-based and virtual-reality systems for simulation and training, to field-medic systems for mobile use in noisy environments, to web-based transactions and standard text-editing applications that will reshape daily computing and have a significant commercial impact. To realize successful multimodal systems of the future, many key research challenges remain to be addressed. Among these challenges are the development of cognitive theories to guide multimodal system design and the development of effective natural language processing, dialogue processing, and error-handling techniques. In addition, new multimodal systems will be needed that can function more robustly and adaptively, and with support for collaborative multiperson use. Gesture interpretation can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs, which still limit the majority of input to the keyboard and mouse. It has also become increasingly evident that the difficulties encountered in the analysis and interpretation of individual sensing modalities may be overcome by integrating them into a multimodal human–computer interface. This research can benefit from many disparate fields of study that increase our understanding of the different human communication modalities and their potential roles in human–computer interfaces, which can, for example, enable handicapped persons to control a wheelchair, assist experts in computer-assisted surgery, or support mining operations.
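To illustrate the modality-integration idea discussed in the abstract, the following minimal Python sketch shows one common strategy, late fusion of independently scored speech and gesture hypotheses, applied to a "put that there" style command. It is only an illustrative sketch: the Hypothesis class, the fuse function, its noisy-OR reinforcement and weighting scheme, and the example labels are all hypothetical and are not taken from the paper or from any of the systems it reviews.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str         # interpreted meaning, e.g. a referenced object or location
    confidence: float  # recognizer score in [0, 1]

def fuse(speech: Hypothesis, gesture: Hypothesis, w_speech: float = 0.5) -> Hypothesis:
    """Late fusion of one speech and one gesture hypothesis.

    If both modalities agree on the label, their confidences reinforce each
    other (noisy-OR); otherwise the hypothesis with the larger weighted
    score is kept.
    """
    if speech.label == gesture.label:
        combined = 1.0 - (1.0 - speech.confidence) * (1.0 - gesture.confidence)
        return Hypothesis(speech.label, combined)
    if w_speech * speech.confidence >= (1.0 - w_speech) * gesture.confidence:
        return speech
    return gesture

# "Put that there": speech carries the action, while pointing gestures resolve
# the deictic references "that" (the object) and "there" (the target location).
obj = fuse(Hypothesis("lamp", 0.4), Hypothesis("lamp", 0.7))    # agreement -> reinforced
loc = fuse(Hypothesis("table", 0.3), Hypothesis("shelf", 0.8))  # conflict -> gesture wins
print(obj)  # Hypothesis(label='lamp', confidence=0.82)
print(loc)  # Hypothesis(label='shelf', confidence=0.8)

In this hypothetical scheme, agreement between modalities raises confidence above either recognizer alone, while a noisy modality (e.g. speech in a loud environment) can be overridden by a clearer one, which is the practical motivation for multimodal integration given in the abstract.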

References
  1. Christopher A. Robbins, "Speech and Gesture Based Multimodal Interface Design", 2004.
  2. Joyce Y. Chai, "MIND: A Context-Based Multimodal Interpretation Framework in Conversational Systems".
  3. Marcelo Worsley and Michael Johnston, "Multimodal Interactive Spaces: MagicTV and MagicMap", IEEE, 2010.
  4. Meng, Yuyang Zhang, and Yaochu Jin, "Autonomous Self-Reconfiguration of Modular Robots by Evolving a Hierarchical Model", 2011.
  5. Boucher, R. Canal, T.-Q. Chu, A. Drogoul, B. Gaudou, V. T. Le, V. Moraru, N. Van Nguyen, Q. A. N. Vu, P. Taillandier, F. Sempe, and S. Stinckwich, "A Real-Time Hand Gesture System Based on Evolutionary Search", in Safety, Security and Rescue Robotics (SSRR), 2011 IEEE International Workshop on, pages 16, 2011.
  6. Rami Abielmona, Emil M. Petriu, Moufid Harb, and Slawo Wesolkowski, "Mission-Driven Robotics for Territorial Security", IEEE Computational Intelligence Magazine, pp. 55-67, Feb. 2011.
  7. Surakka, Martti Juhola, and Jukka Lekkala, "A Wearable, Wireless Gaze Tracker with Integrated Selection Command Source for Human–Computer Interaction", IEEE Computational Intelligence Magazine, 2011.
  8. Sharon Oviatt, Phil Cohen, Lizhong Wu, John Vergo, Lisbeth Duncan, Bernhard Suhm, Josh Bers, Thomas Holzman, Terry Winograd, James Landay, Jim Larson, and David Ferro, "Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions", Human-Computer Interaction, Volume 15, pp. 263-322, 2000.
  9. Joyce Chai, Shimei Pan, Michelle X. Zhou, and Keith Houck, "Context-based Multimodal Input Understanding in Conversational Systems", Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI'02), 0-7695-1834-6/02 $17.00 © 2002 IEEE.
Index Terms

Computer Science
Information Sciences

Keywords

Human Computer Interaction, Gesture Interpretation, Multimodality, Noise