Research Article

Vision based Robot Navigation for Disaster Scenarios

by Sudheesh P., Gireesh Kumar T.
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 49 - Number 6
Year of Publication: 2012
Authors: Sudheesh P., Gireesh Kumar T.
10.5120/7635-0717

Sudheesh P., Gireesh Kumar T. Vision based Robot Navigation for Disaster Scenarios. International Journal of Computer Applications 49, 6 (July 2012), 36-39. DOI=10.5120/7635-0717

@article{ 10.5120/7635-0717,
author = { Sudheesh, P. and Gireesh Kumar, T. },
title = { Vision based Robot Navigation for Disaster Scenarios },
journal = { International Journal of Computer Applications },
issue_date = { July 2012 },
volume = { 49 },
number = { 6 },
month = { July },
year = { 2012 },
issn = { 0975-8887 },
pages = { 36-39 },
numpages = { 4 },
url = { https://ijcaonline.org/archives/volume49/number6/7635-0717/ },
doi = { 10.5120/7635-0717 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Journal Article
%A Sudheesh P.
%A Gireesh Kumar T.
%T Vision based Robot Navigation for Disaster Scenarios
%J International Journal of Computer Applications
%@ 0975-8887
%V 49
%N 6
%P 36-39
%D 2012
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper presents real-time vision-based robot navigation for disaster-management scenarios. The task is to navigate a robot through an unstructured environment using hand gestures. Navigation is an essential task when traversing a path in a disaster scenario. Approaches such as autonomous mapping and SLAM train the robot to build a map of the environment and plan its own path, but training the robot and constructing the map is time-consuming and tedious. In this approach, by contrast, the robot streams real-time video to the user, who in turn controls the robot with gestures. In addition to streaming video, the robot measures the distance to the closest obstacle using IR sensors. To perform the task associated with a detected gesture, the robot needs intelligence: the algorithm loaded into it that maps each recognized gesture to an action. We use principal component analysis together with image moments to identify the gestures and thereby control the robot. A real-time implementation is carried out on the iRobot platform.
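The paper's recognition pipeline (principal component analysis combined with image moments) can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the authors' implementation: it computes the seven Hu moment invariants of a binary hand silhouette as the shape descriptor, projects the descriptors with PCA, and classifies by nearest neighbour in the PCA subspace. The gesture labels (`open_palm`, `fist`) and the square/triangle stand-in silhouettes are hypothetical placeholders for real segmented hand masks.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a binary silhouette:
    translation-, scale- and rotation-invariant shape features."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moment (translation-invariant)
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):  # normalized central moment (scale-invariant)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def pca_fit(X, k=2):
    """Center the feature matrix and keep the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(feat, mean, comps):
    return comps @ (feat - mean)

def classify(img, templates, mean, comps):
    """Nearest-neighbour gesture label in PCA space."""
    z = pca_project(hu_moments(img), mean, comps)
    return min(templates, key=lambda lbl:
               np.linalg.norm(z - pca_project(templates[lbl], mean, comps)))

# Toy "gestures": a square blob vs. a triangular blob on a 100x100 grid.
square = np.zeros((100, 100)); square[30:70, 30:70] = 1
tri = np.tril(np.ones((100, 100)))
feats = {"open_palm": hu_moments(square), "fist": hu_moments(tri)}
mean, comps = pca_fit(np.stack(list(feats.values())), k=2)

# A translated square should still match "open_palm", since Hu
# moments are translation-invariant.
shifted = np.zeros((100, 100)); shifted[10:50, 50:90] = 1
print(classify(shifted, feats, mean, comps))
```

In a real deployment each video frame would first be segmented into a binary hand mask (e.g. by skin-colour thresholding, as in several of the cited works) before the moment features are extracted, and the recognized label would be mapped to a drive command for the robot.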

References
  1. N. H. Dardas and E. M. Petriu, "Hand gesture detection and recognition using principal component analysis", IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA), 2011, pp. 1-6.
  2. Deng-Yuan Huang, Wu-Chih Hu, and Sung-Hsiang Chang, "Vision-Based Hand Gesture Recognition Using PCA+Gabor Filters and SVM", Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP '09), 2009, pp. 1-4.
  3. Tudor Ioan Cerlinca, Stefan Gheorghe Pentiuc, Marius Cristian Cerlinca, and Radu Daniel Vatavu, "Hand Posture Recognition for Human Robot Interaction", Universitatea "Stefan cel Mare", 2006.
  4. Y. Wei, "Vision-based Human-robot Interaction and Navigation of Intelligent Service Robots", PhD Dissertation, Institute of Automation, Chinese Academy of Sciences, 2004.
  5. S. P. Priyal and P. K. Bora, "A study on static hand gesture recognition using moments", International Conference on Signal Processing and Communications (SPCOM), July 2010, pp. 1-5.
  6. T. Cerlinca, S. Pentiuc, and M. Cerlinca, "Hand posture recognition for human-robot interaction", Proceedings of the 2007 Workshop on Multimodal Interfaces in Semantic Interaction, 2007.
  7. H. K. Kim, J. D. Kim, D. G. Sim, and D. I. Oh, "A modified Zernike moment shape descriptor invariant to translation, rotation and scale for similarity-based image retrieval", Proceedings of the IEEE International Conference on Multimedia and Expo, Vol. 1, 2000, pp. 307-310.
  8. C. H. Teh and R. T. Chin, "On image analysis by the methods of moments", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, 1988, pp. 496-513.
  9. Pai-Hsiang Hsiao, H. T. Kung, and Koan-Sin Tan, "Video over TCP with Receiver-based Delay Control", Proceedings of ACM NOSSDAV, 2001.
  10. Introduction to Video Conferencing System, http://www.video.ja.net/intro/#top.
  11. Gurmeet Singh, "Secure Video Conferencing for Web Based Security Surveillance System", M.Tech thesis, Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, July 2006.
  12. C. Harrison and S. E. Hudson, "Pseudo-3D Video Conferencing with a Generic Webcam", Tenth IEEE International Symposium on Multimedia (ISM 2008), 2008, pp. 236-241.
  13. Monwar, "A Real-Time Face Recognition Approach from Video Sequence using Skin Colour Model and Eigenface Method", Canadian Conference on Electrical and Computer Engineering, 2006, pp. 2181-2185.
  14. Siddharth Swarup Rautaray and Anupam Agrawal, "A Real Time Hand Tracking System for Interactive Applications", International Journal of Computer Applications (0975-8887), Vol. 18, No. 6, March 2011.
  15. Yoav Freund and Robert E. Schapire, "A Short Introduction to Boosting", Journal of Japanese Society for Artificial Intelligence, 14(5):771-780, September 1999.
  16. Y. Freund and R. E. Schapire, "Experiments with a New Boosting Algorithm", Proceedings of the 13th International Conference on Machine Learning, 1996.
  17. J. L. Raheja, R. Shyam, and U. Kumar, "Hand Gesture Capture and Recognition Technique for Real-Time Video Stream", Proceedings (683) Artificial Intelligence and Soft Computing, 2009.
  18. Zhengmao Zou, P. Premaratne, R. Monaragala, N. Bandara, and M. Premaratne, "Dynamic hand gesture recognition system using moment invariants", 5th International Conference on Information and Automation for Sustainability (ICIAFs), 2010, pp. 108-113.
Index Terms

Computer Science
Information Sciences

Keywords

PCA, Image moments, Video streaming, Disaster management