
iCrux – An HCI Based Novel Virtual Screen Interface with Artificial Intelligence

International Conference and Workshop on Emerging Trends in Technology
© 2011 by IJCA Journal
Number 14 - Article 5
Year of Publication: 2011
Authors:
Shashank Prasad
Amit Sawant
Rakshith Shettigar
Rahul Khokale
Shubhra Sinha

Shashank Prasad, Amit Sawant, Rakshith Shettigar, Rahul Khokale and Shubhra Sinha. iCrux – An HCI Based Novel Virtual Screen Interface With Artificial Intelligence. IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET) (14):28-35, 2011. Full text available. BibTeX

@article{prasad2011icrux,
	author = {Shashank Prasad and Amit Sawant and Rakshith Shettigar and Rahul Khokale and Shubhra Sinha},
	title = {iCrux -- An HCI Based Novel Virtual Screen Interface With Artificial Intelligence},
	journal = {IJCA Proceedings on International Conference and Workshop on Emerging Trends in Technology (ICWET)},
	year = {2011},
	number = {14},
	pages = {28-35},
	note = {Full text available}
}

Abstract

The paper proposes a state-of-the-art architecture for a novel virtual screen technology called 'iCrux', capable of turning any computer-powered screen into an artificially intelligent virtual screen. Virtual screen technology enables a user to interact with the operating system and operate the computer using the fingers alone, rendering traditional hardware input devices such as the mouse, keyboard, touchpad and touch-screen unnecessary. The fingers are virtually linked to the mouse pointer on the screen and can perform any mouse operation without any form of touch. iCrux aims at the development of an open-source, platform-independent and artificially intelligent virtual screen technology based purely on real-time image processing and computer vision, without the use of mechanical aids such as sensors, robotic arms, electronic devices, motion trackers, sound recorders, infrared light or lasers. The proposed technology has been implemented and tested by our research team using real-time video processing and a single camera, operating in unknown, random, non-plain, changing environments under varying lighting conditions. A comprehensive artificial intelligence module built into the technology constantly monitors the changing environment and responds and adapts intuitively, making the system highly robust and suitable for seamless deployment on any computer system.
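The paper does not publish iCrux's source code or exact algorithm, but the pipeline the abstract describes (a single camera, frame-by-frame motion analysis, and a fingertip virtually linked to the mouse pointer) can be illustrated with a minimal sketch. The frame-differencing heuristic below and the names `detect_fingertip` and `to_screen` are our assumptions for illustration, not the authors' method:

```python
# Illustrative sketch only -- NOT the iCrux algorithm, which is not
# published in the paper. Frames are plain 2-D lists of grayscale
# values (0-255) standing in for camera images.

def detect_fingertip(prev_frame, curr_frame, threshold=30):
    """Return (row, col) of the topmost changed pixel, or None.

    A pixel counts as 'moving' when its intensity differs from the
    previous frame by more than `threshold` -- a crude stand-in for
    real motion segmentation and fingertip tracking.
    """
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if abs(p - q) > threshold:
                return (r, c)  # topmost moving pixel ~ fingertip
    return None

def to_screen(point, frame_size, screen_size):
    """Linearly map camera-frame coordinates to screen coordinates."""
    fr, fc = frame_size
    sr, sc = screen_size
    r, c = point
    return (r * sr // fr, c * sc // fc)

# Tiny 4x4 "frames": one pixel brightens between frames,
# simulating a fingertip entering the camera's view.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[1][2] = 200

tip = detect_fingertip(prev, curr)
cursor = to_screen(tip, (4, 4), (1080, 1920))  # map to a 1080x1920 screen
```

A real system along these lines would replace the per-pixel loop with proper background subtraction and would smooth the cursor trajectory over time, which is presumably part of what the paper's adaptive AI module handles.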

References

  • Andrea T, Zoltan Foley-F, Carol S, 2005, Polymorphic letters: transforming pen movements to extend written expression, Conference on Human Factors in Computing Systems.
  • Andrew W, Nuria O, 2003, GWindows: robust stereo vision for gesture-based control of windows, Proceedings of the 5th International Conference on Multimodal Interfaces.
  • Bandara G.E.M.D.C., Pathirana S.D., Ranawana R.M., 2002, Use of Fuzzy Feature Descriptions to Recognize Handwritten Alphanumeric Characters, 1st Conference on Fuzzy Systems and Knowledge Discovery (Singapore, November 2002).
  • Daniel C K., Yeung D., 2002, Bidirectional Deformable Matching with Application to Character Extraction, IEEE Transactions on Pattern Analysis and Machine Intelligence (August 2002).
  • J. Zhong and S. Sclaroff, 2003, Segmenting Foreground Objects from a Dynamic Textured Background via a Robust Kalman Filter, ICCV, 2003.
  • Ognian B, Strahil S, Georgy G, 2007, Combined face recognition using wavelet packets and radial basis function neural network, CompSysTech '07: Proceedings of the 2007 International Conference on Computer Systems and Technologies.
  • M. A. Turk, A. P. Pentland, 1991, Face recognition using eigenfaces, In Proceedings, IEEE Conference on Computer Vision and Pattern Recognition, pages 586–591, 1991.
  • M. H. Yang, N. Ahuja, 1998, Detecting human face in color images, In Proceedings, IEEE International Conference on Image Processing, volume 1, pages 127–130, 1998.
  • Florian B, Hans G, Nicolas V, 2010, Touch-display keyboards: transforming keyboards into interactive surfaces, Proceedings of the 28th International Conference on Human Factors in Computing Systems.
  • Frank L, Frederic M, 2007, Hands-free mouse-pointer manipulation using motion-tracking and speech recognition, Proceedings of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces.
  • N. Friedman and S. Russell, 1997, Image Segmentation in Video Sequences: A Probabilistic Approach, Proceedings of the 13th Conference on Uncertainty in Artificial Intelligence.
  • Jean R W, Barry B, 1986, Interactive recognition of handprinted characters for computer input, ACM SIGCHI Bulletin, Volume 18 (July 1986).
  • J. S. Bomba, 1959, Alpha-numeric character recognition using local operations, IRE-AIEE-ACM Computer Conference, December 1-3, 1959.
  • Masakazu I, Tomohiko T, Koichi K, 2010, Memory-based recognition of camera-captured characters, Proceedings of the 9th IAPR International Workshop on Document Analysis Systems.
  • Paul V and Michael J, 2001, Robust Real-time Object Detection, Second International Workshop on Statistical and Computational Theories of Vision – Modeling, Learning, Computing and Sampling, Vancouver, Canada, July 13, 2001.
  • Y. Ren, C-S. Chua and Y-K. Ho, 2003, Motion Detection with Non-stationary Background, MVA, Springer-Verlag.
  • C. Papageorgiou, M. Oren, and T. Poggio, 1998, A General Framework for Object Detection, In International Conference on Computer Vision.
  • Romesh R, Vasile Palade, and G.E.M.D.C. Bandara, 2004, An Efficient Fuzzy Method for Handwritten Character Recognition, M. Gh. Negoita et al. (Eds.): KES 2004, LNAI 3214, pp. 698–707, 2004, Springer-Verlag Berlin Heidelberg.
  • Sami R, Jonna H, Saana K, Ashley C, Jukka L, 2007, Tap input as an embedded interaction method for mobile devices, Proceedings of the 1st International Conference on Tangible and Embedded Interaction.
  • Hrvoje B, Andrew D. W, Patrick B, 2006, Precise selection techniques for multi-touch screens, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 22-27, 2006).