
Efficient Object Recognition using Convolution Neural Networks Theorem

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2017
Authors:
Aarushi Thakral, Shaurya Shekhar, Akila Victor
DOI: 10.5120/ijca2017913123

Aarushi Thakral, Shaurya Shekhar and Akila Victor. Efficient Object Recognition using Convolution Neural Networks Theorem. International Journal of Computer Applications 161(2):36-47, March 2017.

BibTeX

@article{10.5120/ijca2017913123,
	author = {Aarushi Thakral and Shaurya Shekhar and Akila Victor},
	title = {Efficient Object Recognition using Convolution Neural Networks Theorem},
	journal = {International Journal of Computer Applications},
	issue_date = {March 2017},
	volume = {161},
	number = {2},
	month = {Mar},
	year = {2017},
	issn = {0975-8887},
	pages = {36-47},
	numpages = {12},
	url = {http://www.ijcaonline.org/archives/volume161/number2/27123-2017913123},
	doi = {10.5120/ijca2017913123},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

Object recognition is the process of identifying an object in an image, and various algorithms exist for this task. Appearance-based algorithms have demonstrated good efficiency; however, their performance degrades in the presence of clutter or when the background changes. We aim to overcome this issue by using the Convolution Neural Network (CNN) Theorem. The approach is shape based and has been shown to work well under a broad range of circumstances: varied lighting conditions, affine transformations, etc. It involves tiling, in which multiple layers of neurons process small portions of the image that are then combined to obtain better representations of the image; this allows the CNN to be translation-tolerant. The neural elements learn to recognize objects about which they have no previous information; this ‘learning’ is achieved because representations of the image are learned by the inner layers of the deep architecture of neurons. Unlike the RBM and the auto-encoder, which can learn only a single global weight matrix per layer, the CNN theorem uses shared weights in the convolution layers: the same filter (weight bank) is applied at every pixel in the layer, which reduces the memory footprint and improves performance.
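To make the shared-weight idea concrete, below is a minimal sketch (not taken from the paper) of a single convolution layer in plain NumPy; the function name conv2d_valid and all sizes are illustrative assumptions. One 3x3 filter is reused at every position of the image, so the layer holds only 9 parameters regardless of image size, whereas a fully connected layer mapping the same input to the same output would need one weight per input-output pair.

# Minimal sketch (illustrative, not the paper's implementation): weight
# sharing in a single convolution layer using NumPy only.
import numpy as np

def conv2d_valid(image, w):
    """Slide one shared filter over the image ('valid' padding, stride 1)."""
    H, W = image.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # local receptive field ("tile")
            out[i, j] = np.sum(patch * w)       # same weights at every location
    return out

rng = np.random.default_rng(0)
image = rng.random((32, 32))   # toy 32x32 grayscale image
w = rng.random((3, 3))         # one shared 3x3 filter (weight bank)

feature_map = conv2d_valid(image, w)
print(feature_map.shape)                                     # (30, 30)
print("conv parameters:", w.size)                            # 9 shared weights
print("dense parameters:", image.size * feature_map.size)    # one weight per input-output pair

Because the same filter is applied at every location, shifting the input simply shifts the resulting feature map by the same amount, which is the translation tolerance described in the abstract.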


Keywords

Recognition, Object, Neural, Features, Dataset, Training, Image