
Visual Tracking using Corner based Centrist Descriptor with a Robust Localization Algorithm

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2016
Authors:
Mahdi Tanbakuchi, Mojtaba Lotfizad
DOI: 10.5120/ijca2016912072

Mahdi Tanbakuchi and Mojtaba Lotfizad. Visual Tracking using Corner based Centrist Descriptor with a Robust Localization Algorithm. International Journal of Computer Applications 153(6):1-11, November 2016. BibTeX

@article{10.5120/ijca2016912072,
	author = {Mahdi Tanbakuchi and Mojtaba Lotfizad},
	title = {Visual Tracking using Corner based Centrist Descriptor with a Robust Localization Algorithm},
	journal = {International Journal of Computer Applications},
	issue_date = {November 2016},
	volume = {153},
	number = {6},
	month = {Nov},
	year = {2016},
	issn = {0975-8887},
	pages = {1-11},
	numpages = {11},
	url = {http://www.ijcaonline.org/archives/volume153/number6/26404-2016912072},
	doi = {10.5120/ijca2016912072},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

In this paper, an algorithm for visual object tracking based on a novel localization method is proposed. First, only a part of the search area, preferably the interest points, is selected, which drastically speeds up the tracking process. The target is then described using the intensity histogram together with the CENTRIST descriptor, which is known for its good coding capability on small image patches. To increase the descriptor's accuracy, it is applied to small blocks of the image so that most of the region around the target's interest points is encoded. Given these descriptions of the object's interest points, a 1-NN classifier identifies the corresponding target interest points in each frame. From the matched interest points, a convolution problem is formulated to detect the center of the target. Experiments on a challenging dataset against several state-of-the-art methods demonstrate the efficiency of the proposed algorithm.
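As background, the CENTRIST descriptor builds on the census transform: each pixel is compared with its eight neighbours, the comparison bits form an 8-bit code, and a 256-bin histogram of these codes describes a patch. The sketch below is illustrative only; the function names and the "centre ≥ neighbour" bit convention are assumptions, not the authors' implementation.

```python
import numpy as np

def census_transform(img):
    """8-neighbour census transform; returns one 8-bit code per interior pixel."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    ct = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set the bit when the centre pixel is at least as bright as the neighbour
        ct |= (center >= neighbour).astype(np.uint8) << bit
    return ct

def centrist(patch):
    """CENTRIST: 256-bin histogram of census-transform codes over a patch."""
    ct = census_transform(patch)
    hist, _ = np.histogram(ct, bins=256, range=(0, 256))
    return hist
```

The block-wise application described in the abstract would amount to calling `centrist` on each small block around an interest point and concatenating the histograms; the intensity histogram paired with it can be computed the same way with `np.histogram` on the raw pixel values.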

References

  1. N. Jiang, W. Liu, and Y. Wu. Learning adaptive metric for robust visual tracking. IEEE Transactions on Image Processing, 20(8):2288–2300, 2011.
  2. Javier Cruz-Mota, Michel Bierlaire, and Jean-Philippe Thiran. Sample and pixel weighting strategies for robust incremental visual tracking. IEEE Transactions on Circuits and Systems for Video Technology, 23(5):898–911, 2013.
  3. Can-Long Zhang, Zhong-Liang Jing, Han Pan, Bo Jin, and Zhi-Xin Li. Robust visual tracking using discriminative stable regions and k-means clustering. Neurocomputing, 111(0):131–143, 2013.
  4. A. Satpathy, X. Jiang, and H. L. Eng. LBP-based edge-texture features for object recognition. IEEE Transactions on Image Processing, 23(5):1953–1964, 2014.
  5. Kaihua Zhang, Lei Zhang, Qingshan Liu, David Zhang, and Ming-Hsuan Yang. Fast Visual Tracking via Dense Spatiotemporal Context Learning, pages 127–141. Springer International Publishing, Cham, 2014.
  6. X. Lan, A. J. Ma, P. C. Yuen, and R. Chellappa. Joint sparse representation and robust feature-level fusion for multi-cue visual tracking. IEEE Transactions on Image Processing, 24(12):5826–5841, 2015.
  7. Si Chen, Shaozi Li, Songzhi Su, Donglin Cao, and Rongrong Ji. Online semi-supervised compressive coding for robust visual tracking. Journal of Visual Communication and Image Representation, 25(5):793–804, 2014.
  8. Y. Sui, S. Zhang, and L. Zhang. Robust visual tracking via sparsity-induced subspace learning. IEEE Transactions on Image Processing, 24(12):4686–4700, 2015.
  9. Y. Wu, B. Shen, and H. Ling. Visual tracking via online nonnegative matrix factorization. IEEE Transactions on Circuits and Systems for Video Technology, 24(3):374–383, 2013.
  10. D. Wang, H. Lu, Z. Xiao, and M. H. Yang. Inverse sparse tracker with a locally weighted distance metric. IEEE Transactions on Image Processing, 24(9):2646–2657, 2015.
  11. L. Zhang, H. Lu, D. Du, and L. Liu. Sparse hashing tracking. IEEE Transactions on Image Processing, 25(2):840–849, 2016.
  12. W. Guo, L. Cao, T. X. Han, S. Yan, and C. Xu. Max-confidence boosting with uncertainty for visual tracking. IEEE Transactions on Image Processing, 24(5):1650–1659, 2015.
  13. Z. Kalal, J. Matas, and K. Mikolajczyk. P-n learning: Bootstrapping binary classifiers by structural constraints. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 49–56.
  14. Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3):346–359, 2008.
  15. David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
  16. L. C. Chiu, T. S. Chang, J. Y. Chen, and N. Y. Chang. Fast SIFT design for real-time visual feature extraction. IEEE Transactions on Image Processing, 22(8):3158–3167, 2013.
  17. S. Ehsan, N. Kanwal, A. F. Clark, and K. D. McDonald-Maier. An algorithm for the contextual adaption of SURF octave selection with good matching performance: best octaves. IEEE Transactions on Image Processing, 21(1):297–304, 2012.
  18. Chris Harris and Mike Stephens. A combined corner and edge detector. In Alvey Vision Conference, volume 15, page 50, Manchester, UK, 1988.
  19. Jianbo Shi and C. Tomasi. Good features to track. In Computer Vision and Pattern Recognition, 1994. Proceedings CVPR ’94., 1994 IEEE Computer Society Conference on, pages 593–600. Institute of Electrical and Electronics Engineers (IEEE), 1994.
  20. E. Rosten, R. Porter, and T. Drummond. Faster and better: a machine learning approach to corner detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):105–119, 2010.
  21. Edward Rosten and Tom Drummond. Machine Learning for High-Speed Corner Detection, pages 430–443. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.
  22. Stephen M. Smith and J. Michael Brady. SUSAN: a new approach to low level image processing. International Journal of Computer Vision, 23(1):45–78, 1997.
  23. D. Comaniciu, V. Ramesh, and P. Meer. Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):564–577, 2003.
  24. Jifeng Ning, Lei Zhang, David Zhang, and Chengke Wu. Robust object tracking using joint color-texture histogram. International Journal of Pattern Recognition and Artificial Intelligence, 23(07):1245–1263, 2009.
  25. Timo Ojala, Matti Pietikainen, and Topi Maenpaa. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971–987, 2002.
  26. Yadong Mu, Shuicheng Yan, Yi Liu, T. Huang, and Bingfeng Zhou. Discriminative local binary patterns for human detection in personal album. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8.
  27. N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 886–893. Institute of Electrical and Electronics Engineers (IEEE), 2005.
  28. J. Wu and J. M. Rehg. CENTRIST: A visual descriptor for scene categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1489–1501, 2011.
  29. Victor H. Diaz-Ramirez, Kenia Picos, and Vitaly Kober. Target tracking in nonuniform illumination conditions using locally adaptive correlation filters. Optics Communications, 323(0):32–43, 2014.
  30. Du Yong Kim and Moongu Jeon. Spatio-temporal auxiliary particle filtering with l1-norm-based appearance model learning for robust visual tracking. IEEE Transactions on Image Processing, 22(2):511–522, 2013.
  31. D. Comaniciu and P. Meer. Mean shift: a robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, May 2002.
  32. Huiyu Zhou, Yuan Yuan, and Chunmei Shi. Object tracking using SIFT features and mean shift. Computer Vision and Image Understanding, 113(3):345–352, 2009.
  33. J. Wu, N. Liu, C. Geyer, and J. M. Rehg. C4: A real-time object detection framework. IEEE Transactions on Image Processing, 22(10):4096–4107, 2013.
  34. G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
  35. Y. Wu, J. Lim, and M. H. Yang. Online object tracking: A benchmark. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2411–2418.
  36. B. Babenko, M. H. Yang, and S. Belongie. Visual tracking with online multiple instance learning. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 983–990.
  37. Kaihua Zhang, Lei Zhang, and Ming-Hsuan Yang. Real-time compressive tracking. In European Conference on Computer Vision (ECCV), 2012.
  38. A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral histogram. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 1, pages 798–805. Institute of Electrical and Electronics Engineers (IEEE), 2006.
  39. Laura Sevilla-Lara and Erik Learned-Miller. Distribution fields for tracking. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, 2012.

Keywords

Feature Extraction; Target Description; Visual Tracking; 1-NN Classification; Localization