Research Article

A Review on Outdoor and Indoor Automated Video Surveillance Systems

by U. Pavan Kumar, Bharathi S.H.
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 132 - Number 6
Year of Publication: 2015
Authors: U. Pavan Kumar, Bharathi S.H.
DOI: 10.5120/ijca2015907524

U. Pavan Kumar and Bharathi S.H. A Review on Outdoor and Indoor Automated Video Surveillance Systems. International Journal of Computer Applications 132, 6 (December 2015), 40-47. DOI=10.5120/ijca2015907524

@article{10.5120/ijca2015907524,
author = {U. Pavan Kumar and Bharathi S.H.},
title = {A Review on Outdoor and Indoor Automated Video Surveillance Systems},
journal = {International Journal of Computer Applications},
issue_date = {December 2015},
volume = {132},
number = {6},
month = {December},
year = {2015},
issn = {0975-8887},
pages = {40-47},
numpages = {8},
url = {https://ijcaonline.org/archives/volume132/number6/23601-2015907524/},
doi = {10.5120/ijca2015907524},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A U. Pavan Kumar
%A Bharathi S.H.
%T A Review on Outdoor and Indoor Automated Video Surveillance Systems
%J International Journal of Computer Applications
%@ 0975-8887
%V 132
%N 6
%P 40-47
%D 2015
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Video surveillance is an important area of computer vision research, with applications in both outdoor and indoor automated surveillance systems. Detection through video image processing is one of the most attractive emerging technologies, as it offers opportunities to perform substantially more complex tasks and to provide richer information than other sensors. The main goal of video surveillance systems is to safeguard property and the people who use it. This paper provides an overview of methods and techniques from the research literature that address the representation, recognition and learning of events, actions and activities of the inhabitants of an environment.
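
Many of the detection approaches covered in this review build on background subtraction followed by blob extraction (see, for example, references 43, 47 and 51 in the list below). As a purely illustrative sketch, and not the method of the reviewed paper, the following Python snippet shows how such a pipeline might be assembled with OpenCV's Gaussian-mixture background subtractor; the function name, the video path argument and the minimum blob area are assumptions made for this example.

import cv2

def detect_moving_objects(video_path, min_area=500):
    """Illustrative only: yield (frame, bounding_boxes) for each video frame."""
    cap = cv2.VideoCapture(video_path)
    # MOG2 keeps a per-pixel mixture of Gaussians and marks pixels that do not
    # fit the learned background distribution as foreground.
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Discard shadow pixels (labelled 127 by MOG2) and speckle noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep only blobs large enough to be plausible moving objects.
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield frame, boxes
    cap.release()

In a complete surveillance system the resulting bounding boxes would feed a tracking and behaviour-analysis stage, which is where most of the methods surveyed in this paper differ.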

References
  1. V. Ricquebourg, D. Menga, D. Durand, B. Marhic, L. Delahoche, and C. Loge, "The smart home concept: our immediate future," in Proc. IEEE Int. Conf. on E-Learning in Industrial Electronics, Dec. 2006, pp. 23–28.
  2. G. Lavee, E. Rivlin, and M. Rudzsky, "Understanding video events: a survey of methods for automatic interpretation of semantic occurrences in video," IEEE Trans. Systems, Man, and Cybernetics, Part C, vol. 39, no. 5, pp. 489–504, Sep. 2009.
  3. F. Cardinaux, D. Bhowmik, C. Abhayaratne, and M. S. Hawley, "Video based technology for ambient assisted living: A review of the literature," Journal of Ambient Intelligence and Smart Environments, vol. 3, no. 3, pp. 253–269, 2011.
  4. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, "Moving target classification and tracking from real-time video," in Proc. IEEE Workshop on Applications of Computer Vision, 1998, pp. 8–14.
  5. J. Barron, D. Fleet, and S. Beauchemin, "Performance of optical flow techniques," Int. Journal of Computer Vision, vol. 12, no. 1, pp. 42–77, 1994.
  6. M. Hedayati, W. Zaki, and A. Hussain, "Real-time background subtraction for video surveillance: From research to reality," in Proc. Int. Colloquium on Signal Processing and Its Applications, 2010, pp. 1–6.
  7. M. Murshed, A. Ramirez, and O. Chae, "Statistical background modeling: An edge segment based moving object detection approach," in Proc. IEEE Int. Conf. on Advanced Video and Signal Based Surveillance, 2010, pp. 300–306.
  8. O. Barnich and M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences," IEEE Trans. on Image Processing, vol. 20, no. 6, pp. 1709–1724, 2011.
  9. W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Trans. on Systems, Man, and Cybernetics, Part C, vol. 34, no. 3, pp. 334–352, 2004.
  10. I. A. Karaulova, P. M. Hall, and A. D. Marshall, "A hierarchical model of dynamics for tracking people with a single video camera," in Proc. British Machine Vision Conf., 2000, pp. 261–352.
  11. S. A. Niyogi and E. H. Adelson, "Analyzing and recognizing walking figures in XYT," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1994, pp. 469–474.
  12. Q. Delamarre and O. Faugeras, "3D articulated models and multi-view tracking with physical forces," Computer Vision and Image Understanding, vol. 81, no. 3, pp. 328–357, 2001.
  13. R. Plankers and P. Fua, "Articulated soft objects for video-based body modeling," in Proc. Int. Conf. on Computer Vision, Vancouver, BC, Canada, 2001, pp. 394–401.
  14. T. Zhao, T. S. Wang, and H. Y. Shum, "Learning a highly structured motion model for 3D human tracking," in Proc. Asian Conf. on Computer Vision, Melbourne, Australia, 2002, pp. 144–149.
  15. C. Bregler, "Learning and recognizing human dynamics in video sequences," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997, pp. 568–576.
  16. H. Sidenbladh and M. Black, "Stochastic tracking of 3D human figures using 2D image motion," in Proc. European Conf. on Computer Vision, Dublin, Ireland, 2000, pp. 702–716.
  17. D. G. Lowe, "Fitting parameterized 3-D models to images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 441–450, 1991.
  18. J. Hoshino, H. Saito, and M. Yamamoto, "A match moving technique for merging CG cloth and human video," Journal of Visualization and Computer Animation, vol. 12, no. 1, pp. 23–29, 2001.
  19. J. E. Bennett, A. Racine-Poon, and J. C. Wakefield, "MCMC for nonlinear hierarchical models," in Markov Chain Monte Carlo in Practice, W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Eds. U.K.: Chapman and Hall, 1996, pp. 339–357.
  20. M. Isard and A. Blake, "CONDENSATION: conditional density propagation for visual tracking," Int. Journal of Computer Vision, vol. 29, no. 1, pp. 5–28, 1998.
  21. T. B. Moeslund, A. Hilton, and V. Krüger, "A survey of advances in vision-based human motion capture and analysis," Computer Vision and Image Understanding, vol. 104, pp. 96–126, 2006.
  22. P. Turaga, R. Chellappa, V. Subrahmanian, and O. Udrea, "Machine recognition of human activities: A survey," IEEE Trans. on Circuits and Systems for Video Technology, vol. 18, pp. 1473–1488, 2008.
  23. J. Aggarwal and M. Ryoo, "Human activity analysis: A review," ACM Computing Surveys, vol. 43, no. 3, 2011.
  24. A. Bobick and J. Davis, "The recognition of human movement using temporal templates," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257–267, 2001.
  25. Y. Ke, R. Sukthankar, and M. Hebert, "Spatio-temporal shape and flow correlation for action recognition," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
  26. C. M. Sharma, A. K. S. Kushwaha, S. Nigam, and A. Khare, "Automatic human activity recognition in video using background modeling and spatio-temporal template matching based technique," in Proc. Int. Conf. on Advances in Computing and Artificial Intelligence, 2011, pp. 97–101.
  27. Y. Sheikh, M. Sheikh, and M. Shah, "Exploring the space of a human action," in Proc. IEEE Int. Conf. on Computer Vision, vol. 1, 2005, pp. 144–149.
  28. C. Rao and M. Shah, "View-invariance in action recognition," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 2, 2001, pp. 316–322.
  29. M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," in Proc. IEEE Int. Conf. on Computer Vision, vol. 2, 2005, pp. 1395–1402.
  30. J. C. Niebles, H. Wang, and L. Fei-Fei, "Unsupervised learning of human action categories using spatial-temporal words," Int. Journal of Computer Vision, vol. 79, no. 3, pp. 299–318, 2009.
  31. L. Zelnik-Manor and M. Irani, "Event-based analysis of video," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 2, 2001, pp. 123–130.
  32. A. Wiliem, V. Madasu, W. Boles, and P. Yarlagadda, "Context space model for detecting anomalous behaviour in video surveillance," in Proc. IEEE Int. Conf. on Information Technology: New Generations, 2012, pp. 18–24.
  33. A. Veeraraghavan, R. Chellappa, and A. Roy-Chowdhury, "The function space of an activity," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, 2006, pp. 959–968.
  34. R. Lublinerman, N. Ozay, D. Zarpalas, and O. Camps, "Parameterized modeling and recognition of activities," in Proc. Int. Conf. on Pattern Recognition, 2006, pp. 347–350.
  35. N. Diehl, "Object-Oriented Motion Estimation and Segmentation in Image Sequences," IEEE Trans. Image Processing, vol. 3, pp. 1901–1904, Feb. 1990.
  36. N. Paragios and R. Deriche, "Geodesic Active Contours and Level Sets for the Detection and Tracking of Moving Objects," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 3, pp. 266–280, March 2000.
  37. L. Xie, G. Zhu, Y. Wang, H. Xu, and Z. Zhang, "Robust vehicles extraction in a video-based intelligent transportation systems," in Proc. IEEE Int. Conf. on Communications, Circuits and Systems, vol. 2, May 2005, pp. 887–890.
  38. H. H. Nagel, G. Socher, H. Kollnig, and M. Otte, "Motion Boundary Detection in Image Sequences by Local Stochastic Tests," in Proc. European Conf. on Computer Vision, vol. II, 1994, pp. 305–315.
  39. T. Aach and A. Kaup, "Bayesian Algorithms for Adaptive Change Detection in Image Sequences Using Markov Random Fields," Signal Processing: Image Communication, vol. 7, pp. 147–160, 1995.
  40. J. M. Odobez and P. Bouthemy, "Robust Multiresolution Estimation of Parametric Motion Models," Journal of Visual Communication and Image Representation, vol. 6, pp. 348–365, 1995.
  41. N. Paragios and G. Tziritas, "Adaptive Detection and Localization of Moving Objects in Image Sequences," Signal Processing: Image Communication, vol. 14, pp. 277–296, 1999.
  42. K. Karmann and A. Brandt, "Moving Object Recognition Using an Adaptive Background," in Proc. Time-Varying Image Processing and Moving Object Recognition, Amsterdam, Netherlands, 1990, pp. 289–296.
  43. C. Stauffer and W. Grimson, "Adaptive Background Mixture Models for Real-time Tracking," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Fort Collins, Colorado, 1999, pp. 246–252.
  44. B. Chen, Y. Lei, and W. Li, "A Novel Background Model for Real-time Vehicle Detection," in Proc. IEEE Int. Conf. on Signal Processing (ICSP'04), 2004, pp. 1276–1279.
  45. J. Zhang and Z. Liu, "A Vision-Based Road Surveillance System Using Improved Background Subtraction and Region Growing Approach," in Proc. IEEE ACIS Int. Conf. on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, vol. 3, 2007, pp. 819–822.
  46. C. C. C. Pang, W. W. L. Lam, and N. H. C. Yung, "A Novel Method for Resolving Vehicle Occlusion in a Monocular Traffic-image Sequence," IEEE Trans. on Intelligent Transportation Systems, vol. 5, no. 3, pp. 129–141, Sep. 2004.
  47. A. Mittal and N. Paragios, "Motion-Based Background Subtraction using Adaptive Kernel Density Estimation," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Washington, DC, 2004, pp. 302–309.
  48. J. Wang, G. Bebis, and R. Miller, "Overtaking Vehicle Detection Using Dynamic and Quasi-Static Background Modeling," in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2005.
  49. I. Haritaoglu, D. Harwood, and L. Davis, "W4: Real-Time Surveillance of People and Their Activities," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 809–830, Aug. 2000.
  50. N. Friedman and S. Russell, "Image Segmentation in Video Sequences: A Probabilistic Approach," in Proc. Thirteenth Conf. on Uncertainty in Artificial Intelligence, 1997, pp. 175–181.
  51. H. Wang and D. Suter, "A Re-evaluation of Mixture of Gaussian Background Modeling," in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '05), vol. 2, 2005, pp. 1017–1020.
  52. A. H. S. Lai and N. H. C. Yung, "Vehicle-type Identification Through Automated Virtual Loop Assignment and Block-based Direction-biased Motion Estimation," IEEE Trans. on Intelligent Transportation Systems, vol. 1, no. 2, pp. 86–97, Jun. 2000.
  53. C. Neustaedter, S. Greenberg, and M. Boyle, "Blur filtration fails to preserve privacy for home-based video conferencing," ACM Trans. on Computer-Human Interaction, vol. 13, no. 1, pp. 1–36, 2005.
  54. A. Senior, S. Pankanti, A. Hampapur, L. Brown, Y. Tian, and A. Ekin, "Blinkering surveillance: enabling video surveillance privacy through computer vision," IEEE Security and Privacy, vol. 3, no. 5, pp. 50–57, 2005.
  55. E. Newton, L. Sweeney, and B. Malin, "Preserving privacy by de-identifying facial images," IEEE Trans. on Knowledge and Data Engineering, vol. 17, no. 2, pp. 232–243, 2005.
  56. J. Schiff, M. Meingast, D. Mulligan, S. Sastry, and K. Goldberg, "Respectful cameras: Detecting visual markers in real time to address privacy concerns," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2007, pp. 971–978.
  57. W. Zhang, S.-C. S. Cheung, and M. Chen, "Hiding privacy information in video surveillance system," in Proc. IEEE Int. Conf. on Image Processing, 2005, pp. 868–871.
Index Terms

Computer Science
Information Sciences

Keywords

Video surveillance, Tracking, Shadow removal, Motion detection.