
Black Box Anomaly Detection in Multi-Cloud Environment

International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Year of Publication: 2016
Authors:
Mahendra Kumar Ahirwar, Manish Kumar Ahirwar, Uday Chourasia
10.5120/ijca2016910125

Mahendra Kumar Ahirwar, Manish Kumar Ahirwar and Uday Chourasia. Black Box Anomaly Detection in Multi-Cloud Environment. International Journal of Computer Applications 144(2):31-37, June 2016. BibTeX

@article{10.5120/ijca2016910125,
	author = {Mahendra Kumar Ahirwar and Manish Kumar Ahirwar and Uday Chourasia},
	title = {Black Box Anomaly Detection in Multi-Cloud Environment},
	journal = {International Journal of Computer Applications},
	issue_date = {June 2016},
	volume = {144},
	number = {2},
	month = {Jun},
	year = {2016},
	issn = {0975-8887},
	pages = {31-37},
	numpages = {7},
	url = {http://www.ijcaonline.org/archives/volume144/number2/25155-2016910125},
	doi = {10.5120/ijca2016910125},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

Automatic identification of anomalies for performance diagnosis in cloud computing is a fundamental and challenging problem. A third-party auditor (TPA) is interested in identifying and removing these anomalies so that the performance of cloud systems improves. In this paper we propose an automatic black box anomaly detector that finds anomalies with minimal human intervention. Using this detector, both known and previously unseen anomalies in cloud computing systems can be found even without knowledge of the source code (i.e., black box testing). Automatic black box anomaly detection is a two-step process: first, data from different sources is collected and transformed into a common form that serves as input to the black box anomaly detector; second, anomaly detection is performed on that common form.
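The two-step process described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names, the (source, metric, value) record form, and the z-score rule are all assumptions standing in for the paper's data-collection and detection stages.

```python
import statistics

def normalize_records(raw_sources):
    """Step 1 (hypothetical): flatten samples from heterogeneous
    monitors into one common form -- (source, metric, value) tuples."""
    records = []
    for source, samples in raw_sources.items():
        for metric, value in samples:
            records.append((source, metric, float(value)))
    return records

def detect_anomalies(records, threshold=3.0):
    """Step 2 (hypothetical): flag values whose z-score against the
    other observations of the same metric exceeds the threshold;
    a simple statistical stand-in for the black box detector."""
    by_metric = {}
    for source, metric, value in records:
        by_metric.setdefault(metric, []).append((source, value))
    anomalies = []
    for metric, pairs in by_metric.items():
        values = [v for _, v in pairs]
        if len(values) < 2:
            continue  # not enough history to judge
        mean = statistics.fmean(values)
        stdev = statistics.stdev(values)
        if stdev == 0:
            continue  # all samples identical, nothing stands out
        for source, value in pairs:
            if abs(value - mean) / stdev > threshold:
                anomalies.append((source, metric, value))
    return anomalies
```

Because the detector only sees the normalized records, not the applications producing them, it treats the monitored system as a black box and can flag anomalies it was never explicitly programmed to recognize.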

References

  1. Haibo Mi, Huaimin Wang, Yangfan Zhou, Michael Rung-Tsong Lyu and Hua Cai, “Toward Fine-Grained, Unsupervised, Scalable Performance Diagnosis for Production Cloud Computing Systems,” IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1245-1254, June 2013.
  2. R. Sambasivan, A. Zheng, M. De Rosa, E. Krevat, S. Whitman, M. Stroucken, W. Wang, L. Xu, and G. Ganger, “Diagnosing Performance Changes by Comparing Request Flows,” Proc. USENIX Eighth Symposium Networked Systems Design and Implementation (NSDI), pp. 43-56, 2011.
  3. M. Chen, A. Accardi, E. Kiciman, J. Lloyd, D. Patterson, A. Fox, and E. Brewer, “Path-Based Failure and Evolution Management,” Proc. USENIX Symposium Networked Systems Design and Implementation (NSDI), pp. 23-36, 2004.
  4. E. Candes, X. Li, Y. Ma, and J. Wright, “Robust Principal Component Analysis?” arXiv preprint arXiv:0912.3599, 2009.
  5. B. Sigelman, L. Barroso, M. Burrows, P. Stephenson, M. Plakal, D. Beaver, S. Jaspan, and C. Shanbhag, “Dapper, a Large-Scale Distributed Systems Tracing Infrastructure,” Technical Report dapper-2010-1, Google, 2010.
  6. F. Chang, J. Dean, S. Ghemawat, W. Hsieh, D. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. Gruber, “Bigtable: A Distributed Storage System for Structured Data,” ACM Trans. Computer Systems, vol. 26, no. 2, pp. 1-26, 2008.
  7. H. Mi, H. Wang, Y. Zhou, M.R. Lyu, and H. Cai, “P-tracer: Path-Based Performance Profiling in Cloud Computing Systems,” Proc. IEEE 36th Ann. Computer Software Applications Conference (COMPSAC), pp. 509-514, 2012.
  8. M. Chen, E. Kiciman, E. Fratkin, A. Fox, and E. Brewer, “Pinpoint: Problem Determination in Large, Dynamic Internet Services,” Proc. IEEE International Conference Dependable Systems and Networks (DSN), pp. 595-604, 2002.
  9. P. Reynolds, C. Killian, J. Wiener, J. Mogul, M. Shah, and A. Vahdat, “Pip: Detecting the Unexpected in Distributed Systems,” Proc. USENIX Third Symposium Networked Systems Design and Implementation (NSDI), pp. 115-128, 2006.
  10. E. Thereska and G. Ganger, “Ironmodel: Robust Performance Models in the Wild,” ACM SIGMETRICS Performance Evaluation Rev., vol. 36, no. 1, pp. 253-264, 2008.
  11. Shobha Venkataraman, Juan Caballero, Dawn Song, Avrim Blum and Jennifer Yates, “Black Box Anomaly Detection,” Carnegie Institute of Technology, Research Showcase @ CMU, pp. 127-132, 2006.
  12. A. Lakhina, M. Crovella, and C. Diot, “Mining Anomalies Using Traffic Feature Distributions,” Proc. ACM SIGCOMM, 2005.
  13. M. V. Mahoney and P. K. Chan, “Learning Rules for Anomaly Detection of Hostile Network Traffic,” Proc. Third IEEE International Conference on Data Mining (ICDM), 2003.
  14. Pankaj Sareen. “Cloud Computing: Types, Architecture, Applications, Concerns, Virtualization and Role of IT Governance in Cloud”, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 3, pp 533-538, 2013.
  15. Sumit Goyal, “Public vs Private vs Hybrid vs Community-Cloud Computing: A Critical Review,” I.J. Computer Network and Information Security, pp. 20-29, 2014.
  16. H. Mi, H. Wang, G. Yin, H. Cai, Q. Zhou, and T. Sun, “Performance Problems Diagnosis in Cloud Computing Systems by Mining Request Trace Logs,” Proc. IEEE Network Operations and Management Symp. (NOMS), pp. 893-899, 2012.

Keywords

Black box anomaly detector, cloud service provider, performance diagnosis, cloud systems.