Research Article

A Review on Security Evaluation for Pattern Classifier against Attack

Published on December 2015 by Kunjali Pawar, Madhuri Patil
National Conference on Advances in Computing
Foundation of Computer Science USA
NCAC2015 - Number 4
December 2015
Authors: Kunjali Pawar, Madhuri Patil

Kunjali Pawar, Madhuri Patil. A Review on Security Evaluation for Pattern Classifier against Attack. National Conference on Advances in Computing. NCAC2015, 4 (December 2015), 15-18.

@article{
author = { Kunjali Pawar, Madhuri Patil },
title = { A Review on Security Evaluation for Pattern Classifier against Attack },
journal = { National Conference on Advances in Computing },
issue_date = { December 2015 },
volume = { NCAC2015 },
number = { 4 },
month = { December },
year = { 2015 },
issn = { 0975-8887 },
pages = { 15-18 },
numpages = { 4 },
url = { /proceedings/ncac2015/number4/23379-5045/ },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Proceeding Article
%1 National Conference on Advances in Computing
%A Kunjali Pawar
%A Madhuri Patil
%T A Review on Security Evaluation for Pattern Classifier against Attack
%J National Conference on Advances in Computing
%@ 0975-8887
%V NCAC2015
%N 4
%P 15-18
%D 2015
%I International Journal of Computer Applications
Abstract

Pattern classification systems are used in adversarial applications, for example spam filtering, network intrusion detection, and biometric authentication. The exploitation of this adversarial scenario may affect their performance and limit their practical utility. Extending pattern classification theory and design methods to adversarial environments is a novel and relevant research direction, which has not yet been pursued in a systematic way. This work addresses one main open issue: evaluating, at the design phase, the security of pattern classifiers, that is, the performance degradation under potential attacks that may be incurred during operation. A framework for the evaluation of classifier security is proposed; the framework can be applied to different classifiers on applications such as spam filtering, biometric authentication, and network intrusion detection.
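The kind of design-phase evaluation described above can be illustrated with a small, hypothetical experiment: train a naive Bayes spam filter, simulate a "good word" attack (appending words common in legitimate mail to spam messages, as studied in the good-word-attack literature cited below), and measure the resulting drop in detection rate. The training data, vocabulary, and attack words here are toy assumptions for illustration, not the framework actually proposed in the paper.

```python
# Illustrative design-phase security evaluation of a spam filter under a
# simulated "good word" attack. Data and attack words are toy assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training corpus: label 1 = spam, 0 = legitimate mail.
train_texts = [
    "cheap pills buy now", "win money prize claim now",
    "free offer limited deal",                            # spam
    "meeting agenda for monday", "please review the attached report",
    "lunch plans this week",                              # legitimate
]
train_labels = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(train_texts)
clf = MultinomialNB().fit(X_train, train_labels)

# Attack simulation: at operation time the adversary appends "good words"
# (words indicative of legitimate mail) to each spam message.
spam_test = ["claim your free prize now", "buy cheap pills today"]
attacked = [s + " meeting report agenda monday" for s in spam_test]

def detection_rate(samples):
    """Fraction of the given spam samples still classified as spam."""
    preds = clf.predict(vectorizer.transform(samples))
    return float(sum(preds)) / len(preds)

print("detection rate, no attack:   ", detection_rate(spam_test))
print("detection rate, under attack:", detection_rate(attacked))
```

On this toy data the detection rate drops once ham-indicative words are appended to spam, which is exactly the kind of performance degradation that a design-phase security evaluation aims to quantify before deployment.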

References
  1. A. D. Joseph, L. Huang, B. Nelson, J. D. Tygar and B. Rubinstein, "Adversarial Machine Learning," Proc. Fourth ACM Workshop Artificial Intelligence and Security, pp. 43-57, 2011.
  2. R. Lippmann and P. Laskov, "Machine Learning in Adversarial Environments," Machine Learning, vol. 81, pp. 115-119, 2010.
  3. D. Lowd and C. Meek, "Adversarial Learning," Proc. 11th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 641-647, 2005.
  4. Battista Biggio, Giorgio Fumera and Fabio Roli, "Security Evaluation of Pattern Classifiers under Attack," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 4, Apr. 2014.
  5. K. Seamon and J. S. Baras, "A Framework for the Evaluation of Intrusion Detection Systems," Proc. IEEE Symp. Security and Privacy, pp. 63-77, 2006.
  6. B. Nelson, M. Barreno and A. Joseph, "The Security of Machine Learning," Machine Learning, vol. 81, pp. 121-148, 2010.
  7. B. Nelson, M. Barreno, R. Sears, J. D. Tygar and A. D. Joseph, "Can Machine Learning be Secure?," Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS), pp. 16-25, 2006.
  8. S. Rizzi, "What-If Analysis," Encyclopedia of Database Systems, pp. 3525-3529, Springer, 2009.
  9. A. Wahi and C. Prabakaran, "A Literature Survey on Security Evaluation of the Pattern Classifiers under Attack," International Journal of Advance Research in Computer Science and Management Studies, vol. 2, no. 10, October 2014.
  10. A. M. Narasimhamurthy and L. I. Kuncheva, "A Framework for Generating Data to Simulate Changing Environments," Proc. 25th IASTED Int'l Multi-Conf.: Artificial Intelligence and Applications, pp. 415-420, 2007.
  11. B. Biggio, F. Roli and G. Fumera, "Adversarial Pattern Classification Using Multiple Classifiers and Randomisation," Proc. Joint IAPR Int'l Workshop Structural, Syntactic, and Statistical Pattern Recognition, pp. 500-509, 2008.
  12. P. Laskov and M. Kloft, "A Framework for Quantitative Security Analysis of Machine Learning," Proc. Second ACM Workshop Security and Artificial Intelligence, pp. 1-4, 2009.
  13. Mausam, S. Sanghai, N. Dalvi, D. Verma, and P. Domingos, "Adversarial Classification," Proc. 10th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 99-108, 2004.
  14. A. Adler, "Vulnerabilities in Biometric Encryption Systems," Proc. Fifth Int'l Conf. Audio- and Video-Based Biometric Person Authentication, pp. 1100-1109, 2005.
  15. D. B. Skillicorn, "Adversarial Knowledge Discovery," IEEE Intelligent Systems, vol. 24, no. 6, Nov./Dec. 2009.
  16. R. J. Tibshirani and B. Efron, An Introduction to the Bootstrap, Chapman & Hall, 1993.
  17. C. Meek and D. Lowd, "Good Word Attacks on Statistical Spam Filters," Proc. Second Conf. Email and Anti-Spam, 2005.
  18. I. Mani and J. Zhang, "A Multiple Instance Learning Strategy for Combating Good Word Attacks on Spam Filters," J. Machine Learning Research, vol. 9, pp. 1115-1146, 2008.
  19. J. Zhang and I. Mani, "KNN Approach to Unbalanced Data Distributions: A Case Study Involving Information Extraction," Proc. Int'l Conf. Machine Learning (ICML'2003), Workshop Learning from Imbalanced Data Sets, 2003.
  20. G. L. Wittel and S. F. Wu, "On Attacking Statistical Spam Filters," Proc. First Conf. Email and Anti-Spam, 2004.
  21. C. H. Teo and A. Kolcz, "Feature Weighting for Improved Classifier Robustness," Proc. Sixth Conf. Email and Anti-Spam, 2009.
Index Terms

Computer Science
Information Sciences

Keywords

Machine Learning, System Security Evaluation, Adversarial Classification, Arms-race, Spam Filtering.