Research Article

A Comprehensive Survey on Pattern Classifier's Security Evaluation under Attack

by Suryakant R. Paralkar, U. V. Kulkarni
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 115 - Number 21
Year of Publication: 2015
Authors: Suryakant R. Paralkar, U. V. Kulkarni
10.5120/20276-2701

Suryakant R. Paralkar, U. V. Kulkarni. A Comprehensive Survey on Pattern Classifier's Security Evaluation under Attack. International Journal of Computer Applications. 115, 21 (April 2015), 23-28. DOI=10.5120/20276-2701

@article{ 10.5120/20276-2701,
author = { Suryakant R. Paralkar and U. V. Kulkarni },
title = { A Comprehensive Survey on Pattern Classifier's Security Evaluation under Attack },
journal = { International Journal of Computer Applications },
issue_date = { April 2015 },
volume = { 115 },
number = { 21 },
month = { April },
year = { 2015 },
issn = { 0975-8887 },
pages = { 23-28 },
numpages = {6},
url = { https://ijcaonline.org/archives/volume115/number21/20276-2701/ },
doi = { 10.5120/20276-2701 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
Abstract

For many years, tools based on pattern classification have been used in security-related applications such as spam filtering, biometric authentication, and network intrusion detection. In these settings, an intelligent and adaptive adversary may manipulate data to make the classifier produce false negatives, i.e., to have malicious samples labeled as legitimate. Measuring a pattern classifier's security performance is therefore an important part of making design decisions, assessing product viability, and comparing different classifiers. If the adversarial scenario is not taken into account, pattern classification systems may exhibit vulnerabilities whose exploitation degrades their performance and limits their practical utility. A classifier's security, that is, the performance degradation it may suffer under the relevant attacks at operation time, should be evaluated at the design phase. This survey discusses a framework for classifier security evaluation based on an adversary model that defines attack scenarios and generates the corresponding training and testing sets.
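The evaluation idea sketched in the abstract can be illustrated with a toy experiment: train a classifier on clean data, then use a simple adversary model (not taken from the surveyed paper; the classifier, data distribution, and attack budget below are all illustrative assumptions) to perturb the malicious test samples toward the legitimate region and compare the false-negative rate before and after the attack.

```python
# Illustrative sketch of classifier security evaluation under a simulated
# evasion attack. All modeling choices (1-D Gaussian data, midpoint
# threshold classifier, fixed perturbation budget) are assumptions made
# for illustration, not the method of the surveyed paper.
import random

random.seed(0)

def make_data(n, shift):
    # 1-D feature: legitimate samples (label 0) near 0, malicious (label 1)
    # near `shift`. Returns a list of (feature, label) pairs.
    legit = [(random.gauss(0.0, 1.0), 0) for _ in range(n)]
    malicious = [(random.gauss(shift, 1.0), 1) for _ in range(n)]
    return legit + malicious

def train_threshold(data):
    # Trivial classifier: decision threshold at the midpoint of class means.
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2.0

def false_negative_rate(threshold, data):
    # A malicious sample at or below the threshold is classified legitimate.
    mal = [x for x, y in data if y == 1]
    return sum(1 for x in mal if x <= threshold) / len(mal)

train = make_data(500, shift=4.0)
test = make_data(500, shift=4.0)
t = train_threshold(train)

# Adversary model: at operation time, each malicious sample is moved
# toward the legitimate region by up to `budget` units (an evasion attack).
budget = 2.5
attacked = [(x - budget if y == 1 else x, y) for x, y in test]

fnr_clean = false_negative_rate(t, test)
fnr_attack = false_negative_rate(t, attacked)
print(f"false-negative rate, clean test set: {fnr_clean:.2f}")
print(f"false-negative rate, under attack:   {fnr_attack:.2f}")
```

Run on clean data, the false-negative rate is near zero; under the simulated attack it rises sharply, which is exactly the performance degradation a design-phase security evaluation is meant to expose.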

References
  1. B. Biggio, G. Fumera, and F. Roli, "Security Evaluation of Pattern Classifiers under Attack," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 4, April 2014. Available online at: https://pralab.diee.unica.it/sites/default/files/biggio13-tkde.pdf
  2. B. Biggio, G. Fumera, and F. Roli, "Multiple Classifier Systems for Robust Classifier Design in Adversarial Environment," Int'l J. Machine Learning and Cybernetics, vol. 1, no. 1, pp. 27-41, 2010.
  3. A. Kolcz and C. H. Teo, "Feature Weighting for Improved Classifier Robustness," Proc. 6th Conf. Email and Anti-Spam (CEAS), 2009.
  4. L. Breiman, "Bagging Predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.
  5. L. Breiman, "Random Forests," Machine Learning, vol. 45, pp. 5-32, 2001.
  6. L. Huang, A. D. Joseph, B. Nelson, B. Rubinstein, and J. D. Tygar, "Adversarial Machine Learning," Proc. Fourth ACM Workshop Artificial Intelligence and Security, pp. 43-57, 2011.
  7. A. A. Cardenas and J. S. Baras, "Evaluation of Classifiers: Practical Considerations for Security Applications," Proc. AAAI Workshop Evaluation Methods for Machine Learning, 2006.
  8. M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, "Can Machine Learning be Secure?" Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS), pp. 16-25, 2006.
  9. G. L. Wittel and S. F. Wu, "On Attacking Statistical Spam Filters," Proc. First Conf. Email and Anti-Spam, 2004.
  10. N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, "Adversarial Classification," Proc. 10th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 99-108, 2004.
  11. D. Lowd and C. Meek, "Adversarial Learning," Proc. 11th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 641-647, 2005.
  12. A. Adler, "Vulnerabilities in Biometric Encryption Systems," Proc. Fifth Int'l Conf. Audio- and Video-Based Biometric Person Authentication, pp. 1100-1109, 2005.
  13. R. N. Rodrigues, L. L. Ling, and V. Govindaraju, "Robustness of Multimodal Biometric Fusion Methods against Spoof Attacks," J. Visual Languages and Computing, vol. 20, no. 3, pp. 169-179, 2009.
  14. P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee, "Polymorphic Blending Attacks," Proc. 15th USENIX Security Symp., 2006.
  15. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification. Wiley-Interscience Publication, 2000.
  16. M. Barreno, B. Nelson, A. Joseph, and J. Tygar, "The Security of Machine Learning," Machine Learning, vol. 81, pp. 121-148, 2010.
  17. D. Lowd and C. Meek, "Good Word Attacks on Statistical Spam Filters," Proc. Second Conf. Email and Anti-Spam, 2005.
  18. D. B. Skillicorn, "Adversarial Knowledge Discovery," IEEE Intelligent Systems, vol. 24, no. 6, Nov./Dec. 2009.
  19. D. Fetterly, "Adversarial Information Retrieval: The Manipulation of Web Content," ACM Computing Rev., 2007.
  20. S. Rizzi, "What-If Analysis," Encyclopedia of Database Systems, pp. 3525-3529, Springer, 2009.
  21. P. Johnson, B. Tan, and S. Schuckers, "Multimodal Fusion Vulnerability to Non-Zero Effort (Spoof) Imposters," Proc. IEEE Int'l Workshop Information Forensics and Security, pp. 1-5, 2010.
  22. C. Sutton, M. Sindelar, and A. McCallum, "Feature Bagging: Preventing Weight Undertraining in Structured Discriminative Learning," Technical Report IR-402, University of Massachusetts, 2005.
Index Terms

Computer Science
Information Sciences

Keywords

Legitimate samples, malicious samples, reactive and proactive arms race, spoof attack