Research Article

Evaluation of Generative AI-Enabled Cyber Attack Vectors

by Shekar Munirathnam
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 88
Year of Publication: 2026
Authors: Shekar Munirathnam
10.5120/ijca2026926540

Shekar Munirathnam. Evaluation of Generative AI-Enabled Cyber Attack Vectors. International Journal of Computer Applications 187, 88 (Mar 2026), 44-50. DOI=10.5120/ijca2026926540

BibTeX

@article{10.5120/ijca2026926540,
  author    = {Shekar Munirathnam},
  title     = {Evaluation of Generative AI-Enabled Cyber Attack Vectors},
  journal   = {International Journal of Computer Applications},
  issue_date = {Mar 2026},
  volume    = {187},
  number    = {88},
  month     = {Mar},
  year      = {2026},
  issn      = {0975-8887},
  pages     = {44-50},
  numpages  = {9},
  url       = {https://ijcaonline.org/archives/volume187/number88/evaluation-of-generative-ai-enabled-cyber-attack-vectors/},
  doi       = {10.5120/ijca2026926540},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address   = {New York, USA}
}
EndNote

%0 Journal Article
%A Shekar Munirathnam
%T Evaluation of Generative AI-Enabled Cyber Attack Vectors
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 88
%P 44-50
%D 2026
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Generative artificial intelligence represents a transformative shift in offensive cyber operations. This research examines AI weaponization, particularly Large Language Models (LLMs) and neural generation systems, within attack contexts. A classification framework spanning four operational domains is presented: engineered social manipulation, adaptive malware development, automated vulnerability discovery, and intelligent reconnaissance. Through empirical analysis of documented incidents, technical capability assessments, quantitative comparative evaluations, and workflow modeling, it is demonstrated how these technologies amplify adversarial effectiveness while compressing attack timelines by up to 87% and reducing prerequisite expertise. The findings reveal critical vulnerabilities in contemporary defense architectures, particularly regarding reliance on pattern-based detection and signature-dependent controls. Evidence of asymmetric advantages emerging from AI adoption is presented, wherein offensive applications outpace defensive countermeasures. The paper concludes with countermeasure frameworks incorporating AI-enhanced defensive technologies alongside regulatory approaches necessary for sustainable security [1], [14], [21].
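The four-domain classification framework named in the abstract could be sketched as a simple Python enumeration; the identifier names and the sample incident record below are illustrative assumptions, not taken from the paper itself:

```python
from enum import Enum

class AttackDomain(Enum):
    """The four operational domains of the paper's classification framework.
    Member names are hypothetical labels chosen for this sketch."""
    SOCIAL_MANIPULATION = "engineered social manipulation"
    ADAPTIVE_MALWARE = "adaptive malware development"
    VULNERABILITY_DISCOVERY = "automated vulnerability discovery"
    INTELLIGENT_RECON = "intelligent reconnaissance"

# Hypothetical documented incident tagged with one domain of the taxonomy
incident = {
    "summary": "LLM-generated spear-phishing campaign",
    "domain": AttackDomain.SOCIAL_MANIPULATION,
}

print(incident["domain"].value)  # engineered social manipulation
```

An enumeration keeps the taxonomy closed (exactly four domains), so any incident-coding step can be checked for exhaustiveness against it.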

References
  1. M. Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” Future of Humanity Institute, University of Oxford, 2018.
  2. I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” in Proc. ICLR, 2015.
  3. N. Carlini and D. Wagner, “Towards Evaluating the Robustness of Neural Networks,” in Proc. IEEE S&P, 2017, pp. 39-57.
  4. A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial Examples in the Physical World,” in Proc. ICLR, 2017.
  5. F. Tramèr et al., “Stealing Machine Learning Models via Prediction APIs,” in Proc. USENIX Security, 2016, pp. 601-618.
  6. B. Biggio and F. Roli, “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,” Pattern Recognition, vol. 84, pp. 317-331, 2018.
  7. S. Shen et al., “Backdoor Pre-trained Models Can Transfer to All,” in Proc. ACM CCS, 2021, pp. 3141-3158.
  8. Y. Li et al., “Deepfake Detection: A Systematic Literature Review,” IEEE Access, vol. 10, pp. 139652-139671, 2022.
  9. H. Pearce et al., “Examining Zero-Shot Vulnerability Repair with Large Language Models,” in Proc. IEEE S&P, 2023, pp. 2339-2356.
  10. N. Papernot et al., “The Limitations of Deep Learning in Adversarial Settings,” in Proc. IEEE EuroS&P, 2016, pp. 372-387.
  11. S. Malone et al., “Large Language Models and Cybersecurity: A Systematic Literature Review,” arXiv:2401.15283, 2024.
  12. M. Chen et al., “Evaluating Large Language Models Trained on Code,” arXiv:2107.03374, 2021.
  13. D. Kang et al., “Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks,” arXiv:2302.05733, 2023.
  14. A. Gupta and H. Sundaram, “Security Threats and Mitigation Strategies in AI-Powered Security Systems,” IEEE Security & Privacy, vol. 21, no. 3, pp. 45-54, 2023.
  15. J. Wei et al., “Jailbroken: How Does LLM Safety Training Fail?,” in Proc. NeurIPS, 2023.
  16. Y. Mirsky and W. Lee, “The Creation and Detection of Deepfakes: A Survey,” ACM Computing Surveys, vol. 54, no. 1, 2021.
  17. A. Vaswani et al., “Attention is All You Need,” in Proc. NeurIPS, 2017.
  18. I. Goodfellow et al., “Generative Adversarial Networks,” Communications of the ACM, vol. 63, no. 11, pp. 139-144, 2020.
  19. J. Ho, A. Jain, and P. Abbeel, “Denoising Diffusion Probabilistic Models,” in Proc. NeurIPS, 2020.
  20. C. Zhang et al., “Security and Privacy in Machine Learning: A Survey,” IEEE TDSC, vol. 19, no. 5, pp. 3359-3378, 2022.
  21. NIST, “Artificial Intelligence Risk Management Framework,” National Institute of Standards and Technology, 2023.
  22. T. Brown et al., “Language Models are Few-Shot Learners,” in Proc. NeurIPS, 2020.
  23. OpenAI, “GPT-4 System Card,” Technical Report, 2023.
  24. R. Anderson et al., “Measuring the Changing Cost of Cybercrime,” in Proc. WEIS, 2019.
  25. S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 4th ed., Pearson, 2020.
Index Terms

Computer Science
Information Sciences

Keywords

Artificial Intelligence Security, Generative Models, Offensive AI, Language Model Exploitation, Deepfake Technology, Adversarial AI, Automated Exploitation, Cybersecurity Threats, Attack Workflows