Research Article

Adaptive Reinforcement Learning Framework for Automated Incident Response to Insider Threats

by O.O. Olasehinde, O.C. Olayemi, B.K. Alese, O.O. Akinade
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 77
Year of Publication: 2026
DOI: 10.5120/ijca2026926180

O.O. Olasehinde, O.C. Olayemi, B.K. Alese, and O.O. Akinade. Adaptive Reinforcement Learning Framework for Automated Incident Response to Insider Threats. International Journal of Computer Applications 187, 77 (Jan 2026), 16-22. DOI=10.5120/ijca2026926180

@article{10.5120/ijca2026926180,
  author    = {O.O. Olasehinde and O.C. Olayemi and B.K. Alese and O.O. Akinade},
  title     = {Adaptive Reinforcement Learning Framework for Automated Incident Response to Insider Threats},
  journal   = {International Journal of Computer Applications},
  issue_date = {Jan 2026},
  volume    = {187},
  number    = {77},
  month     = {Jan},
  year      = {2026},
  issn      = {0975-8887},
  pages     = {16-22},
  numpages  = {7},
  url       = {https://ijcaonline.org/archives/volume187/number77/adaptive-reinforcement-learning-framework-for-automated-incident-response-to-insider-threats/},
  doi       = {10.5120/ijca2026926180},
  publisher = {Foundation of Computer Science (FCS), NY, USA},
  address   = {New York, USA}
}
Abstract

Insider threats remain one of the most difficult security challenges because malicious actions often originate from trusted users and evolve over time. Traditional rule-based and static incident response systems struggle to adapt to changing insider behaviours, leading to delayed or suboptimal responses. This study proposes an adaptive incident response framework based on reinforcement learning that dynamically selects response actions according to observed system states and threat severity. The framework models incident response as a sequential decision-making process, where an agent learns optimal response policies through interaction with a simulated enterprise environment. States capture security context and threat indicators, actions represent response options, and rewards are designed to balance rapid containment, operational continuity, and false-positive reduction. Experimental evaluation demonstrates that the proposed approach consistently outperforms static and heuristic-based baselines in response effectiveness, convergence stability, and adaptability to evolving attack patterns. Results show improved response accuracy, faster containment times, and stable learning behaviour across training episodes. The Q-Learning model outperformed Support Vector Machine and Random Forest models, reaching 96.8 percent accuracy, an F1 score of 0.944, and a Matthews Correlation Coefficient (MCC) of 0.917. When connected to Security Orchestration, Automation and Response (SOAR) platforms, the system can make fast, context-aware decisions that reduce analyst workload and shorten response time. The findings confirm that reinforcement learning offers a practical and scalable solution for adaptive insider threat incident response. This work contributes an automated decision framework that improves resilience, reduces manual intervention, and supports trustworthy security operations in dynamic environments.
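The abstract frames incident response as a sequential decision problem solved with tabular Q-learning. The following is a minimal, self-contained sketch of that setup; the state space (threat-severity levels), action set (monitor, alert, isolate), reward values, and transition behaviour are all illustrative assumptions, not the paper's actual environment or parameters.

```python
import random

# Assumed toy state/action spaces, not the paper's actual environment.
STATES = ["benign", "suspicious", "malicious"]          # observed threat severity
ACTIONS = ["monitor", "alert_analyst", "isolate_host"]  # candidate responses

# Hypothetical reward table balancing containment against operational
# continuity: isolating a benign host is a costly false positive, while
# merely monitoring a malicious insider misses containment entirely.
REWARD = {
    ("benign", "monitor"): 1.0,
    ("benign", "alert_analyst"): -0.5,
    ("benign", "isolate_host"): -2.0,
    ("suspicious", "monitor"): -0.5,
    ("suspicious", "alert_analyst"): 1.0,
    ("suspicious", "isolate_host"): -0.5,
    ("malicious", "monitor"): -3.0,
    ("malicious", "alert_analyst"): 0.5,
    ("malicious", "isolate_host"): 2.0,
}

def train(steps=5000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning (Watkins & Dayan, 1992) on the toy environment."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(steps):
        # Epsilon-greedy exploration over response actions.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = REWARD[(state, action)]
        next_state = rng.choice(STATES)  # threat level evolves stochastically
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update rule.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under these assumed rewards, the learned greedy policy monitors benign activity, escalates suspicious activity to an analyst, and isolates hosts on confirmed malicious behaviour, which is the trade-off the abstract describes between rapid containment and operational continuity.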

References
  1. U.S. CISA, “Guidance for SIEM and SOAR Implementation,” resource hub, May 27, 2025. https://www.cisa.gov/resources-tools/resources/guidance-siem-and-soar-implementation
  2. Y. Gong, S. Cui, S. Liu, B. Jiang, C. Dong, and Z. Lu, “Graph-based insider threat detection: A survey,” Computer Networks, vol. 254, 2024, Art. 110757. DOI: 10.1016/j.comnet.2024.110757. https://www.sciencedirect.com/science/article/abs/pii/S1389128624005899
  3. C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning, vol. 8, pp. 279–292, 1992. DOI: 10.1007/BF00992698.
  4. R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA: MIT Press, 2018. https://mitpress.mit.edu/9780262039246/reinforcement-learning/
  5. A. M. K. Adawadkar and N. Kulkarni, “Cyber-security and reinforcement learning — A brief survey,” Engineering Applications of Artificial Intelligence, vol. 114, 2022, Art. 105116. DOI: 10.1016/j.engappai.2022.105116.
  6. I. Miles et al., “Reinforcement Learning for Autonomous Resilient Cyber Defence,” Frazer-Nash Consultancy White Paper, 2024. https://www.fnc.co.uk/media/mwcnckij/us-24-milesfarmer-reinforcementlearningforautonomousresilientcyberdefence-wp.pdf
  7. B. Lindauer, “Insider Threat Test Dataset.” Carnegie Mellon University, Software Engineering Institute, 2020. DOI: 10.1184/R1/12841247.v1.
  8. J. Glasser and B. Lindauer, “Bridging the Gap: A Pragmatic Approach to Generating Insider Threat Data,” 2013 IEEE Security and Privacy Workshops, pp. 98–104, 2013. DOI: 10.1109/SPW.2013.37.
  9. H. Han, W.-Y. Wang, and B.-H. Mao, “Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning,” in Advances in Intelligent Computing, 2005, pp. 878–887. DOI: 10.1007/11538059_91.
  10. Y. Sun, M. Zhang, and C. Li, “Borderline SMOTE algorithm and feature selection-based network anomalies detection strategy,” Energies, vol. 15, no. 13, 2022, Art. 4751. DOI: 10.3390/en15134751.
  11. D. Chicco and G. Jurman, “The advantages of the MCC over F1 score and accuracy in binary classification evaluation,” BMC Genomics, vol. 21, no. 6, 2020. DOI: 10.1186/s12864-019-6413-7.
  12. Y. Gong, S. Cui, S. Liu, B. Jiang, C. Dong, and Z. Lu, “Graph-based insider threat detection: A survey,” Computer Networks, vol. 254, 2024, Art. 110757. DOI: 10.1016/j.comnet.2024.110757. https://www.sciencedirect.com/science/article/abs/pii/S1389128624005899
  13. X. Tao, Z. Cao, and J. Huang, “An insider threat detection method based on improved Test-Time Training,” High-Confidence Computing, 2025. (Online first) https://www.sciencedirect.com/science/article/pii/S2667295224000862.
  14. T. Tian, X. Luo, and X. Li, “Insider threat detection for specific threat scenarios,” Cybersecurity, 2025. https://cybersecurity.springeropen.com/articles/10.1186/s42400-024-00321-w
Index Terms

Computer Science
Information Sciences

Keywords

Reinforcement Learning, Q-Learning, Insider Threat Detection, Cyber Incident Response, Security Orchestration Automation and Response (SOAR), Adaptive Cyber Defense