Research Article

Evaluating the Vulnerability of Deep Learning Models in Medical Imaging to Adversarial Perturbations

by Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 55
Year of Publication: 2025
Authors: Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda
10.5120/ijca2025925973

Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda. Evaluating the Vulnerability of Deep Learning Models in Medical Imaging to Adversarial Perturbations. International Journal of Computer Applications. 187, 55 (Nov 2025), 46-60. DOI=10.5120/ijca2025925973

@article{ 10.5120/ijca2025925973,
author = { Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda },
title = { Evaluating the Vulnerability of Deep Learning Models in Medical Imaging to Adversarial Perturbations },
journal = { International Journal of Computer Applications },
issue_date = { Nov 2025 },
volume = { 187 },
number = { 55 },
month = { Nov },
year = { 2025 },
issn = { 0975-8887 },
pages = { 46-60 },
numpages = { 15 },
url = { https://ijcaonline.org/archives/volume187/number55/evaluating-the-vulnerability-of-deep-learning-models-in-medical-imaging-to-adversarial-perturbations/ },
doi = { 10.5120/ijca2025925973 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Hamuza Senyonga
%A Charity Mahwire
%A Thelma Chimusoro
%A Enock Katenda
%T Evaluating the Vulnerability of Deep Learning Models in Medical Imaging to Adversarial Perturbations
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 55
%P 46-60
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Deep learning has revolutionized medical imaging, but its vulnerability to adversarial attacks poses a serious risk to clinical use. This paper compares the robustness of convolutional neural networks (CNNs) and Vision Transformers (ViTs) trained on the NIH ChestX-ray14 dataset to detect pneumonia. Both models achieved high baseline accuracy (>90 percent), yet their performance degraded when attacked with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, and Carlini & Wagner (CW) methods, with PGD and CW proving the most disruptive. Four defense strategies were also evaluated: adversarial training, input preprocessing, ensemble modelling, and adversarial detection. Adversarial training offered the strongest protection, at the cost of lower clean-data accuracy; preprocessing and ensembles provided partial resistance; and detection strategies flagged many naive adversarial inputs. No single defence, however, was sufficient against every attack. These findings highlight the need for layered defence practices and raise ethical and regulatory issues around trust, liability, and patient safety, underscoring the importance of robust and transparent AI in healthcare.
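
To make the attack families named in the abstract concrete, the sketch below shows FGSM, the simplest of the four, as a single gradient-sign step. It is a minimal illustration, not the authors' experimental code: the PyTorch classifier, the epsilon value, and the [0, 1] pixel scaling are assumptions made only for this example.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=0.03):
        # Fast Gradient Sign Method (Goodfellow et al., reference 19):
        # perturb each pixel by one epsilon-sized step in the direction
        # of the sign of the loss gradient with respect to the input.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

Measuring accuracy on the perturbed batch and comparing it with the clean baseline is the basic robustness check described above; PGD iterates this step several times while projecting back into an epsilon-ball around the original image.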

References
  1. Ge, Z., S. Demyanov, R. Chakravorty, A. Bowling, and R. Garnavi. 2017. “Skin Disease Recognition Using Deep Saliency Features and Multimodal Learning of Dermoscopy and Clinical Images.” In Medical Image Computing and Computer Assisted Intervention—MICCAI 2017: 20th International Conference, 250–58. Quebec. doi: 10.1007/978-3-319-66179-7_29.
  2. Pereira, S., A. Pinto, V. Alves, and C. A. Silva. 2016. “Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.” IEEE Trans Med Imaging 35, no. 5 (May): 1240–51. doi: 10.1109/TMI.2016.2538465.
  3. Dong, J., J. Chen, X. Xie, J. Lai, and H. Chen. 2024. “Survey on Adversarial Attack and Defense for Medical Image Analysis: Methods and Challenges.” November. doi: 10.1145/3702638.
  4. Finlayson, S. G., J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane. 2019. “Adversarial attacks on medical machine learning.” Science 363, no. 6433 (March): 1287–89. doi: 10.1126/science.aaw4399.
  5. Mavire, S., K. Bernard Muhwati, C. D. Kudaro, and J. Awoleye. 2025. “A Federated Learning Approach to Secure AI-Based Patient Outcome Prediction Across Hospitals.” Int J Sci Manag Res 08, no. 08: 52–72. doi: 10.37502/IJSMR.2025.8806.
  6. Javed, H., S. El-Sappagh, and T. Abuhmed. 2024. “Robustness in deep learning models for medical diagnostics: security and adversarial challenges towards robust AI applications.” Artif Intell Rev 58, no. 1 (November): 12. doi: 10.1007/s10462-024-11005-9.
  7. Shah, A., et al. 2018. “Susceptibility to misdiagnosis of adversarial images by deep learning based retinal image analysis algorithms.” In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 1454–57. IEEE. doi: 10.1109/ISBI.2018.8363846.
  8. Paschali, M., S. Conjeti, F. Navarro, and N. Navab. 2018. “Generalizability vs. Robustness: Investigating Medical Imaging Networks Using Adversarial Examples.” In Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science, edited by A. Frangi, J. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger, 493–501. Cham: Springer. doi: 10.1007/978-3-030-00928-1_56.
  9. Mavire, S., K. B. Muhwati, N. Kota, and J. A. Awoleye. 2025. “Mitigating Ransomware in the Energy and Healthcare Sectors through Layered Defense Strategies.” Int J Sci Manag Res 08, no. 04: 143–66. doi: 10.37502/IJSMR.2025.8609.
  10. Kotia, J., A. Kotwal, and R. Bharti. 2020. “Risk Susceptibility of Brain Tumor Classification to Adversarial Attacks,” 181–87. doi: 10.1007/978-3-030-31964-9_17.
  11. Li, Y., H. Zhang, C. Bermudez, Y. Chen, B. A. Landman, and Y. Vorobeychik. 2020. “Anatomical context protects deep learning from adversarial perturbations in medical imaging.” Neurocomputing 379 (February): 370–78. doi: 10.1016/j.neucom.2019.10.085.
  12. Ma, X., et al. 2021. “Understanding adversarial attacks on deep learning based medical image analysis systems.” Pattern Recognit 110 (February): 107332. doi: 10.1016/j.patcog.2020.107332.
  13. Zhou, Q., et al. 2021. “A machine and human reader study on AI diagnosis model safety under attacks of adversarial images.” Nat Commun 12, no. 1 (December): 7281. doi: 10.1038/s41467-021-27577-x.
  14. Esteva, A., et al. 2017. “Dermatologist-level classification of skin cancer with deep neural networks.” Nature 542, no. 7639 (February): 115–18. doi: 10.1038/nature21056.
  15. Roth, H. R., et al. 2015. “DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation.” June. http://arxiv.org/abs/1506.06448.
  16. Litjens, G., et al. 2017. “A survey on deep learning in medical image analysis.” Med Image Anal 42 (December): 60–88. doi: 10.1016/j.media.2017.07.005.
  17. McCradden, M. D., E. A. Stephenson, and J. A. Anderson. 2020. “Clinical research underlies ethical integration of healthcare artificial intelligence.” Nat Med 26, no. 9 (September): 1325–26. doi: 10.1038/s41591-020-1035-9.
  18. Szegedy, C., et al. 2014. “Intriguing properties of neural networks.” February. http://arxiv.org/abs/1312.6199.
  19. Goodfellow, I. J., J. Shlens, and C. Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” CoRR abs/1412.6. https://api.semanticscholar.org/CorpusID:6706414.
  20. Madry, A., A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. 2019. “Towards Deep Learning Models Resistant to Adversarial Attacks.” September. http://arxiv.org/abs/1706.06083.
  21. Carlini, N., and D. Wagner. 2017. “Towards Evaluating the Robustness of Neural Networks.” In 2017 IEEE Symposium on Security and Privacy (SP), 39–57. IEEE. doi: 10.1109/SP.2017.49.
  22. Moosavi-Dezfooli, S.-M., A. Fawzi, and P. Frossard. 2016. “DeepFool: a simple and accurate method to fool deep neural networks.” July. http://arxiv.org/abs/1511.04599.
  23. Papernot, N., P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. 2015. “The Limitations of Deep Learning in Adversarial Settings.” November. http://arxiv.org/abs/1511.07528.
  24. Demontis, A., et al. 2019. “Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks.” June. http://arxiv.org/abs/1809.02861.
  25. Moosavi-Dezfooli, S.-M., A. Fawzi, O. Fawzi, and P. Frossard. 2017. “Universal adversarial perturbations.” March. http://arxiv.org/abs/1610.08401.
  26. Hirano, H., A. Minagi, and K. Takemoto. 2021. “Universal adversarial attacks on deep neural networks for medical image classification.” BMC Med Imaging 21, no. 1 (December): 9. doi: 10.1186/s12880-020-00530-y.
  27. Kumar, K. N., C. Vishnu, R. Mitra, and C. K. Mohan. 2021. “Black-box Adversarial Attacks in Autonomous Vehicle Technology.” January. http://arxiv.org/abs/2101.06092.
  28. Niu, Y., et al. 2019. “Pathological Evidence Exploration in Deep Retinal Image Diagnosis.” Proc AAAI Conf Artif Intell 33, no. 01 (July): 1093–1101. doi: 10.1609/aaai.v33i01.33011093.
  29. Dugas, E., J. Jared, and W. Cukierski. n.d. “Diabetic Retinopathy Detection.” Kaggle. Accessed September 13, 2025. https://www.kaggle.com/c/diabetic-retinopathy-detection/.
  30. Wang, X., Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers. 2017. “ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases.” December. doi: 10.1109/CVPR.2017.369.
  31. Kovalev, V., and D. Voynov. 2019. “Influence of Control Parameters and the Size of Biomedical Image Datasets on the Success of Adversarial Attacks.” April. http://arxiv.org/abs/1904.06964.
  32. Rao, C., et al. 2020. “A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays.” March. http://arxiv.org/abs/2003.13969.
  33. Sorin, V., S. Soffer, B. S. Glicksberg, Y. Barash, E. Konen, and E. Klang. 2023. “Adversarial attacks in radiology – A systematic review.” Eur J Radiol 167 (October): 111085. doi: 10.1016/j.ejrad.2023.111085.
  34. Hirano, H., K. Koga, and K. Takemoto. 2020. “Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks.” PLoS One 15, no. 12 (December): e0243963. doi: 10.1371/journal.pone.0243963.
  35. Bortsova, G., et al. 2021. “Adversarial attack vulnerability of medical image analysis systems: Unexplored factors.” Med Image Anal 73 (October): 102141. doi: 10.1016/j.media.2021.102141.
  36. Ren, X., L. Zhang, Q. Wang, and D. Shen. 2019. “Brain MR Image Segmentation in Small Dataset with Adversarial Defense and Task Reorganization.” June. http://arxiv.org/abs/1906.10400.
  37. Taghanaki, S. A., K. Abhishek, S. Azizi, and G. Hamarneh. 2019. “A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 11332–41. IEEE. doi: 10.1109/CVPR.2019.01160.
  38. Kansal, K., P. S. Krishna, P. B. Jain, S. R, P. Honnavalli, and S. Eswaran. 2022. “Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach.” Heliyon 8, no. 10 (October): e11209. doi: 10.1016/j.heliyon.2022.e11209.
  39. Liu, S., et al. 2021. “No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting With Adversarial Attacks.” IEEE Trans Med Imaging 40, no. 1 (January): 335–45. doi: 10.1109/TMI.2020.3026261.
  40. Watson, M., and N. Al Moubayed. 2021. “Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning.” May. http://arxiv.org/abs/2105.01959.
  41. Xie, C., J. Wang, Z. Zhang, Z. Ren, and A. Yuille. 2018. “Mitigating Adversarial Effects Through Randomization.” February. http://arxiv.org/abs/1711.01991.
Index Terms

Computer Science
Information Sciences

Keywords

Adversarial Attacks; Deep Learning; Medical Imaging; Convolutional Neural Networks (CNNs); Vision Transformers (ViTs); Robustness; Adversarial Training; Healthcare AI; Patient Safety; Cybersecurity in Medicine