International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 55
Year of Publication: 2025
Authors: Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda
DOI: 10.5120/ijca2025925973
Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda. Evaluating the Vulnerability of Deep Learning Models in Medical Imaging to Adversarial Perturbations. International Journal of Computer Applications. 187, 55 (Nov 2025), 46-60. DOI=10.5120/ijca2025925973
Deep learning has revolutionized medical imaging, yet its vulnerability to adversarial attacks poses a serious risk to clinical use. This paper compares the robustness of convolutional neural networks (CNNs) and Vision Transformers (ViTs) trained on the NIH ChestX-ray14 dataset for pneumonia detection. Both models achieved high baseline accuracy (above 90 percent) on clean data but degraded substantially under Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, and Carlini and Wagner (CW) attacks, with PGD and CW proving the most disruptive. Defense strategies were also evaluated, including adversarial training, input preprocessing, ensemble modelling, and adversarial detection. Adversarial training provided the best protection at the cost of lower clean-data accuracy, preprocessing and ensembles offered partial resistance, and detection strategies identified many naive adversarial inputs; however, no single defence was sufficient to counter every attack. The findings highlight the need for layered defence practices and raise ethical and regulatory issues related to trust, liability, and patient safety, underscoring the importance of robust and transparent AI in healthcare.
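The gradient-based attacks named in the abstract follow standard formulations. As an illustrative sketch only, not the authors' implementation, the snippet below shows how FGSM and PGD perturbations could be generated in PyTorch against a pneumonia classifier; `model`, `epsilon`, `alpha`, and `steps` are assumed placeholder names and values.

```python
# Illustrative sketch (not the paper's code): FGSM and PGD adversarial
# examples in PyTorch against a chest X-ray classifier. The model handle
# and attack hyperparameters are assumptions for demonstration.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def pgd_attack(model, images, labels, epsilon=0.03, alpha=0.007, steps=10):
    """Iterative FGSM with projection back onto the L-infinity epsilon ball."""
    x_orig = images.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        # Project the perturbation back into the allowed epsilon ball.
        x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Mixing such perturbed batches into the training loss is one common way to realize the adversarial training defence that the abstract reports as the most effective, at the cost of some clean-data accuracy.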