Research Article

Explainable Federated Learning: Taxonomy, Evaluation Frameworks, and Emerging Challenges

by Rishika Singh, Swati Joshi
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 52
Year of Publication: 2025
Authors: Rishika Singh, Swati Joshi
DOI: 10.5120/ijca2025925888

Rishika Singh, Swati Joshi. Explainable Federated Learning: Taxonomy, Evaluation Frameworks, and Emerging Challenges. International Journal of Computer Applications. 187, 52 (Nov 2025), 52-58. DOI=10.5120/ijca2025925888

@article{ 10.5120/ijca2025925888,
  author     = { Rishika Singh and Swati Joshi },
  title      = { Explainable Federated Learning: Taxonomy, Evaluation Frameworks, and Emerging Challenges },
  journal    = { International Journal of Computer Applications },
  issue_date = { Nov 2025 },
  volume     = { 187 },
  number     = { 52 },
  month      = { Nov },
  year       = { 2025 },
  issn       = { 0975-8887 },
  pages      = { 52-58 },
  numpages   = { 7 },
  url        = { https://ijcaonline.org/archives/volume187/number52/explainable-federated-learning-taxonomy-evaluation-frameworks-and-emerging-challenges/ },
  doi        = { 10.5120/ijca2025925888 },
  publisher  = { Foundation of Computer Science (FCS), NY, USA },
  address    = { New York, USA }
}
%0 Journal Article
%A Rishika Singh
%A Swati Joshi
%T Explainable Federated Learning: Taxonomy, Evaluation Frameworks, and Emerging Challenges
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 52
%P 52-58
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The rapid integration of AI into sensitive domains such as cybersecurity, healthcare, and finance demands solutions that guarantee both data privacy and model transparency. Federated Learning (FL) is a promising paradigm that enables collaborative model training across decentralized datasets while preserving privacy, since raw data is never shared. At the same time, Explainable AI (XAI) makes otherwise opaque models interpretable, fostering stakeholder trust and supporting regulatory compliance. Recent research has investigated the intersection of FL and XAI in tasks such as intrusion detection, fraud detection, and medical diagnosis, using techniques including SHAP, LIME, Grad-CAM, fuzzy logic, and rule-based systems. Despite the strong performance of these efforts, open issues remain around scalability, non-IID data, privacy–interpretability trade-offs, standardized evaluation metrics, and resilience to adversarial manipulation. This review compiles the present state of research, identifies important gaps, highlights methodological trends, and suggests future directions. Integrating FL and XAI could resolve these issues, yielding trustworthy, private, and interpretable AI systems for high-stakes settings where security and explainability are crucial.
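The two ingredients the abstract combines can be illustrated with a minimal sketch (not the authors' method, and using synthetic data): federated averaging, where clients share only model parameters rather than raw records, followed by an intrinsic explanation, here simply the magnitude of each weight in a linear model as a stand-in for feature importance.

```python
import numpy as np

# Minimal FedAvg sketch: each client runs local gradient descent on a
# linear regression over its private data; only weight vectors are
# averaged at the server, so raw data never leaves the client.
rng = np.random.default_rng(0)
true_w = np.array([2.0, 0.0, -1.0])  # feature 1 is deliberately irrelevant

def client_update(w_global, X, y, lr=0.1, epochs=50):
    """Local training: start from the global weights, return updated local weights."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Three clients, each with its own private dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):  # federated rounds
    local_weights = [client_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)  # FedAvg aggregation step

# Intrinsic explainability: for a linear model, |weight| ranks features,
# revealing globally that feature 1 contributes almost nothing.
importance = np.abs(w_global)
print(importance)
```

Post-hoc tools such as SHAP or LIME play the analogous role for non-linear models, where weights alone no longer explain predictions; the privacy–interpretability tension the abstract mentions arises because such explanations can leak information about the local data they are computed on.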

References
  1. Briola E, Nikolaidis CC, Perifanis V, Pavlidis N, Efraimidis P. A federated explainable AI model for breast cancer classification. In: Proc Eur Interdiscip Cybersecurity Conf. 2024 Jun 5;194–201.
  2. Dipto SM, Reza MT, Mim NT, Ksibi A, Alsenan S, Uddin J, Samad MA. An analysis of decipherable red blood cell abnormality detection under federated environment leveraging XAI incorporated deep learning. Sci Rep. 2024 Oct 27;14(1):25664.
  3. Jabarulla MY, Uden T, Jack T, Beerbaum P, Oeltze-Jafra S. Artificial intelligence in pediatric echocardiography: exploring challenges, opportunities, and clinical applications with explainable AI and federated learning. arXiv preprint arXiv:2411.10255. 2024 Nov 15.
  4. Raza A, Tran KP, Koehl L, Li S. Designing ECG monitoring healthcare system with federated transfer learning and explainable AI. Knowl Based Syst. 2022 Jan 25;236:107763.
  5. Mastoi QU, Latif S, Brohi S, Ahmad J, Alqhatani A, Alshehri MS, Al Mazroa A, Ullah R. Explainable AI in medical imaging: an interpretable and collaborative federated learning model for brain tumor classification. Front Oncol. 2025 Feb 27;15:1535478.
  6. Sharma MA, Raj BG, Ramamurthy B, Bhaskar RH. Credit card fraud detection using deep learning based on auto-encoder. In: ITM Web Conf. 2022;50:01001.
  7. Lopez-Ramos LM, Leiser F, Rastogi A, Hicks S, Strümke I, Madai VI, Budig T, Sunyaev A, Hilbert A. Interplay between federated learning and explainable artificial intelligence: a scoping review. arXiv preprint arXiv:2411.05874. 2024 Nov 7.
  8. Ducange P, Marcelloni F, Renda A, Ruffini F. Federated learning of XAI models in healthcare: a case study on Parkinson’s disease. Cogn Comput. 2024 Nov;16(6):3051–76.
  9. Bárcena JL, Daole M, Ducange P, Marcelloni F, Renda A, Ruffini F, Schiavo A. Fed-XAI: Federated learning of explainable artificial intelligence models. In: Proc XAI.it@AI 2022;104–117.
  10. López-Blanco R, Alonso RS, González-Arrieta A, Chamoso P, Prieto J. Federated learning of explainable artificial intelligence (FED-XAI): A review. In: Int Symp Distrib Comput Artif Intell. Cham: Springer; 2023 Jul 12. p. 318–26.
  11. Hickman E, Petrin M. Trustworthy AI and corporate governance: the EU’s ethics guidelines for trustworthy artificial intelligence from a company law perspective. Eur Bus Organ Law Rev. 2021 Dec;22(4):593–625.
  12. Silva PR, Vinagre J, Gama J. Towards federated learning: An overview of methods and applications. WIREs Data Min Knowl Discov. 2023 Mar;13(2):e1486.
  13. Beutel DJ, Topal T, Mathur A, Qiu X, Fernandez-Marques J, Gao Y, Sani L, Li KH, Parcollet T, de Gusmão PP, Lane ND. Flower: A friendly federated learning framework. arXiv preprint arXiv:2007.14390. 2020.
  14. Van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal. 2022 Jul 1;79:102470.
  15. Borys K, Schmitt YA, Nauta M, Seifert C, Krämer N, Friedrich CM, Nensa F. Explainable AI in medical imaging: An overview for clinical practitioners – beyond saliency-based XAI approaches. Eur J Radiol. 2023 May 1;162:110786.
  16. Konečný J, McMahan HB, Ramage D, Richtárik P. Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527. 2016 Oct 8.
  17. Selvaraju RR, Das A, Vedantam R, Cogswell M, Parikh D, Batra D. Grad-CAM: Why did you say that? arXiv preprint arXiv:1611.07450. 2016 Nov 22.
  18. Assaf R, Giurgiu I, Bagehorn F, Schumann A. Mtex-CNN: Multivariate time series explanations for predictions with convolutional neural networks. In: IEEE Int Conf Data Min (ICDM). 2019 Nov 8. p. 952–57.
  19. Deshpande RS, Ambatkar PV. Interpretable deep learning models: Enhancing transparency and trustworthiness in explainable AI. In: Proc Int Conf Sci Eng. 2023 Feb;11(1):1352–63.
  20. Kothandaraman D, Praveena N, Varadarajkumar K, Madhav Rao B, Dhabliya D, Satla S, Abera W. Intelligent forecasting of air quality and pollution prediction using machine learning. Adsorpt Sci Technol. 2022 Jun 27;2022:5086622.
  21. Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier. In: Proc 22nd ACM SIGKDD Int Conf Knowl Discov Data Min. 2016 Aug 13. p. 1135–44.
  22. Morris C, Ritzert M, Fey M, Hamilton WL, Lenssen JE, Rattan G, Grohe M. Weisfeiler and Leman go neural: Higher-order graph neural networks. In: Proc AAAI Conf Artif Intell. 2019 Jul 17;33(1):4602–09.
  23. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I. Attention is all you need. Adv Neural Inf Process Syst. 2017;30.
  24. Gunning D, Aha D. DARPA’s explainable artificial intelligence (XAI) program. AI Mag. 2019 Jun 24;40(2):44–58.
  25. Adadi A, Berrada M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access. 2018 Sep 16;6:52138–60.
  26. Gilpin LH, Bau D, Yuan BZ, Bajwa A, Specter M, Kagal L. Explaining explanations: An overview of interpretability of machine learning. In: IEEE 5th Int Conf Data Sci Adv Anal (DSAA). 2018 Oct 1. p. 80–89.
  27. Renda A, Ducange P, Marcelloni F, Sabella D, Filippou MC, Nardini G, Stea G, Virdis A, Micheli D, Rapone D, Baltar LG. Federated learning of explainable AI models in 6G systems: Towards secure and automated vehicle networking. Information. 2022 Aug 20;13(8):395.
  28. Aljunaid SK, Almheiri SJ, Dawood H, Khan MA. Secure and transparent banking: explainable AI-driven federated learning model for financial fraud detection. J Risk Financ Manag. 2025 Mar 27;18(4):179.
  29. Khan MA, Azhar M, Ibrar K, Alqahtani A, Alsubai S, Binbusayyis A, Kim YJ, Chang B. COVID-19 classification from chest X-ray images: a framework of deep explainable artificial intelligence. Comput Intell Neurosci. 2022;2022(1):4254631.
  30. Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform. 2021 Jan 1;113:103655.
  31. Zhang QS, Zhu SC. Visual interpretability for deep learning: a survey. Front Inf Technol Electron Eng. 2018 Jan;19(1):27–39.
  32. Timofte EM, Dimian M, Graur A, Potorac AD, Balan D, Croitoru I, Hrițcan DF, Pușcașu M. Federated learning for cybersecurity: a privacy-preserving approach. Appl Sci. 2025 Jun 18;15(12):6878.
Index Terms

Computer Science
Information Sciences

Keywords

Federated Learning, Explainable AI, Privacy-preserving AI, Post-hoc Explanations, Intrinsic Explainability, Healthcare, FinTech, Cybersecurity