Research Article

Assessing Public Ability to Distinguish AI-Generated from Real News: Accuracy, Confidence, and Influencing Factors

by Khalid A.H. Alazawi, Nantha Kumar Subramaniam
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 69
Year of Publication: 2025
DOI: 10.5120/ijca2025926152

Khalid A.H. Alazawi, Nantha Kumar Subramaniam. Assessing Public Ability to Distinguish AI-Generated from Real News: Accuracy, Confidence, and Influencing Factors. International Journal of Computer Applications. 187, 69 (Dec 2025), 30-34. DOI=10.5120/ijca2025926152

@article{ 10.5120/ijca2025926152,
author = { Khalid A.H. Alazawi, Nantha Kumar Subramaniam },
title = { Assessing Public Ability to Distinguish AI-Generated from Real News: Accuracy, Confidence, and Influencing Factors },
journal = { International Journal of Computer Applications },
issue_date = { Dec 2025 },
volume = { 187 },
number = { 69 },
month = { Dec },
year = { 2025 },
issn = { 0975-8887 },
pages = { 30-34 },
numpages = { 5 },
url = { https://ijcaonline.org/archives/volume187/number69/assessing-public-ability-to-distinguish-ai-generated-from-real-news-accuracy-confidence-and-influencing-factors/ },
doi = { 10.5120/ijca2025926152 },
publisher = { Foundation of Computer Science (FCS), NY, USA },
address = { New York, USA }
}
%0 Journal Article
%A Khalid A.H. Alazawi
%A Nantha Kumar Subramaniam
%T Assessing Public Ability to Distinguish AI-Generated from Real News: Accuracy, Confidence, and Influencing Factors
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 69
%P 30-34
%D 2025
%I Foundation of Computer Science (FCS), NY, USA
Abstract

The rapid advancement of generative artificial intelligence (AI) has intensified concerns about the spread of highly convincing synthetic news. This study examines the public’s ability to distinguish between real and AI-generated news, investigates the misalignment between confidence and actual performance, and identifies the demographic, behavioural, and technological factors that influence detection accuracy. A total of 382 participants completed an online survey containing one real and one AI-generated news article; after rigorous preprocessing to remove bot-generated, inattentive, and uniform (“lazy”) responses, 210 valid cases were analysed. Results reveal a significant detection challenge: only 8% of respondents accurately identified both articles, while 32% failed to correctly classify either one. Despite these low accuracy levels, confidence was disproportionately high, with approximately 62% reporting that they were “mostly confident” or “fully confident” in their judgments. This confidence–accuracy mismatch highlights a critical cognitive vulnerability that may amplify susceptibility to misinformation. Regression analyses further show that commonly assumed protective factors, such as education level, age, and news-checking frequency, do not reliably predict the ability to detect AI-generated content. Only technological proficiency displayed a meaningful positive correlation with performance, although the effect was modest. These findings challenge traditional assumptions about digital literacy and indicate that demographic attributes alone cannot safeguard users against sophisticated AI-driven deception. Instrument reliability was strong (Cronbach’s α = .89; Composite Reliability = .80), affirming the stability of the measures used to assess credibility judgments. The implications of this study underscore the urgent need for redefined digital literacy frameworks that emphasize critical reading, linguistic awareness, and metacognitive regulation. Technological interventions, such as AI-based detection tools and transparency mechanisms for synthetic content, are also necessary to complement user education. The study concludes that in the age of generative AI, human judgment alone is insufficient to ensure news authenticity, and coordinated efforts across education, platform design, and policy are essential to preserve information integrity.
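
The abstract reports two scale-reliability statistics, Cronbach’s α = .89 and Composite Reliability (CR) = .80, for the credibility-judgment measures. As a quick illustration of how those statistics are conventionally computed, here is a minimal Python sketch; it is not the authors’ code, and the item responses and factor loadings below are hypothetical placeholders.

# A minimal sketch (not the authors' code) of the two reliability
# statistics reported in the abstract. The Likert-item data and the
# factor loadings are hypothetical placeholders.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of standardized loadings)^2 /
    #      ((sum of loadings)^2 + sum of error variances (1 - loading^2))
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())

# Hypothetical data: 210 respondents x 5 credibility items on a 1-5 scale.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(210, 1))    # shared "true" attitude per respondent
noise = rng.integers(-1, 2, size=(210, 5))  # item-level noise
items = pd.DataFrame(np.clip(base + noise, 1, 5),
                     columns=[f"q{i}" for i in range(1, 6)])

print(f"Cronbach's alpha      = {cronbach_alpha(items):.2f}")
print(f"Composite reliability = "
      f"{composite_reliability(np.array([0.70, 0.75, 0.80, 0.65, 0.72])):.2f}")

Note that alpha is computed from raw item covariances, whereas CR assumes standardized loadings from a one-factor model, which is why the two values can legitimately diverge, as they do in the reported results (.89 vs .80).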

Index Terms

Computer Science
Information Sciences

Keywords

AI-generated misinformation; News credibility detection