International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Number 88
Year of Publication: 2026
Authors: Francis Martinson
DOI: 10.5120/ijca2026926538
Francis Martinson. The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media. International Journal of Computer Applications. 187, 88 (Mar 2026), 34-38. DOI=10.5120/ijca2026926538
The rapid advancement of generative artificial intelligence technologies, including large language models, diffusion models, and deepfake systems, has created unprecedented capabilities for synthetic media generation while simultaneously enabling novel vectors for fraud, disinformation, and exploitation. Existing governance frameworks, however, fail to distinguish beneficial applications, such as AI-generated marketing content, accessibility tools, and creative expression, from harmful uses, including identity fraud, non-consensual intimate imagery, and political disinformation. This paper introduces the Authenticity Spectrum Framework (ASF), a novel five-level classification system for AI-generated content based on three dimensions: disclosure transparency, creator intent, and harm potential. Building on prior research examining exploitation architectures in gaming systems [1] and smartphone vulnerabilities [2], the ASF extends dual-use technology analysis to synthetic media governance. Through systematic analysis of current synthetic media platforms, including AI avatar generators, video generation models, and voice cloning services, we demonstrate the framework's practical application to real-world governance challenges. The framework provides regulators, platform operators, and AI developers with a standardized taxonomy for risk assessment aligned with emerging requirements under the EU AI Act, the NIST AI Risk Management Framework, and FTC guidelines.
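To make the three-dimensional classification concrete, the structure described above can be sketched in code. Note that the abstract does not enumerate the five levels or specify how the dimensions are scored, so the level labels, the 0-2 scoring scales, and the mapping function below are illustrative assumptions, not the paper's actual definitions.

```python
from dataclasses import dataclass
from enum import IntEnum


class ASFLevel(IntEnum):
    """Hypothetical labels for the five ASF levels (the paper's names may differ)."""
    LEVEL_1 = 1  # e.g., disclosed, benign, low-harm synthetic content
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5  # e.g., undisclosed, malicious, high-harm synthetic content


@dataclass
class ContentAssessment:
    """One assessment along the three ASF dimensions (assumed 0-2 scales)."""
    disclosure_transparency: int  # 0 = undisclosed .. 2 = fully labeled
    creator_intent: int           # 0 = benign .. 2 = malicious
    harm_potential: int           # 0 = low .. 2 = severe

    def asf_level(self) -> ASFLevel:
        # Illustrative mapping: invert transparency so that opacity raises
        # risk, then band the 0..6 aggregate score into levels 1..5.
        risk = (2 - self.disclosure_transparency) + self.creator_intent + self.harm_potential
        return ASFLevel(min(5, max(1, 1 + risk * 4 // 6)))
```

For example, a fully disclosed, benign, low-harm item maps to LEVEL_1, while an undisclosed, malicious, high-harm item maps to LEVEL_5 under this assumed scoring.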