| International Journal of Computer Applications |
| Foundation of Computer Science (FCS), NY, USA |
| Volume 187 - Number 63 |
| Year of Publication: 2025 |
| Authors: Sadeen Ghaleb Alsabbagh, Suhair Amer |
| DOI: 10.5120/ijca2025926054 |
Sadeen Ghaleb Alsabbagh, Suhair Amer. Algorithmic and Empirical Contributions in Linguistic Architectures for Grounding Hallucinating Models. International Journal of Computer Applications. 187, 63 (Dec 2025), 60-67. DOI=10.5120/ijca2025926054
This paper examines hallucination in large language models (LLMs) through the lens of linguistic grounding. Hallucinations, plausible yet factually inaccurate outputs, undermine reliability, interpretability, and trust in generative systems. Existing mitigation strategies, including retrieval-augmented generation, fact-checking, and reinforcement learning from human feedback (RLHF), vary in effectiveness but share a reliance on post-hoc correction rather than representational grounding. By comparing algorithmic approaches that optimize model behavior with empirical methods that depend on observed or human-guided validation, this study reveals a structural gap: current systems lack a semantic foundation to constrain generative drift. To address this, the paper introduces linguistic frames, structured templates that capture meaning, roles, and context, as a pathway for embedding semantic constraints directly into model architectures. Frame-based grounding offers a route toward architectures that balance fluency with truthfulness, positioning semantic representation as central to sustainable hallucination mitigation.
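To make the abstract's notion of a linguistic frame concrete, the sketch below shows one plausible encoding: a frame declares the semantic roles a well-formed statement must fill, and a candidate generation's role assignment is checked against the frame before it is accepted. This is a minimal illustration only; the Frame class, the validate method, and the FrameNet-style frame and role names (Commerce_buy, Buyer, Goods) are assumptions introduced for exposition, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A hypothetical linguistic frame: a named event or relation
    together with the semantic roles it licenses."""
    name: str                                # e.g. "Commerce_buy"
    required_roles: set[str]                 # roles that must be filled
    optional_roles: set[str] = field(default_factory=set)

    def validate(self, candidate: dict[str, str]) -> list[str]:
        """Return constraint violations for a candidate role assignment;
        an empty list means the frame is satisfied."""
        violations = []
        # Every required role must be bound by the candidate output.
        for role in self.required_roles:
            if role not in candidate:
                violations.append(f"missing required role: {role}")
        # The candidate may not introduce roles the frame does not license.
        allowed = self.required_roles | self.optional_roles
        for role in candidate:
            if role not in allowed:
                violations.append(f"unlicensed role: {role}")
        return violations

# Usage sketch: a decoder could reject or re-rank generations whose
# extracted role assignments violate the active frame.
buy = Frame("Commerce_buy",
            required_roles={"Buyer", "Goods"},
            optional_roles={"Seller", "Price"})
print(buy.validate({"Buyer": "Alice", "Price": "$5"}))
# -> ['missing required role: Goods']
```

In this reading, the constraint operates at generation time, gating output by semantic well-formedness, rather than correcting it after the fact as retrieval or fact-checking pipelines do.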