International Journal of Computer Applications |
Foundation of Computer Science (FCS), NY, USA |
Volume 187 - Number 27 |
Year of Publication: 2025 |
Authors: Tadeu da Ponte, Matevz Vremec, Matej Mertik |
Tadeu da Ponte, Matevz Vremec, Matej Mertik. Assessing LLMs as Cognitive Interpreters of Student Prompts: A Typological Framework. International Journal of Computer Applications. 187(27), 1-11 (Aug 2025). DOI=10.5120/ijca2025925477
This paper introduces a typology of student cognitive actions in interactions with large language model (LLM)-based tutors. Drawing on the CoMTA dataset of 188 anonymized math tutoring dialogues from Khan Academy, student-generated questions were analyzed as evidence of reasoning processes. The methodology combines a natural language processing (NLP) pipeline for semantic clustering with a dual-stage human classification of communicative intent and cognitive action. The resulting typology is synthesized into a partially ordered taxonomy that captures the complexity and multidimensionality of student thinking in AI-mediated learning contexts. Two research questions guide this investigation: (1) Can a typology be derived directly from unsupervised NLP clustering methods? and (2) To what extent can LLMs replicate expert-driven classification schemes? Findings from RQ1 reveal that semantic clustering via PCA and KMeans offers only limited alignment with pedagogically meaningful distinctions. In contrast, results from RQ2 show that several LLMs, particularly Deepseek, Grok, and Gemini, can reliably extend the typology to unseen data, demonstrating high accuracy in classification. These results suggest that scalable, cognitively informed AI tutoring may be supported by combining expert frameworks with strategically configured LLM architectures.
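As an illustration of the PCA-plus-KMeans clustering step mentioned for RQ1, the sketch below vectorizes a few hypothetical student questions, reduces the vectors with PCA, and clusters them with KMeans. The example questions, the TF-IDF vectorization (a stand-in for whatever sentence embeddings the paper uses), and all parameter choices are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: cluster student questions via PCA + KMeans.
# TF-IDF is an assumed stand-in for the paper's (unspecified) embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical student questions (not from the CoMTA dataset).
questions = [
    "What is the slope of this line?",
    "Can you explain why we divide both sides?",
    "Is my answer of 12 correct?",
    "How do I factor this quadratic?",
]

# PCA needs a dense matrix, so convert the sparse TF-IDF output.
X = TfidfVectorizer().fit_transform(questions).toarray()

# Two components and two clusters are arbitrary illustrative choices.
X_reduced = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_reduced)

print(list(labels))  # one cluster id per question
```

In a real pipeline, the cluster assignments would then be compared against the expert-derived cognitive-action categories to gauge alignment, which is where the paper reports only limited agreement.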