International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 21
Year of Publication: 2025
Authors: Augustine O. Ugbari, Clement Ndeekor, Echebiri Wobidi
Augustine O. Ugbari, Clement Ndeekor, Echebiri Wobidi. Optimizing GPT-4 for Automated Short Answer Grading in Educational Assessments. International Journal of Computer Applications 187, 21 (Jul 2025), 32-36. DOI=10.5120/ijca2025925255
Automated Short Answer Grading Systems (ASAGS) have advanced significantly with the integration of large language models (LLMs), particularly GPT-4. This paper explores methodologies for optimizing GPT-4 to grade short-answer questions in educational assessments. The focus is on aligning GPT-4’s natural language processing capabilities with human grading rubrics to improve accuracy, consistency, and fairness. We examine techniques including prompt engineering, rubric-based scoring, and fine-tuning strategies. The research also assesses the model’s performance across various domains, evaluates inter-rater reliability with human graders, and addresses concerns related to bias, explainability, and scalability. The paper proposes a framework that uses GPT-4 as a co-grader, with human-in-the-loop moderation to improve educational outcomes.
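The abstract names prompt engineering and rubric-based scoring as core techniques. As a rough illustration only (not the authors' implementation, whose prompts and framework are not given here), the sketch below shows what a rubric-based GPT-4 grading call can look like via the OpenAI Chat Completions API; the rubric, question, and student answer are hypothetical placeholders, and in a human-in-the-loop setup the returned score and rationale would go to a human moderator for review.

```python
# Minimal sketch of rubric-based short-answer scoring with GPT-4.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical 0-3 point rubric for illustration.
RUBRIC = """\
Award 0-3 points:
3 = correct definition and a valid example
2 = correct definition, missing or weak example
1 = partially correct definition
0 = incorrect or off-topic
"""

def grade_short_answer(question: str, student_answer: str) -> dict:
    """Ask GPT-4 to score one answer against the rubric and justify the score."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # low temperature for more consistent, repeatable scoring
        messages=[
            {"role": "system",
             "content": "You are a strict grader. Apply the rubric exactly and "
                        "reply only with JSON: {\"score\": int, \"rationale\": str}."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\nQuestion: {question}\n"
                        f"Student answer: {student_answer}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    result = grade_short_answer(
        question="Define overfitting and give one example.",
        student_answer="Overfitting is when a model memorizes the training data; "
                       "e.g. a decision tree with one leaf per training row.",
    )
    print(result)  # e.g. {"score": 3, "rationale": "..."} -> reviewed by a human grader
```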