International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 12
Year of Publication: 2025
Authors: M. Kamaraju, K. Ujwala, B. Rajasekhar
M. Kamaraju, K. Ujwala, B. Rajasekhar. FPGA-based Real-Time Emotion Recognition System using Facial Expressions for Physically Disabled Individuals. International Journal of Computer Applications 187, 12 (Jun 2025), 22-28. DOI=10.5120/ijca2025925109
Emotion recognition through facial expressions is a critical enabler of non-verbal communication, particularly for individuals with physical disabilities who face barriers to speech- or motor-based interaction. This paper proposes a real-time, FPGA-based facial emotion recognition system optimized for embedded deployment and low-power operation. The system uses a quantized MobileNetV2 Convolutional Neural Network (CNN) trained on an enhanced FERPlus dataset (FERPlus-A), which is refined using CLAHE, bilateral filtering, and sharpening to improve feature clarity. The trained model is quantized to 8-bit integer arithmetic for efficient synthesis via Vivado HLS and deployed onto a ZYNQ SoC platform. Integration through AXI interfaces enables seamless communication between the CNN accelerator and the processing system. Simulation results demonstrate high inference speed, with a latency of approximately 1.174 milliseconds per frame and an estimated throughput of 851 frames per second. Although hardware testing could not be performed due to board unavailability, functional verification in simulation confirms the model's readiness for real-time assistive applications. This work presents a scalable, energy-efficient solution for enhancing emotional communication in assistive technologies, with significant potential for integration into healthcare, smart interfaces, and human-centered embedded systems.
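The abstract mentions two quantitative steps: quantizing the trained weights to 8-bit integers for HLS synthesis, and deriving throughput from per-frame latency. The sketch below illustrates both under assumptions not stated in the paper (symmetric per-tensor quantization with a float scale; the paper's actual quantization scheme and calibration method may differ):

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to int8.

    This is an illustrative scheme only; the paper does not specify
    whether quantization is symmetric, per-tensor, or per-channel.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

# Example weights (hypothetical values, not from the paper)
w = [0.5, -1.27, 0.0, 1.27]
q, s = quantize_int8(w)
# Dequantize to check the reconstruction round-trip
w_hat = [qi * s for qi in q]

# Throughput follows directly from the reported per-frame latency:
latency_ms = 1.174
fps = 1000.0 / latency_ms  # ~851.8 frames per second
```

The computed throughput (1000 / 1.174 ≈ 851.8 fps) matches the abstract's reported figure of 851 frames per second, confirming the two numbers are consistent.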