International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 186 - Number 81
Year of Publication: 2025
Authors: Vishniakou Uladzimir Anatol'evich, Yu ChuYue
Vishniakou Uladzimir Anatol'evich, Yu ChuYue. Hardware Support for Intelligent Text Analysis using FPGA for Accelerating Random Forest-based Classification. International Journal of Computer Applications. 186, 81 (Apr 2025), 49-54. DOI=10.5120/ijca2025924782
Efficient analysis and classification of text performed at the edge of a network, especially on resource-constrained platforms such as embedded systems and FPGA devices, poses computational challenges. Traditional CPU- and GPU-based natural language processing (NLP) methods struggle to meet the real-time and energy-efficiency requirements of edge computing scenarios. To address these limitations, this study proposes FPGA-based hardware acceleration of a random forest algorithm for text classification. To satisfy the resource constraints inherent in embedded and FPGA-based systems, the proposed methodology combines model compression, algorithmic simplification, fixed-parameter configurations, fixed-point arithmetic, and dimensionality reduction techniques, which together reduce both computational complexity and memory consumption. A hybrid CPU-FPGA pipelined architecture has been developed in which the CPU performs text preprocessing tasks, including tokenization, TF-IDF vector computation, and feature normalization, while the FPGA accelerates random forest inference using parallel computing and pipelining strategies. The FPGA implementation was verified against a Python-based CPU reference model through a joint software and hardware verification process. The results demonstrated a high degree of numerical consistency, reaching a similarity of 0.9990, which confirms the correctness of the end-to-end feature-extraction and classification logic. The proposed FPGA architecture provides a scalable solution for high-performance, low-latency NLP applications suitable for deployment in edge computing environments.
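To make the described pipeline concrete, the sketch below mirrors its flow in plain Python: TF-IDF features are computed on the CPU side and quantized to fixed-point, each decision tree is flattened into node arrays of the kind a hardware traversal engine could hold in on-chip memory, and inference reduces to a majority vote across trees. All names, the assumed Q4.12 bit width, and the scikit-learn training step are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of the reference pipeline, assuming a Q4.12 fixed-point
# format and scikit-learn models; not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

FRAC_BITS = 12  # assumed fractional bits for FPGA fixed-point arithmetic

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantize float features/thresholds to signed fixed-point integers."""
    return np.round(np.asarray(x) * (1 << frac_bits)).astype(np.int32)

def flatten_tree(est):
    """Flatten one sklearn decision tree into parallel node arrays,
    the layout a hardware traversal engine could store in BRAM."""
    t = est.tree_
    return {
        "feature":    t.feature,                     # -2 marks a leaf node
        "threshold":  to_fixed(t.threshold),         # fixed-point split values
        "left":       t.children_left,
        "right":      t.children_right,
        # class index per node; assumes labels are 0..n_classes-1
        "leaf_class": np.argmax(t.value.squeeze(axis=1), axis=1),
    }

def tree_predict_fixed(nodes, x_fixed):
    """Iterative traversal mirroring a pipelined hardware engine:
    one comparison and one pointer update per step. Quantizing both
    operands the same way keeps decisions consistent except for
    borderline splits, which is why similarity is near (not exactly) 1."""
    i = 0
    while nodes["feature"][i] != -2:                 # not yet at a leaf
        if x_fixed[nodes["feature"][i]] <= nodes["threshold"][i]:
            i = nodes["left"][i]
        else:
            i = nodes["right"][i]
    return nodes["leaf_class"][i]

def forest_predict_fixed(flat_trees, x_fixed):
    """Majority vote; on the FPGA each tree engine would evaluate in
    parallel and the votes would reduce in a small adder tree."""
    votes = [tree_predict_fixed(nt, x_fixed) for nt in flat_trees]
    return np.bincount(votes).argmax()

# CPU side: preprocessing (tokenization, TF-IDF, normalization).
texts = ["fpga accelerates inference", "cpu handles preprocessing"]
labels = [1, 0]
vec = TfidfVectorizer()                              # L2-normalized by default
X = vec.fit_transform(texts).toarray()

clf = RandomForestClassifier(n_estimators=8, max_depth=4, random_state=0)
clf.fit(X, labels)

flat = [flatten_tree(est) for est in clf.estimators_]
x_q = to_fixed(X[0])                                 # quantized feature vector
print(forest_predict_fixed(flat, x_q), clf.predict(X[:1])[0])
```

Comparing `forest_predict_fixed` against `clf.predict` over a test set is one plausible way to obtain the kind of software/hardware similarity score the abstract reports.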