Research Article

Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model

by Aliyah Ayco, Kaye Anne Mirador, Glaiza Mei Natividad, Noah Andrea Pagba, James Esquivel
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Number 87
Year of Publication: 2026
10.5120/ijca2026926514

Aliyah Ayco, Kaye Anne Mirador, Glaiza Mei Natividad, Noah Andrea Pagba, James Esquivel. Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model. International Journal of Computer Applications. 187, 87 (Mar 2026), 38-44. DOI=10.5120/ijca2026926514

@article{10.5120/ijca2026926514,
author = {Aliyah Ayco and Kaye Anne Mirador and Glaiza Mei Natividad and Noah Andrea Pagba and James Esquivel},
title = {Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model},
journal = {International Journal of Computer Applications},
issue_date = {Mar 2026},
volume = {187},
number = {87},
month = {Mar},
year = {2026},
issn = {0975-8887},
pages = {38-44},
numpages = {7},
url = {https://ijcaonline.org/archives/volume187/number87/signify-a-real-time-sign-to-text-and-text-to-sign-mobile-application-for-dynamic-filipino-sign-language-translation-using-transformer-architecture-deep-learning-model/},
doi = {10.5120/ijca2026926514},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Aliyah Ayco
%A Kaye Anne Mirador
%A Glaiza Mei Natividad
%A Noah Andrea Pagba
%A James Esquivel
%T Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model
%J International Journal of Computer Applications
%@ 0975-8887
%V 187
%N 87
%P 38-44
%D 2026
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This study presents Signify, a real-time, bidirectional mobile application for dynamic Filipino Sign Language (FSL) translation designed to bridge communication gaps between the Deaf and Hard of Hearing (DHH) community and hearing individuals. Utilizing Long Short-Term Memory (LSTM) and Transformer architectures, the system enables Sign-to-Text (S2T) and Text-to-Sign (T2S) functionalities. To improve model robustness, the researchers expanded the FSL-105 dataset by adding a "Directions" category and recording 80 additional videos per gesture, resulting in a total of 11,530 videos. For S2T recognition, hand landmarks were extracted via MediaPipe. Comparative analysis revealed that the Transformer model significantly outperformed the LSTM baseline, achieving a test accuracy of 98.73%. This was further improved to 99.60% through data augmentation techniques including Gaussian noise injection and temporal jitter. The T2S module uses a direct mapping approach to retrieve pre-recorded FSL video segments validated by a certified interpreter for linguistic accuracy. Integrated into an Android application using TensorFlow Lite, the system supports real-time, offline inference. Usability testing yielded a grand overall mean of 4.57 (Excellent), reflecting high satisfaction among signers and non-signers. This research advances inclusive communication in alignment with Sustainable Development Goals (SDGs) 4 and 10.
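The augmentation techniques credited with raising test accuracy from 98.73% to 99.60% — Gaussian noise injection and temporal jitter — can be sketched roughly as below for a sequence of MediaPipe-style hand landmarks. This is a minimal illustration, not the authors' implementation; the function name, array layout (frames × landmarks × coordinates), and parameter values are assumptions.

```python
import numpy as np

def augment_landmark_sequence(seq, noise_std=0.01, max_jitter=2, rng=None):
    """Augment a (frames, landmarks, coords) array of normalized hand
    landmarks. noise_std and max_jitter are illustrative defaults."""
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian noise injection: perturb every coordinate slightly so the
    # model does not overfit to exact landmark positions.
    noisy = seq + rng.normal(0.0, noise_std, size=seq.shape)
    # Temporal jitter: shift all frame indices by a small random offset,
    # clipping at the sequence boundaries, to simulate timing variation.
    n = seq.shape[0]
    shift = int(rng.integers(-max_jitter, max_jitter + 1))
    idx = np.clip(np.arange(n) + shift, 0, n - 1)
    return noisy[idx]

# Example: a 30-frame clip of 21 MediaPipe hand landmarks in (x, y).
clip = np.zeros((30, 21, 2))
augmented = augment_landmark_sequence(clip, rng=np.random.default_rng(0))
```

Applied on the fly during training, such transforms expand the effective dataset without recording new videos, which is consistent with the robustness gains the abstract reports.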

References
  1. Hand Talk. 2023. The Benefits of Sign Language for Children with Hearing Loss. Hand Talk - Learn ASL today.
  2. Tolentino, L. K., Serfa Juan, R., Thio-ac, A., Pamahoy, M. A., Forteza, J. R. and Garcia, X. J. 2019. Static Sign Language Recognition Using Deep Learning. International Journal of Machine Learning and Computing 9(6), 821-827.
  3. Sundar, B. and Bagyammal, T. 2022. American Sign Language recognition for alphabets using MediaPipe and LSTM. Procedia Computer Science 215, 642-651.
  4. Canlas et al. 2024. Real-time dynamic Filipino Sign Language recognition application. Angeles University Foundation.
  5. Chaudhary, L., Ananthanarayana, T., Hoq, E. and Nwogu, I. 2023. SignNet II: A transformer-based two-way sign language translation model. IEEE Transactions on Pattern Analysis and Machine Intelligence 45(11), 12896-12907.
  6. Tupal, J. 2023. FSL-105 Dataset. Mendeley Data repository.
  7. Evangelista, C. L. L., Geli, C. J. R., Castillo, M. M. V. and Macabagdal, C. B. G. 2023. Long Short-Term Memory-based Static and Dynamic Filipino Sign Language Recognition. IEEE.
  8. Nerlekar, A. 2021. Sign Language Recognition Using Smartphones. California State University.
  9. Caya, M. V. C., Madrid, G. K. R. and Villanueva, R. G. R. 2022. Recognition of Dynamic Filipino Sign Language using MediaPipe and Long Short-Term Memory. 13th ICCCNT.
  10. Kothadiya, D., Bhatt, C., Sapariya, K., Patel, K., Gil-González, A.-B. and Corchado, J. M. 2022. Deepsign: Sign Language Detection and Recognition Using Deep Learning. Electronics 11(11), 1780.
  11. Do Long, V. 2021. Mobile Application for Sign Language Recognition. Czech Technical University.
  12. De Coster, M., Van Herreweghe, M. and Dambre, J. 2020. Sign language recognition with transformer networks. LREC, 6018-6024.
  13. Abdul, W., Alsulaiman, M., Amin, S. U., Faisal, M., Muhammad, G., Albogamy, F. R. and Ghaleb, H. 2021. Intelligent real-time Arabic sign language classification using attention-based inception and BiLSTM. Computers & Electrical Engineering 95, 107395.
  14. Ananthanarayana, T. 2021. A Comprehensive Approach to Automated Sign Language Translation. Rochester Institute of Technology.
  15. Wei, S. and Lan, Y. 2023. A two-way translation system of Chinese sign language based on computer vision. arXiv preprint.
  16. Stoll, S., Camgoz, N. C., Hadfield, S. and Bowden, R. 2020. Text2Sign: Towards sign language production using neural machine translation and generative adversarial networks. International Journal of Computer Vision 128(6), 891–908.
  17. Stoll, S., Hadfield, S. and Bowden, R. 2020. Signsynth: Data-driven sign language video generation. ECCV, 353-370.
  18. Latkar, H., Wasker, S., Vashistha, A. and Kanse, A. 2024. Real-time conversion of sign language to text and speech, and vice-versa. TIJER 11(4).
  19. Faisal, M., Alsulaiman, M., Mekhtiche, M., Abdelkader, B. M., Algabri, M. and Alrayes, T. B. S. 2023. Enabling two-way communication of deaf using Saudi sign language. IEEE Access 11, 135423-135434.
  20. Kavana and Suma. 2022. Real-time sign language recognition.
  21. Manoj Kumar, D., Bavanraj, K., Thavananthan, S., Bastiansz, G. M. A. S., Harshanath, S. M. B. and Alosious, J. 2020. EasyTalk: A translator for Sri Lankan Sign Language using machine learning. ICAC.
  22. Saleem, M. I., Siddiqui, A., Noor, S., Luque-Nieto, M. A. and Otero, P. 2022. A novel machine learning based two-way communication system for deaf and mute. Applied Sciences 13(1), 453.
  23. Ramadhan, M. F., Samsuryadi, S. and Primanita, A. 2024. American sign language translation to display the text (subtitles) using a convolutional neural network. EMACS Journal 6(3).
  24. Osman et al. 2020. Hearing Assistive Technology: Sign Language Translation Application for Hearing-Impaired Communication.
  25. Kautsar et al. 2024. Transformer-based sequence models.
Index Terms

Computer Science
Information Sciences

Keywords

Filipino Sign Language (FSL), Transformer Architecture, Long Short-Term Memory (LSTM), MediaPipe, Real-time Translation