| International Journal of Computer Applications |
| Foundation of Computer Science (FCS), NY, USA |
| Volume 187 - Number 86 |
| Year of Publication: 2026 |
| Authors: Fahmida Hakim Suchita, Saleha Akter Shetu, Most.Jannatul Fardous, Sumaya Akter, Md Zahurul Haque |
10.5120/ijca2026926486
Fahmida Hakim Suchita, Saleha Akter Shetu, Most.Jannatul Fardous, Sumaya Akter, Md Zahurul Haque. MINDGUARD: An AI-Powered Smart Assistant for Alzheimer's Patients using Voice and Activity Recognition. International Journal of Computer Applications. 187, 86 (Mar 2026), 19-26. DOI=10.5120/ijca2026926486
Alzheimer's disease (AD) is a progressive neurological condition that causes memory loss, cognitive decline, and impairment of daily functioning. Given the rising global prevalence of AD, it is imperative to develop assistive technologies that improve patient autonomy and safety while reducing caregiver burden. Recent advances in artificial intelligence (AI), the Internet of Things (IoT), mobile applications, and wearable devices have made it possible to build smart assistance systems for Alzheimer's patients. These systems use facial recognition, activity recognition, real-time monitoring, and location tracking to support independent living and enhance communication. This review offers an in-depth analysis of state-of-the-art assistive systems, comparing their methodologies, technological solutions, and outcomes. The results indicate that AI-based diagnostic systems and IoT-based monitoring tools have great potential to alleviate the challenges of AD care. Nevertheless, usability, scalability, data imbalance, and patient privacy remain critical difficulties. Building on this analysis, the paper proposes a framework that integrates AI, IoT, and wearable technology to deliver real-time assistance on a smartwatch-based platform. The system applies a voice recognition model trained with Mel Frequency Cepstral Coefficient (MFCC) features and a Convolutional Neural Network (CNN) on a self-created dataset of 345 voice samples categorized into family, friends, and unknown speakers. This enables the device to identify familiar voices and provide contextual notifications and alerts. Furthermore, IoT-based modules monitor movement and activity in real time, facilitating caregiver-patient communication.
Experimental results indicate that the proposed model achieves encouraging speaker identification performance, supporting its potential for real-world use. Future work will focus on optimizing the model for lightweight on-device processing, improving data balance and scalability, and ensuring ethical, clinically validated long-term patient monitoring that fits naturally into patients' lifestyles.
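The MFCC front end described above can be sketched in plain NumPy/SciPy. This is a hypothetical illustration, not the authors' implementation: the frame length, hop size, filter count, and sampling rate below are common defaults assumed for the example, and the paper's actual parameters are not stated in the abstract.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Frame the signal, apply a Hamming window, and take the power spectrum.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)]).astype(np.float64)
    frames *= np.hamming(frame_len)
    power = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Mel filterbank energies -> log -> DCT; keep the first n_ceps coefficients.
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    energies = np.where(energies == 0.0, np.finfo(float).eps, energies)
    return dct(np.log(energies), type=2, axis=1, norm='ortho')[:, :n_ceps]
```

In a pipeline like the one the abstract describes, each utterance would be converted to such a frame-by-coefficient matrix and fed to a CNN classifier over the three speaker classes (family, friend, unknown).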