RESEARCH
My engagement with research stems from a desire to move beyond theory and ask how advanced technologies can respond to real human needs. I am particularly drawn to projects where artificial intelligence, creativity, and accessibility intersect.

A Deep Learning-Based Multimodal Communication Framework for Assisting Deaf–Mute Individuals via Computer Vision
-
Abstract: Deaf–mute individuals face significant communication barriers due to the gap between sign language and spoken language. This study proposes a deep learning-based multimodal communication framework designed to bridge this gap for Vietnamese Sign Language (VSL). The system integrates computer vision and natural language processing techniques, employing CNN–LSTM and 3D-CNN models to recognize hand gestures from video sequences. The recognized signs are then converted into Vietnamese text and speech using a Text-to-Speech (TTS) module. A user-friendly software interface was developed to enable real-time two-way communication between deaf–mute individuals and hearing speakers. Experimental results demonstrated an accuracy of over 85% in sign recognition with low latency, confirming the feasibility and potential application of the proposed framework in real-world scenarios.
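The CNN–LSTM recognizer described above can be sketched as a per-frame convolutional encoder followed by a recurrent layer over the frame sequence. This is a minimal illustrative sketch, not the paper's actual model: the layer sizes, the 50-class vocabulary, and the 64×64 input resolution are placeholder assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class CNNLSTMSignRecognizer(nn.Module):
    """Sketch of a CNN-LSTM gesture recognizer: a small CNN encodes each
    video frame, and an LSTM models the temporal sequence of frame features.
    All dimensions below are illustrative placeholders."""

    def __init__(self, num_classes=50, feat_dim=128, hidden=256):
        super().__init__()
        # Per-frame CNN encoder (toy depth; real systems use deeper backbones)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # LSTM over the sequence of per-frame feature vectors
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)      # hidden states for every frame
        return self.head(out[:, -1])   # classify from the final state

# Dummy batch: 2 clips of 8 RGB frames at 64x64
logits = CNNLSTMSignRecognizer()(torch.randn(2, 8, 3, 64, 64))
```

In a full pipeline of the kind the abstract outlines, the predicted class would be mapped to a Vietnamese gloss and passed to the TTS module for speech output.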
-
Keywords: Deep Learning; Computer Vision; CNN–LSTM; Text-to-Speech (TTS); Vietnamese Sign Language (VSL)

