AI-Enabled Smart Glove For Real-Time Voice Translation Of Hand Gestures: Design, Implementation, And Evaluation

Authors: Nagul Nisok K S, Nidhish V, Nirmal B, Nivesh S, Sai Sarvesh P G

Abstract: Communication barriers faced by individuals with speech and hearing impairments represent a significant societal challenge. This paper presents an AI-enabled smart glove system designed to translate hand gestures into synthesized voice output in real time. The proposed system integrates an array of flex sensors, an inertial measurement unit (IMU), and surface electromyography (sEMG) electrodes embedded within a lightweight, wearable glove. Raw sensor data are transmitted wirelessly via Bluetooth Low Energy (BLE) to a companion edge-computing module, where a multi-stream convolutional neural network–long short-term memory (CNN-LSTM) architecture performs gesture classification. Classified gestures are subsequently converted to speech using a neural text-to-speech (TTS) engine. Evaluated on a 250-class American Sign Language (ASL) dataset comprising 48,000 gesture samples from 40 subjects, the system achieves a top-1 classification accuracy of 97.4% and an average end-to-end latency of 68 ms. Power consumption is maintained at 84 mW during continuous operation, enabling up to 11 hours of use on a 1,000 mAh Li-Po cell. Comparative analysis demonstrates that the proposed design outperforms existing glove-based and vision-based translation systems in accuracy, latency, and portability. The findings highlight the potential of the system as an effective assistive device for the deaf and hard-of-hearing community.
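To make the data path from glove to edge module concrete, the following is a minimal sketch of how one sensor sample might be serialized for the BLE link described in the abstract. The channel counts (5 flex sensors, a 6-axis IMU, 4 sEMG channels), the 16-bit sample width, and the frame layout are illustrative assumptions, not specifics from the paper:

```python
import struct

# Hypothetical per-sample frame for the glove's BLE link (layout and
# channel counts are assumptions for illustration, not from the paper):
# a 32-bit millisecond timestamp, then 5 flex readings, 6 IMU axes
# (accel + gyro), and 4 sEMG channels, all little-endian 16-bit ints.
FRAME_FMT = "<I5h6h4h"  # timestamp_ms, flex[5], imu[6], semg[4]
FRAME_SIZE = struct.calcsize(FRAME_FMT)  # bytes per sample over the air

def pack_frame(timestamp_ms, flex, imu, semg):
    """Serialize one sensor sample into a compact BLE payload."""
    return struct.pack(FRAME_FMT, timestamp_ms, *flex, *imu, *semg)

def unpack_frame(payload):
    """Recover the sample fields on the edge-computing module."""
    fields = struct.unpack(FRAME_FMT, payload)
    return {
        "timestamp_ms": fields[0],
        "flex": fields[1:6],
        "imu": fields[6:12],
        "semg": fields[12:16],
    }

frame = pack_frame(1234, [10, 20, 30, 40, 50],
                   [1, -2, 3, -4, 5, -6], [7, 8, 9, 10])
print(FRAME_SIZE)                      # → 34
print(unpack_frame(frame)["flex"])     # → (10, 20, 30, 40, 50)
```

A fixed binary layout like this keeps each sample well under the default BLE ATT payload size, which matters for sustaining the low end-to-end latency the paper reports.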

DOI: https://doi.org/10.5281/zenodo.19707715
