Hand Gesture Recognition For Sign Language Interpretation


Authors: Mrs. R. Aruna (AP/IT), I. Hari Haran, P. A. Manikandan, J. Mohamed Farees

Abstract: Effective communication between sign language users and non-signers remains a significant challenge in education, workplaces, and daily life. To address this issue, a Bi-Directional Sign Language Translation System is proposed, leveraging computer vision techniques (OpenCV, Mediapipe), deep learning frameworks (TensorFlow/Keras), and Natural Language Processing (NLP) algorithms. The system translates sign gestures into text or speech in real time and, conversely, converts text or voice into dynamic animated sign language. Multilingual Text-to-Speech (TTS) integration provides clear and natural voice assistance, enhancing accessibility across diverse communities. Implemented with scalable technologies such as Python, Flask, and React.js, the platform delivers low latency, high performance, and ease of use. By combining gesture recognition, neural networks, and speech synthesis, the system promotes inclusivity and empowers individuals with hearing or speech impairments to participate fully in modern communication environments.

DOI: http://doi.org/10.5281/zenodo.17309992
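The abstract describes a pipeline in which Mediapipe extracts hand landmarks from a video stream and a TensorFlow/Keras network classifies them into sign labels. The sketch below illustrates one plausible form of that sign-to-text stage; the network architecture, class labels, and weight file are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of the gesture-recognition stage: MediaPipe Hands extracts
# 21 landmarks per frame, and a small Keras classifier maps them to a sign label.
# Model shape, labels, and weights file are assumptions for illustration only.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26  # assumption: one class per fingerspelled letter
SIGN_LABELS = [chr(ord("A") + i) for i in range(NUM_CLASSES)]

# Dense classifier over flattened (x, y, z) landmark coordinates.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(21 * 3,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
# model.load_weights("sign_classifier.weights.h5")  # hypothetical trained weights

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()[None, :]
        probs = model.predict(features, verbose=0)[0]
        label = SIGN_LABELS[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In a deployment such as the one outlined in the abstract, the predicted label would be sent from a Flask backend to a React.js frontend and optionally passed to a TTS engine for spoken output.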
