Authors: Vaishnavi Yelnare, Dr. Santosh Gaikwad, Dr. A. A. Khan, Dr. R. S. Deshpandes
Abstract: This research presents an in-depth study of machine learning algorithms for the recognition and classification of the sign languages used by deaf and mute communities. We evaluate several models, including CNNs, LSTMs, and hybrid networks, for gesture recognition, image processing, and sequence classification, and address challenges such as lighting variation, occlusion, inter-user variability, and data scarcity. Experiments are conducted on real-world datasets such as RWTH-BOSTON and American Sign Language (ASL) to benchmark model performance. Our study contributes a scalable, real-time framework for sign language recognition that helps bridge communication gaps for the hearing-impaired community.
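For illustration only, the sketch below shows one common way a CNN-LSTM hybrid of the kind mentioned in the abstract can be assembled, using TensorFlow/Keras: a small CNN encodes each video frame and an LSTM classifies the resulting frame sequence as a gesture. This is not the authors' implementation; the layer sizes, frame count, image resolution, and number of classes are all assumptions chosen for a minimal runnable example.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(num_classes, frames=16, height=64, width=64, channels=3):
    """Hypothetical CNN-LSTM hybrid for gesture sequence classification.

    A per-frame CNN extracts spatial features; an LSTM models the
    temporal dynamics of the gesture across frames. All hyperparameters
    here are illustrative assumptions, not values from the paper.
    """
    # CNN applied to a single frame (spatial feature extractor).
    frame_encoder = models.Sequential([
        layers.Conv2D(32, 3, activation="relu",
                      input_shape=(height, width, channels)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])

    # Apply the CNN to every frame, then classify the sequence with an LSTM.
    model = models.Sequential([
        layers.TimeDistributed(frame_encoder,
                               input_shape=(frames, height, width, channels)),
        layers.LSTM(128),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # e.g. 26 classes for the ASL fingerspelling alphabet (assumed label set).
    model = build_cnn_lstm(num_classes=26)
    model.summary()
```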