Abstract: Communication barriers between the hearing community and deaf or mute individuals severely restrict access to information and interaction. To address this problem, a Python application called "Voice-to-Sign Translator" was developed to convert spoken English into animated representations of Indian Sign Language (ISL). The project empowers deaf and mute persons through a user-friendly interface, enabling efficient communication in brief, predictable settings such as classrooms, airports, and customer-service lines. By focussing on ISL, the system addresses the specific needs of Indian users while improving accessibility in everyday interactions. The application uses Python's speech recognition and computer graphics tools to convert spoken English into ISL signs dynamically: a 3D graphical representation of ISL gestures is rendered on screen in real time, allowing users to comprehend spoken instructions visually. Unlike traditional sign-translation systems, this project emphasises voice-to-sign conversion in limited domains where brief, predictable communication is required, which makes it well suited to organised environments and boosts efficiency and inclusion. The system integrates Python-based speech recognition APIs, such as Google Speech Recognition, with 3D modelling and animation tools such as Blender or PyOpenGL. It links spoken words and phrases to their corresponding ISL gestures through a database, ensuring accurate and context-sensitive translations. Furthermore, the project prioritises usability and accessibility, with features designed to accommodate non-technical users.
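The pipeline the abstract describes (speech recognition followed by a database lookup that maps words and phrases to ISL gesture animations) can be sketched as below. This is a minimal illustration, not the paper's implementation: the gesture table `GESTURE_DB`, the function `text_to_gestures`, and the `.anim` asset names are all hypothetical placeholders, and the fingerspelling fallback for out-of-vocabulary words is a common design choice assumed here rather than stated in the abstract. The commented-out lines show how the `speech_recognition` package's Google recogniser would supply the input text in a live system.

```python
# Hypothetical sketch of the voice-to-sign lookup stage.
# In the full system, the input text would come from a microphone, e.g.:
#   import speech_recognition as sr
#   r = sr.Recognizer()
#   with sr.Microphone() as src:
#       text = r.recognize_google(r.listen(src))

# Illustrative phrase-to-animation table; a real system would load this
# from the ISL gesture database the paper mentions.
GESTURE_DB = {
    "hello": "hello.anim",
    "thank you": "thank_you.anim",
    "gate": "gate.anim",
}

def text_to_gestures(text: str) -> list[str]:
    """Map recognised speech to a sequence of ISL gesture assets.

    Known words/phrases map to a pre-built animation; unknown words
    fall back to letter-by-letter fingerspelling (an assumption here).
    """
    gestures: list[str] = []
    words = text.lower().split()
    i = 0
    while i < len(words):
        # Greedily try the longest phrase match first (two words here).
        two_word = " ".join(words[i:i + 2])
        if two_word in GESTURE_DB:
            gestures.append(GESTURE_DB[two_word])
            i += 2
        elif words[i] in GESTURE_DB:
            gestures.append(GESTURE_DB[words[i]])
            i += 1
        else:
            # Fingerspell an out-of-vocabulary word one letter at a time.
            gestures.extend(f"letter_{c}.anim" for c in words[i] if c.isalpha())
            i += 1
    return gestures
```

The returned asset list would then drive the Blender or PyOpenGL rendering stage, playing each gesture animation in order.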