Ishaara
An AI-powered mobile application that enables real-time, two-way translation between sign language and text/speech using computer vision and gesture recognition.
Executive Summary
Bridging the communication gap between the Deaf community and the hearing world through real-time translation.
Ishaara is an AI-powered ecosystem designed for the Deaf and Hard-of-Hearing community. It uses computer vision to translate sign language into text and speech, and a specialized mapping system to convert text back into sign language. Our team, a finalist at Smart India Hackathon (SIH) 2025, built Ishaara in the belief that accessibility isn't just a feature, but a fundamental right.
Core Infrastructure
Real-time Sign-to-Text
High-speed gesture recognition using MediaPipe and custom LSTM models.
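A common way to structure this kind of pipeline is to buffer per-frame hand landmarks into a fixed-length sliding window before handing the sequence to the LSTM. The sketch below is illustrative only: the window size, the 63-float landmark vector (21 MediaPipe hand points × 3 coordinates), and the class names are assumptions, not Ishaara's actual values.

```python
from collections import deque

# Assumed frame format: one flattened landmark vector of 63 floats per frame
# (21 hand landmarks x 3 coordinates), as MediaPipe Hands can provide.
WINDOW_SIZE = 30  # hypothetical: ~1 second of frames at 30 fps


class GestureWindow:
    """Sliding window of landmark frames for sequence classification."""

    def __init__(self, window_size=WINDOW_SIZE):
        # deque with maxlen automatically drops the oldest frame.
        self.frames = deque(maxlen=window_size)

    def push(self, landmarks):
        """Append one frame's flattened landmark vector."""
        self.frames.append(landmarks)

    def ready(self):
        """True once enough frames have accumulated to run the model."""
        return len(self.frames) == self.frames.maxlen

    def as_sequence(self):
        """Return the window as a list suitable for an LSTM input batch."""
        return list(self.frames)
```

In use, each camera callback pushes one frame and the LSTM runs only when `ready()` is true, which keeps inference off the hot path until a full gesture window exists.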
Text-to-Sign Animation
Seamlessly converts spoken or typed text into accurate sign language videos.
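One minimal sketch of such a mapping layer: words with known signs resolve to pre-rendered clips, and out-of-vocabulary words fall back to letter-by-letter fingerspelling. The `SIGN_CLIPS` dictionary and clip filenames are hypothetical, not Ishaara's real asset names.

```python
# Hypothetical vocabulary of pre-rendered sign clips.
SIGN_CLIPS = {
    "hello": "clip_hello.mp4",
    "thank": "clip_thank.mp4",
    "you": "clip_you.mp4",
}


def text_to_sign_sequence(text):
    """Convert an input sentence into an ordered list of clip IDs."""
    sequence = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in SIGN_CLIPS:
            sequence.append(SIGN_CLIPS[word])
        else:
            # Fingerspelling fallback: one clip per letter.
            sequence.extend(f"letter_{ch}.mp4" for ch in word if ch.isalpha())
    return sequence
```

For example, `text_to_sign_sequence("Thank you, Sam")` resolves the first two words to whole-sign clips and fingerspells the unknown name.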
Two-Way Interface
Fluid conversation UI designed specifically for accessibility.
SIH 2025 Finalist Project
Validated by experts for its impact on social inclusion.
Design Philosophy
Built as a team project for SIH 2025 to create a truly accessible communication tool. I focused on the text-to-sign language conversion to complete the two-way communication loop.
Key milestones: qualifying as finalists at SIH 2025 and demonstrating that AI can translate complex gestures into spoken words in under 200 ms.
Technical Architecture
The central engineering challenge was optimizing computer vision models to run efficiently on mobile devices while maintaining high accuracy on nuanced sign language gestures.
Engineered With
- React Native
- Python (FastAPI)
- TensorFlow Lite
- MediaPipe
- OpenCV
- Flask
Performance Goals
- Sub-200ms gesture recognition latency
- 95%+ accuracy on core sign vocabulary
- Smooth 60fps sign language animations
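A latency goal like this is easiest to keep honest with a small measurement harness around the recognition call. This is a generic sketch, not the project's benchmarking code; `LATENCY_BUDGET_MS` mirrors the sub-200 ms target above, and the callable passed in stands in for the real pipeline.

```python
import time

LATENCY_BUDGET_MS = 200  # the sub-200 ms target from the performance goals


def measure_latency_ms(fn, *args):
    """Time a single call and return the elapsed wall-clock milliseconds."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000.0


def within_budget(fn, *args, budget_ms=LATENCY_BUDGET_MS):
    """True if one invocation of fn finishes inside the latency budget."""
    return measure_latency_ms(fn, *args) <= budget_ms
```

In practice you would run this over many frames and report a percentile (e.g. p95) rather than a single call, since mobile inference times vary with thermal state and load.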
System Integrity
- On-device processing for user privacy
- Robust API error handling for real-time streams
- Accessible UI/UX following WCAG guidelines
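For the real-time stream error handling noted above, one minimal pattern is a bounded retry with exponential backoff around each frame send. This is a sketch under assumptions: `TransientStreamError` and the `send_frame` parameter are hypothetical stand-ins for whatever transport the app actually uses.

```python
import time


class TransientStreamError(Exception):
    """Raised when a frame upload fails for a recoverable reason."""


def send_with_retry(send_frame, frame, retries=3, base_delay=0.1, sleep=time.sleep):
    """Try to send a frame; back off exponentially on transient failures."""
    for attempt in range(retries):
        try:
            return send_frame(frame)
        except TransientStreamError:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            # Exponential backoff: base_delay, 2x, 4x, ...
            sleep(base_delay * (2 ** attempt))
```

For a live gesture stream, dropping a stale frame after the retries are exhausted is usually better than blocking the pipeline, so the caller would catch the final exception and move on to the next frame.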