Welcome to the SignPal project! SignPal is a sign language interpretation application designed to facilitate communication for non-verbal individuals. Using deep learning, it translates sign language gestures into English captions in real time, enhancing inclusivity and community interaction.
Our objective is to create an accessible communication tool for individuals who are unable to speak, allowing them to express themselves through sign language. The application aims to empower users by enabling their friends, family, and community members to understand their gestures, fostering a more inclusive environment.
- Real-time translation of sign language into English captions.
- User-friendly interface with straightforward controls.
- Expandable information sections on the sidebar for easy navigation.
- Video recording functionality for gesture capture.
- Designed for accessibility and ease of use.
- Open the Application: Launch the app in your web browser.
- Start the Camera: Click on the 'Start' button to enable your camera.
- Record Sign Language: When your friend begins to sign, click on the 'Record' button.
- Real-Time Translation: The app will convert the sign language into English captions in real-time.
- Stop Recording: Click on the 'Stop' button when you are finished.
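The record-and-translate loop described above can be sketched as follows. This is a minimal, hypothetical illustration only: the real app feeds MediaPipe keypoints into an LSTM model, while here a NumPy buffer and a stand-in `classify` function (both invented for this sketch, along with the `SEQ_LEN` and `N_KEYPOINTS` values) take their place:

```python
import numpy as np
from collections import deque

SEQ_LEN = 30        # frames per prediction window (assumed value)
N_KEYPOINTS = 126   # e.g. two hands x 21 landmarks x 3 coords (assumed value)

def classify(window: np.ndarray) -> str:
    """Stand-in for the trained LSTM model.

    In the real app this would be something like model.predict(window[None, ...]);
    here a deterministic dummy picks a label from the window's mean.
    """
    labels = ["hello", "thanks", "yes"]
    return labels[int(abs(window.mean() * 1000)) % len(labels)]

def caption_stream(frames):
    """Feed per-frame keypoint vectors; yield one caption per full window."""
    buffer = deque(maxlen=SEQ_LEN)          # sliding window of recent frames
    for keypoints in frames:
        buffer.append(keypoints)
        if len(buffer) == SEQ_LEN:          # enough frames for a prediction
            yield classify(np.stack(buffer))

# Simulated stream of keypoint frames (the camera + MediaPipe would supply these)
rng = np.random.default_rng(0)
frames = (rng.random(N_KEYPOINTS) for _ in range(45))
captions = list(caption_stream(frames))
print(captions[:3])
```

Once the window fills, every new frame produces a fresh caption, which is what makes the captions feel real-time to the viewer.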
- Clone the repository:
git clone <repository-url>
cd <project-directory>
- Install the required packages:
pip install -r requirements.txt
- Run the application:
streamlit run app.py
- Python
- TensorFlow/Keras (for the deep learning model)
- Long Short-Term Memory (LSTM) architecture
- Streamlit (for web application development)
- OpenCV (for video processing)
- MediaPipe (for gesture detection)
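For readers unfamiliar with the Long Short-Term Memory (LSTM) architecture listed above, here is a minimal single-cell forward pass in plain NumPy. This is illustrative only; the project uses Keras's built-in `LSTM` layer, and the toy dimensions and random weights below are assumptions made for the sketch:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: input/forget/output gates plus a candidate state."""
    z = W @ x + U @ h_prev + b        # all four gate pre-activations, stacked
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])               # input gate: how much new info to admit
    f = sigmoid(z[H:2*H])             # forget gate: how much old state to keep
    o = sigmoid(z[2*H:3*H])           # output gate: how much state to expose
    g = np.tanh(z[3*H:4*H])           # candidate cell state
    c = f * c_prev + i * g            # new cell state
    h = o * np.tanh(c)                # new hidden state
    return h, c

# Toy dimensions: 4-dim input per frame, 3-dim hidden state
rng = np.random.default_rng(1)
D, H = 4, 3
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
for t in range(5):                    # run 5 time steps over random inputs
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)
print(h.shape)
```

The gating structure is what lets the model carry information across a whole gesture sequence rather than reacting to single frames, which is why an LSTM suits keypoint sequences from MediaPipe.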
This project is developed by Pumpkin Seeds, a team of graduate students at Clark University pursuing a Master's in Data Analytics:
- Kunal Malhan
- Keerthana Goka
- Jothsna Praveena Pendyala
- Mohana Uma Manem