
SignPal Logo

Welcome to the SignPal project! This tool translates sign language gestures into real-time English captions.

Table of Contents

  • Project Overview
  • Objective
  • Features
  • How to Use
  • Installation
  • Technologies Used
  • Team

Project Overview

SignPal is designed to facilitate communication for individuals who are non-verbal. Using deep learning, it translates sign language gestures into real-time English captions, enhancing inclusivity and community interaction.

Objective

Our objective is to create an accessible communication tool for individuals who are unable to speak, allowing them to express themselves through sign language. The application aims to empower users by enabling their friends, family, and community members to understand their gestures, fostering a more inclusive environment.

Features

  • Real-time translation of sign language into English captions.
  • User-friendly interface with straightforward controls.
  • Expandable information sections on the sidebar for easy navigation.
  • Video recording functionality for gesture capture.
  • Designed for accessibility and ease of use.

How to Use

  1. Open the Application: Launch the app in your web browser.
  2. Start the Camera: Click on the 'Start' button to enable your camera.
  3. Record Sign Language: When your friend begins to sign, click on the 'Record' button.
  4. Real-Time Translation: The app will convert the sign language into English captions in real-time.
  5. Stop Recording: Click on the 'Stop' button when you are finished.
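
To make these steps concrete, here is a minimal sketch of how such controls can be wired up in Streamlit with OpenCV. This is a sketch only, not the project's actual app.py, and the predict_caption helper is hypothetical:

    import cv2
    import streamlit as st

    st.title("SignPal")
    frame_slot = st.empty()    # placeholder overwritten with each webcam frame
    caption_slot = st.empty()  # placeholder for the live caption

    if st.button("Start"):     # step 2: enable the camera
        st.session_state["recording"] = True
    if st.button("Stop"):      # step 5: finish recording
        st.session_state["recording"] = False

    if st.session_state.get("recording"):
        cap = cv2.VideoCapture(0)  # default webcam
        while st.session_state.get("recording"):
            ok, frame = cap.read()
            if not ok:
                break
            # OpenCV frames are BGR; Streamlit expects RGB.
            frame_slot.image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # caption_slot.text(predict_caption(frame))  # hypothetical model call
        cap.release()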

Installation

  1. Clone the repository:
    git clone https://github.com/jothsnapraveena/SignPal-Real-Time-Sign-Language-Translation.git
    cd SignPal-Real-Time-Sign-Language-Translation
  2. Install the required packages:
    pip install -r requirements.txt
  3. Run the application:
    streamlit run app.py
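
For reference, requirements.txt pulls in packages corresponding to the technologies listed in the next section; an illustrative (not verbatim) version might look like:

    streamlit
    tensorflow
    opencv-python
    mediapipe
    numpy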

Technologies Used

  • Python
  • TensorFlow/Keras (for the deep learning model)
  • Long Short-Term Memory (LSTM) architecture (for sequence modeling)
  • Streamlit (for web application development)
  • OpenCV (for video processing)
  • MediaPipe (for gesture detection)
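
To illustrate how these pieces typically fit together, the sketch below extracts hand keypoints with MediaPipe and feeds fixed-length keypoint sequences to a Keras LSTM classifier. The sequence length (30 frames) and vocabulary size are assumptions for illustration, not the project's actual values:

    import mediapipe as mp
    import numpy as np
    from tensorflow.keras.layers import LSTM, Dense
    from tensorflow.keras.models import Sequential

    # Per-frame landmark detection; results come from something like:
    #   results = mp.solutions.holistic.Holistic().process(rgb_frame)
    def extract_keypoints(results):
        """Flatten left/right hand landmarks into one feature vector per frame."""
        lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
              if results.left_hand_landmarks else np.zeros(21 * 3))
        rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
              if results.right_hand_landmarks else np.zeros(21 * 3))
        return np.concatenate([lh, rh])  # 126 features per frame

    NUM_SIGNS = 10   # assumed vocabulary size
    SEQ_LEN = 30     # assumed frames per gesture
    model = Sequential([
        LSTM(64, return_sequences=True, input_shape=(SEQ_LEN, 126)),
        LSTM(128),
        Dense(NUM_SIGNS, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])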

Team

This project was developed by Pumpkin Seeds, a team of graduate students from Clark University pursuing a Master's in Data Analytics:

  • Kunal Malhan
  • Keerthana Goka
  • Jothsna Praveena Pendyala
  • Mohana Uma Manem
