
AI-powered smart helmet for the deaf/hard-of-hearing - detects emergency sounds (sirens, horns) and alerts via haptic/visual feedback. Raspberry Pi-based assistive technology helmet that translates critical audio alerts into tactile and visual notifications.


SoundSenseHelmet 🛡️🎧

AI-Powered Safety Helmet for the Deaf Community

License: MIT
Python 3.9+
TensorFlow Lite

🏗️ Project Structure

```
smart_helmet/
├── client/                     # Laptop audio streaming client
│   └── audio_client.py
├── raspberry_pi/               # Core helmet system
│   ├── audio/                  # Audio processing
│   │   ├── audio_server.py         # UDP server
│   │   ├── audio_processor.py      # Chunk handling
│   │   └── features.py             # Spectrogram conversion
│   ├── ml/                     # Machine learning
│   │   ├── inference.py            # Real-time prediction
│   │   ├── model_loader.py         # TFLite integration
│   │   └── models/                 # Pretrained models
│   ├── sensors/                # Sensor interfaces
│   │   ├── force_sensor.py         # Impact detection
│   │   ├── gps.py                  # Location tracking
│   │   └── mpu6050.py              # Head tracking
│   ├── output/                 # User feedback
│   │   ├── haptic.py               # Vibration control
│   │   └── visual.py               # LED patterns
│   └── utils/                  # Utilities
│       ├── alerts.py               # Notification system
│       ├── logger.py               # Logging config
│       └── config.py               # Hardware pin config
├── model_training/             # ML model development
├── tests/                      # Unit tests
│   ├── test_audio_streaming.py
│   ├── test_ml_inference.py
│   └── test_sensors.py
├── requirements.txt            # Python dependencies
└── README.md
```

🚀 Key Features

  • Real-Time Audio Classification
    • Processes microphone input at 22.05 kHz in 50 ms chunks
    • Identifies sirens, horns, and emergency sounds
  • Multi-Sensor Integration
    • MPU6050 for head orientation tracking
    • Force sensor for crash detection
    • GPS for emergency location logging
  • Tactile Feedback System
    • Configurable vibration patterns for different alerts
    • LED visual indicators
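As a rough illustration of the audio pipeline above: at 22.05 kHz, a 50 ms chunk is 1102 samples, which `features.py` would turn into a spectrogram before inference. The sketch below is a minimal, hypothetical version of that conversion using only NumPy (the actual feature extraction in `raspberry_pi/audio/features.py` may differ, e.g. use mel scaling):

```python
import numpy as np

SAMPLE_RATE = 22050        # Hz, per the feature list above
CHUNK_MS = 50              # chunk duration in milliseconds
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000  # 1102 samples per chunk

def chunk_to_spectrogram(chunk: np.ndarray, n_fft: int = 256, hop: int = 128) -> np.ndarray:
    """Turn one mono audio chunk into a log-magnitude spectrogram."""
    # Slice the chunk into overlapping frames and apply a Hann window
    frames = [chunk[i:i + n_fft] for i in range(0, len(chunk) - n_fft + 1, hop)]
    windowed = np.array(frames) * np.hanning(n_fft)
    # Magnitude spectrum per frame; log1p compresses the dynamic range
    mags = np.abs(np.fft.rfft(windowed, axis=1))
    return np.log1p(mags)  # shape: (num_frames, n_fft // 2 + 1)

chunk = np.random.randn(CHUNK_SAMPLES).astype(np.float32)
spec = chunk_to_spectrogram(chunk)
```

With these parameters a single 50 ms chunk yields a small (7 × 129) spectrogram, cheap enough to compute per chunk on a Raspberry Pi.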

📦 Hardware

  • Raspberry Pi 4
  • MEMS Microphone
  • MPU6050 (Head tracking)
  • GPS Module (NEO-6M)
  • Vibration Motors
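For context on the MPU6050: it reports signed 16-bit register values, and at the default ±2 g full-scale range 16384 LSB corresponds to 1 g. A minimal conversion helper (the real I²C register reads live in `raspberry_pi/sensors/mpu6050.py`; this sketch only shows the scaling step) might look like:

```python
def raw_to_g(raw: int, scale: float = 16384.0) -> float:
    """Convert a raw MPU6050 accelerometer reading to g.

    The sensor returns two's-complement 16-bit values; at the
    default +/-2 g full scale, 16384 LSB == 1 g.
    """
    if raw >= 0x8000:          # interpret as negative two's complement
        raw -= 0x10000
    return raw / scale
```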

System Architecture

*(Architecture diagram)*

🛠️ Setup

```shell
# 1. Install dependencies
pip install -r requirements.txt

# 2. Configure hardware pins
nano raspberry_pi/utils/config.py

# 3. Start the system
# On the Pi:
python raspberry_pi/main.py

# On the laptop:
python client/audio_client.py --pi-ip 192.168.x.x
```
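The laptop client streams audio to the Pi's UDP server, one datagram per chunk. A stripped-down sketch of that exchange is below; the port here is chosen dynamically for the demo, whereas the real system would use the fixed port configured in `config.py` (a hypothetical detail, not confirmed by the source):

```python
import socket

CHUNK_BYTES = 2204   # 1102 samples * 2 bytes per 16-bit mono sample

# Server side -- roughly what raspberry_pi/audio/audio_server.py would do.
# Port 0 lets the OS pick a free port for this demo.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
pi_addr = server.getsockname()

# Client side: send one chunk (here, silence) as a single UDP datagram.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"\x00" * CHUNK_BYTES, pi_addr)

data, _ = server.recvfrom(4096)   # one datagram per audio chunk
client.close()
server.close()
```

UDP keeps per-chunk latency low; a dropped 50 ms chunk is acceptable for this use case, whereas TCP retransmission delays would not be.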

🧪 Testing

```shell
# Run all tests
python -m pytest tests/

# Individual test modules
pytest tests/test_sensors.py
pytest tests/test_ml_inference.py -v
```
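For a sense of what the inference tests cover, here is an illustrative pytest-style sketch. The `classify` function and label names are hypothetical stand-ins for the real TFLite inference in `raspberry_pi/ml/inference.py`:

```python
import numpy as np

def classify(spec: np.ndarray) -> str:
    """Stand-in for the real TFLite inference call (illustrative only)."""
    return "siren" if spec.mean() > 0.5 else "background"

def test_classify_returns_known_label():
    spec = np.ones((7, 129), dtype=np.float32)
    assert classify(spec) in {"siren", "horn", "background"}

def test_silence_is_background():
    assert classify(np.zeros((7, 129), dtype=np.float32)) == "background"
```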

Recommended Tags

Topics:  
assistive-technology, deaf-community, real-time-audio, raspberry-pi, tensorflow-lite, haptic-feedback, smart-helmet, accessibility
