A comprehensive AI-powered system for detecting and analyzing human behaviors in video footage using computer vision and machine learning techniques.
This system automatically analyzes video files to:
- Extract video frames at configurable intervals
- Detect human poses using MediaPipe
- Classify behaviors (normal, falling, fighting, loitering, running, walking)
- Trigger alerts for suspicious behaviors
- Provide an interactive chatbot for querying analysis results
```
human_behavior_detection_project/
├── data/
│   ├── raw_videos/          # Input MP4 video files
│   ├── extracted_frames/    # Extracted video frames
│   ├── keypoints/           # Pose keypoint data
│   ├── labels/              # Behavior labels (for training)
│   └── logs/                # System logs and alerts
├── preprocessing/
│   ├── extract_frames.py    # Video frame extraction
│   └── extract_pose.py      # Pose keypoint extraction
├── ai_model/
│   └── predict_behavior.py  # Behavior classification model
├── alerts/
│   └── alert_trigger.py     # Alert system
├── chatbot/
│   └── chatbot_interface.py # Interactive chatbot
├── main.py                  # Main orchestration script
├── requirements.txt         # Python dependencies
└── README.md                # This file
```
```bash
# Clone the repository
git clone <repository-url>
cd human_behavior_detection_project

# Install dependencies
pip install -r requirements.txt
```

Place your MP4 video files in the `data/raw_videos/` directory.
```bash
# Start the interactive menu system
python main.py --mode interactive
```

This will show you a menu with the following options:

- 📹 Analyze Saved Videos
- 🎥 Real-time Camera Analysis
- 🔄 Run Full Pipeline
- 💬 Chat with AI Assistant
- 📊 System Status
- 🧪 Test System
```bash
# Run the complete analysis pipeline
python main.py --mode full_pipeline

# Start the interactive chatbot
python main.py --mode chat

# Check system status
python main.py --status
```

To start the interactive menu system with all features:

```bash
python main.py --mode interactive
```
- 📹 Analyze saved videos
- 🎥 Real-time camera analysis
- 🔄 Run full pipeline
- 💬 Chat with AI assistant
- 📊 System status
- 🧪 Test system
```bash
# 1. Extract frames from the videos
python main.py --mode extract_frames --fps 1

# 2. Extract pose keypoints
python main.py --mode extract_pose

# 3. Classify behaviors
python main.py --mode analyze
```

Or use the interactive menu to select and analyze specific videos.
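For reference, here is a minimal sketch of the kind of OpenCV sampling loop the frame-extraction step likely performs; the actual logic lives in `preprocessing/extract_frames.py`, so treat the function below as an illustration rather than the project's exact code.

```python
# Illustrative only: a plausible frame-extraction loop, not the project's code.
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: float = 1.0) -> int:
    """Save roughly `fps` JPEG frames per second of video; return the count."""
    cap = cv2.VideoCapture(video_path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata is missing
    step = max(int(round(video_fps / fps)), 1)      # keep every `step`-th frame
    Path(out_dir).mkdir(parents=True, exist_ok=True)

    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```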
```bash
python main.py --mode interactive
# Then select option 2
```

Use your webcam for live behavior detection with pose visualization.
```bash
python main.py --mode full_pipeline
```

Runs the complete workflow: frame extraction → pose detection → behavior classification → alert monitoring.
```bash
python main.py --mode chat
```

Start an interactive chatbot to query analysis results.
```bash
python main.py --status
```

Check the current status of all system components.
The chatbot can answer questions like:
- "What behaviors were detected in the videos?"
- "Show me recent alerts"
- "How many videos have been processed?"
- "What happened in video X?"
- "Show me statistics about falling behavior"
- "What are the most common behaviors detected?"
The system automatically triggers alerts for suspicious behaviors:
| Behavior | Severity | Threshold |
|---|---|---|
| Falling | High | 30% |
| Fighting | Critical | 20% |
| Loitering | Medium | 50% |
| Running | Medium | 40% |
Alerts include:
- Console notifications with color coding
- Log file entries
- Email notifications (configurable)
- Cooldown periods to prevent spam
Edit `alerts/alert_config.json` to customize (see the sketch after this list):
- Email notification settings
- Alert thresholds
- Cooldown periods
- Log file locations
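The exact schema of `alerts/alert_config.json` is defined by the project; the sketch below shows one plausible shape (the keys are assumptions) and a defensive way to load it, with defaults mirroring the threshold table above.

```python
# Assumed config shape; check alerts/alert_config.json for the real schema.
import json

DEFAULT_CONFIG = {
    "email": {"enabled": False, "recipients": []},
    "thresholds": {"falling": 0.30, "fighting": 0.20, "loitering": 0.50, "running": 0.40},
    "cooldown_seconds": 60,
    "log_file": "data/logs/alerts.log",
}

def load_alert_config(path: str = "alerts/alert_config.json") -> dict:
    """Overlay the on-disk config on the defaults so missing keys stay safe."""
    try:
        with open(path) as f:
            user_config = json.load(f)
    except FileNotFoundError:
        user_config = {}
    return {**DEFAULT_CONFIG, **user_config}
```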
Control the frame extraction rate:

```bash
python main.py --mode full_pipeline --fps 2  # 2 frames per second
```

Choose between synthetic and real training data:

```bash
# Use synthetic data (default)
python main.py --mode train_model

# Use real data (requires labeled data)
python main.py --mode train_model --no-synthetic
```
- Extracted frames: `data/extracted_frames/<video_name>/`, JPEG images named `frame_XXXXXX.jpg`
- Pose keypoints: `data/keypoints/<video_name>/`, containing `pose_data.json` (human-readable pose data) and `pose_data.pkl` (binary format for fast loading)
- Trained model: `ai_model/behavior_classifier.pkl`
- Main logs: `data/logs/main.log`
- Alert logs: `data/logs/alerts.log`
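Both keypoint files hold the same data; here is a quick way to load them back (the folder name `example_video` is a stand-in for one of your videos):

```python
import json
import pickle

video_name = "example_video"  # replace with a folder under data/keypoints/

# Fast binary path
with open(f"data/keypoints/{video_name}/pose_data.pkl", "rb") as f:
    pose_data = pickle.load(f)

# Human-readable path
with open(f"data/keypoints/{video_name}/pose_data.json") as f:
    pose_data_readable = json.load(f)
```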
- Library: MediaPipe Pose
- Landmarks: 33 body keypoints
- Features: x, y, z coordinates + visibility scores
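As a rough sketch of what per-frame keypoint extraction looks like with MediaPipe Pose (the project's version in `preprocessing/extract_pose.py` may differ in details such as model options and batching):

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(image_bgr):
    """Return 33 (x, y, z, visibility) tuples, or None if no person is found."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        # MediaPipe expects RGB input; OpenCV loads BGR
        results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.z, lm.visibility)
            for lm in results.pose_landmarks.landmark]

frame = cv2.imread("data/extracted_frames/example_video/frame_000000.jpg")
keypoints = extract_keypoints(frame)  # 33 landmarks × 4 features = 132 values
```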
- Algorithm: Random Forest Classifier
- Features: Pose landmark coordinates
- Classes: normal, falling, fighting, loitering, running, walking
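Conceptually, the classifier maps a flattened 132-value feature vector (33 landmarks × 4 values) to one of the six classes. Below is a minimal scikit-learn sketch with stand-in data, not the trained model shipped in `ai_model/`:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["normal", "falling", "fighting", "loitering", "running", "walking"]

# Stand-in data: one row per frame, 132 features (33 landmarks × x, y, z, visibility)
rng = np.random.default_rng(42)
X_train = rng.random((600, 132))
y_train = rng.choice(CLASSES, size=600)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Per-class confidence scores for one frame's features
probabilities = clf.predict_proba(X_train[:1])
print(dict(zip(clf.classes_, probabilities[0].round(3))))
```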
- Types: Console, Email, Log file
- Severity Levels: Low, Medium, High, Critical
- Cooldown: Configurable time periods
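A plausible shape for the trigger logic, combining a per-behavior threshold with a cooldown window; `alerts/alert_trigger.py` may implement this differently:

```python
# Hypothetical trigger: threshold check plus cooldown to prevent alert spam.
import time

last_alert_time: dict[str, float] = {}

def maybe_alert(behavior: str, fraction: float, thresholds: dict[str, float],
                cooldown_seconds: float = 60.0) -> bool:
    """Fire when `fraction` crosses the threshold, at most once per cooldown."""
    threshold = thresholds.get(behavior)
    if threshold is None or fraction < threshold:
        return False
    now = time.monotonic()
    if now - last_alert_time.get(behavior, -cooldown_seconds) < cooldown_seconds:
        return False  # still cooling down
    last_alert_time[behavior] = now
    print(f"[ALERT] {behavior}: detected in {fraction:.0%} of recent frames")
    return True

maybe_alert("falling", 0.35, {"falling": 0.30})  # fires: 35% >= 30% threshold
```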
- Frame Extraction: ~100-500 fps (depending on video resolution)
- Pose Detection: ~10-30 frames per second
- Behavior Classification: ~1000+ predictions/second
- Pose Detection: >90% for clear human figures
- Behavior Classification: Varies by behavior type (requires training data)
- **No videos found**
  - Ensure MP4 files are in `data/raw_videos/`
  - Check file permissions
- **Pose detection fails**
  - Verify video quality and lighting
  - Check the MediaPipe installation
- **Model training errors**
  - Test with synthetic data first (the default); the `--no-synthetic` flag requires real labeled data
  - Check available memory
- **Alert system not working**
  - Verify the alert configuration file
  - Check log files for errors
- Main logs: `data/logs/main.log`
- Alert logs: `data/logs/alerts.log`
- Chatbot logs: console output
The system includes synthetic training data for testing:
```bash
python main.py --mode train_model  # Uses synthetic data by default
```

For production use, prepare labeled training data:
- Extract frames from videos
- Extract pose keypoints
- Label behaviors manually (one possible label format is sketched after this list)
- Train model with real data
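The project does not prescribe a label format here, so the CSV below is just one workable convention: one row per extracted frame, matching the `frame_XXXXXX.jpg` names.

```python
# Hypothetical label format; adapt to whatever data/labels/ actually expects.
import csv

rows = [
    {"video": "example_video", "frame": "frame_000000.jpg", "behavior": "walking"},
    {"video": "example_video", "frame": "frame_000001.jpg", "behavior": "falling"},
]

with open("data/labels/example_video.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["video", "frame", "behavior"])
    writer.writeheader()
    writer.writerows(rows)
```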
- Real-time video processing
- Multi-person detection
- Advanced behavior patterns
- Web interface
- Mobile app integration
- Cloud deployment
- Custom behavior training
This project is licensed under the MIT License - see the LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
For questions and support:
- Check the troubleshooting section
- Review log files for errors
- Open an issue on GitHub
Note: This system is designed for research and educational purposes. For production deployment, ensure compliance with privacy laws and regulations.