Body Language Decoder

Welcome to my Body Language Decoder project! This project aims to decode human emotions and reactions by analyzing facial expressions and gestures with computer vision and machine learning. It leverages Python, MediaPipe, OpenCV (cv2), the csv module, NumPy, Pandas, scikit-learn, and pickle to create an insightful tool for understanding body language.

Table of Contents

- Introduction
- Technical Overview of MediaPipe
- Creating the Dataset
- Real-Time Facial Expression Analysis
- Probability Box/Function
- Conclusion

Introduction

This project uses computer vision and machine learning to decode body language, providing insights into human behavior.

Technical Overview of MediaPipe

MediaPipe, developed by Google, is a computer vision library that provides pre-trained models for various computer vision tasks, e.g., facial landmark detection. It allowed me to easily extract facial landmarks from images and video streams, enabling accurate tracking of key points on the face such as the eyes, nose, mouth, and eyebrows. This library forms the backbone of the project, enabling me to analyze and interpret facial expressions.
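
As a rough illustration (not the project's exact code), here is a minimal sketch of pulling facial landmarks from a single webcam frame with MediaPipe's Holistic solution; the confidence thresholds are illustrative choices:

```python
# Minimal sketch: extract facial landmarks from one webcam frame
# with MediaPipe's Holistic solution (thresholds are illustrative).
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.face_landmarks:
        # Each of the 468 face landmarks has normalized x, y, z coordinates.
        first = results.face_landmarks.landmark[0]
        print(first.x, first.y, first.z)
```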

Creating the Dataset

Data is needed before training. I used the csv module to collect and organize the coordinates of facial landmarks extracted via MediaPipe. I captured and labeled facial landmarks of my own various emotional states, creating the dataset. The coordinates were recorded in real time while I displayed each emotion; it was cool to build the dataset in such an interactive way, watching my facial expressions being translated into thousands of numerical values (coordinates).
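
As a sketch of how such labeled rows might be written (the file name and the helper below are illustrative, not taken from the repo):

```python
# Sketch: append one labeled row of landmark coordinates to a CSV.
# The path "coords.csv" and the helper name are illustrative choices.
import csv

def export_row(face_landmarks, label, path="coords.csv"):
    # Flatten each landmark into x, y, z columns, prefixed by the class label.
    row = [label]
    for lm in face_landmarks.landmark:
        row.extend([lm.x, lm.y, lm.z])
    with open(path, mode="a", newline="") as f:
        csv.writer(f).writerow(row)
```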

Real-Time Facial Expression Analysis

OpenCV (cv2) allowed me to capture video frames from a webcam feed and process them with the MediaPipe facial landmark model. The coordinates of the detected landmarks were then saved to the dataset, forming the feature representation of each facial expression.
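
A sketch of what such a capture loop can look like, reusing the hypothetical export_row helper from the previous sketch (the label is a placeholder for whichever emotion is being recorded):

```python
# Sketch of a real-time collection loop: grab frames, extract landmarks,
# append them to the dataset. Press "q" to stop recording.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture(0)
with mp_holistic.Holistic() as holistic:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.face_landmarks:
            export_row(results.face_landmarks, label="happy")  # placeholder label
        cv2.imshow("Body Language Decoder", frame)
        if cv2.waitKey(10) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```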

Probability Box/Function

I used NumPy to construct and display a probability box within the Body Language Decoder. This visual overlay shows the likelihood of each emotion being expressed, based on the model's predictions over the facial landmark data.
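
As a sketch of how such an overlay could work, assuming a scikit-learn classifier trained on the landmark rows and saved with pickle (the file name body_language.pkl and the box geometry are hypothetical):

```python
# Sketch: predict emotion probabilities for one row of landmark features
# and draw a probability box onto the frame. The model file is hypothetical.
import pickle

import cv2
import numpy as np
import pandas as pd

with open("body_language.pkl", "rb") as f:
    model = pickle.load(f)  # a fitted scikit-learn classifier

def draw_probability_box(frame, features):
    # predict_proba returns one probability per emotion class.
    probs = model.predict_proba(pd.DataFrame([features]))[0]
    top = int(np.argmax(probs))
    label = model.classes_[top]
    # Draw a filled box showing the top class and its probability.
    cv2.rectangle(frame, (0, 0), (250, 60), (245, 117, 16), -1)
    cv2.putText(frame, f"{label}: {probs[top]:.2f}", (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return frame
```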

Conclusion

Thank you for looking at my Body Language Decoder project.
