This repository has been archived by the owner on Nov 17, 2019. It is now read-only.
Capability In Progress: Facial Expression Recognition
snielsen221b edited this page Jun 13, 2017 · 3 revisions
Current process: using K-Means clustering to train a classifier on the Cohn-Kanade AU-Coded Expression Database (http://www.pitt.edu/~emotion/ck-spread.htm).
- Face split into ? regions -> should all landmarks be used for each AU?
- Classifiers split so that no labels are doubled
  - Mouth:
    - Theoretical detections: FACS 10-28, sans 19, 21, 25
    - Actual labels present (at 30 samples): FACS 12, 13
  - Lip part:
    - Uses the same landmarks as the mouth region
    - Theoretical detections: FACS 25
    - Actual labels present (at 30 samples): FACS 25
  - Eyes:
    - Theoretical detections: FACS 5, 7, 41-46
    - Actual labels present (at 30 samples): FACS 45
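A minimal sketch of the per-region clustering step described above, using scikit-learn's `KMeans`. The landmark vectors here are synthetic stand-ins (the CK+ data itself is not bundled), and the mapping of clusters to AU 12 / AU 13 is an illustrative assumption, not the project's actual training code:

```python
# Sketch: cluster landmark vectors for one face region (e.g. the mouth),
# then map each K-Means cluster to the majority FACS AU label among its
# members. Synthetic data stands in for the CK+ samples.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in mouth-region landmark vectors: 30 samples, 20 flattened
# (x, y) coordinates each, in two loosely separated groups.
samples = np.vstack([
    rng.normal(0.0, 0.1, size=(15, 20)),   # pretend these show AU 12
    rng.normal(1.0, 0.1, size=(15, 20)),   # pretend these show AU 13
])
labels = np.array([12] * 15 + [13] * 15)   # FACS AU codes (hypothetical)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(samples)

# Assign each cluster the majority AU label of its members.
cluster_to_au = {}
for c in range(kmeans.n_clusters):
    members = labels[kmeans.labels_ == c]
    values, counts = np.unique(members, return_counts=True)
    cluster_to_au[c] = int(values[np.argmax(counts)])

def predict_au(landmarks):
    """Predict a FACS AU code for one region's landmark vector."""
    cluster = kmeans.predict(landmarks.reshape(1, -1))[0]
    return cluster_to_au[cluster]

print(predict_au(np.zeros(20)))
```

The same pattern would be repeated per region (mouth, lip part, eyes), each with its own `KMeans` model and cluster-to-AU map.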
TODO:
- Split eyes into eyelid movements and movements that affect the eye region
- Fix all conflicting labels
- Train classifiers on all landmarks
- Chain classifiers for facial expression recognition
- Create a structure for testing accuracy
- Test accuracy of the holistic expression classifier (as opposed to going through FACS)
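The "chain classifiers" and "testing accuracy" items could be sketched as below. The AU-to-expression rules and the test pairs are hypothetical placeholders, assuming each region classifier emits a set of detected AU codes:

```python
# Sketch: combine per-region AU detections into one expression label,
# plus a minimal accuracy harness. The AU -> expression rules here are
# illustrative assumptions, not the project's actual mapping.
def classify_expression(detected_aus):
    """Map a set of detected FACS AU codes to a coarse expression label."""
    aus = set(detected_aus)
    if 12 in aus:            # lip corner puller
        return "happiness"
    if aus == {45}:          # blink only
        return "neutral"
    return "unknown"

def accuracy(classifier, pairs):
    """Fraction of (aus, expected_label) pairs the classifier gets right."""
    correct = sum(1 for aus, expected in pairs if classifier(aus) == expected)
    return correct / len(pairs)

# Hypothetical held-out pairs for the harness.
test_set = [
    ({12, 25}, "happiness"),
    ({45}, "neutral"),
    ({5, 7}, "unknown"),
]
print(accuracy(classify_expression, test_set))  # -> 1.0
```

A holistic classifier (trained directly on expressions rather than via FACS) could be dropped into the same `accuracy` harness for comparison.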