
# Capability In Progress: Facial Expression Recognition


Current process: training FACS action-unit (AU) classifiers on the Cohn-Kanade AU-Coded Expression Database (http://www.pitt.edu/~emotion/ck-spread.htm). K-Means clustering was tried first and abandoned; the current approach is label propagation (see below).

## Machine Learning

### First (Failed) Approach: K-Means Clustering
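The page doesn't record the K-Means setup itself, so here is a minimal sketch of the general shape of this approach, assuming scikit-learn and a hypothetical feature matrix of flattened landmark coordinates (`mouth_landmarks.npy` is a placeholder name, not a file from this repo):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per frame, each row the flattened (x, y)
# coordinates of the facial landmarks for one region (e.g. the mouth).
landmark_features = np.load("mouth_landmarks.npy")  # shape: (n_frames, n_coords)

# Cluster the frames and hope the clusters line up with AUs.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(landmark_features)

# Note (an inference, not stated on this page): cluster IDs are arbitrary,
# so they still have to be matched against FACS AU labels after the fact,
# which is hard when only a handful of samples are labeled.
```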

### Second Approach (in progress): Label Propagation
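A minimal sketch of the label-propagation direction, assuming scikit-learn's `LabelPropagation` and its convention that unlabeled samples are marked `-1`; the feature and label arrays are hypothetical placeholders:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Hypothetical input: landmark feature vectors, with FACS AU labels for
# the small hand-coded subset and -1 for every unlabeled frame.
features = np.load("mouth_landmarks.npy")   # shape: (n_frames, n_coords)
labels = np.load("mouth_au_labels.npy")     # e.g. 12, 13, or -1 if unlabeled

# Label propagation spreads the few known AU labels to nearby unlabeled
# frames through a similarity graph built over the feature vectors.
model = LabelPropagation(kernel="knn", n_neighbors=7)
model.fit(features, labels)

predicted_aus = model.transduction_  # inferred label for every frame
```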

## Data Processing

### Image Processing
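This section was left empty. As a placeholder, here is one common way to pull the per-region landmarks that the sections below refer to, using dlib's 68-point shape predictor; this is an assumption about tooling, since the page doesn't say which landmark detector the project uses:

```python
import dlib

# Assumed tooling: dlib's frontal face detector plus the standard
# 68-point shape predictor (the .dat model file is downloaded separately).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def region_landmarks(image):
    """Return mouth and eye landmark coordinates for the first detected face."""
    faces = detector(image)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    # Conventional 68-point index ranges (an assumption, chosen to match
    # the region split described under "Labeling Data" below):
    return {
        "mouth": points[48:68],  # outer + inner lip points
        "eyes": points[36:48],   # left and right eye points
    }
```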

### Labeling Data

- Face split into ? regions -> open question: should all landmarks be used for each AU?
- Classifiers split so that no label is claimed by more than one classifier (see the mapping sketch after this list)

1. Mouth
   - Theoretical detections: FACS AUs 10-28, excluding 19, 21, and 25
   - Actual labels present (at 30 samples): AUs 12 and 13
2. Lip part
   - Uses the same landmarks as the mouth region
   - Theoretical detections: AU 25
   - Actual labels present (at 30 samples): AU 25
3. Eyes
   - Theoretical detections: AUs 5, 7, and 41-46
   - Actual labels present (at 30 samples): AU 45
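For reference, the split above can be written down as a small mapping; the structure and names here are mine, but the AU numbers come straight from the list above:

```python
# AUs each regional classifier should detect in theory, versus the AUs
# actually observed in the first 30 labeled samples.
REGION_AUS = {
    "mouth": {
        "theoretical": [au for au in range(10, 29) if au not in (19, 21, 25)],
        "observed": [12, 13],
    },
    "lip_part": {  # uses the same landmarks as the mouth region
        "theoretical": [25],
        "observed": [25],
    },
    "eyes": {
        "theoretical": [5, 7, 41, 42, 43, 44, 45, 46],
        "observed": [45],
    },
}
```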

TODO: split eyes into eyelid movements and movements that affect the eye region

TODO: train classifiers on all landmarks

## Current Goals

  1. Fix all conflicting labels
  2. Train classifiers on all landmarks
  3. Chain classifiers for facial expression recognition (see the sketch after this list)
  4. Create a structure for testing accuracy
  5. Test accuracy of a holistic expression classifier (i.e. classifying expressions directly rather than via FACS)
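For goal 3, one plausible shape of the chaining step: run each regional AU classifier, pool the detected AUs, and look the combination up in an AU-to-expression table. Everything below is a hypothetical sketch; the `EXPRESSIONS` entries follow common FACS pairings (e.g. 6+12 for happiness) and are not taken from this page:

```python
def detect_aus(region_features, classifiers):
    """Run each regional classifier and pool every AU it detects."""
    detected = set()
    for region, clf in classifiers.items():
        detected.update(clf.predict(region_features[region]))
    return detected

# Hypothetical AU-combination -> expression lookup; a real table would
# come from the FACS literature.
EXPRESSIONS = {
    frozenset({6, 12}): "happiness",
    frozenset({1, 4, 15}): "sadness",
}

def classify_expression(detected_aus):
    """Return the first expression whose AU set is fully present."""
    for aus, expression in EXPRESSIONS.items():
        if aus <= detected_aus:
            return expression
    return "unknown"
```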