Facial expression recognition
The facial expression recognition pipeline is encapsulated in chapter7.py. This file contains an interactive GUI that operates in two modes (training and testing), as described earlier.
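One simple way to run a single script in either of two modes is a command-line switch, sketched below. This is purely illustrative: the mode names, function names, and the argparse interface are assumptions for this sketch, and the actual chapter7.py may select its mode differently.

```python
import argparse

# Illustrative stand-ins for the two operating modes of the GUI script.
def run_training():
    print("collecting labeled face samples for the training set")

def run_testing():
    print("classifying facial expressions on the live video stream")

def main(argv=None):
    # A hypothetical --mode flag chooses between training and testing.
    parser = argparse.ArgumentParser(description="facial expression GUI")
    parser.add_argument("--mode", choices=["train", "test"], default="train")
    args = parser.parse_args(argv)
    if args.mode == "train":
        run_training()
    else:
        run_testing()
    return args.mode

if __name__ == "__main__":
    main()
```

Keeping both modes in one script lets the training and testing code paths share the same camera setup and preprocessing.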
In order to arrive at our end-to-end app, we need to cover the following three steps:

1. Load the chapter7.py GUI in training mode to assemble a training set.
2. Train an MLP classifier on the training set via train_test_mlp.py. Because this step can take a long time, the process takes place in its own script. After successful training, store the trained weights in a file, so that we can load the pre-trained MLP in the next step.
3. Load the chapter7.py GUI in testing mode to classify facial expressions on a live video stream in real time. This step involves loading several pre-trained cascade classifiers as well as our pre-trained MLP classifier. These classifiers are then applied to every captured video frame.
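The train-then-store handoff in step 2 can be sketched as follows. This is a minimal illustration, not the book's actual code: the tiny NumPy network, the toy dataset, the helper names (train_mlp, forward), and the file name mlp_weights.pkl are all assumptions standing in for the real MLP and training set.

```python
import pickle
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    """Forward pass through a 1-hidden-layer sigmoid MLP."""
    h = sig(X @ W1)
    return h, sig(h @ W2)

def train_mlp(X, y, n_hidden=8, epochs=2000, lr=0.5, seed=0):
    """Train the tiny MLP with full-batch gradient descent on squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    for _ in range(epochs):
        h, out = forward(X, W1, W2)
        d_out = (out - y) * out * (1 - out)   # backprop through output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through hidden sigmoid
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h
    return W1, W2

# Toy stand-in for the assembled training set (the AND function).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [0.], [0.], [1.]])

W1, W2 = train_mlp(X, y)

# Persist the trained weights (the role of train_test_mlp.py) ...
with open("mlp_weights.pkl", "wb") as f:
    pickle.dump((W1, W2), f)

# ... then load them back for classification (the role of chapter7.py).
with open("mlp_weights.pkl", "rb") as f:
    W1_loaded, W2_loaded = pickle.load(f)

_, preds = forward(X, W1_loaded, W2_loaded)
```

Decoupling the slow training run from the real-time GUI this way means the video loop only pays the cost of a forward pass per frame.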
Assembling a training set
Before we can train an MLP, we need to...