The facial-expression-recognition pipeline is encapsulated in chapter8.py. This file implements an interactive GUI that operates in two modes (training and testing), as described earlier.
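The two-mode switch can be sketched as follows. This is a minimal illustration of the dispatch, not the book's actual code; the launcher names in the comments are placeholders:

```python
import sys

def run(mode: str) -> str:
    """Dispatch to the requested GUI mode; returns the mode that was run."""
    if mode == "collect":
        # launch_collect_gui()  # placeholder: would open the data-collection GUI
        return "collect"
    elif mode == "demo":
        # launch_demo_gui()  # placeholder: would open the live-testing GUI
        return "demo"
    raise SystemExit(f"usage: python chapter8.py [collect|demo] (got {mode!r})")

if __name__ == "__main__":
    # default to demo mode when no argument is given
    run(sys.argv[1] if len(sys.argv) > 1 else "demo")
```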
Our entire application is divided into the following parts:
- Running the application in collect mode with the following command:
$ python chapter8.py collect
The previous command will pop up the GUI in data collection mode, where we assemble a training set.
- Training an MLP classifier on the training set via python train_classifier.py. Because this step can take a long time, it lives in its own script. After successful training, the script stores the trained weights in a file so that we can load the pre-trained MLP in the next step.
- Then, running the GUI again in the demo mode as follows, we will be able to see how well the trained classifier recognizes facial expressions in real time:
$ python chapter8.py demo
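The train-then-persist step above can be sketched as follows. This is a hedged stand-in, not the book's train_classifier.py: it assumes scikit-learn's MLPClassifier in place of the book's own MLP implementation, and uses pickle for the weight file:

```python
import pickle
from sklearn.neural_network import MLPClassifier

def train_and_save(X, y, path="mlp_weights.pkl"):
    """Fit an MLP on the collected samples and persist it to disk."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X, y)
    with open(path, "wb") as f:
        pickle.dump(clf, f)
    return clf

def load_pretrained(path="mlp_weights.pkl"):
    """Reload the pre-trained MLP so the demo mode can classify new frames."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

Persisting the fitted classifier is what lets the demo mode start instantly: the expensive fit happens once, offline, and the GUI only ever calls load_pretrained.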