To run our app, we need to execute the main script (chapter8.py), which loads the pre-trained cascade classifier and the pre-trained MLP and applies them to each frame of the live webcam stream.
However, this time, instead of collecting more training samples, we will start the program with a different option, shown here:
$ python chapter8.py demo --classifier data/clf1
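A command line like this is typically handled with argparse sub-commands. The following is a minimal sketch of how such parsing could look; the function name `parse_cli` and the exact argument wiring are illustrative assumptions, not the book's actual code:

```python
import argparse

def parse_cli(argv=None):
    """Parse the command line, supporting a 'demo' sub-command (sketch)."""
    parser = argparse.ArgumentParser(
        description='Facial expression recognition app')
    subparsers = parser.add_subparsers(dest='mode', required=True)

    # 'demo' runs the recognizer with a previously trained classifier
    demo = subparsers.add_parser('demo')
    demo.add_argument('--classifier', required=True,
                      help='path to the pre-trained classifier data')
    return parser.parse_args(argv)

args = parse_cli(['demo', '--classifier', 'data/clf1'])
print(args.mode, args.classifier)
```

With this setup, additional sub-commands (for example, one for collecting training samples) can be registered on the same `subparsers` object.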
This will start the application with a new layout, FacialExpressionRecognizerLayout, which is a subclass of BaseLayout without any extra UI elements. Let's go over the constructor first, as follows:
- It reads and initializes all the data that was stored by the training script, like this:
  class FacialExpressionRecognizerLayout(BaseLayout):
      def __init__(self, *args,
                   clf_path=None,
                   **kwargs):
          super().__init__(*args, **kwargs)
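Loading the persisted training data could look like the helper below. This is a hedged sketch that assumes the training script stored the classifier with pickle; the book's actual storage format (for example, OpenCV's own save/load routines) may differ, and the name `load_trained_model` is hypothetical:

```python
import pickle

def load_trained_model(clf_path):
    """Load a classifier persisted by the training script.

    Assumption: the training script serialized the object with pickle.
    If it used another format (e.g. OpenCV's FileStorage), adapt
    accordingly.
    """
    with open(clf_path, 'rb') as f:
        return pickle.load(f)
```

Inside the constructor, the result would then be kept on the instance (for example, `self.classifier = load_trained_model(clf_path)`) so that later frame-processing methods can call it.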