To directly apply the notions presented in this chapter, we will develop an app that uses a lightweight computer vision model, and we will deploy it to several platforms.
We will build an app that classifies facial expressions. When pointed at a person's face, it will output that person's expression—happy, sad, surprised, disgusted, angry, or neutral. We will train our model on the Facial Expression Recognition (FER) dataset available at https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge, put together by Pierre-Luc Carrier and Aaron Courville. It is composed of 28,709 grayscale training images, 48 × 48 pixels in size.
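As a sketch of what working with this dataset involves, the competition distributes the images as a CSV file (fer2013.csv) in which each row holds an integer emotion label and a "pixels" column of 2,304 space-separated grayscale values forming one 48 × 48 image. The `decode_pixels` helper and `EMOTIONS` list below are illustrative, not from the chapter; note that the CSV itself encodes seven emotion classes, including "fear":

```python
import numpy as np

# The seven emotion labels as encoded in fer2013.csv (indices 0-6).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def decode_pixels(pixel_string):
    """Turn a space-separated pixel string into a normalized 48x48 float array."""
    img = np.array(pixel_string.split(), dtype=np.float32).reshape(48, 48)
    return img / 255.0  # scale intensities to [0, 1] for training

# Example with a synthetic all-gray "image" standing in for a real CSV row;
# in practice you would read the rows with pandas.read_csv("fer2013.csv").
fake_row = " ".join(["128"] * (48 * 48))
img = decode_pixels(fake_row)
print(img.shape)  # (48, 48)
```

In practice, each decoded array would be stacked into a training tensor and paired with its emotion label before being fed to the model.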
Inside the app, the naive approach would be to capture images with the camera and then feed them directly to our trained model. However, this would yield poor results as objects in the environment...