Transfer learning with Keras
Transfer learning is an effective technique to train a model when dealing with small datasets.
In this recipe, we will exploit it alongside the MobileNet v2 pre-trained model to recognize our two desk objects.
Getting ready
The basic principle behind transfer learning is to exploit the features learned for one problem to address a new, similar one. In practice, we take the layers of a previously trained model, commonly called a pre-trained model, and add new trainable layers on top of them:
Figure 8.17: Model architecture with transfer learning
The pre-trained model’s layers are frozen, meaning their weights cannot change during training. These layers form the base (or backbone) of the new architecture, and their job is to extract features from the input data. These features feed the trainable layers, which are the only layers trained from scratch.
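The following Keras snippet is a minimal sketch of this idea, assuming 224x224 RGB inputs, ImageNet weights for the backbone, and a two-class head for our desk objects; the exact input resolution and head layers used in the recipe may differ:

import tensorflow as tf

# Load MobileNet v2 without its original classifier, reusing the
# ImageNet weights (assumed here) as a feature extractor.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),  # assumed input resolution
    include_top=False,
    weights="imagenet")

# Freeze the backbone so its weights do not change during training.
base_model.trainable = False

# Stack a new trainable head on top of the frozen base.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax")  # two desk objects
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

With this setup, only the pooling and Dense layers are trained; the MobileNet v2 weights stay fixed throughout training.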
The trainable layers are the head of the new architecture, and for...