Transfer Learning
So far, we've learned a lot about designing and training our own CNN models. But as you may have noticed, some of our models are not performing very well. This can happen for several reasons, such as the dataset being too small or the model requiring more training.
But training a CNN takes a lot of time. It would be great if we could reuse an existing architecture that has already been trained. Luckily for us, such an option does exist, and it is called transfer learning. TensorFlow provides different implementations of state-of-the-art models that have been trained on the ImageNet dataset (over 14 million images).
Note
You can find the list of available pretrained models in the TensorFlow documentation: https://www.tensorflow.org/api_docs/python/tf/keras/applications
To use a pretrained model, we need to import its corresponding class. Here, we will be importing a VGG16 model:
import tensorflow as tf
from tensorflow.keras.applications import VGG16
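The original snippet is truncated here, so the following is only a rough sketch of how the imported VGG16 class is typically instantiated with ImageNet weights for transfer learning; the variable name base_model and the input_shape value are assumptions for illustration.

# Sketch (assumed continuation): load VGG16 pretrained on ImageNet,
# without its fully connected classification head, so it can serve as
# a reusable feature extractor.
base_model = VGG16(
    weights="imagenet",        # use the ImageNet-pretrained weights
    include_top=False,         # drop the original 1,000-class classifier
    input_shape=(224, 224, 3)  # assumed input size for illustration
)

# Freeze the pretrained convolutional layers so their weights are not
# updated while new layers are trained on top of them.
base_model.trainable = False

# Inspect the architecture and confirm which layers are trainable.
base_model.summary()

With include_top=False, the model outputs convolutional feature maps rather than class probabilities, which is what allows us to attach and train our own classification layers for a new dataset.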