Transfer learning is a powerful technique that saves time by reusing models trained on large datasets for new, related tasks. It also helps warm-start training when the target dataset is small. In this chapter, we learned how to use pre-trained models, such as VGG16 and Inception v3, to classify images from a dataset different from the one they were trained on. We also learned how to retrain the pre-trained models, with examples in both TensorFlow and Keras, and how to preprocess images before feeding them into both models.
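As a quick recap, the sketch below shows the two workflows covered in this chapter with Keras and VGG16: using the ImageNet-trained model for classification as-is, and freezing its convolutional base to retrain a new classifier head. The image path (`cat.jpg`), the number of target classes, and the `train_ds` dataset are placeholder assumptions; substitute your own data.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# 1. Classify an image with the ImageNet-trained VGG16 as-is.
model = VGG16(weights='imagenet')
img = image.load_img('cat.jpg', target_size=(224, 224))   # placeholder image file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=3)[0])

# 2. Retrain (fine-tune) VGG16 on a new dataset with a different number of classes.
num_classes = 10                                           # assumption: 10 target classes
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                     # freeze the convolutional base

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, epochs=5)   # train_ds: your preprocessed target dataset
```

The same pattern applies to Inception v3, except that its inputs are 299x299 and it uses its own `preprocess_input` function from `tensorflow.keras.applications.inception_v3`.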
We also learned that several models are available pre-trained on the ImageNet dataset. Try to find other models trained on different kinds of data, such as video, speech, or text/NLP datasets, and try retraining them for your own deep learning problems on your own datasets.