Working with pretrained models
Training large computer vision models is not only hard but also computationally expensive. Therefore, it's common to take a model that was originally trained for one purpose and fine-tune it for a new one. This is an example of transfer learning.
Transfer learning aims to transfer what has been learned on one task to another task. As humans, we are very good at this. When you see a dog you have never encountered before, you don't need to relearn everything about dogs for this particular dog; instead, you relate the new information to what you already know about dogs. Similarly, it's not economical to retrain a big network from scratch every time, because large parts of the model can often be reused.
In this section, we will fine-tune VGG-16, a model originally trained on the ImageNet dataset. ImageNet is an annual computer vision competition, and its dataset consists of millions of images of real-world objects, from dogs to...
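To make the idea concrete, here is a minimal sketch of loading pretrained VGG-16 and fine-tuning it with Keras. The framework choice, the input shape, the size of the new dense layer, and the 10-class output are assumptions for illustration, not the exact setup used later in this section.

```python
# Minimal fine-tuning sketch (assumed Keras setup; layer sizes and
# class count are placeholders).
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load VGG-16 with ImageNet weights, dropping its original classification head
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the pretrained convolutional layers so only the new head is trained
base.trainable = False

# Add a new classification head for the target task (10 classes as a placeholder)
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Freezing the convolutional base is what makes this transfer learning rather than training from scratch: the reusable feature extractor stays fixed, and only the small new head is optimized for the new task. Once the head has converged, some of the top convolutional layers can optionally be unfrozen and trained at a low learning rate.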