PyTorch provides a set of trained models in its torchvision library. Most of them accept an argument called pretrained which, when set to True, downloads weights tuned for the ImageNet classification problem. Let's look at the code snippet that creates a VGG16 model:
from torchvision import models
vgg = models.vgg16(pretrained=True)
Now we have our VGG16 model with all the pre-trained weights ready to be used. The first time the code runs, it could take several minutes, depending on your internet speed, since the weights file is around 500 MB. We can take a quick look at the VGG16 model by printing it. Understanding how these networks are implemented turns out to be very useful when we work with modern architectures. Let's take a look at the model:
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3)...
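Beyond printing the whole model, we can explore its structure programmatically. The sketch below, building the architecture without the pretrained weights to avoid the large download, lists the top-level containers, counts the parameters, and runs a dummy forward pass; the shapes and counts shown are properties of the standard torchvision VGG16:

from torchvision import models
import torch

# Building vgg16() with no arguments creates the same architecture with
# randomly initialized weights (pass pretrained=True for the ImageNet weights).
vgg = models.vgg16()

# Top-level children: `features` holds the convolutional layers and
# `classifier` holds the fully connected head.
for name, child in vgg.named_children():
    print(name, type(child).__name__)

# Most of VGG16's roughly 138 million parameters live in the
# fully connected layers of the classifier.
n_params = sum(p.numel() for p in vgg.parameters())
print(f"{n_params:,} total parameters")

# A dummy forward pass: the model expects 3x224x224 inputs and
# produces one logit per ImageNet class.
with torch.no_grad():
    out = vgg(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])

Iterating over named_children is also the usual starting point for transfer learning, where we freeze the features block and retrain only the classifier.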