The first layers of a neural network are responsible for identifying low-level features, such as edges, colors, and blobs, while the last layers are usually specific to the task the network was trained for.
Because pre-trained networks are usually trained on very large datasets such as ImageNet, which contains over 10 million images, the features they learn are very generic and can be reused by other models.
In the following activity, we will learn how to extract the features from the last activation layer of a pre-trained network and use them to solve the CIFAR-10 problem, on which we previously achieved around 70% accuracy.
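The feature-extraction idea can be sketched as follows. This is a minimal, hypothetical example assuming Keras with TensorFlow: we load a VGG16 convolutional base without its task-specific top layers, freeze it, run images through it to obtain feature maps, and attach a new classifier head for the 10 CIFAR-10 classes. Note that `weights=None` is used here only to keep the sketch self-contained (it avoids downloading the pre-trained weights); in practice you would pass `weights="imagenet"` so the generic features are actually reused.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Load the convolutional base without the task-specific top layers.
# weights=None keeps this sketch offline; use weights="imagenet" in practice.
conv_base = VGG16(weights=None, include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False  # freeze the generic feature extractor

# Extract features for a small batch of dummy CIFAR-10-sized images.
images = np.random.rand(4, 32, 32, 3).astype("float32")
features = conv_base.predict(images)
print(features.shape)  # VGG16 on 32x32 inputs yields 1x1x512 feature maps

# A new classifier head to be trained on the extracted features.
classifier = models.Sequential([
    layers.Input(shape=features.shape[1:]),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 CIFAR-10 classes
])
print(classifier.output_shape)
```

In a real workflow, the classifier head would then be compiled and fit on the features extracted from the actual CIFAR-10 training set, leaving the frozen base untouched.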