We have discussed the image classification task in all of the previous chapters of this book. We have seen how a convolutional neural network can be defined by stacking several convolutional layers and trained using Keras. We also looked at eager execution and saw that using AutoGraph is straightforward.
So far, the convolutional architecture used has been a LeNet-like one, with an expected input size of 28 x 28, trained end to end every time so that the network learns to extract the right features for the Fashion-MNIST classification task.
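To make the setup concrete, the following is a minimal sketch of such a LeNet-like classifier written with the tf.keras Sequential API; the filter counts, activations, and training hyperparameters here are illustrative assumptions, not the exact configuration used in the earlier chapters.

```python
import tensorflow as tf

# A LeNet-like classifier for Fashion-MNIST (28 x 28 grayscale images).
# Layer sizes are illustrative; earlier chapters may have used different ones.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(10),  # one logit per Fashion-MNIST class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Trained end to end: every layer, from the first convolution to the final
# classifier, learns its weights directly from the Fashion-MNIST data.
(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel axis, scale to [0, 1]
model.fit(x_train, y_train, epochs=5, batch_size=32)
```

Note that nothing in this model is reused from elsewhere: all the feature-extraction layers start from random weights, which is exactly the point of contrast with the approach introduced in this chapter.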
Building a classifier from scratch, defining the architecture layer by layer, is an excellent didactic exercise that allows you to experiment with how different layer configurations can change the network's performance. However, in real-life scenarios, the amount...