Summary
This has been a dense chapter, but hopefully you now have a better overview of what neural networks are and how to train them.
We talked a lot about the dataset, including how to assemble proper training, validation, and test sets. We described what a classifier is and implemented data augmentation. Then we discussed the model and how to tune the convolutional layers, the MaxPooling layers, and the dense layers. We saw how training works, what backpropagation is, and the role of randomness in the initialization of the weights, and we looked at graphs of underfitting and overfitting networks. To understand how well our CNN is doing, we went as far as visualizing the activations. Finally, we discussed inference and retraining.
This means that you now have enough knowledge to choose or create a dataset, train a neural network from scratch, and judge whether a change to the model or the dataset improves precision.
In Chapter 6, Improving...