Summary
This has been a dense chapter! We discussed machine learning in general and deep learning in particular. We talked about neural networks and how convolutions can be used to build faster and more accurate networks by exploiting the spatial proximity of pixels. We learned about weights and biases, and how the goal of the training phase is to optimize these parameters so that the network learns the task at hand.
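As a quick illustration (this is a minimal sketch, not code from the chapter; the filter count and input shape are arbitrary), a single Keras Conv2D layer makes the weight and bias counts concrete:

```python
from tensorflow.keras import layers, models

# A single convolutional layer: each 3x3 filter shares its weights
# across all spatial positions, which is how convolutions exploit
# pixel proximity with far fewer parameters than a dense layer.
model = models.Sequential([
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu",
                  input_shape=(28, 28, 1)),
])
model.summary()
# Parameters: (3 * 3 * 1) weights per filter * 32 filters + 32 biases = 320
```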
After verifying the installation of Keras and TensorFlow, we described the MNIST dataset and used Keras to build a network similar to LeNet, achieving more than 98% accuracy on it, which means we can now recognize handwritten digits with ease. We then saw that the same model does not perform well on CIFAR-10, even after increasing the number of epochs and the size of the network.
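For reference, a LeNet-style MNIST classifier in Keras might look like the following minimal sketch; the specific filter counts, dense-layer size, optimizer, and number of epochs are assumptions for illustration, not necessarily those used in the chapter:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST, add a channel axis, and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# LeNet-style stack: conv + pool twice, then dense layers.
# Layer sizes here are illustrative assumptions.
model = models.Sequential([
    layers.Conv2D(20, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(50, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(500, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```

A network of this shape typically exceeds 98% test accuracy on MNIST within a few epochs, consistent with the result described above.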
In the next chapter, we will study many of the concepts introduced here in more depth, with the final goal, to be completed by Chapter 6, Improving Your Neural Network, of learning...