What have we learned so far?
In this chapter, we have learned the basics of neural networks. More specifically, we have learned what a perceptron and a multi-layer perceptron are, how to define neural networks in TensorFlow, how to progressively improve metrics once a good baseline is established, and how to fine-tune hyperparameters. In addition, we reviewed useful activation functions (sigmoid and ReLU), and saw how to train a network with backpropagation, using optimizers ranging from plain gradient descent (GD) and stochastic gradient descent (SGD) to more sophisticated approaches such as Adam and RMSProp; a minimal sketch pulling these pieces together follows below.
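To tie the recap together, here is a minimal sketch of the workflow described above: defining a multi-layer perceptron in TensorFlow with ReLU and sigmoid activations, then compiling it with a chosen optimizer. The layer widths, the 784-dimensional input, and the 10-class output are illustrative assumptions, not the chapter's exact example.

```python
import tensorflow as tf

# A small multi-layer perceptron. The input size (784), the hidden
# layer widths, and the 10-class softmax output are assumptions made
# for illustration only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),     # ReLU hidden layer
    tf.keras.layers.Dense(64, activation="sigmoid"),   # sigmoid hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),   # class probabilities
])

# Swapping the optimizer string ("sgd", "rmsprop", "adam") changes how
# the gradients computed by backpropagation update the weights.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()
```

From this baseline, improving metrics typically means iterating on exactly the knobs shown here: the number and width of layers, the activation functions, and the optimizer and its learning rate.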