What have we learned so far?
In this chapter we have learned the basics of neural networks: what a perceptron is and what a multi-layer perceptron is, how to define neural networks in TensorFlow 2.0, how to progressively improve metrics once a good baseline has been established, and how to search the hyperparameter space. In addition, we now have an intuitive idea of what some useful activation functions (sigmoid and ReLU) are, and how to train a network with the backpropagation algorithm using optimizers based on gradient descent, such as SGD, or more sophisticated variants such as Adam and RMSProp.
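To tie these pieces together, here is a minimal sketch of the kind of model discussed in the chapter: a multi-layer perceptron defined in TensorFlow 2.0 with a ReLU hidden layer, a sigmoid output, and an RMSProp optimizer. The layer sizes, input shape, and learning rate are illustrative assumptions, not values taken from the chapter:

```python
# A minimal sketch, assuming a binary-classification setup with
# 20 input features (layer sizes and hyperparameters are illustrative).
import tensorflow as tf

# Multi-layer perceptron: one ReLU hidden layer, one sigmoid output unit.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Backpropagation computes the gradients; the optimizer (RMSProp here,
# but SGD or Adam could be swapped in) applies them to the weights.
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.summary()
```

Swapping `RMSprop` for `tf.keras.optimizers.SGD` or `tf.keras.optimizers.Adam` is a one-line change, which is one reason comparing optimizers against a fixed baseline is straightforward in practice.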