In this chapter, we saw how to perform a regression task with neural networks. This required only small architectural changes to our previous classification models: in model construction, a single output unit with no activation function, and in the choice of loss function, MSE. We also tracked MAE as a metric, since squared errors are not intuitive to interpret. Finally, we plotted our model's predictions against the actual target values in a scatter plot to better visualize how well the network did, and used a histogram to understand the distribution of the model's prediction errors.
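To make the distinction between the MSE loss and the MAE metric concrete, here is a minimal framework-agnostic sketch using NumPy. The prediction and target values are purely illustrative, not from the chapter's dataset; note that MAE stays in the same units as the target, which is why it is easier to interpret:

```python
import numpy as np

# Hypothetical targets and predictions (e.g. prices in $1000s);
# the numbers are illustrative only.
y_true = np.array([15.2, 22.0, 31.5, 18.7])
y_pred = np.array([14.0, 24.5, 30.0, 20.0])

# MSE: the loss we minimize during training (squared units).
mse = np.mean((y_pred - y_true) ** 2)

# MAE: the metric we track (same units as the target, so an MAE of
# 1.625 here means predictions are off by about $1,625 on average).
mae = np.mean(np.abs(y_pred - y_true))
```

A scatter plot of `y_pred` versus `y_true` and a histogram of `y_pred - y_true` would then give the visual diagnostics described above.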
Lastly, we introduced the methodology of k-fold cross-validation, which is preferred over a single explicit train/test split when we have very few data observations. Instead of splitting our data into one training set and one test set, we partitioned it into k folds, trained k models so that each fold served once as the validation set, and averaged the resulting validation scores for a more reliable estimate of performance.
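The k-fold partitioning step can be sketched as follows. This is a minimal NumPy implementation of the index bookkeeping (the helper name `k_fold_indices` is our own, not from the chapter); in practice, each `(train_idx, val_idx)` pair would be used to train and evaluate one model, and the k validation scores would then be averaged:

```python
import numpy as np

def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation.

    Each of the k folds serves exactly once as the validation set,
    with the remaining samples forming the training set.
    """
    indices = np.arange(n_samples)
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = np.full(k, n_samples // k)
    fold_sizes[: n_samples % k] += 1
    start = 0
    for size in fold_sizes:
        val_idx = indices[start : start + size]
        train_idx = np.concatenate([indices[:start], indices[start + size:]])
        yield train_idx, val_idx
        start += size

folds = list(k_fold_indices(10, 3))  # 3 folds over 10 samples
```

In real code, shuffling the indices before folding (with a fixed seed) is usually a good idea so that each fold is representative of the whole dataset.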