This chapter covers techniques for improving the quality of the predictions our models make.
It is divided into two main sections. In the first, we discuss hyperparameter tuning: choosing the values that define our model but are not learned directly from the data. We start with the simplest case, tuning a single hyperparameter, and then move on to one of the most popular methods for optimizing several hyperparameters at once. This section relies on cross-validation and k-fold cross-validation, so it is important that you are familiar with these concepts from the previous chapters.
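As a reminder of how the pieces fit together, the simplest case can be sketched as follows: try each candidate value of a single hyperparameter, score each one with k-fold cross-validation, and keep the value with the best average score. This is an illustrative pure-Python sketch (the toy data, the k-NN stand-in model, and all function names are assumptions for the example, not code from this book):

```python
import random
from collections import Counter

def knn_predict(train, k, point):
    # Predict a label by majority vote among the k nearest training points
    # (squared Euclidean distance in 2D).
    nearest = sorted(
        train,
        key=lambda t: (t[0][0] - point[0]) ** 2 + (t[0][1] - point[1]) ** 2,
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def cross_val_accuracy(data, k, folds=5):
    # k-fold cross-validation: each fold serves once as the validation set
    # while the remaining folds form the training set.
    fold_size = len(data) // folds
    scores = []
    for i in range(folds):
        val = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        correct = sum(knn_predict(train, k, x) == y for x, y in val)
        scores.append(correct / len(val))
    return sum(scores) / folds

# Hypothetical toy data: two roughly separated 2D clusters.
random.seed(0)
data = (
    [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(40)]
    + [((random.gauss(3, 1), random.gauss(3, 1)), 1) for _ in range(40)]
)
random.shuffle(data)

# Tune the single hyperparameter k: pick the candidate with the best
# average cross-validated accuracy.
candidates = [1, 3, 5, 7, 9]
best_k = max(candidates, key=lambda k: cross_val_accuracy(data, k))
print(best_k, cross_val_accuracy(data, best_k))
```

The same loop generalizes to several hyperparameters by iterating over every combination of candidate values, which is the idea behind the multi-hyperparameter methods discussed later in this section.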
In the second section, we show how trying a different model can sometimes improve the quality of our predictions. Then, we use some of the information we got...