This chapter covered the basics of using a computer to learn prediction models, starting with the loss function and gradient descent. It then introduced overfitting and underfitting, along with the penalty approach to regularizing a model during fitting. Next it surveyed common regression and classification techniques, including regularized versions of each where appropriate, and gave intuition-driven introductions to large-margin and tree-based classification. The chapter finished with best practices for model tuning, including cross-validation and grid search. After reading this chapter, you should have a full picture of what the computer is doing when you ask it to learn a prediction model, and intuition about which methods to try on your problem and how to tune and validate your models.
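To recap the core loop in miniature, the sketch below fits a one-variable linear model by gradient descent on a mean-squared-error loss with an L2 penalty on the weight. The tiny dataset, learning rate, and penalty strength are illustrative choices, not values from the chapter:

```python
# Toy data generated from y = 2x + 1; the fit should recover roughly w=2, b=1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

lam = 0.01   # penalty strength: shrinks w toward zero (intercept left unpenalized)
lr = 0.05    # learning rate: step size for each gradient update
w, b = 0.0, 0.0
n = len(xs)

for _ in range(2000):
    # Gradients of the penalized loss (1/n) * sum((w*x + b - y)^2) + lam * w^2
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw
    b -= lr * db

print(round(w, 3), round(b, 3))  # close to the true slope and intercept
```

Because of the penalty, the learned slope ends up slightly below the true value of 2; setting `lam` higher shrinks it further, which is the overfitting/underfitting trade-off the chapter described. Choosing `lam` by cross-validated grid search is exactly the tuning workflow covered at the end of the chapter.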
The next chapter will cover...