Introduction
In the last two chapters, we developed a thorough understanding of how logistic regression works, and we gained substantial hands-on experience building logistic regression models with the scikit-learn package in Python.
In this chapter, we will introduce a powerful type of predictive model that takes a completely different approach from logistic regression: decision trees. The concept of making predictions by following a tree of simple decision rules is intuitive, which makes decision tree models easy to interpret. However, a common criticism of decision trees is that they overfit the training data. To remedy this issue, researchers have developed ensemble methods, such as random forests, that combine many decision trees so that together they make better predictions than any individual tree could.
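To preview this contrast, the following sketch fits a single decision tree and a random forest with scikit-learn on synthetic data (the `make_classification` dataset here is a stand-in, not the case study data) and compares their training and test accuracies; an unrestricted tree typically fits the training set perfectly while the forest generalizes better.

```python
# A minimal sketch, using synthetic stand-in data rather than the case
# study data, of fitting a decision tree versus a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single, fully grown decision tree tends to memorize the training data
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A random forest averages many randomized trees, reducing overfitting
forest = RandomForestClassifier(
    n_estimators=100, random_state=0
).fit(X_train, y_train)

print(f"Tree:   train {tree.score(X_train, y_train):.3f}, "
      f"test {tree.score(X_test, y_test):.3f}")
print(f"Forest: train {forest.score(X_train, y_train):.3f}, "
      f"test {forest.score(X_test, y_test):.3f}")
```

The gap between the tree's training and test accuracy illustrates the overfitting criticism, while the forest's test score shows the benefit of combining many trees.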
We will see that decision trees and random forests can improve the quality of our predictive modeling of the case study data beyond what we have achieved so far with logistic regression...