In Chapter 5, Classification, we explored decision tree methods, in which models consist of a tree of if/then statements. Each if/then split divides the prediction logic based on one of the features of the training set. In an example where we were trying to classify medical patients as healthy or unhealthy, a decision tree might first split based on a gender feature, then on an age feature, then on a weight feature, and so on, eventually landing on healthy or unhealthy.
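To make that structure concrete, here is a minimal sketch of the kind of nested if/then logic such a tree encodes. The feature names, thresholds, and split order are invented purely for illustration; they are not taken from any particular trained model.

```python
# A hypothetical hand-written decision tree for the patient example.
# The features, thresholds, and ordering are illustrative assumptions.
def classify_patient(gender, age, weight):
    if gender == "male":
        if age > 50:
            if weight > 100:
                return "unhealthy"
            return "healthy"
        return "healthy"
    else:
        if age > 60:
            return "unhealthy"
        return "healthy"


print(classify_patient("male", 55, 110))   # -> "unhealthy"
print(classify_patient("female", 40, 65))  # -> "healthy"
```

In a real decision tree, of course, these branches are not written by hand; they are learned from the training data.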
How does the algorithm choose which feature to split on first? In the preceding example, we could split on gender first, on weight first, or on any other feature first. We need a way to arrange our splits optimally, so that our model makes the best predictions it can.
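As a rough illustration of what "optimal" might mean (a sketch of one simple criterion, not necessarily the one any particular algorithm uses), we could compare candidate first splits by how cleanly each one separates the labels. The toy patient data below is made up for the example.

```python
from collections import Counter

# Hypothetical toy training data, invented for illustration only.
patients = [
    {"gender": "male",   "age_group": "old",   "label": "unhealthy"},
    {"gender": "male",   "age_group": "young", "label": "healthy"},
    {"gender": "female", "age_group": "old",   "label": "unhealthy"},
    {"gender": "female", "age_group": "young", "label": "healthy"},
    {"gender": "male",   "age_group": "old",   "label": "unhealthy"},
]

def split_impurity(rows, feature):
    """Count rows that disagree with the majority label of their group
    when the data is split on the given feature. Lower is cleaner."""
    groups = {}
    for row in rows:
        groups.setdefault(row[feature], []).append(row["label"])
    return sum(len(labels) - Counter(labels).most_common(1)[0][1]
               for labels in groups.values())

for feature in ("gender", "age_group"):
    print(feature, split_impurity(patients, feature))
# In this toy data, splitting on age_group separates the labels perfectly
# (impurity 0), so it would be the better first split under this criterion.
```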
Many decision...