When we looked at random forests in the Testing a random forest model section of Chapter 5, Predicting the Failures of Banks - Multivariate Analysis, decision trees were briefly introduced. A decision tree recursively splits the training sample into two or more increasingly homogeneous subsets based on the most significant independent variables. At each node, the algorithm searches for the best variable on which to split the data; information gain and the Gini index are the most common criteria for choosing it. The data is then split recursively, expanding the leaf nodes of the tree until a stopping criterion is reached.
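To make the Gini criterion concrete, here is a minimal sketch of how the impurity of a node can be computed from its class labels. The function name `gini_impurity` is illustrative and not part of any package; splitting algorithms such as CART pick the split that most reduces this quantity:

```r
# Gini impurity of a vector of class labels: 1 - sum of squared
# class proportions. 0 means a pure node; higher means more mixed.
gini_impurity <- function(labels) {
  p <- table(labels) / length(labels)  # proportion of each class
  1 - sum(p^2)
}

gini_impurity(c("AAA", "AAA", "AAA"))         # pure node: 0
gini_impurity(c("AAA", "BB", "AAA", "BB"))    # even two-class mix: 0.5
```

A split is scored by the weighted impurity of its child nodes; the variable and cut-point that yield the largest impurity reduction are chosen.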
Let's see how a decision tree can be implemented in R and how this algorithm is able to predict credit ratings.
Decision trees are implemented in the rpart package. Moreover, the rpart.plot package will be useful for visualizing the resulting trees.
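As a minimal sketch of the workflow, the following fits and plots a classification tree with rpart. It uses the kyphosis data set that ships with rpart for illustration; the formula and data are placeholders, not the book's actual credit-rating model:

```r
library(rpart)
library(rpart.plot)

# Fit a classification tree on the kyphosis example data
# (method = "class" requests a classification, not regression, tree)
fit <- rpart(Kyphosis ~ Age + Number + Start,
             data = kyphosis, method = "class")

printcp(fit)      # cross-validated error for each candidate tree size
rpart.plot(fit)   # draw the fitted tree

# Predicted class labels for the training observations
pred <- predict(fit, kyphosis, type = "class")
```

The `printcp` output is typically used to choose a complexity parameter for pruning before the tree is applied to new data.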