Model evaluation
After completing the model estimation described in the preceding section, we need to evaluate the estimated models to see whether they meet our client's criteria, so that we can either move on to explaining the results or go back to an earlier stage to refine our predictive models.
In this section, we will use the confusion matrix to assess the quality of fit of our models, and then expand to other statistics.
As always, to calculate them, we need to use our test data rather than the training data.
Confusion matrix
In R, we can produce the model's confusion matrix with the following code:

```r
model$confusion
```
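Depending on the model type, the confusion matrix stored in the fitted object may be computed on the training (or out-of-bag) data rather than the test data. To evaluate on the held-out test set, as noted above, a minimal sketch might look like the following; the objects `model`, `test_data`, and its outcome column `default` with levels `"Default"` and `"Good"` are hypothetical names used for illustration, not names from this project:

```r
# Assumed objects: `model` (a fitted classifier supporting type = "prob"),
# `test_data` (the held-out set), and `test_data$default` ("Default"/"Good").
pred_prob <- predict(model, newdata = test_data, type = "prob")[, "Default"]

# Illustrative cutoff; in practice it is tuned to the client's criterion
cutoff <- 0.5
pred_class <- ifelse(pred_prob >= cutoff, "Default", "Good")

# Raw-count confusion matrix: rows are actual classes, columns are predictions
cm <- table(Actual = test_data$default, Predicted = pred_class)
cm
```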
Once a cutoff point is determined, the following confusion matrix is produced, which shows a good result (a short sketch of how these row percentages can be computed follows the table):
| Model's Performance | Predicted as Default | Predicted as NOT (Good) |
|---|---|---|
| Actual Default | 89% | 11% |
| Actual Not (Good) | 12% | 88% |
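The percentages in the table are row proportions, so each actual class sums to 100%. Assuming the `cm` count table from the sketch above, they can be reproduced with `prop.table`:

```r
# Convert counts to row percentages (margin = 1 normalises within each actual class)
round(prop.table(cm, margin = 1) * 100, 0)
```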
For this project, the preceding table is the most important evaluation, as the company wants to increase...