Model Selection
Now that we know how to train a classifier, we will review how to evaluate models and choose between them. First, we will cover four common metrics: Accuracy, Precision, Recall, and F1; then we will discuss cross-validation as a tool for model comparison.
Evaluation Metrics
We use a confusion matrix to split the predictions into True Positives, False Positives, True Negatives, and False Negatives. In our case, True Positives are applicants correctly identified as "Good," False Positives are applicants incorrectly identified as "Good," True Negatives are applicants correctly identified as "Bad," and False Negatives are applicants incorrectly identified as "Bad."
Figure 4.4: Confusion matrix
In Figure 4.4, the rows are the true classes, while the columns are the predicted classes. If the true class is Positive and the model predicts Positive, that prediction is counted as a True Positive.
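To make these definitions concrete, here is a minimal sketch using scikit-learn on made-up "Good"/"Bad" labels (an illustration, not the book's code). The labels and their values are assumptions for the example; `confusion_matrix` uses the same convention as Figure 4.4, with rows as true classes and columns as predicted classes, and the metric functions treat "Good" as the positive class via `pos_label`.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Hypothetical true and predicted labels; "Good" is the positive class.
y_true = ["Good", "Good", "Bad", "Bad", "Good", "Bad", "Good", "Bad"]
y_pred = ["Good", "Bad",  "Bad", "Good", "Good", "Bad", "Good", "Bad"]

# Rows are true classes, columns are predicted classes, in the order of `labels`.
cm = confusion_matrix(y_true, y_pred, labels=["Good", "Bad"])
tp, fn = cm[0]  # true "Good": predicted "Good" (TP) / predicted "Bad" (FN)
fp, tn = cm[1]  # true "Bad":  predicted "Good" (FP) / predicted "Bad" (TN)
print(cm)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, pos_label="Good"))
print("Recall   :", recall_score(y_true, y_pred, pos_label="Good"))
print("F1       :", f1_score(y_true, y_pred, pos_label="Good"))
```

With these toy labels there are 3 True Positives, 1 False Negative, 1 False Positive, and 3 True Negatives, so all four metrics come out to 0.75.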