Summary
In this chapter, you learned about the main metrics for model evaluation. You started with metrics for classification problems and then moved on to metrics for regression problems.
In terms of classification metrics, you were introduced to the well-known confusion matrix, which is probably the most important artifact for evaluating classification models.
You learned about true positives, true negatives, false positives, and false negatives. Then, you learned how to combine these components to derive other metrics, such as accuracy, precision, recall, the F1 score, and AUC.
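As a quick refresher, the derived metrics above follow directly from the four confusion-matrix counts. This is a minimal sketch with made-up counts (the values 8, 2, 1, and 9 are illustration only, not from the chapter):

```python
# Deriving common classification metrics from the four confusion-matrix
# counts. The counts below are arbitrary illustration values.
tp, fp, fn, tn = 8, 2, 1, 9

accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of all predictions that are correct
precision = tp / (tp + fp)                  # of predicted positives, how many are truly positive
recall = tp / (tp + fn)                     # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

Note that the F1 score, as the harmonic mean, is pulled toward the lower of precision and recall, which is why it is preferred over accuracy when the two disagree.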
You then went even deeper and learned about ROC curves, as well as precision-recall curves. You learned that ROC curves are appropriate for evaluating models on fairly balanced datasets, while precision-recall curves are better suited to moderately or highly imbalanced datasets.
By the way, when you are dealing with imbalanced datasets, remember that using accuracy might not be a good idea.
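To see why accuracy can mislead on imbalanced data, consider a degenerate classifier that always predicts the majority class. This is a minimal sketch with a synthetic 95/5 split (the data is invented for illustration):

```python
# Why accuracy misleads on imbalanced data: with 95% negatives, a model
# that always predicts the majority class scores 95% accuracy while
# never detecting a single positive (recall = 0).
y_true = [1] * 5 + [0] * 95   # 5 positives, 95 negatives (synthetic)
y_pred = [0] * 100            # degenerate "always negative" model

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
recall = tp / (tp + fn)

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}")
```

The 95% accuracy looks impressive, yet the model is useless for finding the minority class, which is exactly what recall (and the precision-recall curve) exposes.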
In terms...