Extracting a performance report

In the Evaluating accuracy using cross-validation metrics recipe, we calculated several metrics to measure the accuracy of a model. Let's recall what they mean. Accuracy is the percentage of classifications that are correct. Precision is the percentage of positive classifications that are actually correct. Recall (also called sensitivity) is the percentage of the positive elements of the testing set that are classified as positive. Finally, the F1 score combines precision and recall; it is their harmonic mean, so it is high only when both precision and recall are high. In this recipe, we will learn how to extract a performance report.

Getting ready
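As a quick recap, the following minimal sketch shows how these four metrics can be computed individually with scikit-learn; the label arrays here are made up purely for illustration:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Hypothetical true and predicted labels for a binary classifier
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))   # correct predictions / all predictions
print("Precision:", precision_score(y_true, y_pred))  # true positives / predicted positives
print("Recall:   ", recall_score(y_true, y_pred))     # true positives / actual positives
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```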
We also have a function in scikit-learn that can directly print the precision, recall, and F1 scores for us...
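This is sklearn.metrics.classification_report, which prints per-class precision, recall, F1 score, and support in a single call. A minimal sketch of its use follows; as above, the labels and class names are invented for illustration:

```python
from sklearn.metrics import classification_report

# Hypothetical labels; in the recipe these would come from the test split
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Prints a table with per-class precision, recall, F1 score, and support
print(classification_report(y_true, y_pred,
                            target_names=['Class-0', 'Class-1']))
```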