Evaluating the quality of our predictions

To correctly evaluate the quality of the predictions obtained by our classifiers, we cannot be satisfied with accuracy_score alone, but must also use other measures, such as the F1 score and the ROC curve, which we previously encountered in Chapter 5, Network Anomalies Detection with AI, when dealing with the topic of anomaly detection.

The F1 value
For convenience, let's briefly go over the metrics that were previously introduced, along with their definitions:
Sensitivity or True Positive Rate (TPR) = True Positive / (True Positive + False Negative); here, sensitivity is also known as the recall rate.
False Positive Rate (FPR) = False Positive / (False Positive + True Negative)
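As a minimal sketch of how these measures can be obtained with scikit-learn, the following snippet computes accuracy, F1, TPR, FPR, and the ROC AUC; the toy label and score arrays are hypothetical placeholders standing in for y_test and the labels and probabilities predicted by one of our classifiers:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix, roc_curve, auc

# Toy ground-truth labels and hard predictions (1 = positive class, 0 = negative class);
# in practice these would be y_test and the classifier's predictions on it.
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 1, 0, 1, 0, 1])
# Scores (for example, from predict_proba) are needed to plot the ROC curve.
y_scores = np.array([0.1, 0.6, 0.2, 0.8, 0.4, 0.9, 0.3, 0.7])

print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))

# The confusion-matrix counts give exactly the rates defined above.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TPR (recall):", tp / (tp + fn))
print("FPR:", fp / (fp + tn))

# ROC curve points and the area under the curve.
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print("AUC:", auc(fpr, tpr))

Note that accuracy_score and f1_score work on hard class predictions, while roc_curve requires continuous scores, which is why the sketch keeps y_pred and y_scores separate.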