We have previously encountered the ROC curve and the AUC measure (in Chapter 5, Network Anomaly Detection with AI, and Chapter 7, Fraud Prevention with Cloud AI Solutions) as tools for evaluating and comparing the performance of different classifiers.
Now, let's explore the topic more systematically by introducing the confusion matrix, which summarizes all the possible outcomes returned by a fraud-detection classifier by comparing the predicted values with the actual values:
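For a binary fraud-detection classifier, the confusion matrix takes the following standard form (treating fraud as the positive class):

|                    | Predicted: fraud    | Predicted: legitimate |
|--------------------|---------------------|-----------------------|
| Actual: fraud      | True Positive (TP)  | False Negative (FN)   |
| Actual: legitimate | False Positive (FP) | True Negative (TN)    |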
We can then calculate the following values (each listed with its interpretation) based on the preceding confusion matrix:
- Sensitivity = Recall = Hit rate = TP/(TP + FN): This value measures the rate of correctly labeled fraudsters and represents the true positive rate (TPR)
- False Positive Rate (FPR) = FP/(FP + TN): FPR can also be calculated as 1 – Specificity, where Specificity = TN/(TN + FP)
- Classification accuracy = (TP + TN)/(TP + TN + FP + FN): This value measures the overall proportion of transactions (both fraudulent and legitimate) that the classifier labels correctly
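As a minimal sketch of how these quantities relate to a classifier's outputs, the following snippet computes them from a confusion matrix using scikit-learn (the labels and scores below are hypothetical, and the 0.5 decision threshold is an assumption for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground-truth labels (1 = fraud) and predicted fraud scores
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_scores = np.array([0.1, 0.4, 0.8, 0.35, 0.2, 0.9, 0.05, 0.6, 0.7, 0.3])
y_pred = (y_scores >= 0.5).astype(int)  # threshold the scores at 0.5

# Confusion matrix: rows = actual class, columns = predicted class
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)            # recall / hit rate / TPR
fpr = fp / (fp + tn)                    # false positive rate
specificity = tn / (tn + fp)            # so that FPR == 1 - specificity
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_scores)   # area under the ROC curve

print(f"TPR={sensitivity:.2f}  FPR={fpr:.2f}  "
      f"accuracy={accuracy:.2f}  AUC={auc:.2f}")
```

Note that the AUC is computed from the raw scores rather than the thresholded predictions, since the ROC curve sweeps over all possible decision thresholds.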