Questions
- How does the choice of loss function during training affect a model’s performance on imbalanced datasets?
- Can you explain why the PR curve is more informative than the ROC curve when dealing with highly skewed datasets?
- What are some of the potential issues with using accuracy as a metric for model performance on imbalanced datasets?
- How does the concept of “class imbalance” affect the process of feature engineering in machine learning?
- In the context of imbalanced datasets, how does the choice of “k” in k-fold cross-validation affect the performance of the model? How would you address any issues that arise?
- How does the distribution of classes in the test data affect the PR curve, and why? What about the ROC curve?
- What are the implications of having a high AUC-ROC but a low AUC-PR in the context of an imbalanced dataset?
- How does the concept of “sampling bias” contribute to the challenge of imbalanced datasets in machine learning?
- How does the concept of “labeling errors” contribute to the challenge of imbalanced datasets in machine learning?
- What are some of the real-world scenarios where dealing with imbalanced datasets is inherently part of the problem?
- The Matthews Correlation Coefficient (MCC) is a metric that takes all four cells of the confusion matrix into account and is given by the following formula:
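  $$
  \mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
  $$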
- What are the minimum and maximum possible values of this metric?
- Because it takes TN into account, its value may not change much when comparing different models, but it can tell us whether the predictions for the various classes are going well. Let’s illustrate this with an artificial example: a dummy model that always predicts 1, evaluated on an imbalanced test set of 100 examples, 90 of class 1 and 10 of class 0. Compute the individual terms in the MCC formula and the resulting MCC value. Also compute accuracy, precision, recall, and F1 score. (A sketch for checking your hand-computed values appears after this list.)
- What can you conclude about the model from the MCC value that you just computed in the previous question?
- Create an imbalanced dataset using imblearn’s `fetch_datasets` API and then compute the values of MCC, accuracy, precision, recall, and F1 score. See whether the MCC value is a useful metric for this dataset. (A minimal sketch appears below.)
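For the dummy-model question above, a minimal scikit-learn sketch to check the hand-computed values (scikit-learn conventionally returns an MCC of 0 when the formula’s denominator is zero):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    matthews_corrcoef,
    precision_score,
    recall_score,
)

# Imbalanced test set: 90 examples of class 1, 10 of class 0.
y_true = np.array([1] * 90 + [0] * 10)

# Dummy model that always predicts 1, so TP=90, FN=0, FP=10, TN=0.
y_pred = np.ones_like(y_true)

# The factor (TN + FN) in MCC's denominator is 0 here, so the formula
# is undefined; scikit-learn returns 0.0 in this degenerate case.
print("MCC:      ", matthews_corrcoef(y_true, y_pred))
print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```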
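For the last question, a minimal sketch, assuming the imbalanced-learn and scikit-learn packages are installed and network access is available (`fetch_datasets` downloads its benchmark datasets on first use); the `ecoli` dataset and the logistic-regression model are arbitrary illustrative choices:

```python
from imblearn.datasets import fetch_datasets
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    matthews_corrcoef,
    precision_score,
    recall_score,
)
from sklearn.model_selection import train_test_split

# Fetch one of imblearn's benchmark imbalanced datasets.
dataset = fetch_datasets(filter_data=("ecoli",))["ecoli"]
X, y = dataset.data, dataset.target  # y is in {-1, 1}, with 1 the minority class

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Fit a simple baseline classifier and evaluate on the held-out split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

print("MCC:      ", matthews_corrcoef(y_test, y_pred))
print("Accuracy: ", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall:   ", recall_score(y_test, y_pred))
print("F1:       ", f1_score(y_test, y_pred))
```

Comparing MCC against accuracy here makes the contrast concrete: a model that mostly predicts the majority class can still post high accuracy, while its MCC stays near zero.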