Summary
In this chapter, we went through various oversampling techniques for dealing with imbalanced datasets and applied them using Python's imbalanced-learn library (also called imblearn). We also saw the internal workings of some of the techniques by implementing them from scratch. While random oversampling generates new minority class samples by simply duplicating existing ones, SMOTE-based techniques generate synthetic samples by interpolating between a minority class sample and one of its randomly chosen nearest neighbors. Although oversampling can cause the model to overfit the training data, its benefits usually outweigh its drawbacks, depending on the data and the model.
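As a quick refresher, here is a minimal sketch of how these techniques can be applied with imbalanced-learn; the synthetic dataset and its parameters are illustrative assumptions, not taken from the chapter:

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import RandomOverSampler, SMOTE

# Create an illustrative imbalanced binary dataset (~5% minority class).
X, y = make_classification(
    n_samples=1_000,
    n_features=10,
    weights=[0.95, 0.05],
    random_state=42,
)
print("Original class counts:", Counter(y))

# Random oversampling: duplicates existing minority class samples.
X_ros, y_ros = RandomOverSampler(random_state=42).fit_resample(X, y)
print("After RandomOverSampler:", Counter(y_ros))

# SMOTE: interpolates between a minority sample and one of its nearest neighbors.
X_sm, y_sm = SMOTE(random_state=42).fit_resample(X, y)
print("After SMOTE:", Counter(y_sm))
```

In both cases, fit_resample returns a resampled feature matrix and label vector in which the minority class has been brought up to the size of the majority class by default.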
We applied these techniques to several synthesized and publicly available datasets and benchmarked their performance and effectiveness. We saw that different oversampling techniques can lead to widely varying model performance, so it is crucial to try a few of them and decide which one works best for our data.
...