Tackling class imbalance
Until now, we dealt with problems where we had a similar number of datapoints in all our classes. In the real world, we might not be able to get data in such an orderly fashion. Sometimes, one class has far more datapoints than the others. When this happens, the classifier tends to get biased toward the majority class: the decision boundary won't reflect the true nature of the data, simply because of the large difference in the number of datapoints between the classes. It is therefore important to account for this discrepancy and neutralize it, so that our classifier remains impartial.
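One common way to neutralize this bias, supported directly by scikit-learn, is to penalize mistakes on each class inversely to its frequency. The sketch below uses a small synthetic dataset with a 20:1 imbalance as an illustrative stand-in for the data file used in this recipe; the class centers and sample counts are assumptions, not values from the recipe:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-class dataset with a 20:1 imbalance
# (illustrative stand-in for the recipe's data file).
rng = np.random.RandomState(0)
X = np.vstack([
    rng.randn(1000, 2),              # majority class centered at (0, 0)
    rng.randn(50, 2) + [3.0, 3.0],   # minority class centered at (3, 3)
])
y = np.hstack([np.zeros(1000), np.ones(50)])

# class_weight='balanced' scales each class's misclassification penalty
# inversely to its frequency, so the 50 minority points carry as much
# total weight as the 1000 majority points when the boundary is fit.
classifier = SVC(kernel='linear', class_weight='balanced')
classifier.fit(X, y)
```

With `class_weight='balanced'`, scikit-learn multiplies the penalty parameter `C` for each class by `n_samples / (n_classes * n_class_samples)`, which is its built-in mechanism for compensating for skewed class counts without resampling the data.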
Getting ready
In this recipe, we will use a new dataset, named data_multivar_imbalance...