In the case of ensemble models, the base classifiers must exhibit some degree of diversity among themselves. This diversity can be introduced in one of the following ways:
- By using different subsets of training data through various resampling methods or randomization of the training data
- By using different learning hyperparameters for different base learners
- By using different learning algorithms
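The first approach in the list — training each base learner on a different resampled subset — is commonly done with bootstrap sampling (drawing with replacement). A minimal sketch, using only the standard library; the function name `bootstrap_sample` is illustrative, not from any particular library:

```python
import random

def bootstrap_sample(data, seed):
    """Draw a bootstrap sample: same size as the original data,
    sampled uniformly with replacement, so each base learner
    sees a slightly different view of the training set."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in data]

data = list(range(10))
# Each base learner would be trained on its own resampled subset.
subsets = [bootstrap_sample(data, seed) for seed in range(3)]
```

Because sampling is with replacement, each subset typically contains duplicates and omits some original points, which is what makes the base learners diverge from one another.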
When different algorithms are used for the base learners, the ensemble is called a heterogeneous ensemble. When the same algorithm is used for all the base learners, each trained on a different distribution of the training set, the ensemble is called a homogeneous ensemble.