The main ensemble techniques are the ones we have seen so far. The following ones are also worth knowing about, as they can be useful in certain situations.
Voting ensembles
Sometimes, we have a number of good estimators, each with its own mistakes. Our objective is not to mitigate their bias or variance, but to combine their predictions in the hope that they don't all make the same mistakes. In these cases, VotingClassifier and VotingRegressor can be used. You can give some estimators more influence than others by adjusting the weights hyperparameter. VotingClassifier supports two voting strategies: hard voting, which takes a majority vote over the predicted class labels, and soft voting, which averages the predicted probabilities, as shown in the sketch below.
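Here is a minimal sketch of a soft-voting ensemble using scikit-learn's VotingClassifier; the synthetic dataset, the three base estimators, and the particular weights are illustrative assumptions, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# A toy dataset; any classification data would do here
x, y = make_classification(n_samples=500, random_state=42)

# Three base estimators, each likely to make different mistakes
voting = VotingClassifier(
    estimators=[
        ('lr', LogisticRegression(max_iter=1000)),
        ('dt', DecisionTreeClassifier(max_depth=5)),
        ('nb', GaussianNB()),
    ],
    # 'soft' averages predicted probabilities; 'hard' counts class labels
    voting='soft',
    # Give the logistic regression twice the say of the other two
    weights=[2, 1, 1],
)
voting.fit(x, y)
print(voting.predict(x[:5]))
```

Note that soft voting requires every base estimator to implement predict_proba; with hard voting, plain class predictions are enough.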
Stacking ensembles
Rather than voting, you can combine the predictions of multiple estimators by adding an extra estimator that uses their predictions as input. This strategy is known as stacking.
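As a minimal sketch, scikit-learn's StackingClassifier implements this idea; the dataset, the two base estimators, and the LogisticRegression final estimator below are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

x, y = make_classification(n_samples=500, random_state=42)

# The final_estimator is trained on the base estimators' predictions,
# which are produced via cross-validation to limit overfitting
stacking = StackingClassifier(
    estimators=[
        ('dt', DecisionTreeClassifier(max_depth=5)),
        ('knn', KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # 5-fold cross-validation for the meta-level training data
)
stacking.fit(x, y)
print(stacking.predict(x[:5]))
```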