Why does ensembling work?
With the bagging method, we combine the results of many decision trees and produce a single prediction by taking a majority vote. Random forests combine their trees' results in the same way, but build each tree under a different sampling mechanism. The boosting method improves predictions by fitting decision trees sequentially, with each new tree correcting the errors of its predecessors. Even though we are dealing with uncertain data and probabilistic outputs, we do not want methods that behave like black boxes and return results without a consistent explanation. A theory should explain why these methods work and give us assurance that their results are reliable rather than arbitrary; there is no black magic about it. In this section, we will look at how and why ensembling works, as well as the scenarios in which it does not. A minimal sketch of bagging by majority vote follows this paragraph.
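The sketch below illustrates the bagging idea described above: train several decision trees on bootstrap samples of the training data and combine their predictions by majority vote. The dataset, the number of trees, and the use of scikit-learn are assumptions made for the example, not details from the text.

```python
# A minimal sketch of bagging by majority vote (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data (an assumption for this example).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_trees = 25
predictions = []

for _ in range(n_trees):
    # Draw a bootstrap sample (sampling with replacement) of the training set.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier()
    tree.fit(X_train[idx], y_train[idx])
    predictions.append(tree.predict(X_test))

# Combine the trees' votes: for each test point, the majority class wins.
votes = np.mean(predictions, axis=0)
bagged_pred = (votes >= 0.5).astype(int)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("single tree accuracy :", accuracy_score(y_test, single_tree.predict(X_test)))
print("bagged trees accuracy:", accuracy_score(y_test, bagged_pred))
```

Typically the bagged ensemble is at least as accurate as the single tree, because averaging over bootstrap samples reduces the variance of a high-variance learner such as an unpruned decision tree.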
Ensembling methods have strong mathematical and statistical underpinnings that explain why combining many individually weak models produces a more accurate and more stable predictor.
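As a back-of-the-envelope illustration of that statistical argument, consider majority voting over classifiers whose errors are independent (an idealization that real base learners only approximate; the numbers below are assumptions for the example). If each of 25 classifiers is right 60% of the time, the majority vote is wrong only when 13 or more of them err, which is far less likely than a single classifier's 40% error rate.

```python
# Hypothetical numbers, assuming independent errors across classifiers.
from scipy.stats import binom

p_error_single = 0.4   # each base classifier is wrong 40% of the time
n_classifiers = 25

# The majority vote errs only if 13 or more of the 25 classifiers err.
p_error_majority = 1 - binom.cdf(12, n_classifiers, p_error_single)
print(f"single classifier error: {p_error_single:.2f}")
print(f"majority-vote error    : {p_error_majority:.3f}")  # roughly 0.15
```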