In this chapter, we saw how algorithms benefit from being combined into ensembles. We learned how these ensembles can mitigate the bias-variance trade-off.
When dealing with heterogeneous data, the gradient boosting and random forest algorithms are my first two choices for classification and regression. Because they are built on decision trees, they do not require any sophisticated data preparation. They can deal with non-linear data and capture feature interactions. Above all, tuning their hyperparameters is straightforward.
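A minimal sketch of these points, assuming scikit-learn: both ensembles fit a non-linear dataset (the `make_moons` toy problem, chosen here for illustration) with no feature scaling or other preparation.

```python
# Minimal sketch: tree-based ensembles handle non-linear data
# out of the box, with no scaling or preprocessing step.
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# A non-linearly separable toy dataset (illustrative choice).
X, y = make_moons(n_samples=500, noise=0.25, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (RandomForestClassifier(random_state=42),
              GradientBoostingClassifier(random_state=42)):
    model.fit(X_train, y_train)  # raw features, no scaling needed
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```

Both classifiers reach high accuracy on this data despite the curved decision boundary, which a linear model without feature engineering could not capture.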
The more estimators in each method, the better, and you should not worry much about them overfitting as a result. As for gradient boosting, you can pick a lower learning rate if you can afford to have more trees. In addition to these hyperparameters, the depth of the trees in each of the two algorithms should be tuned via trial and error and cross-validation. Since the two algorithms come from different sides of...
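The tuning advice above can be sketched with a cross-validated grid search, assuming scikit-learn; the parameter values shown are illustrative, not recommendations.

```python
# Minimal sketch: tune tree depth and the learning-rate /
# number-of-trees trade-off via cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data for illustration only.
X, y = make_classification(n_samples=400, n_features=10, random_state=42)

# Lower learning rates are paired with more estimators, per the
# advice above; shallow depths are typical for boosted trees.
param_grid = {
    "learning_rate": [0.05, 0.1],
    "n_estimators": [100, 300],
    "max_depth": [2, 3],
}
search = GridSearchCV(GradientBoostingClassifier(random_state=42),
                      param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The same pattern works for a random forest by swapping in `RandomForestClassifier` and dropping `learning_rate` from the grid.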