In this chapter, we introduced several ensemble methods, including bootstrap sampling, bagging, random forests, and boosting, and explained how each works with the help of examples. We then applied them to regression and classification tasks. For regression, we used the diamonds dataset, trained the ensemble models alongside KNN and other regression models, and compared their performance. For classification, we used a credit card dataset, trained classification versions of the same models, and compared their performance; there, we found that the random forest model was the best performer.
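As a quick refresher of the comparison workflow we followed, here is a minimal sketch in scikit-learn. It assumes the diamonds dataset bundled with seaborn and a simple numeric feature subset; the exact features, models, and settings used in the chapter may differ.

```python
# A minimal sketch of the chapter's model-comparison workflow, assuming the
# diamonds dataset shipped with seaborn (the chapter's copy may differ).
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Load the diamonds data and keep a few numeric columns for simplicity.
diamonds = sns.load_dataset("diamonds")
X = diamonds[["carat", "depth", "table", "x", "y", "z"]]
y = diamonds["price"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train one ensemble model and one non-ensemble baseline,
# then compare them on the same held-out test set.
models = {
    "Random forest": RandomForestRegressor(n_estimators=100, random_state=42),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:,.0f}")
```

The same pattern carries over to the classification task: swap in classifier variants (for example, RandomForestClassifier) and an appropriate classification metric, then compare the models on a common test set.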
In the next chapter, we will study k-fold cross-validation and parameter tuning. We will compare different ensemble learning models using k-fold cross-validation, and later we will use k-fold cross-validation for hyperparameter tuning.