Summary
In this chapter, you learned how to build a classical ML model in Azure Machine Learning.
You learned about decision trees, a popular technique for many classification and regression problems. Their main strengths are that they require little data preparation, handle categorical data well, and cope with different data distributions. Another important benefit is their interpretability, which matters especially for business decisions and end users. Together, these properties help you judge when a decision-tree-based ensemble predictor is appropriate to use.
However, you also learned about a set of weaknesses, especially regarding overfitting and poor generalization. Fortunately, tree-based ensemble techniques such as bagging (bootstrap aggregation) and boosting help to overcome these problems. While bagging has popular methods such as random forests, which parallelize very well, boosting, and especially gradient boosting, has efficient implementations such as XGBoost...
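To make the comparison concrete, here is a minimal sketch (assuming scikit-learn and a synthetic dataset, not the chapter's Azure Machine Learning workspace) that fits a single decision tree next to a bagging ensemble (random forest) and a gradient-boosting ensemble on the same data:

```python
# Hedged sketch: compares a single decision tree against bagging (random forest)
# and gradient boosting on a synthetic classification task. The dataset and
# hyperparameters are illustrative assumptions, not the chapter's setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    "decision tree": DecisionTreeClassifier(random_state=42),
    "random forest (bagging)": RandomForestClassifier(n_estimators=100, random_state=42),
    "gradient boosting": GradientBoostingClassifier(n_estimators=100, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

On most runs the two ensembles generalize better than the single tree, illustrating how bagging and boosting counteract the overfitting discussed above; XGBoost would slot in here the same way as `GradientBoostingClassifier`.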