Gradient-boosting algorithms are also used to address the disadvantages of the decision tree algorithm. However, unlike the random forests algorithm, which trains multiple trees independently on random subsets of the training data, gradient-boosting algorithms train trees sequentially, with each new tree reducing the errors left by the trees before it. Gradient-boosted decision trees build on a popular machine learning technique called adaptive boosting (AdaBoost), in which we examine where a model makes errors and then train a new model that corrects the errors of the previous models.
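To make this sequential error-correction concrete, here is a minimal sketch of gradient boosting for regression, assuming scikit-learn's DecisionTreeRegressor as the weak learner. With squared-error loss, each tree's errors (residuals) are the negative gradients, so every new tree is simply fit to the residuals of the ensemble built so far. The function names and parameter values are illustrative, not a definitive implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
    """Fit a sequence of shallow trees, each one correcting the previous ones."""
    base = y.mean()                          # start from a constant prediction
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_trees):
        residuals = y - prediction           # errors of the current ensemble
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)               # new tree learns to predict the errors
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def gradient_boost_predict(X, base, trees, learning_rate=0.1):
    """Sum the base prediction and every tree's scaled correction."""
    prediction = np.full(X.shape[0], base)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction
```

The learning rate shrinks each tree's contribution, so no single tree dominates and errors are corrected gradually over many small steps.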
Gradient-boosting algorithms discover patterns in the data that are difficult for a single decision tree to represent by assigning greater weight to the training examples that were predicted incorrectly, so that subsequent trees focus on getting those examples right. Thus, similar to random forests, we generate multiple decision trees and combine their predictions into a single, stronger model.
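In practice, a library implementation is typically used rather than a hand-rolled loop. A brief example with scikit-learn's GradientBoostingClassifier, which trains the trees sequentially as described above (the synthetic dataset and parameter values here are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# A synthetic binary classification dataset for demonstration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each of the 200 shallow trees is trained sequentially to correct
# the mistakes of the ensemble built from the trees before it.
model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.1, max_depth=3, random_state=42
)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```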