Random forest algorithms aggregate decision trees, each trained on randomly selected observations and/or randomly selected features. We will not cover how individual decision trees are trained, but we will show that there are types of tree ensembles that can be trained using gradient boosting, which TensorFlow can calculate for us.
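Before turning to TensorFlow, the aggregation idea itself can be sketched in a few lines of plain Python. The sketch below uses hypothetical depth-1 "stumps" in place of full decision trees and a toy threshold rule (the mean of a bootstrap sample) purely for illustration; a real implementation would grow deeper trees with proper split criteria.

```python
import random
from collections import Counter

def train_stump(sample):
    # Toy depth-1 "tree": split at the sample mean (an illustrative
    # choice, not a real split criterion), then predict the majority
    # label on each side of the threshold.
    xs = [x for x, _ in sample]
    t = sum(xs) / len(xs)
    left = [y for x, y in sample if x <= t]
    right = [y for x, y in sample if x > t]
    left_label = Counter(left).most_common(1)[0][0] if left else 0
    right_label = Counter(right).most_common(1)[0][0] if right else 0
    return t, left_label, right_label

def predict_stump(stump, x):
    t, left_label, right_label = stump
    return left_label if x <= t else right_label

def random_forest(data, n_trees=25, seed=0):
    # Each tree sees a bootstrap sample: observations drawn at random
    # with replacement from the training data.
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict_forest(forest, x):
    # Aggregate the individual trees by majority vote.
    votes = Counter(predict_stump(s, x) for s in forest)
    return votes.most_common(1)[0][0]

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
forest = random_forest(data)
print(predict_forest(forest, 0.15), predict_forest(forest, 0.85))
```

The key point is that randomness enters twice, in which observations (and, in fuller implementations, which features) each tree sees, and the ensemble's prediction is just an aggregate of many weak, high-variance learners.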
Using a random forest
Getting ready
Tree-based algorithms are traditionally non-smooth, as they are based on partitioning the data to minimize the variance in the target outputs. Non-smooth methods do not lend themselves well to gradient-based methods. TensorFlow relies on the fact that the functions used in the model are smooth, so that it can automatically calculate how to change the model parameters to minimize the loss.
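A toy illustration (plain Python, not TensorFlow) makes the non-smoothness concrete: a decision stump's prediction is a step function of its split threshold, so perturbing the threshold slightly almost never changes the loss, and a gradient-based optimizer gets no signal to improve the split.

```python
def stump_predict(threshold, x):
    # Piecewise-constant prediction: 0 to the left of the split,
    # 1 to the right of it.
    return 0.0 if x < threshold else 1.0

def squared_loss(threshold, data):
    return sum((stump_predict(threshold, x) - y) ** 2 for x, y in data)

data = [(0.1, 0.0), (0.2, 0.0), (0.8, 1.0), (0.9, 1.0)]

# Nudging the threshold by a small epsilon does not change which side
# any sample falls on, so the loss is locally flat and the numerical
# gradient is exactly zero.
eps = 1e-6
t = 0.4
grad = (squared_loss(t + eps, data) - squared_loss(t, data)) / eps
print(grad)  # 0.0 -- no gradient signal for moving the split point
```

This is why TensorFlow cannot simply apply gradient descent to the split thresholds themselves, and why tree ensembles are instead fit with specialized procedures such as gradient boosting, where gradients are taken with respect to the model's predictions rather than the tree structure.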