A random forest is an ensemble of decision trees. A decision tree recursively splits the training sample, based on the independent variables, into two or more increasingly homogeneous sets; the algorithm handles both categorical and continuous variables. At each step, the best attribute is selected and the node is split into child nodes, and this recursion continues until a stopping criterion is met. Every fully grown tree is considered a weak learner, built on a random subset of the rows (a bootstrap sample) and a random subset of the columns (features). The higher the number of trees, the lower the variance of the ensemble. To make a final prediction, a classification forest takes the majority vote of its trees, while a regression forest averages their predictions.
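To make this concrete, here is a minimal sketch using scikit-learn's RandomForestClassifier; the library, the synthetic dataset, and the parameter values are illustrative assumptions, not something the text above specifies.

```python
# A minimal sketch, assuming Python with scikit-learn; the dataset and
# parameter values are hypothetical, chosen only for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative data: 500 rows, 10 independent variables.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each tree is a weak learner grown on a bootstrap sample of the rows
# and a random subset of the columns at each split (max_features).
forest = RandomForestClassifier(
    n_estimators=100,      # more trees -> lower ensemble variance
    max_features="sqrt",   # number of columns sampled per split
    random_state=42,
)
forest.fit(X_train, y_train)

# Classification aggregates the trees by (probability-weighted) majority
# vote; a RandomForestRegressor would average the trees' predictions.
print("test accuracy:", forest.score(X_test, y_test))
```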
When a random forest is trained, several parameters can be set...