Decision trees are a family of non-parametric supervised learning methods. In the decision tree algorithm, we start with the complete dataset and split it into two partitions based on a simple rule. The splitting continues recursively until a specified stopping criterion is met (for example, a maximum depth or a minimum number of samples per node). The nodes at which splits are made are called interior nodes, and the final endpoints are called terminal or leaf nodes.
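To make this concrete, here is a minimal sketch of fitting a decision tree with scikit-learn. The toy data, labels, and the `max_depth` stopping criterion are illustrative choices, not taken from the text:

```python
from sklearn.tree import DecisionTreeClassifier

# Four tiny samples with two features each, and binary labels
# (purely illustrative data).
X = [[100, 5], [450, 2], [500, 9], [120, 8]]
y = [0, 1, 1, 0]

# max_depth bounds how long the recursive splitting continues --
# one possible stopping criterion.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# The fitted tree has interior nodes (splits) and leaf nodes.
print(clf.tree_.node_count)
print(clf.predict([[480, 3]]))
```

Here the classifier learns its own split thresholds from the data; the next example shows a tree whose splits we have chosen by hand.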
As an example, let us look at the following tree:
Here, we assume that the exoplanet data has only two features: flux.1 and flux.2. First, we test whether flux.1 > 400 and divide the data into two partitions accordingly. We then split the data again based on the flux.2 feature, and that second division decides whether the observation is labeled an exoplanet or not. How did we decide on the condition flux.1 > 400? We did not. This was just to demonstrate a decision...
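The two-level tree described above can be written out by hand as nested conditionals. This is one plausible reading of such a tree: the flux.1 > 400 threshold comes from the example, while the flux.2 threshold (10 here) and the branch-to-label assignment are purely hypothetical, chosen only to illustrate the second split:

```python
def classify(flux_1, flux_2):
    """Hand-written two-level decision tree (illustrative thresholds)."""
    if flux_1 > 400:        # root split on flux.1, as in the example
        if flux_2 > 10:     # second split on flux.2 (assumed threshold)
            return "exoplanet"
        return "not exoplanet"
    return "not exoplanet"

print(classify(500, 15))
print(classify(500, 5))
print(classify(100, 15))
```

In a real decision tree, these thresholds are not hand-picked: the learning algorithm chooses each split to best separate the classes, which is what the next part of the discussion turns to.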