Selecting the most predictable features
A mantra of (almost) every data scientist is: build the simplest model that explains as much of the variance in the target as possible. In other words, you can build a model with all of your features, but such a model may be highly complex and prone to overfitting. What's more, if one of the variables is missing at prediction time, the whole model might produce an erroneous output, and some of the variables may simply be unnecessary, as other variables already explain the same portion of the variance (a phenomenon called collinearity).
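As a quick illustration of collinearity, consider the following toy sketch (the column names and data here are hypothetical, not part of this recipe's dataset): when one feature is roughly a multiple of another, the off-diagonal entries of their Pearson correlation matrix sit near 1, flagging that one of the two is redundant.

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

spark = SparkSession.builder.getOrCreate()

# x2 is (roughly) 2 * x1, so the two columns are collinear
df = spark.createDataFrame(
    [(1.0, 2.1, 0.3), (2.0, 4.2, 0.1), (3.0, 6.0, 0.7), (4.0, 8.1, 0.2)],
    ['x1', 'x2', 'x3']
)

# Assemble the columns into a single vector, as required by Correlation
vec = VectorAssembler(inputCols=['x1', 'x2', 'x3'], outputCol='features')
corr = Correlation.corr(vec.transform(df), 'features').head()[0]

# Off-diagonal values close to 1.0 indicate collinear feature pairs
print(corr.toArray())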
In this recipe, we will learn how to select the most predictable features when building either classification or regression models. We will be reusing what we learn here in the recipes that follow.
Getting ready
To execute this recipe, you will need a working Spark environment, and you should have already loaded the data into the forest DataFrame.
No other prerequisites are required.
How to do it...
Let's begin with code that will help us select the most predictable features:
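As a minimal sketch of what such code might look like for a classification target (the CoverType label name and the assumption that every other column in forest is numeric are ours, not taken from the recipe), we can use Spark's ChiSqSelector to keep only the features most strongly associated with the label:

from pyspark.ml.feature import ChiSqSelector, VectorAssembler

# Assemble the raw columns into a single feature vector; we assume
# here that every column other than the label is numeric
feature_cols = [c for c in forest.columns if c != 'CoverType']
assembler = VectorAssembler(inputCols=feature_cols, outputCol='features')
forest_vec = assembler.transform(forest)

# Keep the 10 features with the strongest chi-squared association
# with the (categorical) label
selector = ChiSqSelector(
    numTopFeatures=10,
    featuresCol='features',
    labelCol='CoverType',
    outputCol='selected'
)
model = selector.fit(forest_vec)

# Indices of the retained features within the assembled vector
print(model.selectedFeatures)

Note that the chi-squared statistic assumes a categorical label, so this selector suits classification; for regression targets, a correlation-based filter along the lines of the earlier sketch is a common substitute.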