A mantra of (almost) every data scientist is: build the simplest model that explains as much variance in the target as possible. In other words, you could build a model with all of your features, but such a model may be highly complex and prone to overfitting. Moreover, if one of the variables is missing, the whole model might produce erroneous output, and some of the variables may simply be redundant, as other variables already explain the same portion of the variance (a phenomenon known as collinearity).
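To make the collinearity point concrete, here is a minimal, hypothetical sketch (the column names and the data are made up for illustration): two features that are near-linear copies of each other show up as an almost-perfect pairwise correlation, signaling that one of them adds little new information.

```python
import numpy as np
import pandas as pd

# Synthetic data: x2 is (almost) a linear copy of x1, x3 is independent
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 2 * x1 + rng.normal(scale=0.05, size=100)  # nearly collinear with x1
x3 = rng.normal(size=100)

df = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3})

# Absolute pairwise correlations; values close to 1 off the diagonal
# flag candidate features to drop before fitting a model
corr = df.corr().abs()
print(corr.round(2))
```

In practice, checks like this (or variance inflation factors) are a quick first pass before the more systematic selection methods discussed in this recipe.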
In this recipe, we will learn how to select the best-predicting model when building either classification or regression models. We will reuse what we learn here in the recipes that follow.