We have already discussed ordinary least squares (OLS) and its related techniques, lasso and ridge, in the context of linear regression. In this recipe, we will see how easily these techniques can be implemented with caret and how to tune their corresponding hyperparameters.
OLS finds the coefficient estimates that minimize the squared distances between the observations and the values predicted by a linear model. There are three reasons why this approach might not be ideal:
- If the number of predictors (p) is greater than the number of samples (n), OLS cannot be used. This is rarely a problem in practice, since in most cases we have n > p.
- If we have lots of variables of dubious importance, OLS will still estimate a coefficient for each one of them. After the model is estimated, we will need to do some variable selection and discard the irrelevant ones.
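Lasso and ridge address these issues by penalizing large coefficients, and caret lets us tune the penalty via cross-validation. As a minimal sketch (assuming the glmnet package is installed, and using the built-in mtcars data with an illustrative, hypothetical tuning grid):

```r
library(caret)

set.seed(42)
# In glmnet's parameterization, alpha = 0 gives ridge and alpha = 1 gives
# lasso; lambda controls the strength of the penalty.
grid <- expand.grid(alpha = c(0, 1),
                    lambda = 10^seq(-3, 1, length.out = 20))

# 5-fold cross-validation over the grid of alpha/lambda combinations
fit <- train(mpg ~ ., data = mtcars,
             method = "glmnet",
             trControl = trainControl(method = "cv", number = 5),
             tuneGrid = grid)

fit$bestTune  # the cross-validated alpha/lambda combination
```

Here `train()` fits the model for every grid point and retains the combination with the best resampled performance; the grid bounds above are only a starting point and would normally be refined around `fit$bestTune`.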