Regularizing with elastic net regression
Elastic net regression, besides having a very fancy name, is nothing more than a combination of the ridge and lasso penalties. It is a regularization method that can help in specific cases, for example when features are numerous and correlated. Let's have a look at what it means in terms of the loss, and then train a model on the California housing dataset.
Getting ready
The idea with elastic net is to have both L1 and L2 regularization.
This means that the loss is the following:

$$L(w) = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)^2 + \lambda_1 \sum_{j=1}^{n}\lvert w_j\rvert + \lambda_2 \sum_{j=1}^{n} w_j^2$$

The two regularization hyperparameters, $\lambda_1$ and $\lambda_2$, can be fine-tuned independently.
We won't go into detail on the gradient descent equations, since deriving them is straightforward once ridge and lasso are clear.
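For completeness, the gradient is simply the sum of the unregularized gradient and the ridge and lasso terms. Here $\hat{y}_i$ is the prediction for sample $i$, $m$ is the number of samples, $\lambda_1$ and $\lambda_2$ are the L1 and L2 penalty strengths, and the subgradient of the absolute value is taken as $\mathrm{sign}(w_j)$:

$$\frac{\partial L}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)x_{ij} + \lambda_1\,\mathrm{sign}(w_j) + 2\lambda_2 w_j$$

Setting $\lambda_1 = 0$ recovers the ridge update, and setting $\lambda_2 = 0$ recovers the lasso update.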
To train a model, we again need the sklearn library, which we already installed in previous recipes. We also again assume that the California housing dataset has already been downloaded and prepared.
How to do it…
In scikit-learn, elastic net is implemented in the ElasticNet class from sklearn.linear_model. It exposes the overall regularization strength as alpha and the balance between the L1 and L2 penalties as l1_ratio.
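A minimal sketch of training such a model follows. To keep the example self-contained, a synthetic dataset from make_regression stands in for the prepared California housing data; the specific alpha and l1_ratio values are illustrative, not tuned:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; swap in the prepared California housing features and target
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# alpha sets the overall penalty strength; l1_ratio balances L1 vs L2:
# l1_ratio=1.0 recovers lasso, l1_ratio=0.0 recovers ridge
model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1, l1_ratio=0.5))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # R² on the held-out split
```

Scaling the features first matters here: the penalty is applied uniformly to all coefficients, so features on larger scales would otherwise be penalized disproportionately.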