Regularizing a logistic regression model
Logistic regression uses the same trick as linear regression to add regularization: a penalty term is added to the loss. In this recipe, we will briefly explain how this penalty affects the loss, and then show how to add regularization with scikit-learn on the breast cancer dataset that we prepared in the previous recipe.
Getting ready
Just like linear regression, it is easy to add a regularization term to the loss $L$: either an L1- or an L2-norm of the parameters $w$. For example, the loss with an L2-norm penalty would be the following:

$$L_{reg} = L + \frac{1}{C}\sum_{j=1}^{n} w_j^2$$
As we did for ridge regression, we add the squared sum of the weights, with a hyperparameter in front of it. To stay as close as possible to the scikit-learn implementation, we use $\frac{1}{C}$ instead of $\lambda$ for the regularization hyperparameter, but the idea remains the same: since the penalty is inversely proportional to $C$, a small $C$ means strong regularization, and a large $C$ means weak regularization.
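For completeness, the unpenalized loss $L$ here is the logistic (binary cross-entropy) loss introduced in the previous recipe; written out under that assumption (in our notation, which may differ slightly from the earlier recipe), the full regularized loss over $m$ samples and $n$ features is:

$$L_{reg} = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\right] + \frac{1}{C}\sum_{j=1}^{n} w_j^2$$

where $\hat{y}_i$ is the predicted probability for sample $i$.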
In this recipe, we assume the following libraries are already installed from previous recipes: sklearn...
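To make this concrete, here is a minimal sketch of fitting an L2-regularized logistic regression with scikit-learn. It uses scikit-learn's built-in breast cancer dataset and a fresh train/test split as a stand-in for the data prepared in the previous recipe; the `C` values and the scaling step are illustrative choices, not prescriptions from this recipe:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Load the breast cancer dataset (stand-in for the previously prepared data)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Scale the features: the penalty treats all weights equally,
# so features should be on comparable scales
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# penalty='l2' adds the squared sum of the weights to the loss;
# C is the inverse of the regularization strength (smaller C = stronger penalty)
for C in (0.01, 1.0, 100.0):
    model = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    model.fit(X_train, y_train)
    print(f"C={C}: train={model.score(X_train, y_train):.3f}, "
          f"test={model.score(X_test, y_test):.3f}")
```

Lowering `C` strengthens the penalty and shrinks the weights toward zero; as with ridge regression, this typically trades a little training accuracy for better generalization.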