Regularization in a nutshell
You may recall that our linear model follows the form $Y = B_0 + B_1x_1 + \dots + B_nx_n + e$, and also that the best fit seeks to minimize the RSS, the sum of the squared differences between the actual values and the estimates, that is, $e_1^2 + e_2^2 + \dots + e_n^2$.
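As a quick illustration of the RSS calculation, here is a minimal sketch in Python, assuming a made-up toy dataset and hypothetical coefficient values chosen purely for demonstration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # predictor (made-up values)
y = np.array([2.1, 3.9, 6.2, 7.8])   # response (made-up values)

b0, b1 = 0.2, 1.9                     # hypothetical fitted coefficients
y_hat = b0 + b1 * x                   # model estimates
residuals = y - y_hat                 # actual minus estimate
rss = np.sum(residuals ** 2)          # e1^2 + e2^2 + ... + en^2
print(rss)
```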
With regularization, we will apply what is known as a shrinkage penalty alongside the minimization of the RSS. This penalty consists of a tuning parameter, lambda (symbol λ), multiplied by a norm of the beta coefficients. How the coefficients are normalized differs between the techniques (for example, ridge regression uses the sum of the squared coefficients, while LASSO uses the sum of their absolute values), and we will discuss them accordingly. Quite simply, in our model we are minimizing $\text{RSS} + \lambda(\text{normalized coefficients})$. We will select λ during our model-building process. Please note that if lambda is equal to zero, the penalty term vanishes and our model is equivalent to OLS.
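To make the objective concrete, here is a sketch of the penalized loss in Python. This is an illustration under stated assumptions, not a fitting routine: the function name `penalized_loss`, the toy data, and the beta values are all hypothetical, and the intercept is left unpenalized by convention.

```python
import numpy as np

def penalized_loss(betas, X, y, lam, norm="l2"):
    """RSS plus a shrinkage penalty on the slope coefficients.

    betas[0] is the intercept, which is conventionally not penalized.
    norm="l2" gives a ridge-style penalty; norm="l1" a LASSO-style one.
    """
    y_hat = betas[0] + X @ betas[1:]          # model estimates
    rss = np.sum((y - y_hat) ** 2)            # residual sum of squares
    if norm == "l2":
        penalty = np.sum(betas[1:] ** 2)      # sum of squared coefficients
    else:
        penalty = np.sum(np.abs(betas[1:]))   # sum of absolute coefficients
    return rss + lam * penalty

X = np.array([[1.0], [2.0], [3.0], [4.0]])    # toy predictor matrix
y = np.array([2.1, 3.9, 6.2, 7.8])            # toy response
betas = np.array([0.2, 1.9])                  # hypothetical coefficients

print(penalized_loss(betas, X, y, lam=0.0))   # lambda = 0: plain RSS, the OLS objective
print(penalized_loss(betas, X, y, lam=1.0))   # lambda > 0: penalty pushes coefficients toward zero
```

Note how the first call confirms the point made above: with λ set to zero, the penalty drops out and we are back to minimizing the ordinary RSS.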
So what does this do for us and why does it work? First of all, regularization methods are very computationally...