The fundamental objective of the linear regression algorithm is to minimize the loss/cost function. To do this, the algorithm optimizes the value of the coefficient of each feature so that the loss function is minimized.
Sometimes, this leads to overfitting, as the coefficients are optimized specifically for the data that the model is trained on. This means that your linear regression model will not generalize well beyond your current training data.
The process by which we penalize over-optimized coefficients in order to prevent this type of overfitting is called regularization.
There are two broad types of regularization methods, as follows:
- Ridge regression
- Lasso regression
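As a quick sketch of what these two penalties do in practice, the snippet below fits ordinary least squares, ridge, and lasso models on made-up data where only the first feature actually matters. The data and the penalty strengths (`alpha` values) are arbitrary choices for illustration, not recommendations; scikit-learn's `Ridge` and `Lasso` estimators are assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

# Toy data: y depends only on the first feature; the other four are noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: can set coefficients exactly to zero

print("OLS coefficients:  ", np.round(ols.coef_, 3))
print("Ridge coefficients:", np.round(ridge.coef_, 3))
print("Lasso coefficients:", np.round(lasso.coef_, 3))
```

Comparing the printed coefficients, the ridge estimates are pulled toward zero relative to plain least squares, while the lasso tends to zero out the coefficients of the irrelevant noise features entirely.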
In the following subsections, the two types of regularization techniques will be discussed in detail, and you will learn how...