In linear regression, the trained model returns the parameters that best fit the training data. However, fitting the training data as closely as possible can lead to overfitting: the model achieves low error on the training data but a higher error on the test data. To counter this, we generally add a penalty term to the loss function to obtain a simpler model.
This penalty term is called a regularization term, and the regression model thus obtained is called a regularized regression model. There are three main types of regularized regression models:
- Lasso regression: In lasso regularization, also known as L1 regularization, the regularization term is the lasso parameter α multiplied by the sum of the absolute values of the weights w. Thus, the loss function is the sum of squared residuals plus the L1 penalty:

  Loss = Σᵢ (yᵢ − ŷᵢ)² + α Σⱼ |wⱼ|
- Ridge regression: In ridge regularization, also known as L2 regularization...
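The two penalized losses can be written out directly. The following is a minimal plain-Python sketch (the helper names `lasso_loss` and `ridge_loss` are our own, not from any library), assuming the common convention that the L1 penalty is α·Σ|wⱼ| and the L2 penalty is α·Σwⱼ²; some texts scale the penalty by 1/2 or 1/n, which changes only the effective value of α:

```python
def predict(X, w):
    """Linear prediction: dot product of each row of X with the weights w."""
    return [sum(x_j * w_j for x_j, w_j in zip(row, w)) for row in X]

def squared_error(X, y, w):
    """Sum of squared residuals on the training data."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, predict(X, w)))

def lasso_loss(X, y, w, alpha):
    """L1-regularized loss: squared error + alpha * sum of |w_j|."""
    return squared_error(X, y, w) + alpha * sum(abs(wj) for wj in w)

def ridge_loss(X, y, w, alpha):
    """L2-regularized loss: squared error + alpha * sum of w_j**2."""
    return squared_error(X, y, w) + alpha * sum(wj ** 2 for wj in w)

# Tiny illustration: w = [2.0] fits y = 2x exactly, so the residuals
# vanish and only the penalty term remains in each loss.
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
w = [2.0]
print(lasso_loss(X, y, w, alpha=0.1))  # 0.1 * |2|   = 0.2
print(ridge_loss(X, y, w, alpha=0.1))  # 0.1 * 2**2 = 0.4
```

Because the penalties differ (|wⱼ| versus wⱼ²), minimizing these losses behaves differently: the L1 penalty tends to drive some weights exactly to zero, while the L2 penalty shrinks all weights smoothly toward zero.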