Regularization is a technique that makes slight modifications to the learning algorithm so that the model generalizes better; the strength of the penalty is a hyperparameter. This in turn improves the model's performance on unseen data.
In classical machine learning, regularization penalizes the model's coefficients. In deep learning, it penalizes the weight matrices of the layers.
We are going to discuss two types of regularization, as follows:
- L1 and L2 regularization
- Dropout
We will start with L1 and L2 regularization.
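As a minimal sketch of the idea, the snippet below adds an L1 or L2 penalty on a weight matrix to a base loss. The weight matrix `W`, the placeholder `data_loss` value, and the regularization strength `lam` are all illustrative assumptions, not values from any particular model:

```python
import numpy as np

# Hypothetical weight matrix of a single layer (values are arbitrary).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

data_loss = 0.42  # placeholder for the unregularized (data) loss
lam = 0.01        # regularization strength: a hyperparameter

# L1 penalizes the sum of absolute weights; L2 the sum of squared weights.
l1_penalty = lam * np.sum(np.abs(W))
l2_penalty = lam * np.sum(W ** 2)

loss_with_l1 = data_loss + l1_penalty
loss_with_l2 = data_loss + l2_penalty
```

Because the penalty grows with the magnitude of the weights, minimizing the total loss pushes the weights toward smaller values, which is what discourages overfitting.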