We know that autoencoders learn to reconstruct the input. But when we set the number of nodes in the hidden layer to be greater than the number of nodes in the input layer (an overcomplete autoencoder), the network can simply learn an identity function, copying the input straight to the output without extracting any useful features.
Having more nodes in the hidden layer gives us the capacity to learn a rich latent representation, but without any constraint the autoencoder just mimics the input exactly and thus overfits the training data. To resolve this problem of overfitting, we introduce a new term into our loss function called the sparsity constraint, or sparsity penalty. The loss function with the sparsity penalty can be represented as follows:

$$L = \|x - \hat{x}\|^2 + \beta \sum_{j=1}^{h} \mathrm{KL}(\rho \,\|\, \hat{\rho}_j)$$

Here, $h$ denotes the number of units in the hidden layer.
The first term represents the reconstruction error between the original input $x$ and the reconstructed input $\hat{x}$. The second term is the sparsity penalty: $\hat{\rho}_j$ is the average activation of the $j$-th hidden unit over the training examples, $\rho$ is the desired sparsity level (a small value such as 0.05), and the KL divergence $\mathrm{KL}(\rho \,\|\, \hat{\rho}_j) = \rho \log \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_j}$ penalizes hidden units whose average activation deviates from $\rho$. The coefficient $\beta$ controls the strength of the penalty. Because only a few hidden units can be strongly active at a time, the autoencoder can no longer simply copy the input and is forced to learn a compact, useful representation.
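To make the two terms concrete, here is a minimal NumPy sketch of this loss. It assumes a sigmoid hidden layer (so activations lie in (0, 1)) and uses illustrative values for $\rho$ and $\beta$ (0.05 and 3.0), which are not taken from the text; the function names are hypothetical.

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    """KL divergence between the desired sparsity rho and the
    observed average activation rho_hat of each hidden unit."""
    return rho * np.log(rho / rho_hat) + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))

def sparse_autoencoder_loss(x, x_hat, hidden_activations, rho=0.05, beta=3.0):
    """
    x                  : original inputs, shape (batch, n_features)
    x_hat              : reconstructed inputs, same shape as x
    hidden_activations : sigmoid activations of the hidden layer,
                         shape (batch, n_hidden)
    """
    # First term: reconstruction error between input and reconstruction
    reconstruction_error = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # rho_hat_j: average activation of each hidden unit over the batch
    rho_hat = np.mean(hidden_activations, axis=0)
    # Clip to avoid log(0) in the KL divergence
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
    # Second term: KL sparsity penalty summed over hidden units, weighted by beta
    sparsity_penalty = beta * np.sum(kl_divergence(rho, rho_hat))
    return reconstruction_error + sparsity_penalty
```

Note how the penalty grows as any hidden unit's average activation drifts away from $\rho$, so minimizing the total loss trades reconstruction quality against keeping most hidden units inactive.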