As we mentioned previously, one way of ensuring that our model encodes representative features of its inputs is to add a sparsity constraint on the hidden layer that represents the latent space, h. We denote this constraint with the capital Greek letter omega (Ω), which allows us to redefine the loss function of a sparse autoencoder, like so:
- Normal AE loss: L(x, g(f(x)))
- Sparse AE loss: L(x, g(f(x))) + Ω(h)
This sparsity constraint term, Ω(h), can simply be thought of as a regularization term added to the loss, just like the regularizers we applied to feed-forward neural networks in previous chapters.
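To make this concrete, the following is a minimal sketch of a sparse autoencoder in PyTorch, where Ω(h) is realized as an L1 penalty on the latent activations. The input dimension (784), latent width (32), and penalty weight (1e-4) are illustrative assumptions, not values from the text:

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):  # illustrative sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)        # latent code h = f(x)
        return self.decoder(h), h  # reconstruction g(f(x)) and h

model = SparseAE()
recon_loss = nn.MSELoss()
sparsity_weight = 1e-4  # illustrative weight on the Omega(h) term

x = torch.rand(64, 784)  # dummy batch standing in for real data
x_hat, h = model(x)
# Total loss = L(x, g(f(x))) + Omega(h), with Omega as an L1 penalty on h
loss = recon_loss(x_hat, x) + sparsity_weight * h.abs().mean()
loss.backward()
```

An L1 penalty pushes many latent activations toward zero for any given input, which is one common way of implementing Ω(h); a KL-divergence penalty on the average activation of each latent unit is another.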
A comprehensive review of the different forms of sparsity constraints used in autoencoders can be found in the following research paper, which we recommend to interested readers: Facial expression recognition via learning...