The two autoencoders that we explored in the previous two recipes are examples of Undercomplete Autoencoders, because their hidden layer has a lower dimension than the input (and output) layer. The denoising autoencoder, by contrast, belongs to the class of Overcomplete Autoencoders: it tends to work better when the hidden layer has more dimensions than the input layer.
A denoising autoencoder learns from a corrupted (noisy) input: the noisy input is fed to the encoder network, and the reconstruction produced by the decoder is compared with the original, uncorrupted input. The idea is that this forces the network to learn how to denoise. It can no longer get away with a pixel-wise copy of its input; to remove the noise, it must also learn the information carried by neighbouring pixels.
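The training loop just described can be sketched in a few lines. The following is a minimal NumPy illustration, not the recipe's actual framework code: the layer sizes, noise level, and learning rate are hypothetical, and a single sigmoid encoder/decoder pair stands in for a full model. The essential point is visible in the loss: the encoder receives the noisy input, but the reconstruction error is measured against the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 64, 128   # overcomplete: hidden layer wider than the input

# Toy data (hypothetical, for illustration): clean inputs in [0, 1],
# corrupted by additive Gaussian noise and clipped back into range.
x_clean = rng.random((32, input_dim))
x_noisy = np.clip(x_clean + 0.3 * rng.normal(size=x_clean.shape), 0.0, 1.0)

# Small random weights for one encoder layer and one decoder layer.
W1 = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W2 = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
b1 = np.zeros(hidden_dim)
b2 = np.zeros(input_dim)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)        # encoder sees the NOISY input
    return h, sigmoid(h @ W2 + b2)  # decoder reconstruction

initial_loss = np.mean((forward(x_noisy)[1] - x_clean) ** 2)

lr, n = 0.5, len(x_clean)
for _ in range(200):
    h, x_hat = forward(x_noisy)
    err = x_hat - x_clean           # error against the CLEAN input, not the noisy one
    # Backpropagate the mean-squared error through both sigmoid layers.
    d2 = err * x_hat * (1 - x_hat)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2 / n
    b2 -= lr * d2.mean(axis=0)
    W1 -= lr * x_noisy.T @ d1 / n
    b1 -= lr * d1.mean(axis=0)

final_loss = np.mean((forward(x_noisy)[1] - x_clean) ** 2)
```

Because the target is `x_clean` while the network only ever sees `x_noisy`, memorising individual pixels cannot drive the loss to zero; the weights are pushed toward mappings that exploit correlations between neighbouring pixels, which is exactly the denoising behaviour described above.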