Denoising autoencoders
The two autoencoders covered in the previous sections are examples of undercomplete autoencoders, because their hidden layer has a lower dimensionality than the input (output) layer. Denoising autoencoders belong to the class of overcomplete autoencoders, because they tend to work better when the hidden layer has more dimensions than the input layer.
A denoising autoencoder learns from a corrupted (noisy) input: the noisy input is fed to the encoder network, and the reconstruction produced by the decoder is compared with the original (clean) input. The idea is that this forces the network to learn how to denoise. The network can no longer get away with pixel-wise copying; to remove the noise, it must also learn the structure of neighboring pixels.
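The training loop described above can be sketched in plain NumPy. This is a minimal, hypothetical example (toy data, a single overcomplete hidden layer, Gaussian corruption, and hand-written backpropagation, none of which come from the text): note that the noise is added to the encoder's input, while the loss is computed against the clean input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional inputs in [0, 1].
X = rng.random((200, 8))

n_in, n_hidden = 8, 16          # overcomplete: n_hidden > n_in
lr, epochs, noise_std = 0.1, 500, 0.3

# Small random weight initialisation (illustrative choice).
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(epochs):
    # Corrupt the input with Gaussian noise; the target stays clean.
    X_noisy = X + rng.normal(0, noise_std, X.shape)

    # Forward pass: encode the NOISY input, decode a reconstruction.
    H = sigmoid(X_noisy @ W1 + b1)
    X_hat = sigmoid(H @ W2 + b2)

    # The loss compares the reconstruction with the ORIGINAL input.
    err = X_hat - X
    losses.append(float((err ** 2).mean()))

    # Backward pass (MSE loss with sigmoid derivatives).
    d_out = err * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X_noisy.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the hidden layer sees only corrupted inputs, simply copying pixels through the overcomplete layer would reproduce the noise and incur a high loss; lowering the loss requires exploiting correlations across input dimensions, which is exactly the denoising behavior the text describes.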
A denoising autoencoder differs from other autoencoders in two main ways: first, n_hidden, the number of hidden units in the bottleneck layer, is greater than the number of units in...