Deep autoencoders
So far, we have discussed only simple autoencoders with a single-layer encoder and a single-layer decoder. However, a deep autoencoder, whose encoder and decoder each contain more than one layer, brings additional advantages.
Feed-forward networks perform better when they are deep, and an autoencoder is essentially a feed-forward network; hence, the advantages of a deep feed-forward network also apply to autoencoders. Moreover, the encoder and the decoder are themselves feed-forward networks, so we can exploit the benefits of depth within each of these components as well.
In this context, we can also invoke the universal approximation theorem, which guarantees that a feed-forward neural network with at least one hidden layer, given enough hidden units, can approximate any arbitrary function to any desired degree of accuracy. Following this concept, an autoencoder with at least one hidden layer and sufficiently many hidden units can approximate any mapping from input to code to any degree of accuracy; depth is what makes such approximations practical to learn.
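To make the idea concrete, the following NumPy sketch wires up an untrained deep autoencoder whose encoder and decoder each stack two dense layers. The layer sizes (784 → 256 → 32 and back) and the ReLU activation are illustrative assumptions, not values from the text, and a real implementation would of course also train the weights:

```python
import numpy as np

rng = np.random.default_rng(42)

def dense(n_in, n_out):
    """One fully connected layer: small random weights, zero biases."""
    return rng.normal(0.0, 0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Pass x through a stack of dense layers with ReLU activations."""
    for w, b in layers:
        x = np.maximum(x @ w + b, 0.0)
    return x

# Assumed sizes: the encoder compresses 784 -> 256 -> 32,
# and the decoder mirrors it, 32 -> 256 -> 784.
sizes = [784, 256, 32]
encoder = [dense(a, b) for a, b in zip(sizes, sizes[1:])]
decoder = [dense(a, b) for a, b in zip(sizes[::-1], sizes[::-1][1:])]

x = rng.normal(size=(4, 784))          # a batch of 4 inputs
code = forward(x, encoder)             # deep encoding to the bottleneck
reconstruction = forward(code, decoder)

print(code.shape)            # (4, 32)
print(reconstruction.shape)  # (4, 784)
```

Each of the encoder and decoder here is an ordinary multi-layer feed-forward network, which is exactly why the depth-related benefits of feed-forward networks carry over to autoencoders.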