Under-complete autoencoders
One way to obtain useful features from an autoencoder is to constrain the code h to have a smaller dimension than the input x. An autoencoder whose code dimension is less than the input dimension is called under-complete.
Learning a representation that is under-complete forces the autoencoder to capture the most salient features of the training data.
The learning process is described as minimizing a loss function L(x, g(f(x))), where L is a loss function, such as the mean squared error, that penalizes g(f(x)) for being dissimilar from x.
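As a concrete illustration, here is a minimal sketch of an under-complete autoencoder in PyTorch. The layer sizes, learning rate, and synthetic input batch are assumptions chosen for the example; the point is only that the encoder f maps the input to a lower-dimensional code h, the decoder g maps it back, and training minimizes the mean squared error L(x, g(f(x))).

```python
import torch
import torch.nn as nn

# Illustrative sizes: the code dimension (32) is much smaller than the
# input dimension (784), making the autoencoder under-complete.
input_dim, code_dim = 784, 32

encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.ReLU())  # f
decoder = nn.Linear(code_dim, input_dim)                            # g

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()  # L(x, g(f(x))): mean squared reconstruction error

x = torch.rand(64, input_dim)  # synthetic batch standing in for training data

for step in range(100):
    h = encoder(x)            # code h, with dim(h) < dim(x)
    x_hat = decoder(h)        # reconstruction g(f(x))
    loss = loss_fn(x_hat, x)  # penalize g(f(x)) for being dissimilar from x
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because h has fewer dimensions than x, the network cannot simply copy its input; it must learn a compressed code that retains the most salient features needed to reconstruct the training data.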