In the past (circa 2006–2011), autoencoders briefly enjoyed fame for initializing the layer weights of deep neural networks, through a procedure known as greedy layer-wise pretraining. Researchers gradually lost interest in such pretraining as better random weight initialization schemes came about, and as methods that allowed deeper networks to be trained directly, such as batch normalization (2015) and, shortly after, residual learning (2015), entered common use.
Today, a paramount utility of autoencoders derives from their ability to discover low-dimensional representations of high-dimensional data while still preserving the data's core attributes. This enables tasks such as image denoising, that is, recovering a clean image from a corrupted one, as the sketch below illustrates.
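What follows is a minimal sketch, assuming PyTorch and flattened 28x28 grayscale images (the text names neither a framework nor a dataset, so both are illustrative choices): the encoder compresses each image into a small latent code, and the network is trained to reconstruct the clean image from a noise-corrupted input.

```python
# Minimal denoising autoencoder sketch (PyTorch assumed, not specified
# in the text). Input images are flattened 28x28 = 784-dim vectors.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: high-dimensional input -> low-dimensional latent code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: latent code -> reconstruction of the original input
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # pixel values assumed to lie in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data
# (a real run would iterate over batches of, e.g., MNIST images).
clean = torch.rand(64, 784)  # hypothetical batch of clean images
noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0.0, 1.0)

reconstruction = model(noisy)          # map noisy input -> denoised output
loss = loss_fn(reconstruction, clean)  # compare against the clean target
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that the loss compares the reconstruction against the *clean* image, not the noisy input: this is what forces the low-dimensional code to capture the data's underlying structure rather than the noise. A similar area of active interest...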