Setting up stacked autoencoders
The stacked autoencoder is an approach to training deep networks that consist of multiple autoencoder layers, where each layer is trained greedily, one at a time. An example of a stacked autoencoder is shown in the following diagram:
An example of a stacked autoencoder
Getting ready
The preceding diagram demonstrates a stacked autoencoder with two layers. A stacked autoencoder can have n layers, where each layer is trained one at a time, using the output of the previously trained layer as its input. For example, the two-layer stacked autoencoder shown earlier is trained as follows:
Training of a stacked autoencoder
The initial pretraining of layer 1 is obtained by training it on the actual input x_i. The first step is to optimize the encoder weights W_e(1) of the first layer with respect to the output X. The second step is to optimize the weights W_e(2) of the second layer, using the hidden representation produced by W_e(1) as both its input and its reconstruction target. Once all the layers W_e(i), where i = 1, 2, ..., n and n is the number of layers, are pretrained, model fine-tuning is performed by connecting all the layers together, as...
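The following is a minimal sketch of this greedy layer-wise procedure, written in Python with PyTorch rather than the recipe's own setup; the layer sizes, learning rates, epoch counts, and the synthetic input data are illustrative assumptions, not values from this recipe:

# Greedy layer-wise pretraining of a two-layer stacked autoencoder,
# followed by fine-tuning of the connected encoder stack.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(256, 100)        # synthetic input: 256 samples, 100 features (assumed)
layer_sizes = [100, 64, 32]     # input dimension followed by two hidden-layer sizes (assumed)

encoders = []
current_input = X
for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
    # Pretrain a single autoencoder layer: encode the current input,
    # decode it back, and minimize the reconstruction error.
    encoder = nn.Sequential(nn.Linear(in_dim, out_dim), nn.Sigmoid())
    decoder = nn.Sequential(nn.Linear(out_dim, in_dim), nn.Sigmoid())
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(50):
        optimizer.zero_grad()
        reconstruction = decoder(encoder(current_input))
        loss = loss_fn(reconstruction, current_input)
        loss.backward()
        optimizer.step()
    encoders.append(encoder)
    # The hidden representation of this layer becomes both the input and the
    # reconstruction target for pretraining the next layer.
    current_input = encoder(current_input).detach()

# Fine-tuning: connect all pretrained encoder layers into one network and
# train them jointly, here against the original input via a fresh decoder.
stacked_encoder = nn.Sequential(*encoders)
fine_tune_decoder = nn.Sequential(nn.Linear(layer_sizes[-1], layer_sizes[0]), nn.Sigmoid())
params = list(stacked_encoder.parameters()) + list(fine_tune_decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(fine_tune_decoder(stacked_encoder(X)), X)
    loss.backward()
    optimizer.step()

Note that each pretraining pass only updates the weights of one layer; the earlier layers are frozen by detaching their outputs, and all weights are updated together only during the final fine-tuning stage.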