This intermediate chapter showed the power of deep autoencoders when they are combined with regularization strategies such as dropout and batch normalization. We implemented an autoencoder with more than 30 layers! That's deep! We saw that, on difficult problems, a deep autoencoder can offer an unbiased latent representation of highly complex data, much as most deep belief networks do. We looked at how dropout can reduce the risk of overfitting by ignoring (disconnecting) a random fraction of the neurons during every learning step. Furthermore, we learned that batch normalization can stabilize the learning algorithm by normalizing a layer's responses during training so that activation functions and the neurons connected to them don't saturate or overflow numerically.
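As a quick reminder of how these pieces fit together, the following is a minimal sketch of a deep autoencoder whose dense blocks interleave batch normalization and dropout, written with Keras. The layer sizes, dropout rate, and 784-dimensional input are illustrative assumptions and not the exact configuration of the chapter's 30-plus-layer model.

```python
# A minimal sketch (not the chapter's exact model): a deep autoencoder whose
# dense blocks combine BatchNormalization and Dropout. Layer sizes, the
# dropout rate, and the input dimension are illustrative assumptions.
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Model

def dense_block(x, units, rate=0.2):
    """Dense layer followed by batch normalization and dropout."""
    x = Dense(units, activation='relu')(x)
    x = BatchNormalization()(x)   # normalize responses so activations don't saturate
    x = Dropout(rate)(x)          # randomly disconnect a fraction of neurons per step
    return x

inputs = Input(shape=(784,))                 # e.g., flattened 28x28 images
x = inputs
for units in (512, 256, 128, 64):            # encoder: progressively smaller layers
    x = dense_block(x, units)
latent = Dense(16, activation='relu', name='latent')(x)   # latent representation
x = latent
for units in (64, 128, 256, 512):            # decoder mirrors the encoder
    x = dense_block(x, units)
outputs = Dense(784, activation='sigmoid')(x)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.summary()

# Example usage (x_train would be your real training data, scaled to [0, 1]):
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=128)
```

Note that placing BatchNormalization before Dropout inside each block is one common ordering; other orderings are possible and the best choice can depend on the problem.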
At this point, you should feel confident applying batch normalization and dropout strategies to a deep autoencoder model. You should be able to create your own deep autoencoders and apply them to different tasks...