Summary
In this chapter, we presented autoencoders as unsupervised models that can learn to represent high-dimensional datasets with a lower-dimensional code. They are structured into two separate blocks (which are, however, trained together): an encoder, responsible for mapping an input sample to an internal representation, and a decoder, which must perform the inverse operation, rebuilding the original sample starting from the code.
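As a reference, the following is a minimal sketch of this encoder/decoder structure. The framework (TensorFlow/Keras), the layer sizes, and the dummy dataset are illustrative assumptions, not details taken from the chapter; the only essential point is that both blocks are trained jointly on the reconstruction error:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

input_dim = 784   # e.g., flattened 28x28 samples (assumption)
code_dim = 32     # dimensionality of the internal code (assumption)

# Encoder: maps the input sample to its lower-dimensional code
inputs = tf.keras.Input(shape=(input_dim,))
h = layers.Dense(128, activation='relu')(inputs)
code = layers.Dense(code_dim, activation='relu')(h)

# Decoder: performs the inverse operation, rebuilding the sample from the code
h = layers.Dense(128, activation='relu')(code)
outputs = layers.Dense(input_dim, activation='sigmoid')(h)

# Both blocks are trained together by minimizing the reconstruction error
autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')

# Dummy data standing in for a real dataset
X = np.random.rand(256, input_dim).astype('float32')
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)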
We have also discussed how autoencoders can be used to denoise samples and how it's possible to impose a sparsity constraint on the code layer so that the resulting representation resembles the one obtained with standard dictionary learning. The last topic was a slightly different model called a variational autoencoder (VAE), whose idea is to build a generative model that is able to reproduce all the possible samples drawn from the training distribution.
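As a rough illustration of the first two ideas (again assuming Keras; the noise level and the regularization coefficient are arbitrary), a sparsity constraint can be imposed with an L1 activity regularizer on the code layer, while denoising is obtained by training the model to rebuild the clean samples from corrupted inputs:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

input_dim, code_dim = 784, 32

inputs = tf.keras.Input(shape=(input_dim,))
h = layers.Dense(128, activation='relu')(inputs)
# L1 activity regularization pushes most code units towards zero (sparse code)
code = layers.Dense(code_dim, activation='relu',
                    activity_regularizer=regularizers.l1(1e-4))(h)
h = layers.Dense(128, activation='relu')(code)
outputs = layers.Dense(input_dim, activation='sigmoid')(h)

sparse_dae = tf.keras.Model(inputs, outputs)
sparse_dae.compile(optimizer='adam', loss='mse')

# Denoising: corrupt the inputs with Gaussian noise, but use the clean samples as targets
X = np.random.rand(256, input_dim).astype('float32')
X_noisy = np.clip(X + np.random.normal(0.0, 0.1, X.shape), 0.0, 1.0).astype('float32')
sparse_dae.fit(X_noisy, X, epochs=5, batch_size=64, verbose=0)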
In the next chapter, we are going to briefly introduce a very important model family called generative adversarial networks (GANs), which learn to generate realistic samples by training a generator network against a discriminator in an adversarial game.