Summary
In this chapter, you saw how deep neural networks can be used to create representations of complex data such as images that capture more of their variance than traditional dimensionality reduction techniques such as PCA. This was demonstrated using the MNIST digits, where a neural network can separate the different digit classes in a two-dimensional latent space more cleanly than the principal components of those images can. The chapter showed how deep neural networks, combined with variational methods, can approximate an intractable posterior distribution over latent variables by sampling from a tractable approximation, leading to the variational autoencoder (VAE) algorithm, which maximizes a variational lower bound on the data likelihood and in doing so minimizes the divergence between the approximate and true posterior.
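To make that objective concrete, here is a minimal sketch of the per-sample negative lower bound under two common modeling assumptions (a diagonal Gaussian approximate posterior and a Bernoulli likelihood over binarized pixels); the name negative_elbo and its arguments are illustrative rather than the chapter's exact implementation:

import numpy as np

def negative_elbo(x, x_recon, mu, log_var, eps=1e-7):
    # Reconstruction term: Bernoulli negative log-likelihood
    # (binary cross-entropy) between the input pixels x and the
    # decoder output x_recon, both flattened vectors in [0, 1].
    recon = -np.sum(x * np.log(x_recon + eps)
                    + (1 - x) * np.log(1 - x_recon + eps))
    # KL(q(z|x) || p(z)) in closed form for a diagonal Gaussian
    # posterior N(mu, exp(log_var)) and a standard normal prior.
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    # Minimizing this sum maximizes the variational lower bound.
    return recon + kl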
You also learned how the sampling of the latent vector can be reparameterized so that the resulting gradient estimates have lower variance, leading to better convergence in stochastic minibatch gradient descent; a sketch of this trick appears below. You saw how the latent vectors generated by encoders in these...
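For reference, a minimal sketch of the reparameterization trick, again assuming a diagonal Gaussian posterior; the names mu, log_var, and reparameterize are illustrative:

import numpy as np

rng = np.random.default_rng(seed=0)

def reparameterize(mu, log_var):
    # Draw z ~ N(mu, exp(log_var)) as z = mu + sigma * eps with
    # eps ~ N(0, I). Isolating the randomness in eps lets gradients
    # flow through mu and log_var, giving lower-variance gradient
    # estimates than sampling z directly inside the network.
    eps = rng.standard_normal(size=np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps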