In the previous chapter, we looked at what a DQN is and the kinds of predictions we can make about rewards and actions. In this chapter, we will look at how to build a VAE and the advantages it offers over a standard autoencoder. We will also examine the effect of varying the dimensionality of the latent space on the network.
Let's take a look at another autoencoder. We covered autoencoders once before, in Chapter 3, Beyond Basic Neural Networks – Autoencoders and RBMs, with a simple example that reconstructed MNIST digits. Now we'll look at using one for a very different task: generating new digits.
In this chapter, the following topics will be covered:
- Introduction to variational autoencoders (VAEs)
- Building a VAE on MNIST
- Assessing the results and changing the latent dimensions