Conclusion
In this chapter, we've covered the principles of variational autoencoders (VAEs). Like GANs, VAEs attempt to generate synthetic outputs from a latent space. However, VAE networks are much simpler and easier to train than GANs. We've also seen how the conditional VAE and the β-VAE are similar in concept to the conditional GAN and the disentangled-representation GAN, respectively.
VAEs have an intrinsic mechanism for disentangling the latent vectors, so building a β-VAE is straightforward. We should note that interpretable and disentangled codes are important in building intelligent agents.
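To recap why the β-VAE is straightforward to build: it keeps the standard VAE loss and simply weights the KL divergence term by a constant β > 1, which pressures the latent codes toward disentanglement (β = 1 recovers the plain VAE). A minimal NumPy sketch of the per-sample loss, where the function name and the illustrative β = 4 are our own choices rather than anything fixed by this chapter:

```python
import numpy as np

def beta_vae_loss(x, x_recon, z_mean, z_log_var, beta=4.0):
    """Per-sample beta-VAE loss: reconstruction + beta * KL.

    beta > 1 encourages disentangled latent codes; beta = 1
    recovers the standard VAE loss. beta = 4.0 is illustrative.
    """
    # Reconstruction term: squared error summed over features
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    # KL divergence between q(z|x) = N(z_mean, exp(z_log_var)) and N(0, I)
    kl = -0.5 * np.sum(1.0 + z_log_var - z_mean ** 2 - np.exp(z_log_var),
                       axis=-1)
    return recon + beta * kl

# Perfect reconstruction with a standard-normal posterior gives zero loss
x = np.array([[0.0, 1.0]])
loss = beta_vae_loss(x, x, np.zeros((1, 2)), np.zeros((1, 2)))
```

In a Keras model, the same change amounts to multiplying the KL part of the VAE loss by β before adding it to the reconstruction loss; no architectural change is needed.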
In the next chapter, we're going to focus on reinforcement learning. Without any prior data, an agent learns by interacting with its environment. We'll discuss how the agent can be rewarded for correct actions and penalized for wrong ones.