4. Conclusion
In this chapter, we've covered the principles of VAEs. They bear a resemblance to GANs in that both attempt to create synthetic outputs from a latent space. However, VAE networks are much simpler and easier to train than GANs. We've also seen how CVAE and β-VAE are similar in concept to conditional GANs and disentangled-representation GANs, respectively.
VAEs have an intrinsic mechanism for disentangling the latent vectors, so building a β-VAE is straightforward. This matters because interpretable and disentangled codes are important in building intelligent agents.
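To make the β-VAE claim concrete, here is a minimal sketch of its loss function, assuming a mean-squared-error reconstruction term and a diagonal Gaussian posterior; the function and argument names are illustrative, not from the chapter's code. Setting `beta = 1` recovers the plain VAE loss, while `beta > 1` is the only change needed to obtain a β-VAE.

```python
import numpy as np

def beta_vae_loss(x, x_recon, z_mean, z_log_var, beta=4.0):
    """Per-sample beta-VAE loss (illustrative sketch).

    x, x_recon    : input and reconstruction, shape (batch, features)
    z_mean,
    z_log_var     : parameters of q(z|x) = N(z_mean, exp(z_log_var))
    beta          : KL weight; beta = 1 gives the standard VAE loss
    """
    # Reconstruction term: mean squared error per sample.
    recon = np.mean((x - x_recon) ** 2, axis=-1)
    # KL divergence between the Gaussian posterior and the N(0, I) prior.
    kl = -0.5 * np.sum(1 + z_log_var - z_mean ** 2 - np.exp(z_log_var),
                       axis=-1)
    # beta > 1 pushes the posterior toward the isotropic prior,
    # encouraging each latent dimension to capture an independent factor.
    return recon + beta * kl
```

The single hyperparameter `beta` is all that distinguishes the two models, which is why the chapter describes the extension as straightforward.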
In the next chapter, we're going to focus on reinforcement learning. Without any prior data, an agent learns by interacting with the world around it. We'll discuss how the agent can be rewarded for correct actions and punished for wrong ones.