Conclusion
In this chapter, we discussed CycleGAN, an algorithm for image translation in which the source and target data are not necessarily aligned. We demonstrated two examples, grayscale ↔ color and MNIST ↔ SVHN, though there are many other image translations that CycleGAN can perform.
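The property that lets CycleGAN train on unaligned data is cycle consistency: translating an image to the target domain and back should recover the original. The following is a minimal numeric sketch of that idea; the "generators" G and F here are toy invertible functions standing in for the trained networks, and mae is the mean absolute error commonly used for the cycle-consistency loss:

```python
import numpy as np

def mae(a, b):
    # Mean absolute error, a common choice for the cycle-consistency loss
    return float(np.mean(np.abs(a - b)))

# Stand-in "generators": G maps source -> target, F maps target -> source.
# In CycleGAN these are trained CNNs; here they are toy inverse maps.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0

x = np.random.rand(4, 8, 8, 1)          # a batch of "source" images
forward_cycle_loss = mae(F(G(x)), x)    # x -> G(x) -> F(G(x)) should equal x

y = np.random.rand(4, 8, 8, 1)          # a batch of "target" images
backward_cycle_loss = mae(G(F(y)), y)   # y -> F(y) -> G(F(y)) should equal y

print(forward_cycle_loss, backward_cycle_loss)  # both near zero for inverse maps
```

During actual training, these two loss terms are added to the adversarial losses so that the generators learn mappings that are both realistic in the target domain and consistent with the input.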
In the next chapter, we'll turn to another type of generative model: Variational AutoEncoders (VAEs). VAEs share the objective of learning how to generate new images (data), but they focus on learning a latent vector modeled as a Gaussian distribution. We'll also demonstrate other similarities with the problems addressed by GANs, in the form of conditional VAEs and the disentangling of latent representations in VAEs.