You may have noticed that, when training pix2pix, we need to specify a direction (AtoB or BtoA) in which the images are translated. Does this mean that, if we want to freely translate from image set A to image set B and vice versa, we need to train two models separately? Not with CycleGAN, we say!
CycleGAN was proposed by Jun-Yan Zhu, Taesung Park, Phillip Isola, et al. in their paper, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. It is a bidirectional generative model trained on unpaired image collections. The core idea of CycleGAN is built on the assumption of cycle consistency: if we have two generative models, G and F, that translate between two sets of images, X and Y, such that Y=G(X) and X=F(Y), we can naturally assume that F(G(X)) should be very close to X itself (and, likewise, G(F(Y)) close to Y).
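As a minimal sketch of this idea, the cycle-consistency term can be expressed as an L1 reconstruction loss between an image and its round-trip translation. The toy scalar "images" and generator callables below are illustrative stand-ins, not the paper's convolutional networks:

```python
def l1_loss(a, b):
    """Mean absolute difference between two equal-length sequences."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(x, G, F):
    """Translate X -> Y with G, back to X with F, and measure how
    far the reconstruction F(G(x)) drifted from the original x."""
    return l1_loss([F(G(p)) for p in x], x)

# Toy generators: G doubles each pixel, F halves it, so the
# round trip is exact and the cycle loss is zero.
G = lambda p: 2 * p
F = lambda p: p / 2
print(cycle_consistency_loss([0.1, 0.5, 0.9], G, F))  # → 0.0
```

In the full model this term is computed in both directions, F(G(X)) vs. X and G(F(Y)) vs. Y, and added (with a weighting factor) to the usual adversarial losses, which is what lets a single training run learn both translation directions at once.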