Hands-On Generative Adversarial Networks with PyTorch 1.x

CycleGAN – image-to-image translation from unpaired collections

You may have noticed that, when training pix2pix, we need to choose a direction (AtoB or BtoA) in which the images are translated. Does this mean that, if we want to freely translate from image set A to image set B and vice versa, we need to train two models separately? Not with CycleGAN, we say!

CycleGAN was proposed by Jun-Yan Zhu, Taesung Park, Phillip Isola, et al. in their paper, Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. It is a bidirectional generative model trained on unpaired image collections. The core idea of CycleGAN is built on the assumption of cycle consistency: if we have two generative models, G and F, that translate between two sets of images, X and Y, where Y = G(X) and X = F(Y), we can naturally assume that F(G(X)) should be very...
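
The following is a minimal PyTorch sketch of the cycle-consistency term described above. The generator modules G and F here are placeholders (any image-to-image generator, such as the ResNet-based one used later in this chapter, could stand in), and the batch tensors are random stand-ins for real samples from sets X and Y; only the loss computation itself reflects the idea being discussed.

import torch
import torch.nn as nn

# Placeholder generators: G translates X -> Y, F translates Y -> X.
# In practice these would be full image-to-image generator networks.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
F = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))

l1 = nn.L1Loss()
lambda_cyc = 10.0  # weight of the cycle-consistency term, as suggested in the paper

real_x = torch.randn(4, 3, 256, 256)  # a batch drawn from image set X (stand-in)
real_y = torch.randn(4, 3, 256, 256)  # a batch drawn from image set Y (stand-in)

fake_y = G(real_x)   # X -> Y
rec_x = F(fake_y)    # Y -> X: should reconstruct real_x
fake_x = F(real_y)   # Y -> X
rec_y = G(fake_x)    # X -> Y: should reconstruct real_y

# Cycle-consistency loss: F(G(x)) ~ x and G(F(y)) ~ y
cycle_loss = lambda_cyc * (l1(rec_x, real_x) + l1(rec_y, real_y))

In the full training loop, this term is added to the usual adversarial losses of both generator/discriminator pairs, so that each generator is penalized both for producing unrealistic outputs and for failing to map an image back to its original after a round trip.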
