Generative Adversarial Networks Cookbook
Over 100 recipes to build generative models using Python, TensorFlow, and Keras

Product type: Paperback
Published: Dec 2018
Publisher: Packt
ISBN-13: 9781789139907
Length: 268 pages
Edition: 1st Edition
Author: Josh Kalin
Table of Contents (10 chapters)

Preface
1. What Is a Generative Adversarial Network? (Free Chapter)
2. Data First, Easy Environment, and Data Prep
3. My First GAN in Under 100 Lines
4. Dreaming of New Outdoor Structures Using DCGAN
5. Pix2Pix Image-to-Image Translation
6. Style Transfering Your Image Using CycleGAN
7. Using Simulated Images To Create Photo-Realistic Eyeballs with SimGAN
8. From Image to 3D Models Using GANs
9. Other Books You May Enjoy

GAN pieces come together in different ways

We have explored a few simple GAN structures so far; in this book, we are going to look at seven different styles of GAN. The important thing to realize about the majority of these papers is that the changes occur in the generator and the loss functions.

How to do it...

The generator produces the images or other output, and the loss functions drive the training process toward different optimization goals. In practice, what types of variation will there be? Glad you asked. Let's take a brief look at the different architectures.
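
Before we get there, it helps to see the two pieces that the variations revolve around in code. The following is a minimal sketch, not a recipe from this book: a tiny Keras generator that maps noise to an image, and the adversarial (binary cross-entropy) loss that scores its output against the discriminator. The layer sizes and the 28x28x1 output shape are illustrative assumptions:

import numpy as np
from tensorflow.keras import layers, models, losses

def build_generator(latent_dim=100):
    # Maps a latent noise vector to a 28x28 grayscale image in [-1, 1].
    return models.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(128, activation='relu'),
        layers.Dense(28 * 28, activation='tanh'),
        layers.Reshape((28, 28, 1)),
    ])

generator = build_generator()

# The generator produces images from random noise...
noise = np.random.normal(0, 1, (16, 100)).astype('float32')
fake_images = generator.predict(noise)

# ...and the adversarial loss drives training: the generator improves by
# making the discriminator label its fakes as real (label 1).
adversarial_loss = losses.BinaryCrossentropy()

Swapping out the generator body or the loss function, or adding more loss terms, is exactly the kind of change that distinguishes the architectures discussed next.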

How it works...

Let's discuss the simplest concept to understand with GANs: style transfer. This methodology manifests itself in many different variations, but one of the things I find fascinating is that the architecture of the GAN needs to change based on the specific type of transfer that needs to occur. For instance, one of the papers coming out of Adobe Research Labs focuses on makeup application and removal. Can you apply the same style of makeup seen in one photo to a photo of another person? The architecture has to be rather advanced to make this happen in a realistic fashion, as the architecture diagram shows:

This particular architecture is one of the most advanced to date: there are five separate loss functions! One of the interesting things about this architecture is that it learns a makeup-application and a makeup-removal function simultaneously. Once the GAN understands how to apply the makeup, it already has a source image from which to learn to remove it. Along with the five loss functions, the generator is quite distinctive in its construction, as shown in the following diagram:
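
In code, juggling several loss terms usually comes down to a weighted sum that the optimizer minimizes when updating the generator. Here is a hedged sketch of that idea; the term names and weights are illustrative assumptions, not the values used in the makeup paper:

import tensorflow as tf

def total_generator_loss(adv_loss, cycle_loss, identity_loss,
                         style_loss, perceptual_loss,
                         weights=(1.0, 10.0, 5.0, 1.0, 1.0)):
    # Each argument is a scalar loss tensor; the weighted sum is the single
    # value the optimizer minimizes for the generator.
    terms = [adv_loss, cycle_loss, identity_loss, style_loss, perceptual_loss]
    return tf.add_n([w * t for w, t in zip(weights, terms)])

The weights matter as much as the terms themselves: they decide how strongly realism, content preservation, and style fidelity each pull on the generator during training.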

So, why does this even matter? One of the recipes we are going to cover is style transfer, and you'll see during that particular recipe that our GAN model won't be this advanced. Why is that? Producing a realistic application of makeup takes additional loss functions to tune the model well enough to fool the discriminator. In the case of transferring a painter's style, it is easier to transfer one uniform style than multiple disparate makeup styles, like those you would see in the preceding data distribution.
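
For the painter-style case, much of the extra machinery reduces to a single cycle-consistency term: an image translated into the target style and back again should match the original. Here is a minimal sketch of that loss, assuming both images are tensors scaled to the same range; the function name and the lambda weighting are placeholders rather than the recipe's exact code:

import tensorflow as tf

def cycle_consistency_loss(real_image, reconstructed_image, lam=10.0):
    # L1 distance between the original image and its round-trip
    # reconstruction (source -> target style -> back to source),
    # scaled by a weighting factor lambda.
    return lam * tf.reduce_mean(tf.abs(real_image - reconstructed_image))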
