So, what did you pick up in this chapter? Let's use this final recipe as a recap of why the GAN structure is so powerful and what makes it a useful tool for your future research.
Understanding the benefits of a GAN structure
How to do it...
As a recap, we start with three key questions:
- Are GANs all the same architecture?
- Are there any new concepts within the GAN architecture?
- How do we practically construct the GAN architecture?
We'll also review the key takeaways from this chapter.
How it works...
Let's address these three key questions:
- Are GANs all the same architecture?
- GANs come in all shapes and sizes, from simple implementations to complex ones. The right choice depends on the domain you are working in and the fidelity you need in the generated output.
- Are there any new concepts within the GAN architecture?
- GANs rely heavily on advances in deep neural networks. The novel part of a GAN lies in its architecture and in the adversarial training of two (or more) neural networks against each other.
- How do we practically construct the GAN architecture?
- The generator, discriminator, and their associated loss functions are the fundamental building blocks that we'll draw on in every chapter to build these models.
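To make these building blocks concrete, here is a minimal, framework-free sketch of the generator, the discriminator, and the two adversarial loss functions. The single-weight networks and the 1-D "real data" distribution are illustrative assumptions for this sketch, not the models built later in the book:

```python
# A minimal sketch of the three GAN building blocks: generator,
# discriminator, and the loss functions that pit them against each other.
# Layer sizes and the "real" data distribution are illustrative assumptions.
import math
import random

def generator(z, w, b):
    # Maps a latent noise sample z to a fake data point (here, one scalar).
    return w * z + b

def discriminator(x, w, b):
    # Outputs the probability that x came from the real data distribution.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy: rewards scoring real samples near 1
    # and fake samples near 0.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # Non-saturating generator loss: rewards fooling the discriminator
    # into scoring fakes near 1.
    return -math.log(d_fake)

# One illustrative forward pass of the adversarial setup:
z = random.gauss(0.0, 1.0)            # latent noise
fake = generator(z, w=0.5, b=0.0)     # generator's sample
real = random.gauss(4.0, 1.0)         # a "real" data point (assumed)
d_loss = discriminator_loss(discriminator(real, 1.0, -2.0),
                            discriminator(fake, 1.0, -2.0))
g_loss = generator_loss(discriminator(fake, 1.0, -2.0))
```

In a full recipe, each loss would be backpropagated through its own network in alternating steps; this sketch only shows how the pieces fit together at the forward pass.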
What are the key things to remember from this chapter?
- The initial GAN paper was only the beginning of a movement within the machine learning space
- The generator and discriminator are neural networks in a unique training configuration
- The loss functions are critical to ensuring that the architecture can converge during training
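On that last point, the original GAN paper gives us a concrete convergence target: at the theoretical equilibrium, the discriminator cannot tell real from fake and outputs 0.5 for every sample, so its cross-entropy loss plateaus at 2·ln(2) ≈ 1.386. A quick check of that value, assuming the standard loss formulation:

```python
# At the theoretical GAN equilibrium, the discriminator outputs 0.5 for
# both real and fake samples, and its loss settles at 2 * ln(2).
# Watching for this plateau is one practical convergence signal.
import math

def discriminator_loss(d_real, d_fake):
    # Standard binary cross-entropy discriminator loss.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

loss_at_equilibrium = discriminator_loss(0.5, 0.5)
```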