Hands-On Generative Adversarial Networks with PyTorch 1.x

You're reading from Hands-On Generative Adversarial Networks with PyTorch 1.x: Implement next-generation neural networks to build powerful GAN models using Python

Product type: Paperback
Published: Dec 2019
Publisher: Packt
ISBN-13: 9781789530513
Length: 312 pages
Edition: 1st Edition
Authors (2): John Hany, Greg Walters
Table of Contents (15)

Preface
1. Section 1: Introduction to GANs and PyTorch
2. Generative Adversarial Networks Fundamentals
3. Getting Started with PyTorch 1.3
4. Best Practices for Model Design and Training
5. Section 2: Typical GAN Models for Image Synthesis
6. Building Your First GAN with PyTorch
7. Generating Images Based on Label Information
8. Image-to-Image Translation and Its Applications
9. Image Restoration with GANs
10. Training Your GANs to Break Different Models
11. Image Generation from Description Text
12. Sequence Synthesis with GANs
13. Reconstructing 3D models with GANs
14. Other Books You May Enjoy

Text generation via SeqGAN – teaching GANs how to tell jokes

In the previous chapter, we learned how to generate high-quality images based on description text with GANs. Now, we will move on and look at sequential data synthesis, such as text and audio, using various GAN models.

When it comes to generating text, the biggest difference compared to image generation is that text data is discrete, whereas image pixel values are effectively continuous, even though digital images and text are both stored as discrete values. A pixel typically takes one of 256 values, and slight changes in pixel values won't necessarily affect the image's meaning to us. However, a slight change in a sentence – even a single letter (for example, turning we into he) – may change the whole meaning of the sentence. Also, we tend to have a higher tolerance for flaws in synthesized images than in synthesized text...
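
The practical consequence of this discreteness is that the usual GAN training signal breaks down: picking a concrete word from the generator's output distribution is a non-differentiable step, so the discriminator's feedback cannot flow back into the generator the way it does for continuous pixel values. The following minimal sketch (illustrative toy tensors only, not the book's code) demonstrates this in PyTorch:

```python
import torch
import torch.nn.functional as F

vocab_size = 10

# Continuous case (images): the generator's output itself is differentiable,
# so a loss computed from it produces gradients for the generator parameters.
logits_img = torch.randn(1, vocab_size, requires_grad=True)
continuous_out = torch.tanh(logits_img)
continuous_out.sum().backward()
print(logits_img.grad is not None)   # True: gradients flow back

# Discrete case (text): sampling a token index is non-differentiable,
# so the gradient chain from any downstream discriminator loss is cut.
logits_txt = torch.randn(1, vocab_size, requires_grad=True)
probs = F.softmax(logits_txt, dim=-1)
token = torch.multinomial(probs, num_samples=1)  # sampled word index
print(token.requires_grad)           # False: no grad_fn on the sampled token
```

SeqGAN sidesteps this by treating text generation as a sequential decision process: the discriminator's judgment is used as a reward, and the generator is updated with policy gradients rather than by backpropagating through the sampled tokens directly.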
