Hands-On Natural Language Processing with PyTorch 1.x
Build smart, AI-driven linguistic applications using deep learning and NLP techniques

Product type: Paperback
Published in: Jul 2020
Publisher: Packt
ISBN-13: 9781789802740
Length: 276 pages
Edition: 1st Edition
Author: Thomas Dop
Table of Contents (14)

Preface
Section 1: Essentials of PyTorch 1.x for NLP
Chapter 1: Fundamentals of Machine Learning and Deep Learning (free chapter)
Chapter 2: Getting Started with PyTorch 1.x for NLP
Section 2: Fundamentals of Natural Language Processing
Chapter 3: NLP and Text Embeddings
Chapter 4: Text Preprocessing, Stemming, and Lemmatization
Section 3: Real-World NLP Applications Using PyTorch 1.x
Chapter 5: Recurrent Neural Networks and Sentiment Analysis
Chapter 6: Convolutional Neural Networks for Text Classification
Chapter 7: Text Translation Using Sequence-to-Sequence Neural Networks
Chapter 8: Building a Chatbot Using Attention-Based Neural Networks
Chapter 9: The Road Ahead
Other Books You May Enjoy

Neural networks

In our previous examples, we discussed mainly regressions of the form y = β0 + β1x. We also touched on using polynomials to fit more complex equations, such as y = β0 + β1x + β2x². However, as we add more features to our model, deciding when to use a transformation of an original feature becomes a matter of trial and error. Using neural networks, we are able to fit a much more complex function, y = f(X), to our data, without the need to engineer or transform our existing features.
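To make the trial-and-error point concrete, the following sketch (using NumPy, with illustrative data not taken from the book) fits both a straight line and a quadratic to the same data. The quadratic fit only works because we chose the x² transformation by hand, which is exactly the manual step a neural network lets us avoid:

```python
import numpy as np

# Illustrative data generated from a known quadratic relationship
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + 2.0 * x + 0.5 * x ** 2

# Linear fit, y ~ b0 + b1*x: underfits, since it cannot capture curvature
linear_coeffs = np.polyfit(x, y, deg=1)

# Quadratic fit, y ~ b0 + b1*x + b2*x^2: works, but only because we
# engineered the x^2 feature ourselves
quad_coeffs = np.polyfit(x, y, deg=2)

print(quad_coeffs)  # coefficients, highest order first
```

With more features, trying every candidate transformation this way quickly becomes impractical, which motivates learning y = f(X) directly.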

Structure of neural networks

Learning the optimal values of our parameters, θ, that minimized the loss in our regressions is effectively the same as training a one-layer neural network:

Figure 1.10 – One-layer neural network

Here, we take each of our features, x1, x2, …, xn, as an input, illustrated here by a node. We wish to learn the parameters, θ1, θ2, …, θn, which are represented as connections in this diagram. The sum of all the products between x and θ then gives us our final prediction, y:

y = θ1x1 + θ2x2 + … + θnxn
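This sum of products can be sketched directly in PyTorch. The snippet below is illustrative (the layer sizes and values are not from the book): a single `nn.Linear` layer with no bias and no activation computes exactly the weighted sum of the inputs.

```python
import torch
import torch.nn as nn

# A one-layer network with no bias and no activation is exactly
# the weighted sum y = theta_1*x_1 + theta_2*x_2 + theta_3*x_3
layer = nn.Linear(in_features=3, out_features=1, bias=False)

x = torch.tensor([[1.0, 2.0, 3.0]])  # one sample with three features
y = layer(x)                          # the network's prediction

# The same prediction, computed manually as a sum of products
manual = (layer.weight * x).sum()
assert torch.allclose(y.squeeze(), manual)
```

Deeper networks differ only in stacking more such layers, with non-linear activations in between.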

A neural network...
