Deep Learning with PyTorch Lightning

You're reading from: Deep Learning with PyTorch Lightning — Swiftly build high-performance Artificial Intelligence (AI) models using Python

Product type: Paperback
Published: Apr 2022
Publisher: Packt
ISBN-13: 9781800561618
Length: 366 pages
Edition: 1st Edition
Authors (2): Dheeraj Arremsetty, Kunal Sawarkar
Table of Contents (15)

Preface
1. Section 1: Kickstarting with PyTorch Lightning
2. Chapter 1: PyTorch Lightning Adventure (Free Chapter)
3. Chapter 2: Getting off the Ground with the First Deep Learning Model
4. Chapter 3: Transfer Learning Using Pre-Trained Models
5. Chapter 4: Ready-to-Cook Models from Lightning Flash
6. Section 2: Solving using PyTorch Lightning
7. Chapter 5: Time Series Models
8. Chapter 6: Deep Generative Models
9. Chapter 7: Semi-Supervised Learning
10. Chapter 8: Self-Supervised Learning
11. Section 3: Advanced Topics
12. Chapter 9: Deploying and Scoring Models
13. Chapter 10: Scaling and Managing Training
14. Other Books You May Enjoy

Text classification using BERT transformers

BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based machine learning technique for Natural Language Processing (NLP) developed by Google, created and published in 2018 by Jacob Devlin and his colleagues. Before BERT, language tasks were commonly tackled with supervised sequence models such as Recurrent Neural Networks (RNNs). BERT was among the first unsupervised approaches to pre-training language models, and it achieved state-of-the-art performance on many NLP tasks. The large BERT model consists of 24 encoder layers, each with 16 bi-directional attention heads. It was trained on the BookCorpus dataset and English Wikipedia entries, totaling about 3,000,000,000 words, and was later extended to over 100 languages. Using pre-trained BERT models, we can perform several tasks on text, such as classification, information extraction, question answering, summarization, translation, and text generation.

Figure 3.7 – BERT architecture diagram (Image credit...)
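
To make this concrete, here is a minimal sketch (not this chapter's exact code) of text classification with a pre-trained BERT model wrapped in a PyTorch Lightning module. It assumes the Hugging Face transformers library; the model name bert-base-uncased, the two-label setup, the learning rate, and the sample sentences are illustrative assumptions rather than values from the book.

import torch
import pytorch_lightning as pl
from transformers import BertForSequenceClassification, BertTokenizer

class BertTextClassifier(pl.LightningModule):
    """Pre-trained BERT with a classification head, as a Lightning module."""

    def __init__(self, num_labels=2, lr=2e-5):  # hyperparameters are illustrative
        super().__init__()
        # Loads pre-trained BERT weights; the classification head on top
        # is newly initialized and learned during fine-tuning
        self.model = BertForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=num_labels)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # When `labels` are present in the batch, the model also returns the loss
        outputs = self.model(**batch)
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# Tokenize raw text into the input IDs and attention masks BERT expects
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["A gripping read.", "Not worth the time."],
                  padding=True, truncation=True, return_tensors="pt")
batch["labels"] = torch.tensor([1, 0])  # hypothetical labels: 1 = positive, 0 = negative

model = BertTextClassifier()
# Training would then be driven by Lightning on a DataLoader yielding such
# batches, e.g.: pl.Trainer(max_epochs=3).fit(model, train_dataloader)

Because BertForSequenceClassification already bundles the pre-trained encoder with a classification head, the Lightning module only has to define the training step and optimizer; this is the transfer-learning pattern the chapter builds on.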
