Getting Started with Google BERT

Getting Started with Google BERT: Build and train state-of-the-art natural language processing models using BERT

By Sudharsan Ravichandiran
Rating: 4.2 (50 Ratings)
Paperback | Jan 2021 | 352 pages | 1st Edition
eBook: zł59.99 (list price zł141.99)
Paperback: zł177.99
Subscription: Free Trial

What do you get with a Packt Subscription?

Free for the first 7 days. $19.99 p/m after that. Cancel any time!
• Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
• 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
• Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
• Thousands of reference materials covering every tech concept you need to stay up to date.
Subscribe now
View plans & pricing

Getting Started with Google BERT

A Primer on Transformers

The transformer is one of the most popular state-of-the-art deep learning architectures, used mostly for natural language processing (NLP) tasks. Since its advent, the transformer has replaced RNNs and LSTMs in a variety of tasks. Several new NLP models, such as BERT, GPT, and T5, are based on the transformer architecture. In this chapter, we will look at the transformer in detail and understand how it works.

We will begin the chapter by getting a basic idea of the transformer. Then, we will learn how the transformer uses an encoder-decoder architecture for a language translation task. Following this, we will inspect how the encoder of the transformer works in detail by exploring each of its components. After understanding the encoder, we will dive deep into the decoder and look at each of its components in detail. At the...

Introduction to the transformer

RNN and LSTM networks are widely used in sequential tasks such as next-word prediction, machine translation, text generation, and more. However, one of the major challenges with recurrent models is capturing long-term dependencies.

To overcome this limitation of RNNs, a new architecture called the transformer was introduced in the paper Attention Is All You Need. The transformer is currently the state-of-the-art model for several NLP tasks. Its advent marked a major breakthrough in the field of NLP and paved the way for revolutionary new architectures such as BERT, GPT-3, T5, and more.

The transformer model is based entirely on the attention mechanism and gets rid of recurrence completely. It uses a special type of attention mechanism called self-attention. We will learn about this in detail in the upcoming sections.
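
To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention. The book's examples use Python, so the sketch below does too; PyTorch, the tensor names, and the dimensions are our own illustrative assumptions, not code from this chapter:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sentence.

    x: (seq_len, d_model) word embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q = x @ w_q                    # query matrix
    k = x @ w_k                    # key matrix
    v = x @ w_v                    # value matrix
    d_k = q.size(-1)
    scores = q @ k.T / d_k ** 0.5  # how strongly each word attends to every word
    weights = F.softmax(scores, dim=-1)
    return weights @ v             # attention-weighted word representations

x = torch.randn(5, 512)            # a 5-word sentence, d_model = 512
w_q, w_k, w_v = (torch.randn(512, 64) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 64])
```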

Let's understand how the transformer works with a language translation task. The transformer...

Understanding the encoder of the transformer

The transformer consists of a stack of N encoders. The output of one encoder is sent as input to the encoder above it. As shown in the following figure, we have a stack of N encoders, and each encoder sends its output to the encoder above it. The final encoder returns the representation of the given source sentence as output. We feed the source sentence as input to the encoder and get the representation of the source sentence as output:

Figure 1.2 – A stack of N encoders

Note that in the transformer paper Attention Is All You Need, the authors use N = 6, meaning that they stack up six encoders one above the other. However, we can try out different values of N. For simplicity and better understanding, let's keep N = 2:

Figure 1.3 – A stack of encoders

Okay, so the question is: how exactly does the encoder work? How does it generate the representation for the given source sentence (input sentence)...
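
As a rough illustration of such a stack, the sketch below wires up N = 2 encoder blocks using PyTorch's built-in modules; the hyperparameters and shapes are our own assumptions, not the book's code:

```python
import torch
import torch.nn as nn

# One encoder block: multi-head self-attention + feedforward sublayers
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)

# Stack N = 2 encoders; each feeds its output to the one above it
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

src = torch.randn(7, 1, 512)   # (seq_len, batch, d_model): a 7-word source sentence
representation = encoder(src)  # the final encoder's output is the sentence representation
print(representation.shape)    # torch.Size([7, 1, 512])
```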

Understanding the decoder of a transformer

Suppose we want to translate the English sentence (source sentence) I am good to the French sentence (target sentence) Je vais bien. To perform this translation, we feed the source sentence I am good to the encoder. The encoder learns the representation of the source sentence. In the previous section, we learned how exactly the encoder learns the representation of the source sentence. Now, we take this encoder's representation and feed it to the decoder. The decoder takes the encoder representation as input and generates the target sentence Je vais bien, as shown in the following figure:

Figure 1.35 – Encoder and decoder of the transformer

In the encoder section, we learned that, instead of having one encoder, we can have a stack of N encoders. Similar to the encoder, we can also have a stack of N decoders. For simplicity, let's set N = 2. As shown in the following figure, the output of one decoder is sent as the input to the decoder...
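
A comparable sketch of a two-decoder stack, again using PyTorch's built-in modules; the random tensors stand in for the target embeddings and the encoder's output, and all shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

# One decoder block: masked self-attention, encoder-decoder attention, feedforward
decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)  # N = 2 decoders

memory = torch.randn(3, 1, 512)  # encoder's representation of "I am good" (3 words)
tgt = torch.randn(4, 1, 512)     # embeddings of the shifted target "<sos> Je vais bien"
out = decoder(tgt, memory)       # every decoder layer attends to the encoder representation
print(out.shape)                 # torch.Size([4, 1, 512])
```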

Putting the encoder and decoder together

To give more clarity, the complete transformer architecture with the encoder and decoder is shown in the following figure:

Figure 1.63 – Encoder and decoder of the transformer

In the preceding figure, N× denotes that we can stack N encoders and decoders. As we can observe, once we feed in the input sentence (source sentence), the encoder learns its representation and sends the representation to the decoder, which in turn generates the output sentence (target sentence).
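
PyTorch bundles this whole encoder-decoder arrangement into a single module, which gives a compact way to see the two halves connected; as before, the dimensions are illustrative assumptions rather than the book's code:

```python
import torch
import torch.nn as nn

# Full transformer: a stack of encoders feeding a stack of decoders
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.randn(3, 1, 512)  # source sentence embeddings ("I am good")
tgt = torch.randn(4, 1, 512)  # target sentence embeddings ("<sos> Je vais bien")
out = model(src, tgt)         # one output vector per target position
print(out.shape)              # torch.Size([4, 1, 512])
```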

Training the transformer

We can train the transformer network by minimizing the loss function. Okay, but what loss function should we use? We learned that the decoder predicts the probability distribution over the vocabulary and we select the word that has the highest probability as output. So, we have to minimize the difference between the predicted probability distribution and the actual probability distribution. First, how can we find the difference between the two distributions? We can use cross-entropy for that. Thus, we define our loss function as the cross-entropy loss and train the network by minimizing it, using Adam as the optimizer.

One additional point to note is that, to prevent overfitting, we apply dropout to the output of each sublayer, and we also apply dropout to the sum of the embeddings and the positional encoding.
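
Putting the training objective into code, here is a minimal sketch with made-up shapes and a hypothetical vocabulary size of 10,000; note that nn.Transformer's dropout argument already applies dropout to each sublayer's output, as described above:

```python
import torch
import torch.nn as nn

vocab_size = 10_000
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=2, num_decoder_layers=2,
                       dropout=0.1)               # dropout on each sublayer output
to_vocab = nn.Linear(512, vocab_size)             # projects decoder output to vocabulary logits
criterion = nn.CrossEntropyLoss()                 # predicted vs. actual distribution
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

src = torch.randn(3, 1, 512)                      # source sentence embeddings
tgt_in = torch.randn(4, 1, 512)                   # shifted target embeddings
tgt_out = torch.randint(0, vocab_size, (4,))      # actual next-word indices

logits = to_vocab(model(src, tgt_in)).squeeze(1)  # (4, vocab_size)
loss = criterion(logits, tgt_out)                 # cross-entropy loss
loss.backward()
optimizer.step()                                  # Adam update step
```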

Thus, in this chapter, we learned...

Summary

We started off the chapter by understanding what the transformer model is and how it uses encoder-decoder architecture. We looked into the encoder section of the transformer and learned about different sublayers used in encoders, such as multi-head attention and feedforward networks.

We learned that the self-attention mechanism relates a word to all the words in the sentence to better understand the word. To compute self-attention, we used three different matrices, called the query, key, and value matrices. Following this, we learned how to compute positional encoding and how it is used to capture the word order in a sentence. Next, we learned how the feedforward network works in the encoder and then we explored the add and norm component.
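
As a refresher on the positional encoding mentioned above, the sketch below computes the sinusoidal encoding from Attention Is All You Need; it is our own illustration rather than the chapter's code:

```python
import torch

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding:
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    """
    pos = torch.arange(seq_len, dtype=torch.float).unsqueeze(1)  # (seq_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float)           # even dimensions
    angles = pos / (10_000 ** (i / d_model))                     # (seq_len, d_model/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe  # added to the word embeddings to inject word order

print(positional_encoding(5, 512).shape)  # torch.Size([5, 512])
```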

After understanding the encoder, we looked at how the decoder works. We explored the three sublayers used in the decoder in detail: masked multi-head attention, encoder-decoder attention, and the feedforward network. Following this...

Questions

Let's put our newly acquired knowledge to the test. Try answering the following questions:

  1. What are the steps involved in the self-attention mechanism?
  2. What is scaled dot product attention?
  3. How do we create the query, key, and value matrices?
  4. Why do we need positional encoding?
  5. What are the sublayers of the decoder?
  6. What are the inputs to the encoder-decoder attention layer of the decoder?

Further reading


Key benefits

  • Explore the encoder and decoder of the transformer model
  • Become well-versed with BERT along with ALBERT, RoBERTa, and DistilBERT
  • Discover how to pre-train and fine-tune BERT models for several NLP tasks

Description

BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. Starting with a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through MBERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed in using BERT and its variants for performing practical NLP tasks.

Who is this book for?

This book is for NLP professionals and data scientists looking to simplify NLP tasks to enable efficient language understanding using BERT. A basic understanding of NLP concepts and deep learning is required to get the best out of this book.

What you will learn

  • Understand the transformer model from the ground up
  • Find out how BERT works and pre-train it using masked language model (MLM) and next sentence prediction (NSP) tasks
  • Get hands-on with BERT by learning to generate contextual word and sentence embeddings
  • Fine-tune BERT for downstream tasks
  • Get to grips with ALBERT, RoBERTa, ELECTRA, and SpanBERT models
  • Get the hang of the BERT models based on knowledge distillation
  • Understand cross-lingual models such as XLM and XLM-R
  • Explore Sentence-BERT, VideoBERT, and BART

Product Details

Publication date : Jan 22, 2021
Length : 352 pages
Edition : 1st
Language : English
ISBN-13 : 9781838821593
Vendor : Google


Packt Subscriptions

See our plans and pricing

$19.99 billed monthly
• Unlimited access to Packt's library of 7,000+ practical books and videos
• Constantly refreshed with 50+ new titles a month
• Exclusive early access to books as they're written
• Solve problems while you work with advanced search and reference features
• Offline reading on the mobile app
• Simple pricing, no contract

$199.99 billed annually
• Unlimited access to Packt's library of 7,000+ practical books and videos
• Constantly refreshed with 50+ new titles a month
• Exclusive early access to books as they're written
• Solve problems while you work with advanced search and reference features
• Offline reading on the mobile app
• Choose a DRM-free eBook or Video every month to keep
• PLUS own as many other DRM-free eBooks or Videos as you like for just zł20 each
• Exclusive print discounts

$279.99 billed in 18 months
• Unlimited access to Packt's library of 7,000+ practical books and videos
• Constantly refreshed with 50+ new titles a month
• Exclusive early access to books as they're written
• Solve problems while you work with advanced search and reference features
• Offline reading on the mobile app
• Choose a DRM-free eBook or Video every month to keep
• PLUS own as many other DRM-free eBooks or Videos as you like for just zł20 each
• Exclusive print discounts

Frequently bought together

Getting Started with Google BERT: zł177.99
Mastering Transformers: zł221.99
Transformers for Natural Language Processing: zł403.99
Total: zł803.97

Table of Contents

14 Chapters
Section 1 - Starting Off with BERT
A Primer on Transformers
Understanding the BERT Model
Getting Hands-On with BERT
Section 2 - Exploring BERT Variants
BERT Variants I - ALBERT, RoBERTa, ELECTRA, and SpanBERT
BERT Variants II - Based on Knowledge Distillation
Section 3 - Applications of BERT
Exploring BERTSUM for Text Summarization
Applying BERT to Other Languages
Exploring Sentence and Domain-Specific BERT
Working with VideoBERT, BART, and More
Assessments
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.2 (50 Ratings)
5 star: 72%
4 star: 4%
3 star: 6%
2 star: 8%
1 star: 10%

Amazon Customer, Feb 07, 2021 (5 stars)
The book acts as a great resource for learning BERT. It covers so many different types of BERT and helps you to learn how to apply BERT for interesting use cases. It’s a perfect getting started guide for BERT. The writing is so simple, clear, to the point. The way one topic connects to another is so interesting. I can’t close the book after reading one chapter, the book keeps you so engaging.
Amazon Verified review
ani, Feb 05, 2021 (5 stars)
The book explains transformers and BERT in very detail. I’m awestruck with the way the author have explained the concepts in a seamlessly simple way possible. I really loved the narrative style of the book.I can’t believe someone explained BERT with so much of in-depth detail. The book covers lot of things which I was never aware of and many different types of BERT like tinyBERT, ELECTRA, Multilingual BERT, XLM-R, and many others. If you are not getting this then definitely you are missing out a greatest content on BERT ever.
Amazon Verified review
Samuel de Zoete, May 25, 2021 (5 stars)
Makes BERT accessible for Data Scientists without a PhD. The explanations are clear and still enough depth that’s needed when start working with Transformers.
Amazon Verified review
aditya Karampudi, Feb 09, 2021 (5 stars)
The book starts off with subtle introduction to multiple key concepts and slowly builds on the core methodologies of building NLP based neural networks. Over the years, neural networks have gone through multiple transformations, and yet the application of these architectures is limited because of the nature of data. The language is the flow of words and these sentences do not always follow a structured approach. This makes it hard to train models that can be intelligent to understand the words. The BERT tries to tackle this issue by predicting from both directions- left to right and right to left. The author tried to use multiple examples to illustrate the way NN are modeled and I thoroughly enjoyed reading this book. I recommend this book for every NLP and Deep Learning enthusiast.
Amazon Verified review
Ashwini, Mar 05, 2021 (5 stars)
I liked this book about BERT. This is one of the great books on BERT that I have come across. The author takes detailed accounts of introduction and applications of BERT before explaining things in detail. I loved the fact that there are frameworks explaining how each and every topic works in BERT. The book is a bit mathy but great for people who want to understand things in details.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From here, you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, which is a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'my library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid subscription or an active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content, as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.