Getting Started with Google BERT

Getting Started with Google BERT: Build and train state-of-the-art natural language processing models using BERT

By Sudharsan Ravichandiran
4.2 (50 Ratings)
eBook | Jan 2021 | 352 pages | 1st Edition

eBook: $20.98 (list price $29.99)
Paperback: $43.99
Subscription: Free trial, renews at $19.99/month

What do you get with an eBook?

  • Instant access to your digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM-free: read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning
Preview Book

Getting Started with Google BERT

A Primer on Transformers

The transformer is one of the most popular state-of-the-art deep learning architectures, used mostly for natural language processing (NLP) tasks. Ever since its advent, the transformer has replaced RNNs and LSTMs for various tasks. Several new NLP models, such as BERT, GPT, and T5, are based on the transformer architecture. In this chapter, we will look into the transformer in detail and understand how it works.

We will begin the chapter by getting a basic idea of the transformer. Then, we will learn how the transformer uses an encoder-decoder architecture for a language translation task. Following this, we will inspect how the encoder of the transformer works in detail by exploring each of its components. After understanding the encoder, we will take a deep dive into the decoder and look at each of its components in detail. At the...

Introduction to the transformer

RNN and LSTM networks are widely used in sequential tasks such as next-word prediction, machine translation, text generation, and more. However, one of the major challenges with recurrent models is capturing long-term dependencies.

To overcome this limitation of RNNs, a new architecture called the transformer was introduced in the paper Attention Is All You Need. The transformer is currently the state-of-the-art model for several NLP tasks. Its advent created a major breakthrough in the field of NLP and also paved the way for revolutionary new architectures such as BERT, GPT-3, T5, and more.

The transformer model is based entirely on the attention mechanism and completely gets rid of recurrence. The transformer uses a special type of attention mechanism called self-attention. We will learn about this in detail in the upcoming sections.
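
To make the idea concrete, here is a minimal NumPy sketch of self-attention (an illustration of the general mechanism, not code from the book): the input embeddings are projected into query, key, and value matrices, and each word's output is a weighted sum of the values, with the weights given by the scaled dot product of its query against every key:

```python
# Minimal self-attention sketch in NumPy (illustrative, not the book's code).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    # Project the input embeddings into query, key, and value matrices.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # scaled dot-product scores
    weights = softmax(scores, axis=-1)  # how much each word attends to the others
    return weights @ V                  # weighted sum of the value vectors

# Toy example: a 3-word sentence with embedding size 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (3, 4): one vector per word
```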

Let's understand how the transformer works with a language translation task. The transformer...

Understanding the encoder of the transformer

The transformer consists of a stack of N encoders. As shown in the following figure, the output of one encoder is sent as input to the encoder above it, and the final encoder returns the representation of the given source sentence as output. That is, we feed the source sentence as input to the encoder stack and get the representation of the source sentence as output:

Figure 1.2 – A stack of N encoders

Note that in the transformer paper Attention Is All You Need, the authors use N = 6, meaning that they stack up six encoders one above the other. However, we can try out different values of N. For simplicity and better understanding, let's keep N = 2:

Figure 1.3 – A stack of encoders
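
For readers who want to see this stacking in code, here is a short sketch using PyTorch's built-in encoder modules (the module choices are illustrative; the book explains the internals itself):

```python
# Sketch of a stack of N encoders using PyTorch's built-in modules.
import torch
import torch.nn as nn

d_model, n_heads, N = 512, 8, 2  # N = 2, as in Figure 1.3
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=N)

src = torch.rand(6, 1, d_model)   # (sequence length, batch, embedding size)
representation = encoder(src)     # output of the final (topmost) encoder
print(representation.shape)       # torch.Size([6, 1, 512])
```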

Okay, the question is how exactly does the encoder work? How is it generating the representation for the given source sentence (input sentence)...

Understanding the decoder of a transformer

Suppose we want to translate the English sentence (source sentence) I am good to the French sentence (target sentence) Je vais bien. To perform this translation, we feed the source sentence I am good to the encoder. The encoder learns the representation of the source sentence. In the previous section, we learned how exactly the encoder learns the representation of the source sentence. Now, we take this encoder's representation and feed it to the decoder. The decoder takes the encoder representation as input and generates the target sentence Je vais bien, as shown in the following figure:

Figure 1.35 – Encoder and decoder of the transformer

In the encoder section, we learned that, instead of having one encoder, we can have a stack of N encoders. Similarly, we can also have a stack of N decoders. For simplicity, let's set N = 2. As shown in the following figure, the output of one decoder is sent as the input to the decoder...
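
As a quick sketch of this stacking (again using PyTorch's built-in modules for illustration, not the book's code), the decoder stack consumes both the target-side inputs and the encoder's representation:

```python
# Sketch of a stack of N decoders (PyTorch built-ins, for illustration only).
import torch
import torch.nn as nn

d_model, n_heads, N = 512, 8, 2
decoder_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=n_heads)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=N)

memory = torch.rand(3, 1, d_model)  # encoder's representation of "I am good"
tgt = torch.rand(3, 1, d_model)     # target-side inputs for "Je vais bien"
out = decoder(tgt, memory)          # each decoder feeds the decoder above it
print(out.shape)                    # torch.Size([3, 1, 512])
```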

Putting the encoder and decoder together

To give more clarity, the complete transformer architecture with the encoder and decoder is shown in the following figure:

Figure 1.63 – Encoder and decoder of the transformer

In the preceding figure, Nx denotes that we can stack N encoders and decoders. As we can observe, once we feed in the input sentence (the source sentence), the encoder learns its representation and sends the representation to the decoder, which in turn generates the output sentence (the target sentence).
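
A compact way to see the two halves wired together is PyTorch's nn.Transformer, which stacks the given number of encoder and decoder layers and routes the encoder's representation into the decoder stack (shown here as an illustrative sketch):

```python
# Illustrative sketch: nn.Transformer wires an encoder stack to a decoder stack.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=2, num_decoder_layers=2)
src = torch.rand(3, 1, 512)  # source sentence embeddings
tgt = torch.rand(3, 1, 512)  # target sentence embeddings
out = model(src, tgt)        # decoder output, one vector per target position
print(out.shape)             # torch.Size([3, 1, 512])
```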

Training the transformer

We can train the transformer network by minimizing the loss function. Okay, but what loss function should we use? We learned that the decoder predicts the probability distribution over the vocabulary and we select the word that has the highest probability as output. So, we have to minimize the difference between the predicted probability distribution and the actual probability distribution. First, how can we find the difference between the two distributions? We can use cross-entropy for that. Thus, we can define our loss function as a cross-entropy loss and try to minimize the difference between the predicted and actual probability distribution. We train the network by minimizing the loss function and we use Adam as an optimizer.
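
Sketched in code under the same illustrative assumptions (PyTorch, random placeholder data), one training step projects the decoder output to vocabulary logits, computes the cross-entropy loss against the actual target words, and updates the parameters with Adam:

```python
# One illustrative training step (placeholder data, not the book's code).
import torch
import torch.nn as nn

vocab_size, d_model = 10000, 512
model = nn.Transformer(d_model=d_model, nhead=8,
                       num_encoder_layers=2, num_decoder_layers=2)
proj = nn.Linear(d_model, vocab_size)  # maps decoder output to vocabulary logits
optimizer = torch.optim.Adam(list(model.parameters()) + list(proj.parameters()),
                             lr=1e-4)
criterion = nn.CrossEntropyLoss()      # difference between the two distributions

src = torch.rand(3, 1, d_model)               # source embeddings (placeholder)
tgt = torch.rand(3, 1, d_model)               # target embeddings (placeholder)
gold = torch.randint(0, vocab_size, (3, 1))   # actual target word IDs

logits = proj(model(src, tgt))                # (seq, batch, vocab)
loss = criterion(logits.reshape(-1, vocab_size), gold.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```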

One additional point to note is that, to prevent overfitting, we apply dropout to the output of each sublayer, and we also apply dropout to the sum of the embeddings and the positional encoding.
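
A minimal sketch of that second dropout site, assuming a dropout rate of 0.1 as in the original paper:

```python
# Dropout over the sum of token embeddings and positional encodings (sketch).
import torch
import torch.nn as nn

embedding = nn.Embedding(10000, 512)
dropout = nn.Dropout(p=0.1)

token_ids = torch.tensor([[1, 5, 9]])        # placeholder token IDs
pos_enc = torch.zeros(1, 3, 512)             # sinusoidal values in practice
x = dropout(embedding(token_ids) + pos_enc)  # input to the first encoder layer
```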

Thus, in this chapter, we learned...

Summary

We started off the chapter by understanding what the transformer model is and how it uses encoder-decoder architecture. We looked into the encoder section of the transformer and learned about different sublayers used in encoders, such as multi-head attention and feedforward networks.

We learned that the self-attention mechanism relates a word to all the words in the sentence to better understand the word. To compute self-attention, we used three different matrices, called the query, key, and value matrices. Following this, we learned how to compute positional encoding and how it is used to capture the word order in a sentence. Next, we learned how the feedforward network works in the encoder and then we explored the add and norm component.
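
For reference, the sinusoidal positional encoding from the paper, PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), can be computed as follows (a NumPy sketch, not the book's code):

```python
# NumPy sketch of the sinusoidal positional encoding.
import numpy as np

def positional_encoding(max_len, d_model):
    pos = np.arange(max_len)[:, None]          # positions 0 .. max_len-1
    i = np.arange(0, d_model, 2)[None, :]      # even embedding dimensions
    angles = pos / np.power(10000, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # sine on even indices
    pe[:, 1::2] = np.cos(angles)               # cosine on odd indices
    return pe

print(positional_encoding(50, 512).shape)      # (50, 512)
```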

After understanding the encoder, we understood how the decoder works. We explored three sublayers used in the decoder in detail, which are the masked multi-head attention, encoder-decoder attention, and feedforward network. Following this...

Questions

Let's put our newly acquired knowledge to the test. Try answering the following questions:

  1. What are the steps involved in the self-attention mechanism?
  2. What is scaled dot product attention?
  3. How do we create the query, key, and value matrices?
  4. Why do we need positional encoding?
  5. What are the sublayers of the decoder?
  6. What are the inputs to the encoder-decoder attention layer of the decoder?

Further reading


Key benefits

  • Explore the encoder and decoder of the transformer model
  • Become well-versed with BERT along with ALBERT, RoBERTa, and DistilBERT
  • Discover how to pre-train and fine-tune BERT models for several NLP tasks

Description

BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work. You'll explore the BERT architecture by learning how the BERT model is pre-trained and how to use pre-trained BERT for downstream tasks by fine-tuning it for NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also cover simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through M-BERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, as well as an interesting variant called VideoBERT. By the end of this BERT book, you'll be well-versed in using BERT and its variants for performing practical NLP tasks.
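
As a taste of the workflow described above, here is a minimal Hugging Face transformers snippet that loads a pre-trained BERT and produces contextual embeddings (the checkpoint name and input sentence are illustrative choices, not taken from the book):

```python
# Load a pre-trained BERT and obtain contextual token embeddings (sketch).
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("I love Paris", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # one embedding per token: [1, n, 768]
```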

Who is this book for?

This book is for NLP professionals and data scientists looking to simplify NLP tasks to enable efficient language understanding using BERT. A basic understanding of NLP concepts and deep learning is required to get the best out of this book.

What you will learn

  • Understand the transformer model from the ground up
  • Find out how BERT works and pre-train it using masked language model (MLM) and next sentence prediction (NSP) tasks (see the short sketch after this list)
  • Get hands-on with BERT by learning to generate contextual word and sentence embeddings
  • Fine-tune BERT for downstream tasks
  • Get to grips with ALBERT, RoBERTa, ELECTRA, and SpanBERT models
  • Get the hang of the BERT models based on knowledge distillation
  • Understand cross-lingual models such as XLM and XLM-R
  • Explore Sentence-BERT, VideoBERT, and BART
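
Here is a quick illustration of the MLM objective mentioned in the list, using the Hugging Face fill-mask pipeline (the checkpoint and sentence are illustrative assumptions, not examples from the book): a pre-trained BERT predicts the masked word.

```python
# Fill in a masked word with a pre-trained BERT (illustrative sketch).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```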

Product Details

Publication date: Jan 22, 2021
Length: 352 pages
Edition: 1st
Language: English
ISBN-13: 9781838826239
Vendor: Google




Table of Contents

14 Chapters
Section 1 - Starting Off with BERT
A Primer on Transformers
Understanding the BERT Model
Getting Hands-On with BERT
Section 2 - Exploring BERT Variants
BERT Variants I - ALBERT, RoBERTa, ELECTRA, and SpanBERT
BERT Variants II - Based on Knowledge Distillation
Section 3 - Applications of BERT
Exploring BERTSUM for Text Summarization
Applying BERT to Other Languages
Exploring Sentence and Domain-Specific BERT
Working with VideoBERT, BART, and More
Assessments
Other Books You May Enjoy

Customer reviews

Top Reviews
Rating distribution
4.2 out of 5
(50 Ratings)
5 star 72%
4 star 4%
3 star 6%
2 star 8%
1 star 10%
Amazon Customer Feb 07, 2021
5 out of 5
The book acts as a great resource for learning BERT. It covers so many different types of BERT and helps you to learn how to apply BERT for interesting use cases. It’s a perfect getting started guide for BERT. The writing is so simple, clear, to the point. The way one topic connects to another is so interesting. I can’t close the book after reading one chapter, the book keeps you so engaging.
Amazon Verified review
ani Feb 05, 2021
5 out of 5
The book explains transformers and BERT in very detail. I’m awestruck with the way the author have explained the concepts in a seamlessly simple way possible. I really loved the narrative style of the book.I can’t believe someone explained BERT with so much of in-depth detail. The book covers lot of things which I was never aware of and many different types of BERT like tinyBERT, ELECTRA, Multilingual BERT, XLM-R, and many others. If you are not getting this then definitely you are missing out a greatest content on BERT ever.
Amazon Verified review
Samuel de Zoete May 25, 2021
5 out of 5
Makes BERT accessible for Data Scientists without a PhD. The explanations are clear and still enough depth that’s needed when start working with Transformers.
Amazon Verified review
aditya Karampudi Feb 09, 2021
5 out of 5
The book starts off with subtle introduction to multiple key concepts and slowly builds on the core methodologies of building NLP based neural networks. Over the years, neural networks have gone through multiple transformations, and yet the application of these architectures is limited because of the nature of data. The language is the flow of words and these sentences do not always follow a structured approach. This makes it hard to train models that can be intelligent to understand the words. The BERT tries to tackle this issue by predicting from both directions- left to right and right to left. The author tried to use multiple examples to illustrate the way NN are modeled and I thoroughly enjoyed reading this book. I recommend this book for every NLP and Deep Learning enthusiast.
Amazon Verified review
Ashwini Mar 05, 2021
5 out of 5
I liked this book about BERT. This is one of the great books on BERT that I have come across. The author takes detailed accounts of introduction and applications of BERT before explaining things in detail. I loved the fact that there are frameworks explaining how each and every topic works in BERT. The book is a bit mathy but great for people who want to understand things in details.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details page for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and the password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal)
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and EPUB. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.