Transformers for Natural Language Processing
Denis Rothman
Packt, January 2021 | ISBN-13 9781800565791 | 384 pages | 1st Edition

Summary

BERT brings bidirectional attention to transformers. Training a model to predict a sequence from left to right while masking the future tokens has serious limitations: if the masked part of the sequence contains the meaning we are looking for, the model will make errors. BERT overcomes this by attending to all of the tokens of a sequence at the same time.
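To make the contrast concrete, here is a minimal sketch, using the Hugging Face transformers library, of BERT filling in a masked token by attending to the context on both sides of the mask. The checkpoint name and the example sentence are illustrative choices, not code from the book.

```python
# Minimal sketch: BERT predicts a masked token using context on BOTH sides
# of the [MASK], unlike a left-to-right model that sees only the left context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The words after [MASK] ("this quarter") also influence the prediction.
for prediction in fill_mask("The bank raised interest [MASK] this quarter."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```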

We explored the architecture of BERT, which uses only the encoder stack of the transformer. BERT was designed as a two-step framework: the first step pretrains a model, and the second step fine-tunes it. We built a fine-tuned BERT model for an Acceptability Judgement downstream task and went through every phase of the fine-tuning process: we first loaded the dataset and the necessary pretrained modules of the model, then trained the model and measured its performance.
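As an illustration of that flow, here is a heavily condensed sketch using the Hugging Face transformers and PyTorch APIs. The checkpoint name, the two-sentence toy dataset, and the training hyperparameters are illustrative assumptions, not the chapter's exact code.

```python
# Condensed sketch of the fine-tuning flow: load pretrained modules,
# prepare data, train, and measure (toy data stands in for the real dataset).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Acceptability judgement labels: 1 = grammatical, 0 = unacceptable.
sentences = ["The cat sat on the mat.", "Cat the on mat sat the."]
labels = torch.tensor([1, 0])
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                      # a few illustrative steps, not full training
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
with torch.no_grad():
    predictions = model(**batch).logits.argmax(dim=-1)
print(predictions)                      # predicted acceptability labels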

Fine-tuning a pretrained model takes far fewer machine resources than training a downstream task from scratch, and a fine-tuned model can perform a variety of tasks. BERT proves that a model can be pretrained on only two tasks, masked language modeling and next-sentence prediction, which is remarkable in itself. But producing multitask fine-tuned models from the trained parameters of a pretrained BERT model is extraordinary. OpenAI GPT had worked on this approach before, but BERT took it to another level!
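To see why a single pretrained model can serve many downstream tasks, consider the sketch below: the same checkpoint backs several task-specific heads, and only those small heads are new. The class and checkpoint names are standard Hugging Face ones used here for illustration; the book's code may differ.

```python
# One pretrained BERT body (trained only on masked-LM and next-sentence
# prediction) can be fine-tuned with different task-specific heads.
from transformers import (
    BertForSequenceClassification,   # e.g. acceptability or sentiment
    BertForTokenClassification,      # e.g. named-entity recognition
    BertForQuestionAnswering,        # e.g. extractive question answering
)

checkpoint = "bert-base-uncased"     # the same pretrained parameters each time
classifier = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tagger = BertForTokenClassification.from_pretrained(checkpoint, num_labels=9)
qa_model = BertForQuestionAnswering.from_pretrained(checkpoint)

# Only the small heads are randomly initialized; the shared encoder weights
# are reused, which is why fine-tuning needs far fewer resources than
# training from scratch.
```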

In this chapter, we fine-tuned a BERT model. In the next chapter, Chapter 3, Pretraining a RoBERTa Model from Scratch, we will dig deeper into the BERT framework and pretrain a BERT-like model from scratch.
