Transformers for Natural Language Processing
Denis Rothman, Packt, January 2021
Before we go

This chapter focused more on applying transformers to a problem than on finding a silver bullet transformer model, which does not exist.

You have two main options when solving an NLP problem: keep looking for new transformer models, or create reliable, durable methods for implementing the transformer models you already have.

Looking for the silver bullet

Looking for a silver bullet transformer model can be time-consuming or rewarding, depending on how much time and money you want to spend on continually changing models.

For example, one new approach to transformers is disentanglement. Disentanglement in AI separates the features of a representation, which makes the training process more flexible. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen designed DeBERTa, a disentangled version of a transformer, and described the model in an interesting article:

DeBERTa: Decoding-enhanced BERT with Disentangled Attention, https://arxiv.org/abs/2006.03654
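
To make this concrete, here is a minimal sketch of loading and running a DeBERTa encoder. It assumes the Hugging Face transformers library with a PyTorch backend (pip install transformers torch) and the microsoft/deberta-base checkpoint on the Hugging Face Hub; none of these names appear in the original text, so treat them as illustrative choices:

# A minimal sketch: run a sentence through DeBERTa's
# disentangled-attention encoder.
# Assumes the Hugging Face transformers library, a PyTorch backend,
# and the "microsoft/deberta-base" checkpoint on the Hub.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModel.from_pretrained("microsoft/deberta-base")

# Tokenize a sample sentence and run it through the encoder.
inputs = tokenizer("Transformers keep evolving.", return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state holds one contextual vector per input token:
# shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)

From the caller's point of view, the disentangled model is a drop-in replacement for a BERT-style encoder; the difference lies inside the attention mechanism, where content and position are represented separately.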

The two...
