Transformers for Natural Language Processing

Build innovative deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and more

Author: Denis Rothman
Product type: Paperback
Published: January 2021
Publisher: Packt
ISBN-13: 9781800565791
Length: 384 pages
Edition: 1st
Table of Contents (16 chapters)

Preface
1. Getting Started with the Model Architecture of the Transformer
2. Fine-Tuning BERT Models
3. Pretraining a RoBERTa Model from Scratch
4. Downstream NLP Tasks with Transformers
5. Machine Translation with the Transformer
6. Text Generation with OpenAI GPT-2 and GPT-3 Models
7. Applying Transformers to Legal and Financial Documents for AI Text Summarization
8. Matching Tokenizers and Datasets
9. Semantic Role Labeling with BERT-Based Transformers
10. Let Your Data Do the Talking: Story, Questions, and Answers
11. Detecting Customer Emotions to Make Predictions
12. Analyzing Fake News with Transformers
13. Other Books You May Enjoy
14. Index
Appendix: Answers to the Questions

Semantic Role Labeling with BERT-Based Transformers

Transformers have made more progress in the past few years than NLP made in the previous generation. Standard NLU approaches first learn syntactic and lexical features to explain the structure of a sentence. Earlier NLP models would be trained to understand the basic syntax of a language before running Semantic Role Labeling (SRL).

Shi and Lin (2019) open their paper by asking whether preliminary syntactic and lexical training can be skipped. Can a BERT-based model perform SRL without going through those classical training phases? The answer is yes!

Shi and Lin (2019) suggest that SRL can be treated as a sequence labeling task and provide a standardized input format. Their BERT-based model produced surprisingly good results.
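
To make the sequence labeling formulation concrete, here is a minimal, illustrative sketch (not Shi and Lin's actual code) of what one SRL training example looks like when cast as token-level tagging: each token is paired with a binary predicate indicator, and the target is a BIO-encoded argument label per token. The sentence and labels are invented for illustration:

# Illustrative view of SRL as sequence labeling (BIO scheme).
# Each token gets a predicate-indicator feature and a BIO tag
# naming the argument role it plays for that predicate.

sentence = ["Marta", "sold", "the", "car", "to", "John"]
predicate_index = 1  # "sold" is the predicate being labeled

# Predicate indicator: 1 for the predicate token, 0 elsewhere.
# In Shi and Lin's formulation, an indicator like this is fed
# to BERT alongside the sentence.
indicator = [1 if i == predicate_index else 0
             for i in range(len(sentence))]

# BIO-encoded argument labels for this predicate (illustrative):
# ARG0 = agent (seller), V = predicate, ARG1 = thing sold,
# ARG2 = recipient (buyer).
labels = ["B-ARG0", "B-V", "B-ARG1", "I-ARG1", "B-ARG2", "I-ARG2"]

for token, ind, label in zip(sentence, indicator, labels):
    print(f"{token:6s} indicator={ind} label={label}")

Framed this way, SRL needs no explicit parse tree: the model simply predicts one label per token, conditioned on the sentence and the predicate position.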

In this chapter, we will use a pretrained BERT-based model provided by the Allen Institute for AI, based on the Shi and Lin (2019) paper. Shi and Lin took SRL to the next level by dropping syntactic and lexical...
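
As a preview, querying such a pretrained SRL predictor might look like the following sketch. It assumes the allennlp and allennlp-models packages are installed; the model archive URL is the publicly posted AllenNLP BERT-SRL checkpoint at the time of writing and may have moved since:

# Sketch: querying AllenNLP's pretrained BERT-based SRL model.
# Assumes: pip install allennlp allennlp-models
# The archive URL below may have changed since this was written.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)

result = predictor.predict(
    sentence="Marta sold the car to John."
)

# The result contains one entry per detected predicate, with a
# human-readable bracketing of its arguments.
for verb in result["verbs"]:
    print(verb["verb"], "->", verb["description"])

A single call returns every predicate in the sentence with its labeled argument spans, which is exactly the sequence labeling output described above.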
