Transformers for Natural Language Processing
Build, train, and fine-tune deep neural network architectures for NLP with Python, Hugging Face, and OpenAI's GPT-3, ChatGPT, and GPT-4

Product type: Paperback
Published: March 2022
Publisher: Packt
ISBN-13: 9781803247335
Length: 602 pages
Edition: 2nd Edition
Author: Denis Rothman
Table of Contents (25 chapters)

Preface
1. What are Transformers?
2. Getting Started with the Architecture of the Transformer Model
3. Fine-Tuning BERT Models
4. Pretraining a RoBERTa Model from Scratch
5. Downstream NLP Tasks with Transformers
6. Machine Translation with the Transformer
7. The Rise of Suprahuman Transformers with GPT-3 Engines
8. Applying Transformers to Legal and Financial Documents for AI Text Summarization
9. Matching Tokenizers and Datasets
10. Semantic Role Labeling with BERT-Based Transformers
11. Let Your Data Do the Talking: Story, Questions, and Answers
12. Detecting Customer Emotions to Make Predictions
13. Analyzing Fake News with Transformers
14. Interpreting Black Box Transformer Models
15. From NLP to Task-Agnostic Transformer Models
16. The Emergence of Transformer-Driven Copilots
17. The Consolidation of Suprahuman Transformers with OpenAI's ChatGPT and GPT-4
18. Other Books You May Enjoy
19. Index
Appendix I — Terminology of Transformer Models
Appendix II — Hardware Constraints for Transformer Models
Appendix III — Generic Text Completion with GPT-2
Appendix IV — Custom Text Completion with GPT-2
Appendix V — Answers to the Questions

What this book covers

Part I: Introduction to Transformer Architectures

Chapter 1, What are Transformers?, explains, at a high level, what transformers are. We’ll look at the transformer ecosystem and the properties of foundation models. The chapter highlights many of the platforms available and the evolution of Industry 4.0 AI specialists.

Chapter 2, Getting Started with the Architecture of the Transformer Model, goes through the background of NLP to understand how RNN, LSTM, and CNN deep learning architectures evolved into the Transformer architecture that opened a new era. We will examine the Transformer’s architecture through the unique Attention Is All You Need approach invented by the Google Research and Google Brain authors, describe the theory of transformers, and get our hands dirty in Python to see how the multi-head attention sub-layers work. By the end of this chapter, you will have understood the original architecture of the Transformer and will be ready to explore its multiple variants and usages in the following chapters.
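
To give a flavor of that hands-on work, here is a minimal NumPy sketch of the scaled dot-product attention computed inside each attention head; the toy dimensions and random inputs are illustrative assumptions, not the chapter's exact notebook code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(QK^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                              # query/key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                           # weighted sum of values

# Toy example: 3 tokens, model dimension 4 (sizes chosen only for illustration)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                                      # token embeddings
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))      # per-head projections
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)                                                 # (3, 4)
```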

Chapter 3, Fine-Tuning BERT Models, builds on the architecture of the original Transformer. Bidirectional Encoder Representations from Transformers (BERT) shows you a new way of perceiving the world of NLP. Instead of analyzing a past sequence to predict a future sequence, BERT attends to the whole sequence! We will first go through the key innovations of BERT’s architecture and then fine-tune a BERT model by going through each step in a Google Colaboratory notebook. Like humans, BERT can learn a task and then perform new, related tasks without having to relearn the topic from scratch.
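
As a rough sketch of what such a fine-tuning notebook looks like with the Hugging Face libraries (the checkpoint name, the CoLA task choice, and the hyperparameters below are illustrative assumptions, not the chapter's exact code):

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Load a pretrained BERT and attach a fresh two-class classification head
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# CoLA (grammatical acceptability) is used here as an illustrative GLUE task
dataset = load_dataset("glue", "cola")
encoded = dataset.map(
    lambda ex: tokenizer(ex["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```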

Chapter 4, Pretraining a RoBERTa Model from Scratch, builds a RoBERTa transformer model from scratch using the Hugging Face PyTorch modules. The transformer will be both BERT-like and DistilBERT-like. First, we will train a tokenizer from scratch on a customized dataset. The trained transformer will then run on a downstream masked language modeling task.
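
A minimal sketch of the first step, training a byte-level BPE tokenizer with the Hugging Face tokenizers library; the corpus file and output directory names are placeholders:

```python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer from scratch on a plain-text corpus
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus.txt"],                                   # placeholder corpus file
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("my-roberta-tokenizer")                # writes vocab.json and merges.txt
print(tokenizer.encode("The critique of pure reason.").tokens)
```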

Part II: Applying Transformers for Natural Language Understanding and Generation

Chapter 5, Downstream NLP Tasks with Transformers, reveals the magic of transformer models with downstream NLP tasks. A pretrained transformer model can be fine-tuned to solve a range of NLP tasks such as BoolQ, CB, MultiRC, RTE, WiC, and more, dominating the GLUE and SuperGLUE leaderboards. We will go through the evaluation process of transformers, the tasks, datasets, and metrics. We will then run some of the downstream tasks with Hugging Face’s pipeline of transformers.
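
For instance, a single pipeline call is enough to run an inference task. The zero-shot classification example below, built on the public facebook/bart-large-mnli NLI checkpoint, is an illustrative sketch rather than one of the chapter's exact tasks:

```python
from transformers import pipeline

# Entailment-style inference via zero-shot classification
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "The European Parliament approved the new data-protection regulation.",
    candidate_labels=["politics", "sports", "technology"],
)
print(result["labels"][0], round(result["scores"][0], 3))   # top label and its score
```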

Chapter 6, Machine Translation with the Transformer, defines machine translation and shows how to go from human baselines to machine transduction methods. We will then preprocess a WMT French-English dataset from the European Parliament. Machine translation requires precise evaluation methods, and in this chapter, we explore the BLEU scoring method. Finally, we will implement a Transformer machine translation model with Trax.
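
As a small illustration of the metric, BLEU can be computed for a single candidate sentence against a reference with NLTK; the toy sentences below are illustrative, not the chapter's dataset:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One reference translation (a list of token lists) and one candidate translation
reference = [["the", "cat", "sat", "on", "the", "mat"]]
candidate = ["the", "cat", "is", "on", "the", "mat"]

smoothie = SmoothingFunction().method1          # avoids zero scores on short sentences
score = sentence_bleu(reference, candidate, smoothing_function=smoothie)
print(f"BLEU: {score:.3f}")
```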

Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines, explores many aspects of OpenAI’s GPT-2 and GPT-3 transformers. We will first examine the architecture of OpenAI’s GPT models before explaining the different GPT-3 engines. Then we will run a GPT-2 345M parameter model and interact with it to generate text. Next, we’ll see the GPT-3 playground in action before coding a GPT-3 model for NLP tasks and comparing the results to GPT-2.
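
A quick way to reproduce the spirit of the GPT-2 interaction is the Hugging Face text-generation pipeline; the gpt2-medium checkpoint (roughly 355M parameters) and the sampling settings below are assumptions standing in for the chapter's notebook:

```python
from transformers import pipeline

# Sampled text completion with a medium-sized GPT-2 checkpoint
generator = pipeline("text-generation", model="gpt2-medium")
prompt = "The transformer architecture changed natural language processing because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=2, do_sample=True)
for out in outputs:
    print(out["generated_text"], "\n---")
```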

Chapter 8, Applying Transformers to Legal and Financial Documents for AI Text Summarization, goes through the concepts and architecture of the T5 transformer model. We will initialize a T5 model from Hugging Face to summarize documents. We will task the T5 model to summarize various documents, including a sample from the Bill of Rights, exploring the successes and limitations of transfer learning approaches applied to transformers. Finally, we will use GPT-3 to summarize some corporation law text for a second grader.
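
A minimal sketch of T5 summarization with Hugging Face, assuming the t5-base checkpoint and generic generation settings; the short Fifth Amendment excerpt is used purely for illustration:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

text = ("No person shall be held to answer for a capital, or otherwise infamous crime, "
        "unless on a presentment or indictment of a Grand Jury, except in cases arising "
        "in the land or naval forces.")
# T5 uses task prefixes, so summarization inputs start with "summarize: "
inputs = tokenizer("summarize: " + text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```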

Chapter 9, Matching Tokenizers and Datasets, analyzes the limits of tokenizers and looks at some of the methods applied to improve the quality of the data encoding process. We will first build a Python program to investigate why some words are omitted or misinterpreted by word2vec tokenizers. Following this, we will find the limits of pretrained tokenizers with a tokenizer-agnostic method.

We will improve a T5 summary by applying some of the ideas that show that there is still much room left to improve the methodology of the tokenization process. Finally, we will test the limits of GPT-3’s language understanding.
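
A tiny experiment along these lines is to inspect how a pretrained WordPiece tokenizer fragments rare words; the model and word list below are illustrative assumptions:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
for word in ["transformer", "electroencephalography", "amoeboid"]:
    print(word, "->", tokenizer.tokenize(word))
# Rare words splinter into many WordPiece fragments, which can dilute their meaning
# for a downstream task; this is one of the tokenizer limits the chapter investigates.
```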

Chapter 10, Semantic Role Labeling with BERT-Based Transformers, explores how transformers learn to understand a text’s content. Semantic Role Labeling (SRL) is a challenging exercise even for a human, yet transformers can produce surprising results. We will implement a BERT-based transformer model designed by the Allen Institute for AI in a Google Colab notebook. We will also use their online resources to visualize SRL outputs. Finally, we will question the scope of SRL and understand the reasons behind its limitations.
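
The usage sketched below follows AllenNLP's published BERT-based SRL predictor, with the archive URL taken from AllenNLP's public model listing; verify the current model version and package versions before depending on it:

```python
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging  # importing allennlp_models registers the model components

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)
output = predictor.predict(
    sentence="Did Bob really think he could prepare a meal for 50 people in a few hours?"
)
for verb in output["verbs"]:
    print(verb["verb"], "->", verb["description"])   # predicate and its labeled arguments
```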

Part III: Advanced Language Understanding Techniques

Chapter 11, Let Your Data Do the Talking: Story, Questions, and Answers, shows how a transformer can learn how to reason. A transformer must be able to understand a text, a story, and also display reasoning skills. We will see how question answering can be enhanced by adding NER and SRL to the process. We will build the blueprint for a question generator that can be used to train transformers or as a stand-alone solution.

Chapter 12, Detecting Customer Emotions to Make Predictions, shows how transformers have improved sentiment analysis. We will analyze complex sentences using the Stanford Sentiment Treebank, challenging several transformer models to understand not only the structure of a sequence but also its logical form. We will see how to use transformers to make predictions that trigger different actions depending on the sentiment analysis output. The chapter finishes with some edge cases using GPT-3.
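
A minimal sketch of sentiment-driven actions, assuming the public DistilBERT SST-2 checkpoint on the Hugging Face hub and made-up customer reviews:

```python
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")
reviews = [
    "Though the customer seemed unhappy, she was, in fact, satisfied with the product.",
    "The service was slow, but the staff more than made up for it.",
]
for review in reviews:
    result = sentiment(review)[0]
    print(f"{result['label']} ({result['score']:.2f}): {review[:50]}...")
    # A downstream action (coupon, follow-up call, escalation) could be triggered here
```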

Chapter 13, Analyzing Fake News with Transformers, delves into the hot topic of fake news and how transformers can help us understand the different perspectives of the online content we see each day. Every day, billions of messages, posts, and articles are published on the web through social media, websites, and every form of real-time communication available. Using several techniques from the previous chapters, we will analyze debates on climate change and gun control, and tweets from a former president. We will go through the moral and ethical problem of determining what can be considered fake news beyond reasonable doubt and what news remains subjective.

Chapter 14, Interpreting Black Box Transformer Models, lifts the lid on the black box of transformer models by visualizing their activity. We will use BertViz to visualize attention heads and the Language Interpretability Tool (LIT) to carry out a principal component analysis (PCA). Finally, we will use LIME to visualize transformers via dictionary learning.
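
A short sketch of the BertViz part, assuming a vanilla bert-base-uncased model in a notebook environment (BertViz renders an interactive view inline in Jupyter or Colab):

```python
from transformers import AutoTokenizer, AutoModel
from bertviz import head_view

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

sentence = "The cat sat on the mat because it was tired."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)                        # outputs.attentions: one tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(outputs.attentions, tokens)            # interactive attention-head visualization
```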

Chapter 15, From NLP to Task-Agnostic Transformer Models, delves into the advanced models, Reformer and DeBERTa, running examples using Hugging Face. Transformers can process images as sequences of words. We will also look at different vision transformers such as ViT, CLIP, and DALL-E. We will test them on computer vision tasks, including generating computer images.
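
A sketch of zero-shot image classification with CLIP via Hugging Face, assuming the openai/clip-vit-base-patch32 checkpoint and a sample COCO image URL:

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"   # a photo of two cats
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image        # image-text similarity scores
probs = logits.softmax(dim=1)[0].tolist()
for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")
```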

Chapter 16, The Emergence of Transformer-Driven Copilots, explores the maturity of Industry 4.0. The chapter begins with prompt engineering examples using informal/casual English. Next, we will use GitHub Copilot to assist with creating code. We will see that vision transformers can help NLP transformers visualize the world around them. We will create a transformer-based recommendation system, which can be used by digital humans in whatever metaverse you may end up in!

Chapter 17, The Consolidation of Suprahuman Transformers with OpenAI’s ChatGPT and GPT-4, builds on the previous chapters, exploring OpenAI’s state-of-the-art transformer models. We will set up conversational AI with ChatGPT and learn how it can explain transformer outputs using explainable AI. We will explore GPT-4 and see how it creates a k-means clustering program from a simple prompt. Advanced Prompt Engineering will be introduced, building on the prompt engineering learned earlier in the book. Finally, we use DALL-E 2 to create and produce variations of an image.
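
A minimal conversational sketch using the OpenAI Python client (version 1.x shown here; the chapter's notebooks may use an earlier interface, and the model name is a placeholder for whichever engine you can access):

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",   # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful machine learning tutor."},
        {"role": "user",
         "content": "Write a k-means clustering program in Python and explain each step."},
    ],
)
print(response.choices[0].message.content)
```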

Appendix I, Terminology of Transformer Models, examines the high-level structure of a transformer, from stacks and sublayers to attention heads.

Appendix II, Hardware Constraints for Transformer Models, looks at CPU and GPU performance running transformers. We will see why transformers and GPUs are a perfect match, concluding with a test using the Google Colab CPU, the Google Colab free GPU, and the Google Colab Pro GPU.
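
A quick sketch of the kind of check you can run in any of those runtimes, using PyTorch to compare a large matrix multiply on CPU and GPU; the matrix size is arbitrary and timings vary by runtime tier:

```python
import time
import torch

print("CUDA available:", torch.cuda.is_available())

x = torch.randn(4096, 4096)
start = time.time()
_ = x @ x
print(f"CPU matmul: {time.time() - start:.2f} s")

if torch.cuda.is_available():
    x_gpu = x.to("cuda")
    torch.cuda.synchronize()          # make sure the copy is done before timing
    start = time.time()
    _ = x_gpu @ x_gpu
    torch.cuda.synchronize()          # wait for the kernel to finish
    print(f"GPU matmul: {time.time() - start:.2f} s")
```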

Appendix III, Generic Text Completion with GPT-2, provides a detailed explanation of generic text completion using GPT-2 from Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines.

Appendix IV, Custom Text Completion with GPT-2, supplements Chapter 7, The Rise of Suprahuman Transformers with GPT-3 Engines by building and training a GPT-2 model and making it interact with custom text.

Appendix V, Answers to the Questions, provides answers to the questions at the end of each chapter.
