Advanced Natural Language Processing with TensorFlow 2


Generative Pre-Training (GPT-2) model

OpenAI released the first version of the GPT model in June 2018 and followed up with GPT-2 in February 2019. The GPT-2 paper attracted considerable attention because the full details of the largest GPT-2 model were withheld at publication over concerns about malicious use; that model was eventually released in November 2019. The most recent model, GPT-3, was released in May 2020.

Figure 5.5 shows the number of parameters in the largest variant of each of these models:

Figure 5.5: Parameters in different GPT models

The first GPT model used the standard Transformer decoder architecture with twelve layers, each with twelve attention heads and 768-dimensional embeddings, for a total of approximately 110 million parameters, roughly the same size as BERT-Base. The largest GPT-2 model has over 1.5 billion parameters, and the largest variant of the recently released GPT-3 model has over 175 billion parameters!
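As a rough sanity check on these figures, the sketch below estimates a parameter count from the model configuration alone. The per-layer formula (4d² for the attention projections plus 8d² for a feed-forward block with a 4d hidden size) and the configurations it plugs in (GPT-1: 12 layers, d_model = 768, ~40k BPE vocabulary, 512-token context; GPT-2 XL: 48 layers, d_model = 1,600, ~50k vocabulary, 1,024-token context) are assumptions taken from the published GPT papers, not from this chapter, and the formula deliberately ignores biases and layer-normalization parameters:

```python
# Back-of-envelope parameter count for a GPT-style Transformer decoder.
# Assumed approximation: each layer contributes 4*d^2 parameters for the
# attention projections (Q, K, V, and output) plus 8*d^2 for the two
# feed-forward matrices (d x 4d and 4d x d); biases and layer norms are ignored.

def approx_params(n_layers, d_model, vocab_size, n_ctx):
    attention = 4 * d_model ** 2                  # Q, K, V and output projections
    feed_forward = 8 * d_model ** 2               # position-wise FFN with 4*d hidden size
    embeddings = (vocab_size + n_ctx) * d_model   # token + learned position embeddings
    return n_layers * (attention + feed_forward) + embeddings

# Configurations assumed from the published papers, not this chapter:
print(f"GPT-1      ~{approx_params(12, 768, 40_478, 512) / 1e6:.0f}M params")
print(f"GPT-2 (XL) ~{approx_params(48, 1600, 50_257, 1024) / 1e6:.0f}M params")
```

With these assumed configurations the estimate lands close to the published figures: roughly 116 million parameters for GPT-1 and about 1.56 billion for the largest GPT-2, consistent with the orders of magnitude discussed above.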

Cost of training language...
