Section 2: Transformer Models – From Autoencoding to Autoregressive Models
In this section, you will learn about the architecture of autoencoding models such as BERT and autoregressive models such as GPT. You will see how to train, test, and fine-tune these models for a variety of natural language understanding (NLU) and natural language generation (NLG) problems, and how to share your models with the community as well as fine-tune pre-trained language models that others have shared.
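As a small taste of the workflow these chapters cover, here is a minimal sketch of loading a community-shared pre-trained model for later fine-tuning. It assumes the Hugging Face transformers library (with PyTorch installed); the checkpoint name is illustrative, and any autoencoding checkpoint from the Hub would work the same way:

```python
# A minimal sketch, assuming the Hugging Face transformers library.
# "bert-base-uncased" is an illustrative autoencoding checkpoint from the Hub.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Attach a fresh 2-class classification head on top of the pre-trained encoder;
# this head is what fine-tuning would train on a downstream task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Transformers are versatile.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]): one example, two class logits
```

The chapters that follow build this out step by step, from pre-training and fine-tuning through to publishing your own checkpoints.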
This section comprises the following chapters:
- Chapter 3, Autoencoding Language Models
- Chapter 4, Autoregressive and Other Language Models
- Chapter 5, Fine-Tuning Language Models for Text Classification
- Chapter 6, Fine-Tuning Language Models for Token Classification
- Chapter 7, Text Representation