Part 2: Transformer Models: From Autoencoders to Autoregressive Models
In this part, we’ll delve into the world of encoder-based and generative language models. We’ll explore how these models are pre-trained and fine-tuned, giving you practical insight into essential workflows such as fine-tuning, inference, and parameter optimization. But we’re not stopping there. We’ll also tackle advanced subjects such as boosting model performance and efficient fine-tuning strategies. The knowledge gained here will pave the way for the more advanced topics to come.
This part has the following chapters:
- Chapter 3, Autoencoding Language Models
- Chapter 4, From Generative Models to Large Language Models
- Chapter 5, Fine-Tuning Language Models for Text Classification
- Chapter 6, Fine-Tuning Language Models for Token Classification
- Chapter 7, Text Representation
- Chapter 8, Boosting Model Performance
- Chapter 9, Parameter...