Training LLMs
Since most LLMs are decoder-only, the most common LLM pre-training task is next-word prediction (NWP). The large number of model parameters (up to hundreds of billions) requires comparatively large training datasets to prevent overfitting and realize the full capabilities of the models. This requirement poses two significant challenges: ensuring training data quality and processing large volumes of data efficiently. In the following sections, we’ll discuss various aspects of the LLM training pipeline, starting from the training datasets.
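The NWP objective can be sketched as a cross-entropy loss where each position predicts the token that follows it. The snippet below is a minimal illustration with a toy vocabulary and hand-written logits; `nwp_loss` and the numbers are assumptions for exposition, not any particular framework's API.

```python
import math

def nwp_loss(logits, tokens):
    """Average next-word-prediction (causal LM) cross-entropy.

    logits[i] holds a score per vocabulary token for position i,
    predicting tokens[i + 1]. Illustrative helper, not a real API.
    """
    total = 0.0
    # Targets are the input tokens shifted left by one position.
    for scores, target in zip(logits, tokens[1:]):
        log_z = math.log(sum(math.exp(s) for s in scores))
        total += log_z - scores[target]  # -log softmax(scores)[target]
    return total / (len(tokens) - 1)

# Toy vocabulary of 3 token ids; a sequence of 4 tokens.
tokens = [0, 2, 1, 2]
# One logit row per predicting position (len(tokens) - 1 rows).
logits = [
    [0.1, 0.2, 3.0],   # confidently predicts token 2 (correct)
    [0.0, 2.5, 0.0],   # confidently predicts token 1 (correct)
    [1.0, 1.0, 1.0],   # uniform: contributes log(3) to the loss
]
loss = nwp_loss(logits, tokens)
```

Training minimizes this loss over billions of such sequences; a fully uncertain model pays `log(vocab_size)` per position, which is why large, diverse corpora are needed to push the loss meaningfully below that baseline.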
Training datasets
We can categorize the training data into two broad categories:
- General: Examples include web pages, books, or conversational text. LLMs almost always train on general data because it’s widely available and diverse, improving the language modeling and generalization capabilities of LLMs.
- Specialized: Examples include code, scientific articles, textbooks, or multilingual data, which provide LLMs with task-specific capabilities...
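In practice, pre-training batches are often drawn from a weighted mixture of such sources rather than from a single concatenated corpus. The sketch below shows one simple way to do this; the source names, documents, and weights are illustrative assumptions, not a published data recipe.

```python
import random

# Hypothetical corpus mixture: source name -> (documents, sampling weight).
# Weights are illustrative placeholders, not from any real training run.
sources = {
    "web":     (["web page A", "web page B"], 0.6),   # general
    "books":   (["book excerpt"], 0.2),               # general
    "code":    (["def f(): pass"], 0.1),              # specialized
    "science": (["paper abstract"], 0.1),             # specialized
}

def sample_batch(sources, batch_size, seed=None):
    """Fill each batch slot by first picking a source by weight,
    then a document uniformly from that source."""
    rng = random.Random(seed)
    names = list(sources)
    weights = [sources[name][1] for name in names]
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=weights, k=1)[0]
        batch.append((name, rng.choice(sources[name][0])))
    return batch

batch = sample_batch(sources, batch_size=8, seed=0)
```

Upweighting general data preserves broad language modeling ability, while the specialized slices inject the task-specific skills described above.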