Why large language models?
What exactly distinguishes a “normal” language model from a large language model? People sometimes use the terms “language model” and “large language model” interchangeably, but they refer to different things: every LLM is an LM, but not every LM is an LLM.
In the past, “language model” referred to models that estimate the likelihood of the next word in a sequence using n-gram counts. Nowadays, “large language model” typically refers to transformer-based neural models with billions of parameters, such as ChatGPT and LLaMA, trained on massive datasets. These models are all generative language models. Encoder-only models with around 100 million parameters, such as BERT, as well as simple n-gram models, still count as language models (LMs).
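To make the n-gram idea concrete, here is a minimal sketch of a bigram language model in Python. The toy corpus and function name are purely illustrative assumptions, not taken from any particular library; the point is only to show how next-word likelihood can be estimated from raw counts.

```python
from collections import Counter, defaultdict

# Minimal bigram model: estimate P(next_word | current_word) from counts.
# Toy corpus and whitespace tokenization are simplified for illustration.
corpus = "the cat sat on the mat the cat ate".split()

bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def next_word_probability(current_word: str, next_word: str) -> float:
    """Maximum-likelihood estimate: count(w1, w2) / count(w1, *)."""
    counts = bigram_counts[current_word]
    total = sum(counts.values())
    return counts[next_word] / total if total else 0.0

# In this toy corpus, "the" is followed by "cat" twice and "mat" once,
# so P("cat" | "the") = 2/3.
print(next_word_probability("the", "cat"))
```

An LLM plays the same next-token prediction game, but replaces the count table with a neural network whose billions of parameters are fit to a massive corpus.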
To simplify things, we can say that the term LLM refers to decoder-only models with more than 1 billion parameters and...