Challenges in developing LLMs
Developing LLMs poses a unique set of challenges, including but not limited to handling massive amounts of data, requiring vast computational resources, and the risk of introducing or perpetuating bias. The following subsections explain each of these challenges in detail.
Amounts of data
LLMs require enormous amounts of data for training. As the model size grows, so does the need for diverse, high-quality training data. Collecting and curating such large datasets, however, is a challenging task: it can be time-consuming and expensive, and there is also the risk of inadvertently including sensitive or inappropriate data in the training set. To give a sense of scale, BERT was trained on 3.3 billion words from Wikipedia and BookCorpus, GPT-2 was trained on 40 GB of text data, and GPT-3 on 570 GB. Table 7.2 shows the number of parameters and the size of the training data for a few recent LMs.
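To make the curation concern concrete, the sketch below shows a minimal, illustrative filtering pass over raw documents before they enter a training corpus. It redacts two kinds of sensitive strings (email addresses and US Social Security numbers) and drops near-empty fragments. The regex patterns, placeholder tokens, and length threshold are simplified assumptions for illustration, not a production data pipeline, which would rely on far more sophisticated detectors and deduplication.

```python
import re

# Illustrative patterns for two common kinds of sensitive data.
# Real curation pipelines use much more sophisticated detectors.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_sensitive(text: str) -> str:
    """Replace detected sensitive spans with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text

def curate(documents):
    """Yield redacted documents, dropping fragments too short to be useful."""
    for doc in documents:
        cleaned = redact_sensitive(doc).strip()
        if len(cleaned.split()) >= 5:  # assumed minimum-length threshold
            yield cleaned

if __name__ == "__main__":
    raw = [
        "Contact me at jane.doe@example.com for details.",
        "Too short.",
        "Large language models are trained on web-scale text corpora "
        "that must be screened before use.",
    ]
    for doc in curate(raw):
        print(doc)
```

Even this toy version hints at the cost issue raised above: every rule must be designed, validated, and run over hundreds of gigabytes of text, which is why curation at GPT-3 scale is expensive in both engineering effort and compute.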