A dataset for natural language
As a dataset, any text corpus can be used, such as Wikipedia or web articles, or even corpora with special symbols, such as source code, theater plays, and poems; the model will capture and reproduce the different patterns in the data.
In this case, let's use the tiny Shakespeare texts to predict new Shakespeare texts, or at least new texts written in a style inspired by Shakespeare. Two levels of prediction are possible, and both can be handled in the same way:
At the character level: Characters belong to an alphabet that includes letters, punctuation, and spaces. Given the first few characters, the model predicts the following characters from this alphabet, one at a time, to build words and sentences. There is no constraint that a predicted word belong to a dictionary; the objective of training is to produce words and sentences close to real ones.
At the word level: Words belong to a dictionary that includes punctuation, and given the first few words, the model predicts the next word out of this dictionary (see the tokenization sketch below).
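To make the two levels concrete, here is a minimal tokenization sketch in Python. It assumes the tiny Shakespeare corpus has already been downloaded to a local file named tinyshakespeare.txt (a hypothetical path, not from the original text), and it builds the character alphabet and the word dictionary that the two prediction levels rely on.

```python
import re

# Assumption: the corpus has been saved locally as "tinyshakespeare.txt".
with open("tinyshakespeare.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Character level: the vocabulary is the alphabet of distinct characters
# (letters, punctuation, and spaces).
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
char_ids = [char_to_idx[c] for c in text]   # corpus as a sequence of character indices

# Word level: the vocabulary is a dictionary of distinct tokens,
# splitting words and punctuation into separate tokens.
words = re.findall(r"\w+|[^\w\s]", text)
vocab = sorted(set(words))
word_to_idx = {w: i for i, w in enumerate(vocab)}
word_ids = [word_to_idx[w] for w in words]  # corpus as a sequence of word indices

print(len(chars), "characters in the alphabet")
print(len(vocab), "words in the dictionary")
```

Either sequence of indices can then be fed to the same kind of sequence model; only the vocabulary size and the granularity of the predictions change between the two levels.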