Autoencoding language model training for any language
We have discussed how BERT works and how to use the pretrained versions of it provided in the Hugging Face repository. In this section, you will learn how to use the Hugging Face library to train your own BERT.
Before we start, it is essential to have good training data, which will be used for language modeling. This data is called the corpus, and it is normally a large collection of text (sometimes preprocessed and cleaned). This unlabeled corpus must suit the use case you want your language model trained for; for example, a BERT specialized for the English language needs an English corpus. Although there are plenty of huge, high-quality datasets, such as Common Crawl (https://commoncrawl.org/), we prefer a small one here for faster training.
The IMDB dataset of 50 K movie reviews (available at https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) is a good example of such a small corpus: although it was collected for sentiment analysis, we only need the raw review text for language model training.
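Before any tokenizer or model training, the reviews have to be turned into a plain-text corpus. The following is a minimal sketch of that step; it assumes the Kaggle file has been downloaded locally as IMDB Dataset.csv with a review column (adjust the path and column name to your copy):

```python
# A minimal sketch of preparing the IMDB reviews as a plain-text corpus.
# Assumption: the Kaggle CSV is saved locally as "IMDB Dataset.csv" and
# contains a "review" column; adjust the path/column names if yours differ.
import pandas as pd

# Load the 50 K reviews; we only need the raw text, not the sentiment labels.
imdb_df = pd.read_csv("IMDB Dataset.csv")
reviews = imdb_df["review"].astype(str).tolist()

# Write one review per line so tokenizer and language model training scripts
# can treat each line as a single document.
with open("corpus.txt", "w", encoding="utf-8") as f:
    for review in reviews:
        # Remove in-review newlines so each review occupies exactly one line.
        f.write(review.replace("\n", " ").strip() + "\n")
```

The resulting corpus.txt file is what we will feed to the tokenizer trainer and, later, to the masked language modeling step.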