Training a deep learning classification model with a curated text dataset
In the previous section, we trained a language model using the curated IMDb text dataset. That model predicted the words most likely to follow a given sequence of words. In this section, we will take the language model that was fine-tuned on the IMDb dataset and use it to train a text classification model for text samples specific to the movie review use case.
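Condensed, the flow looks like the following minimal sketch. It assumes fastai v2 and uses the small IMDb sample dataset so it runs end to end; the encoder name ft_encoder is illustrative, not a name fixed by this recipe, and in the recipe itself you would use the full IMDb dataset and the encoder you saved in the previous section:

```python
from fastai.text.all import *

path = untar_data(URLs.IMDB_SAMPLE)

# Language model step (done in the previous recipe): fine-tune on the review
# text, then save only the encoder, not the language-model head.
dls_lm = TextDataLoaders.from_csv(path, 'texts.csv', text_col='text', is_lm=True)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, metrics=[accuracy, Perplexity()])
learn_lm.fit_one_cycle(1, 2e-2)
learn_lm.save_encoder('ft_encoder')

# Classification step: reuse the language model's vocabulary (text_vocab) so
# the loaded encoder's embedding matrix lines up with the classifier's inputs.
dls_clas = TextDataLoaders.from_csv(
    path, 'texts.csv', text_col='text', label_col='label', valid_col='is_valid',
    text_vocab=dls_lm.vocab)

learn = text_classifier_learner(dls_clas, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn = learn.load_encoder('ft_encoder')

# Gradual unfreezing: train the classifier head first, then progressively
# unfreeze deeper layers of the encoder.
learn.fit_one_cycle(1, 2e-2)
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))
```

The gradual unfreezing at the end follows the ULMFiT training procedure: the classifier head is trained first, and deeper layers of the encoder are unfrozen step by step so the fine-tuned language model representations are not forgotten all at once.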
Getting ready
This recipe makes use of the encoder that you trained in the previous section, so ensure that you have followed the steps in that recipe and, in particular, that you have saved the encoder from the trained language model.
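If you are not sure whether the encoder was saved, a quick file check can help. This is only a sketch: both the directory (fastai's default models subdirectory) and the file name ft_encoder.pth are assumptions, so substitute whatever location and name you used when you called save_encoder() in the previous recipe:

```python
from pathlib import Path

# Assumed location and name of the saved encoder -- adjust to match the
# model directory and encoder name you used in the previous recipe.
encoder_path = Path('models') / 'ft_encoder.pth'

if encoder_path.exists():
    print(f'Found saved encoder: {encoder_path}')
else:
    print('Encoder not found; re-run the previous recipe and call save_encoder()')
```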
As mentioned in the previous section, you need to take some additional steps before you can run recipes in Gradient that use the language model pre-trained on the Wikipedia corpus. To ensure that you have access to the pre-trained language model that you...