Utilizing run_glue.py to fine-tune the models
So far, we have designed a fine-tuning architecture from scratch using both native PyTorch and the Trainer class. The HuggingFace community also provides a powerful script called run_glue.py for the GLUE benchmark and GLUE-like classification downstream tasks. This script can handle and organize the entire training and validation process for us, which makes it a good choice for quick prototyping. It can fine-tune any pre-trained model on the HuggingFace Hub, and we can also feed it our own data in CSV or JSON format.
Please go to the following link to access the script and to learn more: https://github.com/huggingface/transformers/tree/master/examples.
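As a quick sketch of the custom-data path, the script accepts --train_file and --validation_file arguments pointing to CSV or JSON files. The file names, checkpoint, and hyperparameter values below are illustrative placeholders, not taken from the original text:

# Fine-tune a hub checkpoint on local CSV files (illustrative names and values)
python run_glue.py \
  --model_name_or_path distilbert-base-uncased \
  --train_file my_train.csv \
  --validation_file my_dev.csv \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./my-finetuned-model

The files are expected to contain a label column plus one or two text columns, mirroring the sentence or sentence-pair structure of the GLUE tasks.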
The script can perform nine different GLUE tasks. With it, we can do everything that we have done with the Trainer class so far. The task name could be one of the following GLUE tasks: cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte, or wnli.
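For instance, a minimal sketch of fine-tuning a checkpoint from the hub on one of these tasks might look as follows; the checkpoint and hyperparameter values here are illustrative choices, not requirements of the script:

# Fine-tune bert-base-cased on the cola task (illustrative values)
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name cola \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir ./cola-output

Passing --task_name makes the script download the corresponding GLUE dataset automatically through the datasets library, so no data files need to be prepared by hand.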
Here is the script scheme for...