Hands-on PEFT experiments
Note that in this section you may encounter problems or errors when training on a CPU, so we suggest using a GPU. We will solve two different classification problems using PEFT. Throughout the book, we have mentioned the sentiment analysis problem several times. Now, to diversify our experiments, let's also include the Natural Language Inference (NLI) problem.
We will conduct two experiments as follows:
- Efficiently fine-tuning the BERT model on the IMDb sentiment dataset with adapter-transformers
- Efficiently fine-tuning FLAN-T5 on an NLI task with the LoRA framework
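Before diving in, a quick back-of-the-envelope sketch shows why both approaches are parameter-efficient: instead of updating all model weights, they train only small inserted modules. The numbers below (hidden size, bottleneck size, layer count, and the ~110M total for BERT-base) are illustrative assumptions, not library outputs:

```python
# Rough, illustrative comparison of trainable parameter counts for
# full fine-tuning vs. bottleneck-adapter tuning of a BERT-base-sized
# model. All figures are back-of-the-envelope assumptions.

def adapter_params(hidden_size: int, bottleneck: int, num_layers: int) -> int:
    """Parameters of a bottleneck adapter (down- and up-projection,
    each with a bias) inserted once per transformer layer."""
    down = hidden_size * bottleneck + bottleneck   # down-projection + bias
    up = bottleneck * hidden_size + hidden_size    # up-projection + bias
    return num_layers * (down + up)

full_ft = 110_000_000  # ~110M parameters in BERT-base (approximate)
adapters = adapter_params(hidden_size=768, bottleneck=48, num_layers=12)

print(f"adapter params: {adapters:,}")                       # → 894,528
print(f"fraction of full fine-tuning: {adapters / full_ft:.2%}")  # → 0.81%
```

Under these assumptions, adapter tuning updates well under 1% of the parameters that full fine-tuning would touch, which is the efficiency we will exploit in both experiments.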
Fine-tuning a BERT checkpoint with adapter tuning
For the first problem, we will specifically address the IMDb sentiment task that we already solved with full fine-tuning earlier in the book. This way, we will have the opportunity to compare adapter tuning against full fine-tuning:
- Start with installing the necessary packages as follows:
!pip install datasets==2.12.0 adapter-transformers==3.2.1
As...