Practice project: Fine-tuning for Q&A using PEFT
For our practice project, we will experiment with AdaLoRA to efficiently fine-tune a model for a customer question-answering task and compare its output directly with that of a state-of-the-art (SOTA) model using in-context learning. As in the previous chapter, we can rely on a prototyping environment such as Google Colab to complete the evaluation and comparison of the two approaches. We will demonstrate how to configure model training to use AdaLoRA as our PEFT method.
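As a preview of that configuration, the sketch below shows one way to wrap a pre-trained model with an AdaLoRA adapter using the Hugging Face peft library. The model checkpoint, rank settings, and step counts are illustrative assumptions, not values prescribed by this project.

```python
# A minimal sketch, assuming the transformers and peft libraries are installed;
# the checkpoint and hyperparameter values below are illustrative placeholders.
from transformers import AutoModelForQuestionAnswering
from peft import AdaLoraConfig, TaskType, get_peft_model

base_model = AutoModelForQuestionAnswering.from_pretrained(
    "bert-base-uncased"  # assumed example checkpoint
)

adalora_config = AdaLoraConfig(
    task_type=TaskType.QUESTION_ANS,  # extractive question answering
    init_r=12,        # initial rank before adaptive pruning
    target_r=4,       # average rank budget AdaLoRA prunes toward
    lora_alpha=32,
    lora_dropout=0.1,
    tinit=200,        # warmup steps before rank pruning begins
    tfinal=500,       # final fine-tuning steps after pruning ends
    deltaT=10,        # re-allocate the rank budget every deltaT steps
    total_step=3000,  # total planned training steps (assumed)
)

model = get_peft_model(base_model, adalora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```

Because AdaLoRA reallocates its rank budget during training, the adapter spends more capacity on the weight matrices that matter most for the task while keeping the total number of trainable parameters small.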
Background regarding question-answering fine-tuning
Our project utilizes the Hugging Face Transformers library, a widely recognized resource in the machine learning community. The library offers a variety of pre-built pipelines, including one for question answering, and it allows us to fine-tune pre-trained models with minimal setup. Hugging Face pipelines abstract much of the complexity involved in working with models, making it accessible for developers to implement advanced natural language processing applications.
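To illustrate how little setup a pipeline requires, the sketch below loads a question-answering pipeline and runs it on a small example. The checkpoint name and the sample customer query are assumptions chosen for illustration.

```python
# A minimal sketch of the Hugging Face question-answering pipeline; the
# checkpoint and the sample customer query are illustrative assumptions.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",  # assumed example checkpoint
)

result = qa(
    question="How long does the standard warranty last?",
    context=(
        "All hardware products ship with a standard two-year warranty. "
        "An extended plan of up to five years can be purchased separately."
    ),
)
# The pipeline returns the extracted answer span plus a confidence score.
print(result["answer"], result["score"])
```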