Summary
In this chapter, we discussed how to make fine-tuning more efficient with PEFT. We covered three families of PEFT methods: additive, selective, and low-rank. For the hands-on experiments, we used two Python PEFT libraries, adapter-transformers and Hugging Face's PEFT framework, applying them to text classification and NLI tasks.
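As a brief reminder of the low-rank approach with Hugging Face's PEFT library, the following minimal sketch wraps a classification model with a LoRA adapter. The model name, target modules, and hyperparameters here are illustrative choices, not values from the chapter:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

# Load a base model for binary text classification
# (DistilBERT is an illustrative choice)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Low-rank (LoRA) configuration: only the small adapter
# matrices are trained, the base weights stay frozen
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,        # sequence classification
    r=8,                               # rank of the update matrices
    lora_alpha=16,                     # scaling factor
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"], # DistilBERT attention projections
)

# Wrap the base model with the adapter
peft_model = get_peft_model(model, lora_config)

# Shows how few parameters remain trainable compared to the base model
peft_model.print_trainable_parameters()
```

The wrapped `peft_model` can then be trained as usual (for example, with the `transformers` `Trainer`), updating only a small fraction of the parameters.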
Despite their benefits, LLMs raise significant challenges in training, fine-tuning, and inference, and how to overcome these barriers is an important field of study. In this chapter, we focused only on fine-tuning them efficiently; in the future, we will also need to focus on how to handle, control, and utilize LLMs, as well as how to democratize them. There are many other aspects we can work on!
In the next chapter, we will discuss how to work with LLMs.