Technical requirements
If you have the Diffusers package installed on your computer, you should be able to execute all the code in this chapter, including the code used to load LoRA with Diffusers.
Diffusers uses PEFT (Parameter-Efficient Fine-Tuning) [10] to manage LoRA loading and offloading. PEFT is a library developed by Hugging Face that provides parameter-efficient ways to adapt large pre-trained models to specific downstream applications. The key idea behind PEFT is to fine-tune only a small fraction of a model’s parameters instead of all of them, which yields significant savings in computation and memory usage. This makes it possible to fine-tune very large models even on consumer hardware with limited resources. Turn to Chapter 21 for more about LoRA.
We will need to install the PEFT package to enable Diffusers’ PEFT LoRA loading:
pip install peft
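With PEFT installed, Diffusers can register LoRA weights as a named adapter and detach them again without permanently altering the base model. The following is a minimal sketch of that workflow; the base model ID, the LoRA repository ID (some-user/some-lora), and the adapter name are placeholders for illustration only, not references to this chapter's files:

import torch
from diffusers import StableDiffusionPipeline

# Load a base pipeline (the model ID here is a placeholder choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# With the PEFT backend available, the LoRA weights are registered
# as a named adapter rather than merged into the model weights.
pipe.load_lora_weights("some-user/some-lora", adapter_name="my_lora")

# Activate the adapter, optionally scaling its influence.
pipe.set_adapters(["my_lora"], adapter_weights=[0.8])

image = pipe("a photo of an astronaut riding a horse").images[0]

# Offload: detach the LoRA weights and restore the base model.
pipe.unload_lora_weights()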
If you encounter other errors while executing the code, you can also refer to Chapter 2.