Using Community-Shared LoRAs
To meet specific needs and generate higher-fidelity images, we may need to fine-tune a pretrained Stable Diffusion model. However, full fine-tuning is extremely slow without powerful GPUs, and even with all the hardware and resources on hand, the resulting fine-tuned model is large, usually the same size as the original model file.
Fortunately, researchers from the neighboring Large Language Model (LLM) community developed an efficient fine-tuning method called Low-Rank Adaptation (LoRA; the lowercase "o" comes from the word "Low") [1]. With LoRA, the original checkpoint is frozen and left unmodified, while the tuned weight changes are stored in a separate file, which we usually call the LoRA file. Additionally, there are countless community-shared LoRAs on sites such as CIVITAI [4] and Hugging Face.
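To make the idea concrete, here is a minimal NumPy sketch of the low-rank update that LoRA stores in its separate file; the layer dimensions, rank, scaling factor, and variable names are illustrative assumptions, not values from any specific model:

```python
import numpy as np

# Toy example: a frozen weight matrix W and a LoRA update for it.
d, k, r = 768, 768, 4             # layer dimensions and a small LoRA rank r (assumed values)
W = np.random.randn(d, k)         # frozen pretrained weight, never modified
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor A (r x k)
B = np.zeros((d, r))              # trainable low-rank factor B (d x r), initialized to zero
alpha = 4.0                       # LoRA scaling factor

# At inference time, the effective weight is W plus the scaled low-rank product.
W_effective = W + (alpha / r) * (B @ A)

# Only A and B need to be stored in the LoRA file: d*r + r*k values,
# far fewer than the d*k values of a fully fine-tuned weight matrix.
print(W.size, A.size + B.size)    # 589824 vs. 6144 in this toy setup
```

This is why a LoRA file is typically only a few megabytes, while a fully fine-tuned checkpoint is as large as the original model.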
In this chapter, we are going to delve into the theory of LoRA, and then introduce the Python way to load a LoRA into a Stable Diffusion model.
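As a preview of what that looks like in practice, here is a hedged sketch using the Hugging Face diffusers library's `load_lora_weights()` method; the model ID, LoRA path, weight file name, and prompt are placeholders for whatever checkpoint and community LoRA you choose:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (model ID is an example).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach a community-shared LoRA; the directory and file name are placeholders.
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_style_lora.safetensors")

# Generate an image; the LoRA influence can be scaled via cross_attention_kwargs.
image = pipe(
    "a photo of a cat, highly detailed",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("cat_with_lora.png")
```

We will walk through this loading process, and its alternatives, in detail later in the chapter.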