What this book covers
Chapter 1, Introducing Stable Diffusion, provides an introduction to the AI image generation technology Stable Diffusion.
Chapter 2, Setting Up an Environment for Stable Diffusion, covers how to set up the CUDA and Python environments to run Stable Diffusion models.
Chapter 3, Generate Images Using Stable Diffusion, is a whirlwind chapter that helps you start using Python to generate images with Stable Diffusion.
Chapter 4, Understand the Theory behind the Diffusion Models, digs into the internals of the Diffusion model.
Chapter 5, Understanding How Stable Diffusion Works, covers the theory behind Stable Diffusion.
Chapter 6, Using the Stable Diffusion Model, covers model data handling and converting and loading model files.
Chapter 7, Optimizing Performance and VRAM Usage, teaches you how to improve performance and reduce VRAM usage.
Chapter 8, Using Community-Shared LoRAs, shows you how to use community-shared LoRAs with Stable Diffusion checkpoint models.
Chapter 9, Using Textual Inversion, shows you how to use community-shared Textual Inversion embeddings with Stable Diffusion checkpoint models.
Chapter 10, Unlocking 77 Token Limitations and Enabling Prompt Weighting, covers how to build custom prompt-handling code to use prompts of unlimited length with weighted importance scores. Specifically, we'll explore how to assign different weights to individual tokens or phrases in a prompt, allowing us to fine-tune the model's attention and generate more accurate results.
Chapter 11, Image Restore and Super-Resolution, shows you how to restore and upscale images using Stable Diffusion.
Chapter 12, Scheduled Prompt Parsing, shows you how to build a custom pipeline to support scheduled prompts.
Chapter 13, Generating Images with ControlNet, covers how to use ControlNet with Stable Diffusion checkpoint models.
Chapter 14, Generating Video Using Stable Diffusion, shows you how to use AnimateDiff together with Stable Diffusion to generate a short video clip, and explains the theory behind video generation.
Chapter 15, Generating Image Descriptions Using BLIP-2 and LLaVA, covers how to use large language models (LLMs) to extract descriptions from images.
Chapter 16, Exploring Stable Diffusion XL, shows you how to start using Stable Diffusion XL, a newer and more capable Stable Diffusion model.
Chapter 17, Building Optimized Prompts for Stable Diffusion, discusses techniques for writing Stable Diffusion prompts that generate better images, as well as leveraging LLMs to help generate prompts automatically.
Chapter 18, Applications: Object Editing and Style Transferring, covers how to use Stable Diffusion and related machine learning models to edit images and transfer styles from one image to another.
Chapter 19, Generation Data Persistence, shows you how to save image generation prompts and parameters inside the generated PNG image itself.
Chapter 20, Creating Interactive User Interfaces, shows you how to build a Stable Diffusion WebUI using the open source framework Gradio.
Chapter 21, Diffusion Model Transfer Learning, covers how to train a Stable Diffusion LoRA from scratch.
Chapter 22, Exploring Beyond Stable Diffusion, provides additional information about Stable Diffusion, AI, and how to learn more about the latest developments.