Using Stable Diffusion with Python
Leverage Python to control and automate high-quality AI image generation using Stable Diffusion

Product type: Paperback
Published: June 2024
Publisher: Packt
ISBN-13: 9781835086377
Length: 352 pages
Edition: 1st Edition
Author: Andrew Zhu (Shudong Zhu)
Table of Contents

Preface
Part 1 – A Whirlwind of Stable Diffusion
Chapter 1: Introducing Stable Diffusion
Chapter 2: Setting Up the Environment for Stable Diffusion
Chapter 3: Generating Images Using Stable Diffusion
Chapter 4: Understanding the Theory Behind Diffusion Models
Chapter 5: Understanding How Stable Diffusion Works
Chapter 6: Using Stable Diffusion Models
Part 2 – Improving Diffusers with Custom Features
Chapter 7: Optimizing Performance and VRAM Usage
Chapter 8: Using Community-Shared LoRAs
Chapter 9: Using Textual Inversion
Chapter 10: Overcoming 77-Token Limitations and Enabling Prompt Weighting
Chapter 11: Image Restore and Super-Resolution
Chapter 12: Scheduled Prompt Parsing
Part 3 – Advanced Topics
Chapter 13: Generating Images with ControlNet
Chapter 14: Generating Video Using Stable Diffusion
Chapter 15: Generating Image Descriptions Using BLIP-2 and LLaVA
Chapter 16: Exploring Stable Diffusion XL
Chapter 17: Building Optimized Prompts for Stable Diffusion
Part 4 – Building Stable Diffusion into an Application
Chapter 18: Applications – Object Editing and Style Transferring
Chapter 19: Generation Data Persistence
Chapter 20: Creating Interactive User Interfaces
Chapter 21: Diffusion Model Transfer Learning
Chapter 22: Exploring Beyond Stable Diffusion
Index
Other Books You May Enjoy

What this book covers

Chapter 1, Introducing Stable Diffusion, provides an introduction to the AI image generation technology Stable Diffusion.

Chapter 2, Setting Up the Environment for Stable Diffusion, covers how to set up the CUDA and Python environments to run Stable Diffusion models.
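
Once the environment is in place, a quick sanity check like the one below (a generic snippet, not code from the book) confirms that PyTorch can see the CUDA device:

    import torch

    print(torch.__version__)                  # installed PyTorch version
    print(torch.cuda.is_available())          # True if the CUDA environment is set up correctly
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # name of the detected GPU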

Chapter 3, Generating Images Using Stable Diffusion, is a whirlwind chapter that helps you start using Python to generate images with Stable Diffusion.
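
As a taste of what this chapter builds toward, here is a minimal text-to-image sketch using the Hugging Face diffusers library (the model ID and prompt are illustrative, not necessarily the ones used in the book):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion v1.5 checkpoint in half precision and move it to the GPU
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Generate and save one image from a text prompt
    image = pipe("a photo of an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")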

Chapter 4, Understanding the Theory Behind Diffusion Models, digs into the internals of diffusion models.

Chapter 5, Understanding How Stable Diffusion Works, covers the theory behind Stable Diffusion.

Chapter 6, Using Stable Diffusion Models, covers handling model data, as well as converting and loading model files.
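
For example, diffusers can load a checkpoint stored as a single file (such as a .safetensors file downloaded from a model-sharing site) rather than a diffusers folder; the file name below is hypothetical:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a full checkpoint from one .safetensors file
    pipe = StableDiffusionPipeline.from_single_file(
        "downloaded_model.safetensors", torch_dtype=torch.float16
    ).to("cuda")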

Chapter 7, Optimizing Performance and VRAM Usage, teaches you how to improve performance and reduce VRAM usage.
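
A few of the common diffusers switches look like this (illustrative only; the chapter covers these and other options in detail):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,        # half precision roughly halves VRAM usage
    )
    pipe.enable_attention_slicing()       # lower peak VRAM at a small speed cost
    pipe.enable_vae_tiling()              # decode large images in tiles
    pipe.enable_model_cpu_offload()       # keep sub-models on the CPU until they are needed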

Chapter 8, Using Community-Shared LoRAs, shows you how to use community-shared LoRAs with Stable Diffusion checkpoint models.
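
Attaching a community LoRA to a checkpoint typically takes just a couple of lines (the LoRA repository ID, file name, scale, and prompt below are placeholders):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Attach LoRA weights downloaded from a community hub (placeholder repo/file names)
    pipe.load_lora_weights("some-user/some-lora-repo", weight_name="some_lora.safetensors")

    # lora_scale controls how strongly the LoRA influences the result
    image = pipe(
        "a prompt using the LoRA's trigger words",
        cross_attention_kwargs={"lora_scale": 0.8},
    ).images[0]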

Chapter 9, Using Textual Inversion, shows you how to use community-shared Textual Inversion embeddings with Stable Diffusion checkpoint models.
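
Loading a Textual Inversion embedding is similarly brief; the example below uses a publicly shared concept from the sd-concepts-library purely as an illustration:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load a shared embedding and the pseudo-token that activates it
    pipe.load_textual_inversion("sd-concepts-library/cat-toy", token="<cat-toy>")
    image = pipe("a <cat-toy> sitting on a wooden desk").images[0]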

Chapter 10, Overcoming 77-Token Limitations and Enabling Prompt Weighting, covers how to build custom prompt-handling code to use prompts of unlimited length with weighted importance scores. Specifically, we'll explore how to assign different weights to individual prompts or tokens, allowing us to fine-tune the model's attention and generate more accurate results.
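
The book develops its own, more complete prompt-handling code (including per-token weighting); as a rough illustration of the core chunk-and-concatenate idea only, a long prompt can be encoded in 77-token pieces and the resulting embeddings passed directly to the pipeline:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def encode_long_prompt(prompt, negative_prompt="", chunk=77):
        # Tokenize without truncation and pad both prompts to a common multiple of `chunk`
        pos = pipe.tokenizer(prompt, truncation=False).input_ids
        neg = pipe.tokenizer(negative_prompt, truncation=False).input_ids
        length = ((max(len(pos), len(neg)) + chunk - 1) // chunk) * chunk
        pad = pipe.tokenizer.pad_token_id
        pos += [pad] * (length - len(pos))
        neg += [pad] * (length - len(neg))

        def encode(ids):
            ids = torch.tensor([ids], device=pipe.device)
            # Encode each 77-token window separately, then concatenate along the sequence axis
            pieces = [pipe.text_encoder(ids[:, i:i + chunk])[0] for i in range(0, length, chunk)]
            return torch.cat(pieces, dim=1)

        return encode(pos), encode(neg)

    long_prompt = "a cozy cabin in a snowy forest, warm light in the windows, " * 10
    prompt_embeds, negative_prompt_embeds = encode_long_prompt(long_prompt)
    image = pipe(prompt_embeds=prompt_embeds,
                 negative_prompt_embeds=negative_prompt_embeds).images[0]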

Chapter 11, Image Restore and Super-Resolution, shows you how to fix and upscale images using Stable Diffusion.
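
For super-resolution, diffusers ships a dedicated upscaling pipeline; the 4x upscaler below is one illustrative option (file names are placeholders):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionUpscalePipeline

    upscaler = StableDiffusionUpscalePipeline.from_pretrained(
        "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
    ).to("cuda")

    low_res = Image.open("small_input.png").convert("RGB")
    # The prompt describes the image content to guide the upscaling
    upscaled = upscaler(prompt="a sharp, detailed photo", image=low_res).images[0]
    upscaled.save("upscaled_4x.png")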

Chapter 12, Scheduled Prompt Parsing, shows you how to build a custom pipeline to support scheduled prompts.

Chapter 13, Generating Images with ControlNet, covers how to use ControlNet with Stable Diffusion checkpoint models.
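
A typical ControlNet setup with diffusers pairs a checkpoint with a control model, for example a Canny edge variant (the model IDs, edge image, and prompt below are illustrative):

    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    edges = Image.open("canny_edges.png")     # a pre-computed Canny edge map
    image = pipe("a futuristic living room", image=edges).images[0]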

Chapter 14, Generating Video Using Stable Diffusion, shows you how to use AnimateDiff together with Stable Diffusion to generate a short video clip, and explains the theory behind video generation.

Chapter 15, Generating Image Descriptions Using BLIP-2 and LLaVA, covers how to use large language models (LLMs) to extract descriptions from images.
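
With the transformers library, generating a caption with BLIP-2 can be as short as the following (the model ID and file name are illustrative):

    import torch
    from PIL import Image
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    model = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
    ).to("cuda")

    image = Image.open("generated.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(processor.decode(output_ids[0], skip_special_tokens=True))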

Chapter 16, Exploring Stable Diffusion XL, shows you how to start using Stable Diffusion XL, a newer and more capable Stable Diffusion model.
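
Switching to Stable Diffusion XL in diffusers mostly means swapping the pipeline class and model ID, for example:

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    ).to("cuda")

    image = pipe("an astronaut riding a horse, cinematic lighting").images[0]
    image.save("sdxl_output.png")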

Chapter 17, Building Optimized Prompts for Stable Diffusion, discusses techniques for writing Stable Diffusion prompts that generate better images, as well as how to leverage LLMs to generate prompts automatically.

Chapter 18, Applications: Object Editing and Style Transferring, covers how to use Stable Diffusion and related machine learning models to edit images and transfer styles from one image to another.

Chapter 19, Generation Data Persistence, shows you how to save image generation prompts and parameters into the generated PNG image.
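
One common approach (a sketch of the general idea; the book defines its own data scheme, and the key names and file names below are placeholders) is to write the prompt and parameters into PNG text chunks with Pillow:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    metadata = PngInfo()
    metadata.add_text("prompt", "a photo of an astronaut riding a horse on mars")
    metadata.add_text("steps", "30")
    metadata.add_text("guidance_scale", "7.5")

    image = Image.open("astronaut.png")
    image.save("astronaut_with_metadata.png", pnginfo=metadata)

    # The stored key/value pairs can be read back later
    print(Image.open("astronaut_with_metadata.png").text)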

Chapter 20, Creating Interactive User Interfaces, shows you how to build a Stable Diffusion WebUI using the open source framework Gradio.
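
A minimal Gradio interface wrapping a generation function looks roughly like this (not the book's full WebUI, just the basic pattern):

    import torch
    import gradio as gr
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def generate(prompt):
        # Return a PIL image; Gradio renders it in the "image" output component
        return pipe(prompt).images[0]

    gr.Interface(fn=generate, inputs="text", outputs="image").launch()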

Chapter 21, Diffusion Model Transfer Learning, covers how to train a Stable Diffusion LoRA from scratch.

Chapter 22, Exploring Beyond Stable Diffusion, provides additional information about Stable Diffusion, AI, and how to learn more about the latest developments.
