Using Stable Diffusion with Python

Leverage Python to control and automate high-quality AI image generation using Stable Diffusion

Product type: Paperback
Published: June 2024
Publisher: Packt
ISBN-13: 9781835086377
Length: 352 pages
Edition: 1st
Author: Andrew Zhu (Shudong Zhu)
Table of Contents

Preface
Part 1 – A Whirlwind of Stable Diffusion
  Chapter 1: Introducing Stable Diffusion
  Chapter 2: Setting Up the Environment for Stable Diffusion
  Chapter 3: Generating Images Using Stable Diffusion
  Chapter 4: Understanding the Theory Behind Diffusion Models
  Chapter 5: Understanding How Stable Diffusion Works
  Chapter 6: Using Stable Diffusion Models
Part 2 – Improving Diffusers with Custom Features
  Chapter 7: Optimizing Performance and VRAM Usage
  Chapter 8: Using Community-Shared LoRAs
  Chapter 9: Using Textual Inversion
  Chapter 10: Overcoming 77-Token Limitations and Enabling Prompt Weighting
  Chapter 11: Image Restore and Super-Resolution
  Chapter 12: Scheduled Prompt Parsing
Part 3 – Advanced Topics
  Chapter 13: Generating Images with ControlNet
  Chapter 14: Generating Video Using Stable Diffusion
  Chapter 15: Generating Image Descriptions Using BLIP-2 and LLaVA
  Chapter 16: Exploring Stable Diffusion XL
  Chapter 17: Building Optimized Prompts for Stable Diffusion
Part 4 – Building Stable Diffusion into an Application
  Chapter 18: Applications – Object Editing and Style Transferring
  Chapter 19: Generation Data Persistence
  Chapter 20: Creating Interactive User Interfaces
  Chapter 21: Diffusion Model Transfer Learning
  Chapter 22: Exploring Beyond Stable Diffusion
Index
Other Books You May Enjoy

Summary

In this chapter, we introduced a way to precisely control image generation using Stable Diffusion (SD) ControlNet models. With the detailed samples provided, you can start using one or multiple ControlNet models with both SD v1.5 and SDXL.
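As a quick refresher, here is a minimal sketch of attaching a single ControlNet to an SD v1.5 pipeline with the Hugging Face diffusers library. The model IDs are common community checkpoints used as examples, and the source image URL is a placeholder:

```python
# A minimal single-ControlNet sketch using diffusers; model IDs and the
# input image URL are examples, not the only valid choices.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a reference image and extract Canny edges as the control signal
source = load_image("https://example.com/reference.png")  # placeholder URL
edges = cv2.Canny(np.array(source), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load a Canny ControlNet and attach it to an SD v1.5 pipeline
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The edge map steers the layout of the generated image
image = pipe(
    "a futuristic city at sunset",
    image=canny_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet_output.png")
```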

We also drilled down into the internals of ControlNet, explaining in a nutshell how it works: a trainable copy of the UNet encoder processes the control image, and its outputs are added back into the frozen model through zero convolutions.
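The zero-convolution trick is easy to show in isolation. This is a conceptual sketch with illustrative names, not the diffusers implementation:

```python
# Conceptual sketch of ControlNet's zero convolutions; names and shapes
# are illustrative, not taken from the diffusers source.
import torch
import torch.nn as nn

class ZeroConv(nn.Module):
    """A 1x1 convolution initialized to all zeros, so the control branch
    contributes nothing at the start of training and gradually learns
    how strongly to steer the frozen UNet."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)

def inject_residuals(frozen_feats, control_feats, zero_convs):
    # Each trainable-copy feature passes through a zero conv and is added
    # to the corresponding frozen UNet skip connection as a residual.
    return [f + z(c) for f, z, c in zip(frozen_feats, zero_convs, control_feats)]
```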

ControlNet can be used in many applications, including applying a style to an image, imposing a shape on an image, merging two images into one, and generating a human body from a pose image. It is powerful and useful in many ways; our imagination is the only limitation.
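Combining control signals is just a matter of passing lists to the pipeline. In this sketch, pose.png and edges.png are assumed to be control images you prepared earlier (for example, with the controlnet_aux OpenPose and Canny preprocessors), and the model IDs are example checkpoints:

```python
# A sketch of combining two ControlNets (pose + edges) in one pipeline;
# model IDs are examples, and the control images are assumed inputs.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumed to exist: control images prepared with your preprocessors
pose_image = Image.open("pose.png")
canny_image = Image.open("edges.png")

controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# One control image per ControlNet, with per-model conditioning strengths
image = pipe(
    "a dancer on a rooftop",
    image=[pose_image, canny_image],
    controlnet_conditioning_scale=[1.0, 0.5],
).images[0]
```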

However, there is one other limitation: it is hard to align the background and overall context between two generations made with different seeds. You may want to use ControlNet to generate a video from frames extracted from a source video, but the results are still not ideal.

In the next chapter, we will cover a solution for generating video and animation using SD.
