Using Stable Diffusion with Python


Generating latent vectors using diffusers

In this section, we are going to use a pre-trained Stable Diffusion model to encode an image into the latent space so that we get a concrete sense of what a latent vector looks like. Then, we will decode the latent vector back into an image. This round trip also lays the foundation for building a custom image-to-image pipeline:

  1. Load an image: We can use the load_image function from diffusers to load an image from local storage or a URL. In the following code, we load an image named dog.png from the same directory as the current program:
    from diffusers.utils import load_image
    image = load_image("dog.png")
    display(image)
  2. Pre-process the image: Each pixel of the loaded image is represented by an integer ranging from 0 to 255, while the image encoder (the VAE) used by Stable Diffusion expects pixel values in the range of -1.0 to 1.0. So, we first need to convert the data range:
    import numpy as np
    # convert image...
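    The remaining pre-processing and the round trip through the VAE might look like the following sketch. This is not the book's verbatim code: the checkpoint ID (runwayml/stable-diffusion-v1-5) and the 512x512 resize are assumptions for illustration, and the comments describe expected, not guaranteed, shapes:

    import numpy as np
    import torch
    from PIL import Image
    from diffusers import AutoencoderKL
    from diffusers.utils import load_image

    # Illustrative sketch -- the checkpoint ID and the 512x512 resize are
    # assumptions; the book's own code may differ.
    vae = AutoencoderKL.from_pretrained(
        "runwayml/stable-diffusion-v1-5", subfolder="vae"
    )

    # Load the image and resize so height/width are divisible by 8
    image = load_image("dog.png").resize((512, 512))

    # Pre-process: [0, 255] integers -> [-1.0, 1.0] floats
    image_array = np.array(image).astype(np.float32) / 255.0   # [0.0, 1.0]
    image_array = image_array * 2.0 - 1.0                       # [-1.0, 1.0]

    # HWC -> NCHW tensor, as expected by the VAE encoder
    image_tensor = torch.from_numpy(image_array).permute(2, 0, 1).unsqueeze(0)

    # Encode the image into a latent vector
    with torch.no_grad():
        latents = vae.encode(image_tensor).latent_dist.sample()
    print(latents.shape)   # torch.Size([1, 4, 64, 64]) for a 512x512 input

    # Decode the latent vector back into an image
    with torch.no_grad():
        decoded = vae.decode(latents).sample

    # Post-process: [-1.0, 1.0] -> [0, 255] and back to a PIL image
    decoded = (decoded / 2 + 0.5).clamp(0, 1)
    decoded = (decoded[0].permute(1, 2, 0).numpy() * 255).astype(np.uint8)
    display(Image.fromarray(decoded))

    The decoded image should be visually close to the original dog.png, which illustrates that the latent vector is a compressed but faithful representation of the image.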