BLIP-2 – Bootstrapping Language-Image Pre-training

In the paper BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation [4], Junnan Li et al. proposed a solution to bridge the gap between the natural language and vision modalities. Notably, the BLIP model demonstrated exceptional capabilities in generating high-quality image descriptions, surpassing the existing benchmarks at the time of its publication.

The reason behind this excellent quality is that Junnan Li et al. used an innovative bootstrapping technique to build two further models from their initial pretrained model:

  • Filter model
  • Captioner model

The filter model filters out low-quality text-image pairs, thus improving the quality of the training data, while the captioner model generates surprisingly good, short descriptions of images. With the help of these two models, the authors of the paper not only improved the training data quality but also enlarged its size...
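
To make the captioner side concrete, here is a minimal sketch of generating a caption with a pretrained BLIP model through the Hugging Face transformers library; the Salesforce/blip-image-captioning-base checkpoint and the example.jpg path are assumptions for illustration, not necessarily the book's own setup:

import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load a pretrained BLIP captioning checkpoint (assumed name, for illustration).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Open a local image (hypothetical path) and convert it into model inputs.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Generate a short caption - exactly the task the BLIP captioner was trained for.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))

The filter side of the bootstrapping pipeline can be approximated in the same library with BlipForImageTextRetrieval, whose image-text matching (ITM) head scores how well a caption fits an image; low-scoring pairs are the kind the filter would discard. This is a sketch of the idea, assuming the Salesforce/blip-itm-base-coco checkpoint:

from transformers import BlipForImageTextRetrieval

# Score a (caption, image) pair with the ITM head (assumed checkpoint name).
itm_processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
itm_model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
itm_inputs = itm_processor(images=image, text="a photo of a cat", return_tensors="pt")
with torch.no_grad():
    logits = itm_model(**itm_inputs).itm_score
match_probability = torch.softmax(logits, dim=1)[0, 1].item()  # probability the caption matches the image
print(match_probability)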
