Modern Computer Vision with PyTorch

You're reading from Modern Computer Vision with PyTorch: A practical roadmap from deep learning fundamentals to advanced applications and Generative AI

Product type: Paperback
Published: Jun 2024
Publisher: Packt
ISBN-13: 9781803231334
Length: 746 pages
Edition: 2nd Edition
Authors (2): V Kishore Ayyadevara, Yeshwanth Reddy
Table of Contents (26)

Preface
1. Section 1: Fundamentals of Deep Learning for Computer Vision
2. Artificial Neural Network Fundamentals
3. PyTorch Fundamentals
4. Building a Deep Neural Network with PyTorch
5. Section 2: Object Classification and Detection
6. Introducing Convolutional Neural Networks
7. Transfer Learning for Image Classification
8. Practical Aspects of Image Classification
9. Basics of Object Detection
10. Advanced Object Detection
11. Image Segmentation
12. Applications of Object Detection and Segmentation
13. Section 3: Image Manipulation
14. Autoencoders and Image Manipulation
15. Image Generation Using GANs
16. Advanced GANs to Manipulate Images
17. Section 4: Combining Computer Vision with Other Techniques
18. Combining Computer Vision and Reinforcement Learning
19. Combining Computer Vision and NLP Techniques
20. Foundation Models in Computer Vision
21. Applications of Stable Diffusion
22. Moving a Model to Production
23. Other Books You May Enjoy
24. Index
Appendix

Understanding variational autoencoders

So far, we have seen that we can group similar images into clusters. Furthermore, we have learned that when we take the embedding of an image that falls within a given cluster, we can reconstruct (decode) it. However, what if an embedding (a latent vector) falls in between two clusters? There is no guarantee that we would generate realistic images from it. Variational autoencoders (VAEs) come in handy in such a scenario.
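
To make the problem concrete, the following is a minimal sketch (not taken from the book's notebook) of decoding an embedding that lies halfway between two clusters. It assumes the convolutional autoencoder trained in the previous section is available as model with separate encoder and decoder modules, and that img_a and img_b are validation images from two different clusters on device; all of these names are assumptions used for illustration:

    import torch

    with torch.no_grad():
        # Encode one image from each cluster to obtain two latent vectors
        z_a = model.encoder(img_a[None].to(device))
        z_b = model.encoder(img_b[None].to(device))

        # A latent vector that falls in between the two clusters
        z_mid = 0.5 * z_a + 0.5 * z_b

        # Decoding this in-between vector gives no guarantee of a realistic
        # image, which is the limitation that motivates VAEs
        generated = model.decoder(z_mid)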

The need for VAEs

Before we dive into understanding and building a VAE, let's explore the limitation of generating images from embeddings that do not fall inside a cluster (or that fall in between different clusters). First, we generate images by sampling latent vectors, following these steps (the code is available in the conv_auto_encoder.ipynb file):

  1. Calculate the latent vectors (embeddings) of the validation images used in the previous section:
    latent_vectors = []
    classes = []
    for im,clss in val_dl:
        latent_vectors...
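
A minimal sketch of how the rest of this step might look, assuming model exposes the encoder module used in the previous section, val_dl yields (image, class) batches, and device is the device the model lives on (these names are assumptions about the earlier code):

    import torch

    latent_vectors = []
    classes = []
    with torch.no_grad():
        for im, clss in val_dl:
            # Encode the batch and flatten each embedding into a vector
            z = model.encoder(im.to(device))
            latent_vectors.append(z.view(len(im), -1).cpu())
            # Keep the class label of every image for inspection later
            classes.extend(clss.tolist())
    # Stack into a single (num_images, latent_dim) tensor
    latent_vectors = torch.cat(latent_vectors)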
The rest of the chapter is locked.