So far, we have seen how to group similar images into clusters and how, by taking the embedding of an image that falls within a given cluster, we can reconstruct (decode) it. However, what if an embedding (a latent vector) falls in between two clusters? There is no guarantee that decoding it would generate a realistic image. Variational autoencoders come in handy in such a scenario.
Before we dive into building a variational autoencoder, let's explore the limitations of generating images from embeddings that do not fall within a cluster (or that fall in the middle of different clusters). First, we generate images by sampling latent vectors:
The following code is a continuation of the code built in the previous section, Understanding convolutional autoencoders, and is available as conv_auto_encoder.ipynb in the chapter11 folder of this book's GitHub repository - https://tinyurl.com/mcvp-packt
- Calculate the latent vectors (embeddings)...
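The sampling step above can be sketched as follows. This is a minimal, self-contained illustration, not the notebook's actual code: `latent_dim` and the tiny `decoder` network are hypothetical stand-ins for the trained convolutional autoencoder from conv_auto_encoder.ipynb, which would map 28 x 28 images to latent vectors and back. The point is only to show that we can feed an arbitrary latent vector, one that need not lie inside any cluster, to the decoder and obtain an image:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained decoder; the real model is the
# convolutional autoencoder built in the previous section.
latent_dim = 3
decoder = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 28 * 28), nn.Tanh(),
)

# Sample arbitrary latent vectors (points that may fall between the
# clusters formed by real images) and decode them into images.
z = torch.randn(5, latent_dim)
with torch.no_grad():
    generated = decoder(z).reshape(-1, 1, 28, 28)

print(generated.shape)  # torch.Size([5, 1, 28, 28])
```

With a trained decoder, images decoded from such randomly sampled vectors typically look unrealistic, which is precisely the limitation the variational autoencoder addresses.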