Understanding variational autoencoders
So far, we have seen a scenario where we can group similar images into clusters. We have also learned that when we take the embedding of an image that falls in a given cluster, we can reconstruct (decode) it. However, what if an embedding (a latent vector) falls in between two clusters? There is no guarantee that the decoder would generate a realistic image. Variational autoencoders (VAEs) come in handy in such a scenario.
The need for VAEs
Before we dive into understanding and building a VAE, let's explore the limitations of generating images from embeddings that do not fall into any cluster (or that fall in the middle of different clusters). First, we generate images by sampling vectors, following these steps (available in the conv_auto_encoder.ipynb file):
- Calculate the latent vectors (embeddings) of the validation images used in the previous section:
```python
latent_vectors = []
classes = []
for im, clss in val_dl:
    latent_vectors...
```
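The snippet above is truncated. As a minimal sketch of what the full loop might look like, assuming `model` is the convolutional autoencoder trained in the previous section (exposing an `encoder` module) and `val_dl` is the validation DataLoader yielding (image, class) batches, the latent vectors could be collected as follows:

```python
import torch

# A minimal sketch, assuming `model` and `val_dl` come from the previous
# section's convolutional autoencoder notebook; the exact names are
# assumptions, not the original code.
latent_vectors = []
classes = []
device = next(model.parameters()).device  # run batches on the model's device

with torch.no_grad():  # no gradients needed for inference
    for im, clss in val_dl:
        # Encode the batch and flatten each embedding into a 1D vector
        latent_vectors.append(model.encoder(im.to(device)).view(len(im), -1))
        classes.extend(clss)

# Stack all batches into a single (num_images, latent_dim) array
latent_vectors = torch.cat(latent_vectors).cpu().numpy()
```

Each row of `latent_vectors` is then one validation image's embedding, which can be visualized (for example, with t-SNE) to inspect how the clusters are laid out in the latent space.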