Generating images using VAEs
From Chapter 4, Deep Learning for IoT, you should be familiar with autoencoders and how they work. VAEs are a type of autoencoder; after training, we retain the Decoder part, which can generate data similar to the training data when fed random latent features z. Recall that in an autoencoder, the Encoder produces the low-dimensional latent features, z:

The architecture of autoencoders
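To make the Encoder/Decoder structure concrete, here is a minimal NumPy sketch of an autoencoder's forward pass. The layer sizes (a 784-dimensional input, such as a flattened 28 x 28 image, compressed to a 2-dimensional z) and the single-layer weights are illustrative assumptions, not the book's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(0.0, a)

# Hypothetical sizes: 784-dim input (e.g. a flattened 28x28 image),
# 2-dimensional latent code z.
W_enc = rng.normal(0.0, 0.01, (784, 2))
W_dec = rng.normal(0.0, 0.01, (2, 784))

def encoder(x):
    # The Encoder compresses x to the low-dimensional latent features z
    return relu(x @ W_enc)

def decoder(z):
    # The Decoder reconstructs x from z
    return z @ W_dec

x = rng.normal(size=(1, 784))
z = encoder(x)
x_hat = decoder(z)
print(z.shape, x_hat.shape)  # (1, 2) (1, 784)
```

In a real autoencoder both networks are deeper and trained jointly to minimize the reconstruction error between x and x_hat; the sketch only shows the shape of the computation.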
VAEs are concerned with finding the likelihood p(x) of the data by marginalizing over the latent features z:

p(x) = ∫ p(x|z) p(z) dz
This density is intractable, so it cannot be optimized directly; instead, we obtain a lower bound by assuming a simple Gaussian prior p(z) and making both the Encoder and Decoder networks probabilistic:

The architecture of a VAE
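The probabilistic Encoder outputs the mean and (log-)variance of a Gaussian over z rather than a single point, and z is then sampled from that Gaussian. A common way to do this while keeping the sampling step differentiable is the reparameterization trick, sketched below in NumPy (the batch size and latent dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_z(mu, log_var):
    """Reparameterization trick: draw z ~ N(mu, sigma^2) as a
    deterministic function of (mu, log_var) plus standard normal noise,
    so gradients can flow through mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical Encoder outputs for a batch of 4 inputs, 2 latent dims.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))  # log_var = 0 means sigma = 1
z = sample_z(mu, log_var)
print(z.shape)  # (4, 2)
```

At generation time the same Decoder is used, but z is drawn directly from the prior p(z) = N(0, I) instead of from the Encoder's output.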
This allows us to define a tractable lower bound on the log likelihood (the evidence lower bound, or ELBO), given by the following:

log p_θ(x) ≥ E_{z∼q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) ‖ p(z))
In the preceding, θ represents the decoder network parameters and φ the encoder network parameters. The network is trained by maximizing this lower bound with respect to both θ and φ: the first term rewards accurate reconstruction of x, while the KL term keeps the approximate posterior q_φ(z|x) close to the prior p(z).
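In practice the negative of this bound is minimized as a loss. For a diagonal-Gaussian Encoder and a standard normal prior, the KL term has a closed form; the sketch below uses squared error as the reconstruction term (a Gaussian-decoder assumption) and illustrative shapes:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    # Negative ELBO = reconstruction error + KL divergence.
    # Reconstruction term: squared error (Gaussian decoder assumption).
    recon = np.sum((x - x_hat) ** 2)
    # Analytic KL between the diagonal Gaussian q_phi(z|x) = N(mu, sigma^2)
    # and the standard normal prior p(z) = N(0, I):
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

# Sanity check: when q matches the prior exactly (mu=0, log_var=0) and
# the reconstruction is perfect, the loss is zero.
x = np.ones((1, 4))
loss = vae_loss(x, x, np.zeros((1, 2)), np.zeros((1, 2)))
print(loss)  # 0.0
```

Minimizing this loss with gradient descent over θ and φ is exactly the maximization of the lower bound described above.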