Variational autoencoder in TensorFlow
Variational autoencoders are the modern generative version of autoencoders. Let's build a variational autoencoder for the same problem as before: reconstructing MNIST images. We will test the autoencoder by feeding it images from both the original and the noisy test sets.
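Before testing, the noisy test set has to be produced from the clean images. The book's own noise-generation code is not shown here, so the following is a minimal NumPy sketch of one common approach, additive Gaussian noise clipped back to the valid pixel range; the `noise_factor` value and function name are illustrative assumptions:

```python
import numpy as np

def add_noise(images, noise_factor=0.5, seed=0):
    """Corrupt images with Gaussian noise and clip back to [0, 1].

    noise_factor, the seed, and the clipping range are illustrative
    choices, not taken from the book's code.
    """
    rng = np.random.RandomState(seed)
    noisy = images + noise_factor * rng.normal(size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: corrupt a batch of two flattened 28x28 MNIST-sized images
clean = np.zeros((2, 784))
noisy = add_noise(clean)
print(noisy.shape)  # (2, 784)
```

Clipping keeps the corrupted pixels in the same [0, 1] range the network sees during training, so the noisy inputs remain valid images.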
We will use a different coding style to build this autoencoder for the purpose of demonstrating the different styles of coding with TensorFlow:
- Start by defining the hyper-parameters:
```python
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples / batch_size)
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
```
- Next, define a parameter dictionary to hold the weight and bias parameters:
```python
params = {}
```
- Define the number of hidden layers in each of the encoder and decoder:
```python
n_layers = 2
# neurons in each hidden layer
n_neurons = [512, 256]
```
- The new addition in a variational autoencoder is that we define the dimensions of the latent variable z:
```python
n_neurons_z = 128  # the dimensions of the latent variable z
```
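The latent dimension fixes the size of the mean and log-variance vectors the encoder will produce, from which z is sampled with the reparameterization trick. As a standalone illustration (using NumPy stand-ins for the encoder's outputs, not the book's TensorFlow graph), the sampling step looks like this:

```python
import numpy as np

n_neurons_z = 128  # latent dimension, as defined above
batch_size = 100

rng = np.random.RandomState(0)
# Stand-ins for the encoder's outputs; in the real model these are
# tensors produced by the encoder network for each input batch:
z_mean = rng.normal(size=(batch_size, n_neurons_z))
z_log_var = rng.normal(size=(batch_size, n_neurons_z))

# Reparameterization trick: z = mu + sigma * epsilon with
# epsilon ~ N(0, I), so the sampling step stays differentiable
# with respect to mu and sigma
epsilon = rng.normal(size=(batch_size, n_neurons_z))
z = z_mean + np.exp(0.5 * z_log_var) * epsilon

print(z.shape)  # (100, 128)
```

Drawing epsilon from a fixed standard normal, rather than sampling z directly, is what lets gradients flow back through `z_mean` and `z_log_var` during training.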