Stacked autoencoder in TensorFlow
The steps to build a stacked autoencoder model in TensorFlow are as follows:
- First, define the hyper-parameters as follows:
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples/batch_size)
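Note that the snippets in this section assume the TensorFlow 1.x API and that the MNIST data has already been loaded into a mnist object. A minimal setup along those lines might look as follows (the MNIST_data/ directory is only an illustrative location):
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load MNIST; each image is returned as a flat vector of 784 pixel values in [0, 1]
mnist = input_data.read_data_sets("MNIST_data/")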
- Define the number of inputs (that is, features) and outputs (that is, targets). The number of outputs will be the same as the number of inputs:
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
- Define the placeholders for the input and output images (for an autoencoder, the target images are the same as the input images):
x = tf.placeholder(dtype=tf.float32, name="x", shape=[None, n_inputs])
y = tf.placeholder(dtype=tf.float32, name="y", shape=[None, n_outputs])
- Set the number of neurons for the encoder and decoder layers to [512,256,256,512]:
# number of hidden layers
n_layers = 2
# neurons in each hidden layer
n_neurons = [512,256]
# add number of decoder layers:
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
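As a quick check (assuming the values above), the extended configuration now describes four hidden layers:
print(n_neurons)   # [512, 256, 256, 512]
print(n_layers)    # 4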
- Define the w (weights) and b (biases) parameters:
w = []
b = []
...
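The ellipsis indicates that the parameter definitions continue. One common way to complete them, shown here only as a sketch that continues from the empty w and b lists above, is to create one weight matrix and one bias vector per hidden layer, plus a final pair for the reconstruction layer (the variable names and initializers below are illustrative, not the book's exact code):
# one weight matrix and one bias vector per hidden layer
for i in range(n_layers):
    # the first layer takes n_inputs features; later layers take the
    # previous layer's neuron count as their input dimension
    n_in = n_inputs if i == 0 else n_neurons[i - 1]
    w.append(tf.Variable(tf.random_normal([n_in, n_neurons[i]]),
                         name="w_{}".format(i)))
    b.append(tf.Variable(tf.zeros([n_neurons[i]]),
                         name="b_{}".format(i)))

# parameters for the output (reconstruction) layer
w.append(tf.Variable(tf.random_normal([n_neurons[n_layers - 1], n_outputs]),
                     name="w_out"))
b.append(tf.Variable(tf.zeros([n_outputs]), name="b_out"))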