Now let's build the same autoencoder in Keras.
We clear the graph in the notebook with the following commands, so that we can build a fresh graph that does not carry over any memory from the previous session or graph:
tf.reset_default_graph()
keras.backend.clear_session()
- First, we import the Keras libraries and define the hyperparameters and layers:
import keras
from keras.layers import Dense
from keras.models import Sequential
learning_rate = 0.001
n_epochs = 20
batch_size = 100
n_batches = int(mnist.train.num_examples / batch_size)
# number of pixels in the MNIST image as number of inputs
n_inputs = 784
n_outputs = n_inputs
# number of hidden layers
n_layers = 2
# neurons in each hidden layer
n_neurons = [512,256]
# add decoder layers:
n_neurons.extend(list(reversed(n_neurons)))
n_layers = n_layers * 2
- Next, we build a sequential model and add dense layers...
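As a sketch of this step, the hyperparameters above can be wired into a Sequential autoencoder as follows. This is a minimal illustration, assuming the `tf.keras` API; the sigmoid hidden activation, linear output activation, MSE loss, and Adam optimizer are assumed choices, not necessarily the ones used elsewhere in the chapter, and a random batch stands in for the MNIST images:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# hyperparameters defined above
learning_rate = 0.001
n_inputs = 784
n_outputs = n_inputs
n_neurons = [512, 256]
# mirror the encoder layers to form the decoder: [512, 256, 256, 512]
n_neurons.extend(list(reversed(n_neurons)))

model = Sequential()
model.add(keras.Input(shape=(n_inputs,)))
# hidden layers: the first half encodes, the second half decodes
for units in n_neurons:
    model.add(Dense(units, activation='sigmoid'))  # assumed activation
# output layer reconstructs the 784 input pixels
model.add(Dense(n_outputs, activation='linear'))
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(learning_rate=learning_rate))

# quick shape check on a random batch standing in for MNIST images
x = np.random.rand(4, n_inputs).astype('float32')
recon = model.predict(x, verbose=0)
print(recon.shape)
```

Training would then be a single `model.fit(x_train, x_train, epochs=n_epochs, batch_size=batch_size)` call, with the images used as both input and target since an autoencoder learns to reconstruct its input.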