Implementing autoencoders with TensorFlow
Training an autoencoder is a simple process. An autoencoder is a neural network whose output is trained to match its input. The input layer is followed by a few hidden layers that shrink down to a bottleneck at a certain depth; from there, the hidden layers mirror the encoder's architecture in reverse until the final layer has the same dimensionality as the input layer. We pass into the network the data whose embedding we wish to learn.
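Before turning to the TensorFlow implementation, the symmetric shape described above can be sketched in plain NumPy. This is an illustrative sketch, not the book's code: the layer sizes [784, 256, 64, 256, 784] and the random, untrained weights are assumptions chosen only to show that the output has the same dimensionality as the input.

```python
import numpy as np

rng = np.random.default_rng(0)
# Encoder shrinks 784 -> 256 -> 64; decoder mirrors it back to 784.
layer_sizes = [784, 256, 64, 256, 784]

# Random weights and biases per layer (training would learn these).
weights = [rng.normal(0, 0.01, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass x through encoder and decoder; sigmoid keeps activations in (0, 1)."""
    a = x
    for w, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(a @ w + b)))
    return a

x = rng.random((1, 784))           # one fake 28x28 image, flattened
reconstruction = forward(x)
print(reconstruction.shape)        # (1, 784): output matches the input size
```

Training would then minimize a reconstruction loss (for example, mean squared error between `x` and `forward(x)`), which is what the TensorFlow version below sets up.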
In this example, we use images from the MNIST dataset as input. We begin our implementation by importing all the main libraries:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
Then we prepare the MNIST dataset. We use the built-in input_data module from TensorFlow to load and set up the data. This module ensures that the data is downloaded and preprocessed so it can be consumed by the autoencoder, so we don't need to do any feature engineering at all:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data...
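The preprocessing that the input_data helper performs amounts to scaling pixel intensities into [0, 1] and flattening each 28x28 image into a 784-dimensional vector. As a hedged sketch of that same preprocessing in plain NumPy (the random batch here is a stand-in for raw MNIST images, not real data):

```python
import numpy as np

# Fake batch standing in for raw MNIST images: uint8 pixels in [0, 255].
raw_images = np.random.randint(0, 256, size=(32, 28, 28), dtype=np.uint8)

# input_data-style preprocessing: scale to [0, 1] floats, flatten to 784-dim.
images = raw_images.astype(np.float32) / 255.0
images = images.reshape(-1, 28 * 28)

print(images.shape)   # (32, 784)
```

Each row of `images` is then ready to be fed to the 784-unit input layer of the autoencoder.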