Next, we will investigate how much better autoencoder reconstructions can get, and whether we can generate images that are a bit sharper than the blurry reconstructions we just saw. For this, we will design a deep feed-forward autoencoder. As you know, this simply means adding additional hidden layers between the input and output layers of our autoencoder. To keep things interesting, we will also use a different dataset of images. You are welcome to reimplement this method on the fashion_mnist dataset if you're curious to further explore the sense of fashion that's attainable by autoencoders.
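Before moving to the new dataset, here is a minimal sketch of what such a deep feed-forward autoencoder could look like in Keras. The specific layer widths, the 784-dimensional input (a flattened 28×28 grayscale image, as in fashion_mnist), and the choice of loss are illustrative assumptions, not the exact architecture we will build:

```python
# A minimal deep feed-forward autoencoder sketch in Keras.
# Layer sizes and the 784-dim input are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784  # e.g. a flattened 28x28 grayscale image

autoencoder = keras.Sequential([
    layers.Input(shape=(input_dim,)),
    # Encoder: progressively narrower hidden layers
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),   # bottleneck representation
    # Decoder: a mirror image of the encoder
    layers.Dense(64, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),  # pixel values in [0, 1]
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Quick shape check on random data: the reconstruction
# has the same shape as the input batch.
x = np.random.rand(8, input_dim).astype("float32")
reconstruction = autoencoder.predict(x, verbose=0)
print(reconstruction.shape)
```

Notice that the decoder mirrors the encoder: the narrow bottleneck in the middle is what forces the network to learn a compressed representation rather than copy its input through.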
For the next exercise, we will use the 10 Monkey Species dataset, available on Kaggle. We will try to reconstruct pictures of our playful and mischievous cousins from the jungle, and see how well our autoencoder performs at a more...