Just as we did in the previous example, we will use the functional API to construct our deep autoencoder. We will import the Input and Dense layers, as well as the Model object that we will later use to initialize the network. We will also define the input dimension for our images (64 x 64 x 3 = 12,288) and an encoding dimension of 256, leaving us with a compression factor of 12,288 / 256 = 48. This simply means that each image will be compressed by a factor of 48 before our network attempts to reconstruct it from the latent space:
from keras.layers import Input, Dense
from keras.models import Model
## Input dimension (64 x 64 x 3, flattened)
input_dim = 12288
## Encoding dimension for the latent space
encoding_dim = 256
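With these imports and dimensions in place, a deep autoencoder can be assembled with the functional API by chaining Dense layers down to the 256-unit latent space and back up again. The sketch below is illustrative only: the intermediate layer size of 1,024 and the choice of activations are assumptions, not values taken from the text.

```python
from keras.layers import Input, Dense
from keras.models import Model

input_dim = 12288     # 64 x 64 x 3, flattened
encoding_dim = 256    # latent space dimension

# Encoder: progressively compress the input down to the latent space.
# The 1024-unit intermediate layer is an illustrative assumption.
input_img = Input(shape=(input_dim,))
encoded = Dense(1024, activation='relu')(input_img)
encoded = Dense(encoding_dim, activation='relu')(encoded)

# Decoder: mirror the encoder to reconstruct the original dimensionality.
decoded = Dense(1024, activation='relu')(encoded)
decoded = Dense(input_dim, activation='sigmoid')(decoded)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
```

Because the functional API treats layers as callables on tensors, each `Dense(...)` call above is applied directly to the output of the previous layer, and the whole chain is wrapped into a single trainable `Model`.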
The compression factor can be a very important parameter to consider: mapping the input to a space of too low a dimension will result in too much information being lost for the network to reconstruct the input faithfully.
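Since the compression factor is simply the ratio of input dimension to encoding dimension, it is easy to compute for any candidate latent-space size. The snippet below uses the dimensions defined above; the alternative encoding sizes are hypothetical values for comparison.

```python
# Compression factor = input dimension / encoding dimension.
input_dim = 12288  # 64 x 64 x 3, flattened

# Our chosen latent size, plus two hypothetical alternatives for comparison.
for encoding_dim in (256, 128, 32):
    compression_factor = input_dim / encoding_dim
    print(f"encoding_dim={encoding_dim}: compression factor {compression_factor:.0f}")
```

Running this shows that halving the encoding dimension doubles the compression factor, which is why very small latent spaces force the network to discard so much information.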