Now that we have seen how RBMs perform, a comparison with AEs is in order. To make this comparison fair, we can give the AE the configuration closest to that of an RBM; that is, the same number of hidden units (neurons in the encoder layer) and the same number of neurons in the visible layer (the decoder's output layer), as shown in Figure 10.6:
Figure 10.6 – AE configuration that's comparable to an RBM
We can model and train our AE using the tools we covered in Chapter 7, Autoencoders, as follows:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
inpt_dim = 28*28 # 784 dimensions
ltnt_dim = 100 # 100 components
inpt_vec = Input(shape=(inpt_dim,))                         # visible layer input
encoder = Dense(ltnt_dim, activation='sigmoid')(inpt_vec)   # hidden (encoder) layer
latent_ncdr = Model(inpt_vec, encoder)                      # model that outputs the latent codes
decoder = Dense(inpt_dim, activation='sigmoid')(encoder)    # reconstruction (decoder) layer
autoencoder = Model(inpt_vec, decoder)                      # full end-to-end autoencoder
autoencoder.compile(loss='binary_crossentropy', optimizer='adam')  # optimizer assumed
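After compiling, we can train this AE on the same data as the RBM. The following is a minimal sketch, assuming the dataset is MNIST (consistent with the 28*28 input dimension) and using illustrative values for epochs and batch_size; flattening the images and scaling them to [0, 1] matches the sigmoid outputs and the binary cross-entropy loss:
from tensorflow.keras.datasets import mnist
# load MNIST and flatten each 28x28 image into a 784-dimensional vector in [0, 1]
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, inpt_dim).astype('float32') / 255.0
x_test = x_test.reshape(-1, inpt_dim).astype('float32') / 255.0
# train the AE to reconstruct its input; epochs and batch_size are illustrative choices
autoencoder.fit(x_train, x_train, epochs=100, batch_size=1000,
                validation_data=(x_test, x_test))
Once trained, latent_ncdr.predict(x_test) yields the 100-dimensional codes, which play the same role as the hidden-unit activations of the RBM.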