An RBM is a neural network with just two layers: the visible (input) layer and a hidden layer of latent features. It is possible, however, to stack additional hidden layers on top and finish with an output layer; when this is done, the model is referred to as a deep belief network. In this way, deep belief networks resemble other deep learning architectures. In a deep belief network, each hidden layer is fully connected to the layer before it, meaning every hidden unit receives input from every unit in the preceding layer.
The first layer pair is a standard RBM, in which latent features are learned from the input units. Each subsequent hidden layer then learns latent features from the activations of the hidden layer before it. The final hidden layer can, in turn, feed an output layer for classification tasks.
Implementing a deep belief network uses syntax similar to what was used to train the RBM....
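As a rough illustration, here is a minimal sketch of a DBN-style stack built with scikit-learn's BernoulliRBM. Chaining two RBMs in a Pipeline trains each one greedily on the previous layer's hidden activations, which approximates DBN pre-training but omits joint fine-tuning; the dataset, layer sizes, and hyperparameters below are illustrative assumptions, not values taken from the text.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import minmax_scale

X, y = load_digits(return_X_y=True)
X = minmax_scale(X)  # BernoulliRBM expects inputs in [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two stacked RBMs: the second learns latent features of the first's
# hidden-layer activations, followed by a logistic output layer for
# classification. All sizes and learning rates are placeholder choices.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.06,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.06,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

dbn.fit(X_train, y_train)
print("test accuracy:", dbn.score(X_test, y_test))
```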