Summary
In this chapter, you learned about one of the most important models from the beginnings of the deep learning revolution, the DBN. You saw how DBNs are constructed by stacking together RBMs, and how these undirected models can be trained using CD.
The chapter then described a greedy, layer-wise procedure for priming a DBN by sequentially training each RBM in a stack; the resulting network can then be fine-tuned using the wake-sleep algorithm or backpropagation. We then explored practical examples of using the TensorFlow 2 API to create an RBM layer and a DBN model, illustrating the use of the GradientTape class to compute CD updates.
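As a reminder of how those pieces fit together, the following is a minimal sketch of a CD-1 update for a Bernoulli-Bernoulli RBM using GradientTape. It is not the book's exact code: the names (W, b_v, b_h, free_energy, cd_step) and the layer sizes are illustrative, and the gradient is taken of the free-energy difference between the data and a one-step Gibbs reconstruction, which is the standard CD approximation.

```python
import tensorflow as tf

n_visible, n_hidden = 784, 256  # e.g. flattened MNIST images
W = tf.Variable(tf.random.normal([n_visible, n_hidden], stddev=0.01))
b_v = tf.Variable(tf.zeros([n_visible]))
b_h = tf.Variable(tf.zeros([n_hidden]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

def sample_bernoulli(probs):
    # Draw a 0/1 sample for each unit given its activation probability.
    return tf.cast(tf.random.uniform(tf.shape(probs)) < probs, tf.float32)

def free_energy(v):
    # F(v) = -v.b_v - sum_j softplus((v W + b_h)_j)
    return (-tf.reduce_sum(v * b_v, axis=1)
            - tf.reduce_sum(tf.math.softplus(tf.matmul(v, W) + b_h), axis=1))

@tf.function
def cd_step(v_data):
    # Positive phase, then one Gibbs step to get the negative sample.
    h_sample = sample_bernoulli(tf.sigmoid(tf.matmul(v_data, W) + b_h))
    v_model = sample_bernoulli(
        tf.sigmoid(tf.matmul(h_sample, tf.transpose(W)) + b_v))
    with tf.GradientTape() as tape:
        # CD approximates the log-likelihood gradient by the difference
        # of free energies between the data and the reconstruction.
        loss = tf.reduce_mean(free_energy(v_data) - free_energy(v_model))
    grads = tape.gradient(loss, [W, b_v, b_h])
    optimizer.apply_gradients(zip(grads, [W, b_v, b_h]))
    return loss
```

In use, cd_step would be called on batches of binarized images, for example `cd_step(tf.cast(batch > 0.5, tf.float32))`; because v_model is computed outside the tape, the tape records only the free-energy terms, which is exactly the CD gradient.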
You also learned how, after running the wake-sleep algorithm, we can compile the DBN as a standard deep neural network and perform backpropagation for supervised training. We applied these models to MNIST data and saw that, once training converges, an RBM can generate digits and learns features resembling the convolutional filters described in Chapter...
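A minimal sketch of that fine-tuning step is shown below. Here `rbm_weights` is an assumed list of (W, b_h) NumPy weight pairs taken from the greedy pre-training stage, and `build_finetune_model` is an illustrative helper, not a function from the book: the pre-trained weights initialize ordinary Dense layers, and the stack is compiled as a regular classifier.

```python
import tensorflow as tf

def build_finetune_model(rbm_weights, n_classes=10):
    # Stack the pre-trained RBM layers into a standard feed-forward network.
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(rbm_weights[0][0].shape[0],)))
    for W, b_h in rbm_weights:
        dense = tf.keras.layers.Dense(W.shape[1], activation="sigmoid")
        model.add(dense)
        dense.set_weights([W, b_h])  # initialize from the pre-trained RBM
    # Add a classification head and compile for supervised backpropagation.
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (x_train flattened to vectors, y_train as integer labels):
#   model = build_finetune_model(rbm_weights)
#   model.fit(x_train, y_train, epochs=5, batch_size=128)
```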