Together, we have seen the power of unsupervised learning and hopefully convinced ourselves that it can be applied to different problems. We will finish the topic of unsupervised learning with an exciting approach known as Restricted Boltzmann Machines (RBMs). When we do not need a large number of layers, an RBM can learn a representation of the data by minimizing an energy function, producing a model that is robust at representing the input data.
This chapter complements Chapter 8, Deep Autoencoders, by introducing the backward-forward nature of RBMs and contrasting it with the forward-only nature of Autoencoders (AEs). It then compares RBMs and AEs on the problem of dimensionality reduction, using MNIST as the case study. Once you have finished this chapter, you should be able to implement an RBM using scikit-learn and...
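As a preview of where the chapter is headed, the following is a minimal sketch of an RBM used for dimensionality reduction with scikit-learn's BernoulliRBM. The chapter's case study uses MNIST; here the smaller built-in digits dataset stands in so the example runs quickly, and the hyperparameters (2 hidden components, 20 iterations) are illustrative assumptions rather than the chapter's settings.

```python
from sklearn.datasets import load_digits
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM

# Load 8x8 digit images and scale pixel intensities to [0, 1],
# the range BernoulliRBM expects for its visible units.
X, _ = load_digits(return_X_y=True)
X = MinMaxScaler().fit_transform(X)

# Train an RBM with 64 visible units (one per pixel) and 2 hidden units,
# then use the hidden-unit activations as a 2-dimensional representation.
rbm = BernoulliRBM(n_components=2, learning_rate=0.01, n_iter=20, random_state=0)
Z = rbm.fit_transform(X)

print(Z.shape)  # (1797, 2) -- each image reduced to 2 learned features
```

The hidden activations in `Z` play the same role as the bottleneck of an AE, which is exactly the comparison this chapter develops on MNIST.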