Now that you know the basic idea behind RBMs, we will use the BernoulliRBM model to learn data representations in an unsupervised manner. As before, we will do this with the MNIST dataset to facilitate comparisons.
Some people think of learning representations as a form of feature engineering. The difference is that feature engineering implies features we can explain, whereas learning representations does not necessarily require us to prescribe meaning to what is learned.
In scikit-learn, we can create an instance of an RBM with the following instructions:
from sklearn.neural_network import BernoulliRBM
rbm = BernoulliRBM()
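Once instantiated, training proceeds in a fully unsupervised way by calling fit() on the input data, and transform() yields the learned representations. The following is a minimal sketch under the assumption that we fetch MNIST through fetch_openml() and scale the pixels to the [0, 1] range, which is what BernoulliRBM expects:

from sklearn.datasets import fetch_openml
from sklearn.neural_network import BernoulliRBM

# Load MNIST and scale pixel intensities from [0, 255] to [0, 1],
# since BernoulliRBM models visible units in the unit interval
X, _ = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
X = X / 255.0

rbm = BernoulliRBM(verbose=True)
rbm.fit(X)             # unsupervised training via persistent contrastive divergence
H = rbm.transform(X)   # hidden-unit activations: the learned representations
print(H.shape)         # (70000, 256) with the default n_components=256

Here, transform() returns the probabilities of the hidden units given the data, and these activations are what we use as the new representation of each image.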
The default parameters in the constructor of the RBM are the following:
- n_components=256, which is the number of hidden units; the number of visible units is inferred from the dimensionality of the input.
- learning_rate=0.1 controls how strongly the learning algorithm updates the weights at each step, and it is recommended...