Gibbs sampling plays a key role in constructing an RBM, so we will take a moment here to define it. We will briefly walk through a couple of quick concepts that lead to how Gibbs sampling is performed and why it matters for this type of modeling. An RBM uses a neural network to map its input units, also called visible units, to hidden units, which can be thought of as latent features. After training the model, we want to take a new visible vector and compute the probability of each hidden unit activating given that input, or do the reverse and sample visible units given the hidden units. We also want this to be computationally efficient, so we use a Monte Carlo approach.
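The alternation described above, sampling hidden units given visible units and then visible units given hidden units, is one step of block Gibbs sampling. A minimal sketch, assuming a binary RBM with hypothetical parameters `W`, `b_visible`, and `b_hidden` (the shapes and values here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RBM parameters (illustrative only): 6 visible, 4 hidden units.
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # weight matrix
b_visible = np.zeros(n_visible)                        # visible biases
b_hidden = np.zeros(n_hidden)                          # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One round of block Gibbs sampling: v -> h -> v'."""
    # Sample hidden units from p(h | v).
    p_h = sigmoid(v @ W + b_hidden)
    h = (rng.random(n_hidden) < p_h).astype(float)
    # Sample visible units from p(v | h).
    p_v = sigmoid(h @ W.T + b_visible)
    v_new = (rng.random(n_visible) < p_v).astype(float)
    return v_new, h

# Start from a random binary visible vector and take one Gibbs step.
v = rng.integers(0, 2, size=n_visible).astype(float)
v, h = gibbs_step(v)
```

Because each conditional distribution factorizes over units in an RBM, an entire layer can be sampled at once, which is what makes this alternating scheme cheap; repeating `gibbs_step` many times yields samples from the model's joint distribution.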
Monte Carlo methods involve sampling random points to approximate an area or distribution. A classic example involves drawing a 10-by-10 inch square and, inside this square, drawing a circle. We know...