Image generation with diffusion models
So far, we’ve used NNs as discriminative models. This simply means that, given input data, a discriminative model maps it to a certain label (in other words, it performs a classification). A typical example is the classification of MNIST images into one of ten digit classes, where the NN maps the input data features (pixel intensities) to the digit label. We can also put this another way: a discriminative model gives us the probability of the class, y, given the input, x: P(Y|X=x). In the case of MNIST, this is the probability of the digit, given the pixel intensities of the image. In the next section, we’ll introduce NNs as generative models.
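To make the discriminative view concrete, here is a minimal sketch, assuming PyTorch and using random tensors as a stand-in for real MNIST images: the softmax output of the network is a distribution over the ten digit classes, that is, an estimate of P(Y|X=x).

```python
import torch
import torch.nn as nn

# A small discriminative model: maps 28x28 pixel intensities to
# a probability distribution over the 10 digit classes, P(Y | X=x).
model = nn.Sequential(
    nn.Flatten(),            # (batch, 1, 28, 28) -> (batch, 784)
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),      # one logit per digit class
)

x = torch.rand(4, 1, 28, 28)          # stand-in for a batch of MNIST images
logits = model(x)
p_y_given_x = logits.softmax(dim=1)   # each row sums to 1: P(Y | X=x) per image
print(p_y_given_x.argmax(dim=1))      # most likely digit class per image
```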
Introducing generative models
A generative model learns the distribution of the data. In a way, it is the opposite of the discriminative model we just described. It predicts the probability of the input sample, given its class, y: P(X|Y=y).
For example, a generative model will be able to create a new image of a digit, given the digit class.
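To make the contrast concrete, here is a minimal sketch of a toy class-conditional generative model: a per-pixel Gaussian fitted separately for each class, using random arrays as a stand-in for real data. This is purely illustrative and is unrelated to the diffusion models discussed later; it only shows what estimating P(X|Y=y) and then sampling a new input for a chosen class can look like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: 100 "images" of 784 pixels for each of the 10 digit classes.
images = rng.random((10, 100, 784))

# A naive class-conditional Gaussian model of P(X | Y=y):
# one mean and one standard deviation per pixel, estimated separately for each class.
means = images.mean(axis=1)        # shape (10, 784)
stds = images.std(axis=1) + 1e-6   # shape (10, 784)

def sample_image(digit):
    """Draw a new 784-pixel sample from the estimated P(X | Y=digit)."""
    return rng.normal(means[digit], stds[digit])

new_image = sample_image(3).reshape(28, 28)  # a generated sample for class 3
print(new_image.shape)
```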