Over the years, researchers have developed methods that operationalize this notion of injecting controlled randomness into the generation process. When we speak of generative models, we essentially wish to implement a mechanism that performs controlled, quasi-random transformations of our input to generate something new, yet still plausibly resembling the original input.
Let's consider for a moment how this can be achieved. We wish to train a neural network that takes some input variables (x) and generates some output variables (y) from a latent space learned by the model. A simple way to achieve this is to add an element of randomness as an input to our generator network, defined here by the variable (z). The value of z may be sampled from some probability distribution (a Gaussian distribution, for example) and passed to the generator, so that each new draw of z produces a different output.
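To make this concrete, here is a minimal sketch of a generator network fed by Gaussian noise. The text does not specify a framework or architecture, so PyTorch, the layer sizes, and the dimensions (latent_dim, output_dim) below are illustrative assumptions, not the author's implementation:

```python
import torch
import torch.nn as nn

# Illustrative generator: maps a random latent vector z to an output y.
# Architecture and sizes are assumptions for the sake of the example.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, output_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, output_dim),
            nn.Tanh(),  # squash outputs to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

latent_dim = 100
generator = Generator(latent_dim)

# Sample z from a standard Gaussian; each draw yields a different output.
z = torch.randn(16, latent_dim)   # a batch of 16 latent vectors
y = generator(z)                  # 16 generated samples
```

Because z is resampled on every call, the same trained generator produces a new, varied output each time, which is exactly the controlled randomness described above.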