In this chapter, we learned about two of the most exciting networks in AI: variational autoencoders and GANs. Both rely on the same fundamental idea of condensing data into a compact representation and then generating new data from that condensed form. You will recall that both of these networks are probabilistic models, meaning that they rely on inference from probability distributions in order to generate data. We worked through examples of both networks and showed how to use them to generate new images.
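The "condense, then generate" idea can be sketched in a few lines of plain NumPy. This is an illustrative sketch, not code from the chapter: the `mu` and `log_var` values stand in for a hypothetical encoder's output, and the sampling step shows the reparameterization used by a VAE to draw a latent code before decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder output for one input: a mean and log-variance
# describing the latent (condensed) distribution of that input.
mu = np.array([0.5, -1.0])
log_var = np.array([-2.0, -2.0])

# Reparameterization: sample a latent code z = mu + sigma * eps,
# where eps ~ N(0, I). A decoder would then map z back to data space
# to generate a new sample.
eps = rng.standard_normal(mu.shape)
sigma = np.exp(0.5 * log_var)
z = mu + sigma * eps

print(z.shape)  # same shape as the latent mean
```

Because the variance here is small, sampled codes stay near the mean; in a trained VAE, this is what makes nearby latent codes decode to similar images.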
In addition to learning about these exciting techniques, you learned, most importantly, that advanced networks can be broken down into smaller, simpler, repetitive building blocks. When you think about writing advanced models in TensorFlow, you need to remember what kinds of layers you need, what types of activation functions you need, how...
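To make the "small, repetitive building blocks" point concrete, here is a minimal NumPy sketch (the function names are my own, not from the chapter): a dense layer is nothing more than a weight matrix, a bias, and an activation function, and stacking that one block is how larger networks are built.

```python
import numpy as np

def relu(x):
    # A common activation: zero out negative values.
    return np.maximum(0.0, x)

def dense(x, w, b, activation):
    # One building block: affine transform followed by an activation.
    return activation(x @ w + b)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))          # a batch of 4 inputs with 8 features
w1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 2)), np.zeros(2)

# Repeating the same block twice already gives a two-layer network.
h = dense(x, w1, b1, relu)
out = dense(h, w2, b2, lambda v: v)      # linear output layer
print(out.shape)  # (4, 2)
```

The choice of layer sizes and activations here is arbitrary; the point is that even the advanced models in this chapter reduce to compositions of this one simple pattern.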