So far in this book, we have studied a variety of neural networks, and we have seen that each has its own strengths and weaknesses across different tasks. We have also learned that deep learning architectures require large amounts of training data because of their size and their large number of trainable parameters. As you can imagine, for many of the problems we want to build models for, it may not be possible to collect enough data; even when it is possible, doing so can be difficult, time-consuming, and costly. One way to combat this is to use generative models to create synthetic data (something we encountered in Chapter 8, Regularization) from a small dataset that we collect for our task.
In this chapter, we will cover two topics that have recently grown...