In this chapter, we will work on GANs that directly generate sequential data, such as text and audio. Along the way, we will draw comparisons to the image-synthesizing models covered in previous chapters so that you can become familiar with NLP models quickly.
Throughout this chapter, you will get to know commonly used techniques in the NLP field, such as RNNs and LSTMs. You will also learn some of the basic concepts of reinforcement learning (RL) and how it differs from supervised learning (for example, SGD-based CNNs). Later on, we will learn how to build a custom vocabulary from a collection of text so that we can train our own NLP models, and how to train SeqGAN so that it can generate short English jokes. You will also learn how to use SEGAN to remove background noise and enhance the quality of speech audio.
The following topics will be covered...