In this chapter, we discussed recurrent neural networks, how to train them, the training problems unique to RNNs, and how to solve those problems with LSTM and GRU cells. We described the task of language modeling and how RNNs help address some of the difficulties of modeling language. Then, we put this all together in a practical example of how to train a character-level language model to generate text based on Leo Tolstoy's War and Peace. Next, we introduced seq2seq models and the attention mechanism. Finally, we gave a brief overview of how to apply deep learning, and RNNs in particular, to the problem of speech recognition.
In the next two chapters, we'll discuss how to teach a computer-controlled agent to navigate a physical or virtual environment with the help of reinforcement learning. Thanks to deep neural networks, this exciting ML area has seen...