What this book covers
Chapter 1, Deep Learning Walkthrough and PyTorch Introduction, introduces the PyTorch way of doing deep learning and the basic APIs of PyTorch. It starts with the history of PyTorch and makes the case for PyTorch as the go-to framework for deep learning development. It also gives an overview of the different deep learning approaches that we will cover in the upcoming chapters.
Chapter 2, A Simple Neural Network, helps you build your first simple neural network and shows how to connect the bits and pieces, such as the network, the optimizer, and the parameter updates, to build a basic deep learning model. It also covers how PyTorch does backpropagation, the key technique behind all state-of-the-art deep learning algorithms.
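To give a flavor of the backpropagation idea that chapter builds up to, here is a minimal framework-free sketch: for a single linear neuron with a squared-error loss, the chain rule yields the gradients that an optimizer then uses to update the parameters. The specific values and function names below are illustrative, not taken from the book; PyTorch's autograd automates exactly this kind of gradient computation.

```python
def forward(w, b, x):
    # Single linear neuron: y = w*x + b
    return w * x + b

def loss(y_pred, y_true):
    # Squared-error loss
    return (y_pred - y_true) ** 2

def backward(w, b, x, y_true):
    # Chain rule: dL/dy = 2*(y_pred - y_true); dy/dw = x; dy/db = 1
    y_pred = forward(w, b, x)
    dL_dy = 2.0 * (y_pred - y_true)
    return dL_dy * x, dL_dy  # (dL/dw, dL/db)

# One training example and 50 gradient-descent steps
w, b, lr = 0.0, 0.0, 0.1
x, y_true = 1.0, 2.0
for _ in range(50):
    dw, db = backward(w, b, x, y_true)
    w -= lr * dw  # parameter update, as an optimizer would do
    b -= lr * db
```

The loop drives the loss toward zero, which is the whole training procedure in miniature: forward pass, backward pass, parameter update.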
Chapter 3, Deep Learning Workflow, goes deeper into the deep learning workflow implementation and the PyTorch ecosystem that helps build the workflow. This is probably the most crucial chapter if you are planning to set up a deep learning team or a pipeline for an upcoming project. In this chapter, we'll go through the different stages of a deep learning pipeline and see how the PyTorch community has iteratively advanced each stage of the workflow by building the appropriate tools.
Chapter 4, Computer Vision, covers computer vision, the most successful application of deep learning so far. It discusses the key ideas behind that success and runs through the most widely used vision algorithm: the convolutional neural network (CNN). We'll implement a CNN step by step to understand its working principles, and then use a predefined CNN from PyTorch's nn package. This chapter helps you build a simple CNN and then an advanced CNN-based vision algorithm for semantic segmentation.
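The convolution operation at the heart of a CNN can be sketched in a few lines of plain Python; this is an illustrative, framework-free version (technically cross-correlation, as in most deep learning libraries), not the book's implementation. PyTorch's nn.Conv2d realizes the same idea with learnable kernels, multiple channels, and GPU support.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of an image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Slide the kernel over the image and sum elementwise products
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge-detecting kernel applied to a tiny image
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
result = conv2d(image, kernel)
```

The output responds strongly only where the image changes from 0 to 1, which is the intuition behind convolutional filters as local feature detectors.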
Chapter 5, Sequential Data Processing, looks at recurrent neural networks (RNNs), currently the most successful sequential data processing algorithms. The chapter introduces the major RNN variants, such as the long short-term memory (LSTM) network and the gated recurrent unit (GRU). Then we'll go through algorithmic changes to the RNN implementation, such as bidirectional RNNs and increasing the number of layers, before we explore recursive neural networks. To understand recursive networks, we'll use a renowned example from the Stanford NLP group, the stack-augmented parser-interpreter neural network (SPINN), and implement it in PyTorch.
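The core recurrent idea underlying all of these variants can be sketched without any framework: the same cell is applied at every time step, threading a hidden state through the sequence. The cell and its weights below are illustrative constants, not learned parameters or the book's code; LSTMs and GRUs refine this basic cell with gating mechanisms.

```python
import math

def rnn_cell(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.1):
    # Vanilla RNN step: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
    # (weights here are fixed for illustration, not trained)
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(sequence, h0=0.0):
    # Apply the same cell at every time step, carrying the hidden state forward
    h = h0
    states = []
    for x_t in sequence:
        h = rnn_cell(x_t, h)
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, -1.0])
```

Because the hidden state is reused across steps, each output depends on the whole history so far, which is what makes RNNs suitable for sequential data.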
Chapter 6, Generative Networks, briefly covers the history of generative networks and then explains the different kinds of generative networks. Among those categories, this chapter introduces autoregressive models and generative adversarial networks (GANs). We'll work through the implementation details of PixelCNN and WaveNet as examples of autoregressive models, and then look at GANs in detail.
Chapter 7, Reinforcement Learning, introduces the concept of reinforcement learning, which is not really a subcategory of deep learning. We'll first look at how to define the problem statement, and then explore the concept of cumulative rewards. After covering Markov decision processes and the Bellman equation, we'll move on to deep Q-learning. We'll also get an introduction to Gym, the toolkit developed by OpenAI for developing and experimenting with reinforcement learning algorithms.
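As a taste of the Bellman-style update that deep Q-learning scales up with neural networks, here is a tiny tabular Q-learning sketch on a hypothetical three-state chain where moving right eventually reaches a terminal reward. The environment, rewards, and hyperparameters are all illustrative assumptions, not from the book or from Gym.

```python
import random

N_STATES = 3            # states 0, 1, 2; state 2 is terminal with reward 1
ACTIONS = [0, 1]        # 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    # Toy deterministic environment: move left or right along the chain
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)
for _ in range(200):                      # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, the learned Q-values prefer moving right in every non-terminal state; deep Q-learning replaces the table with a neural network so the same update works over large state spaces.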
Chapter 8, PyTorch to Production, looks at the difficulties that everyone, even deep learning experts, faces when deploying a deep learning model to production. We'll explore different options for production deployment, including using a Flask wrapper around PyTorch and using RedisAI, a highly optimized runtime for deploying models in multicluster environments that can handle millions of requests per second.