In this chapter, we covered the building blocks of deep learning: shallow and deep neural networks, including logistic regression, single-hidden-layer neural networks, RNNs, LSTMs, CNNs, and their variants. Alongside these topics, we also covered multiple activation functions, how forward and backward propagation work, and the problems associated with training deep neural networks, such as vanishing and exploding gradients.
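To recall why vanishing gradients arise, consider that the sigmoid activation's derivative never exceeds 0.25, so gradients chained through many sigmoid layers shrink multiplicatively. The following is a minimal illustrative sketch (the layer count and inputs are arbitrary choices, not from the chapter):

```python
import numpy as np

def sigmoid(x):
    # Standard logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid; its maximum value is 0.25, at x = 0
    s = sigmoid(x)
    return s * (1.0 - s)

# Even in the best case (pre-activations all at 0), the gradient
# signal through n sigmoid layers is scaled by at most 0.25**n.
grad = 1.0
for layer in range(10):
    grad *= sigmoid_prime(0.0)

print(grad)  # 0.25**10, roughly 9.5e-7
```

This is why activations such as ReLU, whose derivative is 1 over its active region, mitigate the vanishing-gradient problem in deep networks.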
Then, we covered the basic terminology of reinforcement learning, which we will explore in detail in the coming chapters: the optimality criteria, namely the value function and the policy. We also gained an understanding of some reinforcement learning algorithms, such as Q-learning and A3C. Finally, we covered some basic computations in the TensorFlow framework, an introduction to OpenAI Gym, and some of the influential pioneers and research breakthroughs in the field of reinforcement learning.
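As a reminder of the Q-learning update mentioned above, the rule is Q(s, a) ← Q(s, a) + α(r + γ maxₐ′ Q(s′, a′) − Q(s, a)). A minimal tabular sketch follows; the state/action sizes, learning rate, and transition values here are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Hypothetical tiny problem: 4 states, 2 actions
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

# One illustrative transition: in state 0, action 1 yields
# reward 1.0 and moves the agent to state 2
s, a, r, s_next = 0, 1, 1.0, 2

# TD target: immediate reward plus discounted best future value
td_target = r + gamma * np.max(Q[s_next])

# Move Q(s, a) a fraction alpha toward the TD target
Q[s, a] += alpha * (td_target - Q[s, a])

print(Q[s, a])  # 0.1, since Q was all zeros before the update
```

Repeating this update over many sampled transitions lets the Q-table converge toward the optimal action values, from which a greedy policy can be read off.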
In the following chapter, we will apply a basic reinforcement learning algorithm to a couple of OpenAI Gym environments and gain a better understanding of OpenAI Gym.