In Chapter 2, Temporal Difference, SARSA, and Q-Learning, we will look into our first two RL algorithms: Q-learning and SARSA. Both algorithms are tabular and do not require neural networks, so we will code them using Python and NumPy.

In Chapter 3, Deep Q-Network, we will cover DQN and use TensorFlow to code the agent, as we will for the rest of the book. We will then train the agent to play Atari Breakout.

In Chapter 4, Double DQN, Dueling Architectures, and Rainbow, we will cover double DQN, dueling network architectures, and Rainbow DQN.

In Chapter 5, Deep Deterministic Policy Gradient, we will look at our first actor-critic RL algorithm, DDPG, learn about policy gradients, and apply them to a continuous action problem.

In Chapter 6, Asynchronous Methods – A3C and A2C, we will investigate A3C, another RL algorithm, which uses a master process and several worker processes.

In Chapter 7, Trust Region Policy Optimization and Proximal Policy Optimization, we will investigate two more RL algorithms: TRPO and PPO.

Finally, in Chapter 8, Deep RL Applied to Autonomous Driving, we will apply DDPG and PPO to train an agent to drive a car autonomously.

From Chapter 3, Deep Q-Network, to Chapter 8, Deep RL Applied to Autonomous Driving, we'll use TensorFlow agents. Have fun learning RL!