Extending from where we left off with DQN, we looked at ways of building on this model by adding a CNN and additional networks to create double DQN (DDQN) and dueling DQN. Before exploring CNN, we looked at what visual observation encoding is and why we need it. Then, we briefly introduced CNN and used the TensorSpace Playground to explore some well-known, state-of-the-art models. Next, we added CNN layers to a DQN model and used that to play the Atari game environment Pong. After that, we took a closer look at how we could extend DQN by adding a second network as the target (double DQN, or DDQN) and by splitting the network into separate value and advantage streams (dueling DQN). This introduced the concept of advantage in choosing an action. Finally, we looked at extending the experience replay buffer so that we can prioritize the events that get captured there. Using this...
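As a quick refresher on the advantage idea mentioned above, the dueling architecture splits the head of the network into a scalar state-value stream and a per-action advantage stream, then recombines them into Q-values. The following is a minimal sketch of that combination in PyTorch; the class and variable names are illustrative and not taken from the chapter's code:

import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, in_features, n_actions):
        super().__init__()
        # Two streams: a scalar state value V(s) and per-action advantages A(s, a)
        self.value = nn.Linear(in_features, 1)
        self.advantage = nn.Linear(in_features, n_actions)

    def forward(self, features):
        v = self.value(features)       # shape: (batch, 1)
        a = self.advantage(features)   # shape: (batch, n_actions)
        # Combine the streams: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
        return v + a - a.mean(dim=1, keepdim=True)

Subtracting the mean advantage keeps the decomposition identifiable, so the value and advantage streams cannot drift in opposite directions while producing the same Q-values.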