Up to this point, we have worked with discrete control tasks, such as the Atari games in Chapter 5, Deep Q-Network, and LunarLander in Chapter 6, Learning Stochastic and PG Optimization. Playing these games requires controlling only a handful of discrete actions, roughly two to five. As we saw in Chapter 6, Learning Stochastic and PG Optimization, policy gradient algorithms can be easily adapted to continuous actions. To demonstrate this, we'll deploy the next few policy gradient algorithms in a new set of environments called Roboschool, in which the goal is to control a robot in different situations. Roboschool was developed by OpenAI and uses the familiar OpenAI Gym interface that we used in the previous chapters. These environments are based on the Bullet Physics Engine (a physics engine that simulates soft and rigid body dynamics).
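Because Roboschool plugs into the standard Gym interface, interacting with these continuous-control environments looks just like the discrete case we've seen so far, except that the action space is a Box of real-valued vectors rather than a Discrete set. The following is a minimal sketch, assuming the roboschool package is installed (importing it registers its environments with Gym); RoboschoolHopper-v1 is used here only as an example task:

```python
import gym
import roboschool  # importing the package registers the Roboschool environments

# Create a continuous-control environment
env = gym.make("RoboschoolHopper-v1")

print(env.observation_space)  # a Box of continuous observations
print(env.action_space)       # a Box of continuous actions, not a Discrete set

# Run one episode with random continuous actions
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # a real-valued vector
    obs, reward, done, info = env.step(action)
```

Note that env.action_space.sample() now returns a real-valued vector rather than an integer index, which is exactly the property that the policy gradient algorithms in this chapter will exploit.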