- Is DDPG an on-policy or an off-policy algorithm?
- We used the same neural network architecture for both the actor and the critic. Is this required, or can the actor and the critic use different architectures? (See the sketch after this list.)
- Can we use DDPG for Atari Breakout?
- Why are the biases of the neural networks initialized to small positive values?
- This is left as an exercise: Can you modify the code in this chapter to train an agent on InvertedDoublePendulum-v2, which is more challenging than the Pendulum-v0 environment you have already seen?
- Here is another exercise: Vary the neural network architecture and check whether the agent can still learn the Pendulum-v0 problem. For instance, keep decreasing the number of neurons in the first hidden layer through the values 400, 100, 25, 10, 5, and 1, and check how the agent performs for each width of the first hidden layer. (A starter sketch follows this list.)
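
On the architecture question above: nothing in DDPG requires the actor and the critic to share an architecture; they are separate networks trained with separate losses. Below is a minimal sketch of two deliberately different shapes, written in PyTorch as an assumption (the chapter's own framework and layer sizes may differ):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a state to a deterministic action; two hidden layers (400 and 300 units)."""
    def __init__(self, state_dim, action_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 400), nn.ReLU(),
            nn.Linear(400, 300), nn.ReLU(),
            nn.Linear(300, action_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)  # rescale to the action range

class Critic(nn.Module):
    """Scores a (state, action) pair; a single wider hidden layer, i.e. a different shape from the actor."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))
```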
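
For the two exercises, here is a hedged starting point: swap the environment ID and sweep the width of the first hidden layer. The `train` helper below is hypothetical, standing in for the chapter's DDPG training loop, and InvertedDoublePendulum-v2 additionally requires a MuJoCo installation:

```python
import gym

# Exercise 1: swap the environment ID (InvertedDoublePendulum-v2 needs MuJoCo installed).
env = gym.make("InvertedDoublePendulum-v2")

# Exercise 2: shrink the first hidden layer and compare how well the agent learns.
for width in [400, 100, 25, 10, 5, 1]:
    env = gym.make("Pendulum-v0")
    # `train` is a hypothetical wrapper around the chapter's DDPG training loop.
    avg_return = train(env, first_hidden_units=width)
    print(f"first hidden layer = {width:3d} neurons -> average return {avg_return:.1f}")
```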