- Can we apply Adam or SGD optimization in TRPO?
- What is the role of the entropy term in policy optimization? (See the loss sketch after this list.)
- Why do we clip the policy ratio? What happens if the clipping parameter epsilon is large? (Also illustrated in the loss sketch below.)
- Why do we use the tanh activation function for mu and softplus for sigma? Could we use tanh for sigma as well? (See the policy-head sketch after this list.)
- Does reward shaping always help during training?
- Do we need reward shaping when we test an already trained agent?
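
To make the second and third questions concrete, here is a minimal sketch of a PPO-style clipped surrogate loss with an entropy bonus, written in PyTorch. The function name and the tensor arguments (`log_probs_new`, `log_probs_old`, `advantages`, `entropy`) are illustrative assumptions, not code from this chapter:

```python
import torch

def ppo_loss(log_probs_new, log_probs_old, advantages, entropy,
             epsilon=0.2, entropy_coef=0.01):
    """Illustrative PPO clipped surrogate loss (to be minimized)."""
    # Probability ratio r = pi_new(a|s) / pi_old(a|s), in log space for stability
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Clipping the ratio to [1 - epsilon, 1 + epsilon] and taking the
    # pessimistic minimum removes the incentive for a single update to move
    # the policy far from the one that collected the data; a large epsilon
    # loosens this constraint, so updates can grow large and destabilize training
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    surrogate = torch.min(unclipped, clipped)
    # The entropy bonus rewards stochastic policies, discouraging premature
    # collapse to a near-deterministic policy and sustaining exploration
    return -(surrogate.mean() + entropy_coef * entropy.mean())
```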
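
For the activation question, a minimal sketch of a Gaussian policy head, with the network shape and layer sizes as assumptions for illustration: tanh bounds the action mean in [-1, 1], while softplus guarantees a strictly positive standard deviation, which tanh cannot.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianPolicy(nn.Module):
    """Illustrative Gaussian policy head for continuous actions."""

    def __init__(self, state_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.hidden = nn.Linear(state_dim, hidden_dim)
        self.mu_head = nn.Linear(hidden_dim, action_dim)
        self.sigma_head = nn.Linear(hidden_dim, action_dim)

    def forward(self, state):
        x = torch.tanh(self.hidden(state))
        # tanh squashes the mean into [-1, 1], matching a bounded action space
        mu = torch.tanh(self.mu_head(x))
        # softplus(x) = log(1 + exp(x)) is smooth and strictly positive;
        # tanh outputs values in [-1, 1], so it would permit sigma <= 0,
        # which is invalid for a standard deviation
        sigma = F.softplus(self.sigma_head(x))
        return torch.distributions.Normal(mu, sigma)
```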