A2C on Pong
In the previous chapter, we saw a (not very successful) attempt to solve our favorite Pong environment with PG. Let's try it again with the actor-critic method at hand.
GAMMA = 0.99
LEARNING_RATE = 0.001
ENTROPY_BETA = 0.01
BATCH_SIZE = 128
NUM_ENVS = 50
REWARD_STEPS = 4
CLIP_GRAD = 0.1
We start, as usual, by defining hyperparameters (imports are omitted). These values are not tuned, as we'll do that in the next section of this chapter. We have one new value here: CLIP_GRAD. This hyperparameter specifies the threshold for gradient clipping, which prevents our gradients at the optimization stage from becoming too large and pushing our policy too far. Clipping is implemented using PyTorch functionality, but the idea is very simple: if the L2 norm of the gradient vector is larger than this hyperparameter, the whole vector is scaled down so that its norm equals this value.
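To make the clipping rule concrete, here is a minimal pure-Python sketch of L2-norm gradient clipping (in the actual training loop, PyTorch's torch.nn.utils.clip_grad_norm_ does the same thing for us; the gradient values below are made up for illustration):

```python
import math

def clip_grad_l2(grads, max_norm):
    # Compute the L2 norm of the whole gradient vector
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        # Scale every component so the resulting norm equals max_norm;
        # the direction of the gradient is preserved
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

grads = [3.0, 4.0]                          # L2 norm is 5.0
clipped = clip_grad_l2(grads, max_norm=0.1) # rescaled to norm 0.1
```

Gradients whose norm is already below the threshold pass through unchanged, so clipping only kicks in on the rare, very large updates.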
The REWARD_STEPS hyperparameter determines how many steps ahead we'll take to approximate the...
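With REWARD_STEPS = 4, the return for every transition is estimated from the next four real rewards, plus the critic's value estimate of the state reached after those steps. A minimal sketch of this n-step estimate (the reward values and the value of the last state below are made up for illustration):

```python
GAMMA = 0.99
REWARD_STEPS = 4

def n_step_return(rewards, last_value, gamma=GAMMA):
    # rewards: the REWARD_STEPS immediate rewards r_0 .. r_{N-1}
    # last_value: critic's estimate V(s_N) of the state after N steps
    res = last_value
    for r in reversed(rewards):
        # fold in rewards from the last step backwards:
        # res = r_i + gamma * res
        res = r + gamma * res
    return res

# Illustration with made-up rewards and value
est = n_step_return([0.0, 0.0, 0.0, 1.0], last_value=0.5)
```

Larger REWARD_STEPS makes the estimate rely more on real rewards and less on the (possibly inaccurate) value head, at the cost of higher variance.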