CartPole variance
To check this theoretical conclusion in practice, let's plot the variance of the policy gradients during training, both for the version with a baseline and for the version without one. The complete example is in Chapter12/01_cartpole_pg.py, and most of the code is the same as in Chapter 11. The differences in this version are the following:
- It now accepts the command-line option --baseline, which enables mean subtraction from the reward. By default, no baseline is used (a sketch of this option follows the list).
- On every training loop iteration, we gather the gradients produced by the policy loss and use this data to calculate their variance.
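For the first point, a minimal sketch of how such an option could be wired up is shown below. The flag name --baseline matches the text, but the running-mean bookkeeping (baseline_sum, baseline_count) and the helper scale_reward are hypothetical illustrations, not the exact code from the repository:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--baseline", default=False, action="store_true",
                    help="Enable mean subtraction (baseline) from the reward")
args = parser.parse_args()

# Hypothetical bookkeeping: keep a running mean of all rewards seen so far
# and subtract it from every new reward when --baseline is given.
baseline_sum = 0.0
baseline_count = 0

def scale_reward(reward: float) -> float:
    global baseline_sum, baseline_count
    baseline_sum += reward
    baseline_count += 1
    if args.baseline:
        return reward - baseline_sum / baseline_count
    return reward
```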
To gather only the gradients from the policy loss and exclude the gradients from the entropy bonus added for exploration, we need to calculate the gradients in two stages. Luckily, PyTorch allows this to be done easily. In the following code, only the relevant part of the training loop is included to...
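A sketch of what that relevant part might look like is given below, under the assumption that the network (net), the optimizer, the batch tensors (states_v, batch_actions_t, batch_scales_v), and the ENTROPY_BETA constant have already been set up as in the Chapter 11 code; the variable names are illustrative:

```python
import numpy as np
import torch.nn.functional as F

optimizer.zero_grad()
logits_v = net(states_v)                      # raw policy outputs for the batch
log_prob_v = F.log_softmax(logits_v, dim=1)
# scaled log-probabilities of the actions that were actually taken
log_p_a_v = batch_scales_v * log_prob_v[range(len(batch_actions_t)), batch_actions_t]
loss_policy_v = -log_p_a_v.mean()

# Stage 1: backpropagate only the policy loss, keeping the graph alive
# so that the entropy bonus can be backpropagated afterwards.
loss_policy_v.backward(retain_graph=True)

# Snapshot the gradients produced by the policy loss alone and record
# their variance; this is the quantity we want to plot during training.
grads = np.concatenate([
    p.grad.data.cpu().numpy().flatten()
    for p in net.parameters() if p.grad is not None
])
grad_variance = float(np.var(grads))

# Stage 2: backpropagate the entropy bonus; its gradients are accumulated
# on top of the policy-loss gradients before the optimizer step.
prob_v = F.softmax(logits_v, dim=1)
entropy_v = -(prob_v * log_prob_v).sum(dim=1).mean()
entropy_loss_v = -ENTROPY_BETA * entropy_v
entropy_loss_v.backward()
optimizer.step()
```

The retain_graph=True flag is what makes the two-stage approach work: without it, PyTorch would free the computation graph after the first backward() call, and the second backward pass for the entropy bonus would fail.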