Policy gradient methods on Pong
As we’ve seen in the previous section, the vanilla policy gradient method works well on the simple CartPole environment, but in more complicated environments it performs surprisingly badly.
For the relatively simple Atari game Pong, our DQN was able to completely solve it in 1 million frames and showed positive reward dynamics in just 100,000 frames, whereas the policy gradient method failed to converge. Due to the instability of policy gradient training, it became very hard to find good hyperparameters, and the process remained very sensitive to initialization. This doesn’t mean that the policy gradient method is bad, because, as you will see in the next chapter, just one tweak of the network architecture to get a better baseline in the gradients will turn the policy gradient method into one of the best methods (the asynchronous advantage actor-critic method). Of course, there is a good chance that my hyperparameters are completely wrong or the...
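To make the "better baseline" remark concrete, here is a minimal sketch (not the book's actual training code) of how a baseline enters the policy gradient loss: the log-probabilities of the chosen actions are scaled by the return minus a baseline, rather than by the raw return. The network name PolicyNet, the layer sizes, and the toy batch below are illustrative assumptions; the batch-mean baseline stands in for the learned value function used by actor-critic methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    # Hypothetical small policy network producing action logits
    def __init__(self, obs_size: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)  # raw logits

net = PolicyNet(obs_size=4, n_actions=2)

# Toy batch of transitions: observations, chosen actions, discounted returns
states = torch.randn(8, 4)
actions = torch.randint(0, 2, (8,))
returns = torch.randn(8)

# Subtracting a baseline (here simply the batch-mean return) lowers the
# variance of the gradient estimate without changing its expectation
baseline = returns.mean()
advantages = returns - baseline

log_probs = F.log_softmax(net(states), dim=1)
log_prob_actions = log_probs[range(len(actions)), actions]
loss = -(advantages * log_prob_actions).mean()
loss.backward()
```

In the actor-critic family, the constant baseline above is replaced by a state-dependent value estimate produced by a second network head, which is exactly the architectural tweak referred to here.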