PG on Pong
As covered in the previous section, the vanilla PG method works well on the simple CartPole environment but surprisingly badly on more complicated ones. Even in the relatively simple Atari game Pong, which our DQN was able to solve completely in 1M frames (showing positive reward dynamics after just 100k frames), PG failed to converge. Due to the instability of PG training, it became very hard to find good hyperparameters, and the method remains very sensitive to initialization.
This doesn't mean that PG methods are bad: as we'll see in the next chapter, a single tweak of the network architecture, giving us a better baseline for the gradients, turns PG into one of the best methods, Asynchronous Advantage Actor-Critic (A3C). Of course, there is a good chance that my hyperparameters are completely wrong or the code has some hidden bugs or whatever. Regardless, unsuccessful results still have value, at least as a demonstration of bad convergence.
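To make the "better baseline" tweak concrete, here is a minimal sketch (not the book's actual code; the function and tensor names are hypothetical) contrasting the vanilla PG loss, which subtracts the mean return of the batch, with an actor-critic style loss, where a learned value head V(s) serves as a state-dependent baseline:

```python
import torch
import torch.nn.functional as F


def vanilla_pg_loss(logits: torch.Tensor, actions: torch.Tensor,
                    returns: torch.Tensor) -> torch.Tensor:
    # Vanilla PG: scale log-probabilities of the taken actions by
    # (return - mean return); the constant baseline only crudely reduces variance.
    log_probs = F.log_softmax(logits, dim=1)
    log_prob_actions = log_probs[range(len(actions)), actions]
    advantage = returns - returns.mean()
    return -(advantage * log_prob_actions).mean()


def actor_critic_pg_loss(logits: torch.Tensor, values: torch.Tensor,
                         actions: torch.Tensor, returns: torch.Tensor) -> torch.Tensor:
    # Actor-critic flavour: a learned state value V(s) replaces the mean-return
    # baseline, which is the tweak that leads towards A2C/A3C in the next chapter.
    log_probs = F.log_softmax(logits, dim=1)
    log_prob_actions = log_probs[range(len(actions)), actions]
    advantage = returns - values.detach()        # state-dependent baseline
    policy_loss = -(advantage * log_prob_actions).mean()
    value_loss = F.mse_loss(values, returns)     # train the value head towards the returns
    return policy_loss + value_loss
```

The key difference is that the advantage becomes state-dependent, which cuts the variance of the gradient estimate far more effectively than a single batch-wide constant.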