I2A on Atari Breakout
The I2A training path is fairly involved and includes a lot of code and several steps, so let's start with a brief overview. In this example, we will implement the I2A architecture described in the paper [2], adapted to Atari environments, and test it on the Breakout game. The overall goal is to check the training dynamics and the effect of imagination augmentation on the final policy.
Our example consists of three parts, which correspond to different steps in the training:
- The baseline advantage actor-critic (A2C) agent in `Chapter22/01_a2c.py`. The resulting policy is used to gather the observations on which the environment model (EM) is trained. (A sketch of such a network follows this list.)
- The EM training in `Chapter22/02_imag.py`. It uses the model obtained in the previous step to train the EM in an unsupervised way; the result is the EM weights. (A possible EM layout is sketched below.)
- The final I2A agent training in `Chapter22/03_i2a.py`. In this step, we use the EM from step 2 to train a full I2A agent, which combines the model-free path with rollouts imagined by the EM. (The fusion of the two paths is sketched below.)
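To make step 1 concrete, here is a minimal sketch of the kind of convolutional actor-critic network such an A2C baseline trains. The class name and layer sizes are illustrative assumptions (the classic Atari convolutional stack), not the exact contents of `Chapter22/01_a2c.py`:

```python
import torch
import torch.nn as nn

class AtariA2C(nn.Module):
    """Hypothetical A2C network: shared conv trunk, policy and value heads."""
    def __init__(self, input_shape, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
        )
        conv_out = self._conv_out(input_shape)
        self.policy = nn.Sequential(nn.Linear(conv_out, 512), nn.ReLU(),
                                    nn.Linear(512, n_actions))
        self.value = nn.Sequential(nn.Linear(conv_out, 512), nn.ReLU(),
                                   nn.Linear(512, 1))

    def _conv_out(self, shape):
        # pass a dummy frame through the trunk to get the flattened size
        with torch.no_grad():
            out = self.conv(torch.zeros(1, *shape))
        return int(out.numel())

    def forward(self, x):
        feats = self.conv(x).flatten(start_dim=1)
        return self.policy(feats), self.value(feats)

# e.g. net = AtariA2C((4, 84, 84), n_actions=4) for stacked Breakout frames
```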
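Step 2 is unsupervised in the sense that the targets (next observation and reward) come from real transitions produced by the step 1 policy, not from human labels. Below is one plausible shape for such an EM, with the chosen action injected as one-hot planes appended to the input frames; all names and layer sizes here are assumptions for illustration, not the book's exact code:

```python
import torch
import torch.nn as nn

class EnvironmentModel(nn.Module):
    """Hypothetical EM: predicts the next observation and reward from (obs, action)."""
    def __init__(self, input_shape, n_actions):
        super().__init__()
        self.n_actions = n_actions
        c, h, w = input_shape
        self.conv = nn.Sequential(
            nn.Conv2d(c + n_actions, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # upsample back to the input resolution to predict the next frame
        self.deconv = nn.ConvTranspose2d(64, c, kernel_size=4, stride=2, padding=1)
        self.reward = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (h // 2) * (w // 2), 1),
        )

    def forward(self, obs, actions):
        # actions: LongTensor of shape (batch,), turned into one-hot planes
        batch, _, h, w = obs.shape
        planes = torch.zeros(batch, self.n_actions, h, w, device=obs.device)
        planes[torch.arange(batch), actions] = 1.0
        x = self.conv(torch.cat([obs, planes], dim=1))
        return self.deconv(x), self.reward(x)
```

Training then reduces to regression, for example an MSE loss between the predicted and the real next observation plus an MSE loss on the reward.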
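In step 3, the frozen EM imagines a short rollout for every action, each imagined trajectory is summarized by a recurrent encoder, and the summaries are concatenated with the model-free features before the policy and value heads. The sketch below shows only this fusion logic under assumed tensor shapes; the actual I2A code is considerably more elaborate:

```python
import torch
import torch.nn as nn

class RolloutEncoder(nn.Module):
    """Summarizes one imagined trajectory (frame features + rewards)."""
    def __init__(self, frame_features, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(frame_features + 1, hidden)

    def forward(self, frame_feats, rewards):
        # frame_feats: (steps, batch, frame_features); rewards: (steps, batch, 1)
        out, _ = self.rnn(torch.cat([frame_feats, rewards], dim=2))
        return out[-1]  # the last output summarizes the whole rollout

class I2AHead(nn.Module):
    """Fuses model-free features with the encoded imagined rollouts."""
    def __init__(self, mf_features, n_actions, hidden=256):
        super().__init__()
        # one encoded rollout per action, concatenated with model-free features
        fused = mf_features + n_actions * hidden
        self.policy = nn.Linear(fused, n_actions)
        self.value = nn.Linear(fused, 1)

    def forward(self, mf_feats, rollout_feats):
        x = torch.cat([mf_feats, rollout_feats], dim=1)
        return self.policy(x), self.value(x)
```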