I2A on Atari Breakout
The code and training path of I2A are a bit complicated and involve several steps, so to understand it better, let's start with a brief overview. In this example, we'll implement the I2A architecture described in the paper, adapted to Atari environments, and test it on the Breakout game. The overall goal is to check the training dynamics and the effect of imagination augmentation on the final policy.
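Before walking through the steps, it may help to see the shape of the network we're building toward. The following PyTorch sketch shows how an I2A-style agent concatenates its two paths; the class name `I2ASketch`, the layer sizes, and the assumption of one encoded rollout per action are illustrative placeholders, not the chapter's actual code.

```python
import torch
import torch.nn as nn

class I2ASketch(nn.Module):
    """Illustration of the two I2A paths: a model-free CNN trunk plus
    an imagination path whose encoded rollouts are concatenated."""
    def __init__(self, obs_shape, n_actions, rollout_feats=256):
        super().__init__()
        # Model-free path: an ordinary A2C-style convolutional trunk
        self.conv = nn.Sequential(
            nn.Conv2d(obs_shape[0], 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(), nn.Flatten(),
        )
        with torch.no_grad():
            conv_out = self.conv(torch.zeros(1, *obs_shape)).shape[1]
        # Imagination path: one rollout per action, each encoded into a
        # fixed-size vector; all encodings are concatenated with the
        # model-free features before the policy and value heads
        feats = conv_out + n_actions * rollout_feats
        self.policy = nn.Linear(feats, n_actions)
        self.value = nn.Linear(feats, 1)

    def forward(self, obs, rollout_encodings):
        # rollout_encodings: (batch, n_actions * rollout_feats), produced
        # by unrolling the environment model and encoding the trajectories
        x = torch.cat([self.conv(obs), rollout_encodings], dim=1)
        return self.policy(x), self.value(x)
```

The `rollout_encodings` input is what makes the three-step plan below necessary: producing it requires a trained environment model, which in turn requires a policy to gather training data.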
Our example consists of three parts, which correspond to different steps in the training:
- Baseline A2C agent in `Chapter17/01_a2c.py`. The resulting policy is used to gather observations for training the environment model.
- Environment model (EM) training in `Chapter17/02_imag.py`. It uses the model obtained in the previous step to train the EM in an unsupervised way; the result is the EM weights. A minimal sketch of this step follows the list.
- The final I2A agent training in `Chapter17/03_i2a.py`. In this step, we use the EM from step 2 to train a full I2A agent, which combines the model-free and rollout paths.
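To make the "unsupervised" part of step 2 concrete, here is a minimal sketch of a single EM training step. The `EnvironmentModel` interface (mapping an observation and action to a predicted next frame and reward) and the loss weighting are assumptions for illustration, not the chapter's actual code; the key point is that the environment's own transitions serve as targets, so no labels are needed.

```python
import torch
import torch.nn.functional as F

def em_train_step(env_model, optimizer, batch):
    """One unsupervised training step for a hypothetical EnvironmentModel.

    batch: (obs, actions, next_obs, rewards) tensors from transitions
    collected by running the baseline A2C policy from step 1.
    """
    obs, actions, next_obs, rewards = batch
    optimizer.zero_grad()
    # The model maps (observation, action) -> (predicted next frame, reward)
    pred_obs, pred_reward = env_model(obs, actions)
    # Targets come straight from the environment, hence "unsupervised"
    loss_obs = F.mse_loss(pred_obs, next_obs)
    loss_reward = F.mse_loss(pred_reward, rewards)
    loss = loss_obs + 0.1 * loss_reward  # relative weighting is an assumption
    loss.backward()
    optimizer.step()
    return loss.item()
```

The transitions themselves are gathered by simply running the step-1 policy in the emulator, which is why the baseline agent has to be trained first.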