Implementing GAIL
In this section, let's explore how to implement Generative Adversarial Imitation Learning (GAIL) with Stable Baselines. In Chapter 15, Imitation Learning and Inverse RL, we learned that the generator produces state-action pairs in such a way that the discriminator cannot tell whether a given state-action pair came from the expert policy or from the agent's policy. The generator is trained with TRPO to produce a policy that mimics the expert policy, while the discriminator is a classifier optimized with Adam.
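As a quick reminder of what is being optimized, the GAIL objective can be written as the following saddle-point problem (the notation here follows Ho and Ermon's original formulation, which may differ slightly from the symbols used in Chapter 15):

$$
\min_{\pi} \max_{D} \; \mathbb{E}_{\pi}\big[\log D(s, a)\big] + \mathbb{E}_{\pi_E}\big[\log\big(1 - D(s, a)\big)\big] - \lambda H(\pi)
$$

Here, $\pi$ is the agent (generator) policy, $\pi_E$ is the expert policy, $D$ is the discriminator, and $H(\pi)$ is the causal entropy regularizer weighted by $\lambda$.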
To implement GAIL, we need expert trajectories so that our generator can learn to mimic them. Okay, so how can we obtain the expert trajectories? First, we train an agent with the TD3 algorithm, use it to generate expert trajectories, and collect them into an expert dataset. Then, using this expert dataset, we train our GAIL agent, as the sketch below shows. Note that instead of TD3, we can use any other algorithm to generate the expert trajectories.
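Before walking through the full example, here is a minimal sketch of how these pieces fit together using the stable_baselines.gail module. The environment (Pendulum-v0), file names, and step counts are illustrative assumptions, not necessarily the values used later in this section:

```python
# A minimal sketch of the GAIL workflow with Stable Baselines.
# The environment (Pendulum-v0), file names, and step counts below
# are illustrative assumptions.
from stable_baselines import GAIL, TD3
from stable_baselines.gail import ExpertDataset, generate_expert_traj

# Step 1: train an expert with TD3 and record its trajectories.
# This trains TD3 for 100,000 timesteps, then saves 10 expert
# episodes to expert_pendulum.npz.
expert = TD3('MlpPolicy', 'Pendulum-v0', verbose=1)
generate_expert_traj(expert, 'expert_pendulum',
                     n_timesteps=100000, n_episodes=10)

# Step 2: wrap the recorded trajectories in an expert dataset.
dataset = ExpertDataset(expert_path='expert_pendulum.npz',
                        traj_limitation=10, verbose=1)

# Step 3: train the GAIL agent to imitate the expert dataset.
model = GAIL('MlpPolicy', 'Pendulum-v0', dataset, verbose=1)
model.learn(total_timesteps=300000)
model.save('gail_pendulum')
```

Note that Stable Baselines' GAIL implementation uses TRPO as the generator's policy optimizer under the hood, which is consistent with the description above.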
...