As you have learned, the workflow for training RL and DRL agents in Unity is much more integrated and seamless than in OpenAI Gym. We didn't have to write a single line of code to train an agent in a grid-world environment, and the visuals are just plain better. In this chapter, we started by installing the ML-Agents toolkit. Then we loaded the GridWorld environment and set it up to train an RL agent. From there, we used TensorBoard to monitor the agent's training progress. After training was done, we first loaded a pre-trained brain that ships with Unity and ran it in the GridWorld environment. Then we imported the brain we had just trained into Unity as an asset and assigned it as the GridWorldLearning brain's model.
In the next chapter, we will explore how to construct a new RL environment, or game, that an agent can learn to play. This will allow us...