Summary
We started the chapter by setting up our machine, installing Anaconda and the Gym toolkit. We learned how to create a Gym environment using the gym.make() function. We then explored how to obtain the state space of the environment using env.observation_space and the action space using env.action_space, and how to obtain the transition probabilities and reward function of the environment using env.P. Following this, we learned how to generate an episode in a Gym environment, and understood that at each step of the episode we perform an action in the environment using the env.step() function.
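The steps recapped above can be sketched end to end. This is a minimal example, assuming the classic Gym API (pre-0.26 return signatures) and the FrozenLake-v1 environment; environment names and the reset()/step() signatures differ in newer Gymnasium releases.

```python
import gym

# Create the environment.
env = gym.make("FrozenLake-v1")

# Inspect the state space and the action space.
print(env.observation_space)   # Discrete(16) for the 4x4 map
print(env.action_space)        # Discrete(4)

# env.P[state][action] is a list of (probability, next_state,
# reward, done) tuples: the transition probabilities and rewards.
print(env.P[0][1])

# Generate one episode with randomly sampled actions.
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()              # pick a random action
    state, reward, done, info = env.step(action)    # perform it
env.close()
```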
We then explored the classic control environments in Gym. We learned that these environments have a continuous state space, with each state represented as an array, and we used a random agent to try to balance a pole. Later, we looked at the interesting Atari game environments, how they are named in Gym, and what their state and action spaces look like. We also learned how to record the agent's gameplay using a wrapper class, and at the end of the chapter, we surveyed the other environments offered by Gym.
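The classic control recap above can be sketched with a random agent on CartPole, again assuming the classic Gym API (pre-0.26). The gameplay-recording step would wrap the environment, for example with gym.wrappers.Monitor in older Gym releases (RecordVideo in newer ones); it is omitted here because it requires a video backend.

```python
import gym

env = gym.make("CartPole-v1")

# The state space is continuous: a 4-dimensional array holding cart
# position, cart velocity, pole angle, and pole angular velocity.
print(env.observation_space)

# Run one episode with a random agent.
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()            # random agent
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
env.close()
```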
In the next chapter, we will learn how to find the optimal policy using two interesting algorithms called value iteration and policy iteration.