Good news: we're finally ready to start coding. In this section, I'm going to demonstrate two Keras-RL agents, one for the CartPole environment and one for Lunar Lander. I've chosen these examples because they won't consume your GPU time or your cloud budget to run. The same approach can easily be extended to Atari problems, and I've included one of those as well in the book's Git repository. You can find all of this code in the Chapter12 folder, as usual. Let's talk quickly about these two environments:
- CartPole: The CartPole environment consists of a pole balanced on a cart. The agent has to learn how to keep the pole balanced vertically while the cart underneath it moves. As input, the agent is given the position of the cart, the velocity of the cart, the angle of the pole, and the rotational rate of the pole. The agent can apply a force to push the cart either left or right (see the short sketch below for what this looks like in code).
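
Before we build an agent, it's worth poking at the environment itself. The following is a minimal sketch of my own (not the book's training code) that creates CartPole and inspects the observation and action spaces described above. It assumes the classic OpenAI Gym API that Keras-RL expects; newer `gymnasium` releases return `(obs, info)` from `reset()` and a 5-tuple from `step()`, so adjust accordingly if you use those.

```python
import gym

# Create the CartPole environment (classic Gym API assumed)
env = gym.make('CartPole-v1')

# The observation is a 4-element vector:
# [cart position, cart velocity, pole angle, pole rotational rate]
print(env.observation_space.shape)   # (4,)

# The action space is discrete: push the cart left (0) or right (1)
print(env.action_space.n)            # 2

# Run one episode with random actions to see the interaction loop
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()          # random left/right push
    obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API
    total_reward += reward
print('Episode reward:', total_reward)
```

A random policy like this rarely keeps the pole up for long, which is exactly the gap our Keras-RL agent will close.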