The problems we have worked with so far are fairly simple, and applying DQNs is sometimes overkill. In this and the next recipe, we'll use DQNs to solve Atari games, which are far more complicated problems.
We will use Pong (https://gym.openai.com/envs/Pong-v0/) as an example in this recipe. It simulates the Atari 2600 game Pong, where the agent plays table tennis with another player. The observation in this environment is an RGB image of the screen (refer to the following screenshot):
This is an array of shape (210, 160, 3), which means the image is 210 pixels high by 160 pixels wide, with three RGB channels.
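As a quick sanity check of what this observation looks like, a minimal sketch such as the following creates the environment and prints the observation shape; it assumes the classic Gym API (where reset() returns only the observation) and that the Atari dependencies for Pong-v0 are installed:

import gym

env = gym.make('Pong-v0')

# Reset the environment to obtain the initial screen image
obs = env.reset()

print(obs.shape)              # (210, 160, 3): height, width, RGB channels
print(env.observation_space)  # pixel values are 8-bit integers in [0, 255]
print(env.action_space)       # the discrete joystick actions available in Pong

env.close()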
The agent (on the right-hand side) moves up and down during the game to hit the ball. If the agent misses the ball, the other player (on the left-hand side) gets 1 point; similarly, if the other player misses it, the agent gets 1 point. The...