Typically, we leave problems with large observation spaces to be tackled with deep learning. Deep learning, as we will learn, is very well-suited to such problems. However, deep learning is not without its own issues, and it is sometimes prudent to try to solve an environment without it. Now, not all environments will discretize well, as we mentioned previously, but we do want to look at another example. The next example we will look at is the infamous Cart Pole environment, which is almost always tackled with deep RL, primarily because it has a continuous observation space with four dimensions: cart position, cart velocity, pole angle, and pole angular velocity. Keep in mind that our previous observation spaces had only one dimension, and, in our last example, only two.
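To make this concrete, here is a minimal sketch of how a four-dimensional continuous observation like Cart Pole's can be discretized into a single integer state. The bin counts and value ranges below are illustrative assumptions chosen for this example, not values taken from the environment itself:

```python
import numpy as np

# Assumed value ranges for the four observation dimensions
# (cart position, cart velocity, pole angle, pole angular velocity).
# These bounds are illustrative, not pulled from the environment.
BOUNDS = [(-2.4, 2.4), (-3.0, 3.0), (-0.21, 0.21), (-3.0, 3.0)]
N_BINS = 10  # bins per dimension -> 10**4 possible discrete states


def discretize(obs):
    """Map a continuous 4-d observation to a single integer state index."""
    state = 0
    for value, (low, high) in zip(obs, BOUNDS):
        # Clip to the assumed range, then find which of the
        # N_BINS equal-width bins the value falls into.
        clipped = min(max(value, low), high)
        bin_idx = int((clipped - low) / (high - low) * (N_BINS - 1))
        # Combine per-dimension bins into one index, base N_BINS.
        state = state * N_BINS + bin_idx
    return state


# The upright, centered starting state lands in the middle bins.
print(discretize([0.0, 0.0, 0.0, 0.0]))  # -> 4444
```

With the observations mapped to integers like this, a tabular method such as Q-learning can index its table directly by the returned state, at the cost of a table that grows exponentially with the number of dimensions.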
Being able to convert an agent's observation space in this way can be a useful trick, especially in more abstract game environments. Remember...