Partially observable Markov decision process
The definition of a policy we have used so far in this chapter is a mapping from the states of an environment to actions. The question we should now ask is whether the state is really known to the agent in all types of environments.
Remember the definition of the state: it describes everything in an environment that is relevant to the agent's decision-making. In the grid world example, for instance, if the color of the walls does not affect the agent's decisions, it will not be part of the state.
If you think about it, this is a very strong definition. Consider someone driving a car. Does the driver know everything about the world around them while making their driving decisions? Of course not! To begin with, other cars block the driver's view more often than not. Not knowing the precise state of the world does not stop anyone from driving, though. In such cases, we base our decisions on our observations, the parts of the state we can actually perceive, rather than on the full state itself.
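To make the distinction concrete, here is a minimal Python sketch, not from the chapter, of a corridor environment in which the observation is coarser than the state. The corridor length and the wall-sensing observation are illustrative assumptions; the point is that several distinct states can look identical to the agent, so a policy has to map observations, not states, to actions:

```python
# A minimal sketch of partial observability in a 1 x 5 corridor:
# the true state is the agent's cell index, but the agent only
# observes whether the cells to its left and right are walls.

CORRIDOR_LENGTH = 5  # cells indexed 0..4

def observe(state):
    """Return the observation for a true state: a (left_is_wall,
    right_is_wall) pair instead of the exact cell index."""
    left_is_wall = state == 0
    right_is_wall = state == CORRIDOR_LENGTH - 1
    return (left_is_wall, right_is_wall)

# Every interior cell produces the same observation, so the agent
# cannot tell states 1, 2, and 3 apart (perceptual aliasing).
for state in range(CORRIDOR_LENGTH):
    print(state, observe(state))

# A policy over observations: it cannot distinguish the aliased
# interior states, whereas a policy over states could.
policy = {
    (True, False): "right",   # at the left wall: move right
    (False, True): "left",    # at the right wall: move left
    (False, False): "right",  # any interior cell looks the same
}
```

Running the loop prints the same observation `(False, False)` for states 1, 2, and 3, which is exactly the situation the driver faces: the world has a precise state, but the agent only gets to act on what it can see.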