Starting with Markov chains
We start this chapter with Markov chains, which do not involve any decision-making. They model a special type of stochastic process that is governed by some internal transition dynamics. Therefore, we don't talk about an agent yet. Understanding how Markov chains work will allow us to lay the foundation for the MDPs we will cover later.
Stochastic processes with the Markov property
We already defined the state as the set of information that completely describes the situation an environment is in. If the next state the environment will transition into depends only on the current state, and not on the past, we say that the process has the Markov property. The property is named after the Russian mathematician Andrey Markov.
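Formally, if we let $S_t$ denote the state at time step $t$ (notation we introduce here for convenience), the Markov property can be written as:

$$P(S_{t+1} = s' \mid S_t = s, S_{t-1}, \dots, S_0) = P(S_{t+1} = s' \mid S_t = s)$$

In other words, conditioning on the entire history of states gives exactly the same transition probabilities as conditioning on the current state alone.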
Imagine a broken robot that moves randomly in a grid world. At any given step, the robot goes up, down, left, and right with probabilities 0.2, 0.3, 0.25, and 0.25, respectively. This is depicted in Figure 4.1, as follows:
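To make these dynamics concrete, here is a minimal simulation sketch in Python. The grid size, the starting cell, and the choice to clip moves at the walls are our own illustrative assumptions; the figure does not specify the boundary behavior.

```python
import numpy as np

# The broken robot's move distribution: up, down, left, right
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
PROBS = [0.2, 0.3, 0.25, 0.25]  # must sum to 1

def simulate(n_steps, grid_size=3, start=(1, 1), seed=42):
    """Random walk on a grid_size x grid_size grid (sizes are illustrative).

    Moves that would leave the grid are clipped at the walls, an assumption
    on our part, since the figure does not define the boundary behavior.
    """
    rng = np.random.default_rng(seed)
    row, col = start
    path = [(row, col)]
    for _ in range(n_steps):
        action = rng.choice(list(MOVES), p=PROBS)
        d_row, d_col = MOVES[action]
        # The next state depends only on the current (row, col),
        # not on how the robot got there: the Markov property.
        row = min(max(row + d_row, 0), grid_size - 1)
        col = min(max(col + d_col, 0), grid_size - 1)
        path.append((row, col))
    return path

print(simulate(n_steps=10))
```

Running this repeatedly (with different seeds) produces sample trajectories whose statistics are fully determined by the four transition probabilities above, since no decision-making is involved.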