If you have studied Reinforcement Learning previously, you may have already come across the term MDP, short for Markov Decision Process, and the Bellman equation. An MDP is formally defined as a discrete-time stochastic control process (https://en.wikipedia.org/wiki/Stochastic). Put more simply, it is any process in which an agent makes a sequence of decisions under uncertainty, with the outcome of each decision governed by probabilities. This description fits well with how we have been using RL to make decisions. In fact, we have been building MDPs throughout this chapter, so you should be fairly comfortable with the concept by now.
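To make that definition concrete, here is a minimal sketch of an MDP as a data structure: a table mapping (state, action) pairs to probabilistic outcomes. The states, actions, and numbers below are invented purely for illustration and do not come from any particular library.

```python
import random

# A toy MDP. Each (state, action) pair maps to a list of possible
# outcomes: (probability, next_state, reward). Values are illustrative.
TRANSITIONS = {
    ("sunny", "walk"):  [(0.9, "sunny", 1.0), (0.1, "rainy", -1.0)],
    ("sunny", "drive"): [(0.8, "sunny", 0.5), (0.2, "rainy", 0.0)],
    ("rainy", "walk"):  [(0.3, "sunny", -1.0), (0.7, "rainy", -2.0)],
    ("rainy", "drive"): [(0.5, "sunny", 0.5), (0.5, "rainy", -0.5)],
}

def step(state, action):
    """Sample the next state and reward: a decision under uncertainty."""
    outcomes = TRANSITIONS[(state, action)]
    probs = [p for p, _, _ in outcomes]
    _, next_state, reward = random.choices(outcomes, weights=probs)[0]
    return next_state, reward

state = "sunny"
for _ in range(3):
    state, reward = step(state, random.choice(["walk", "drive"]))
    print(state, reward)
```

The Markov property shows up in the structure itself: the next state and reward depend only on the current state and action, never on the history that led there.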
Up until now, we have only modeled the partial, or one-step, RL problem. We observed the state for only a single step or action, which meant that the agent always received an immediate reward or punishment. In a full RL problem, an agent may need to take many actions before receiving any reward at all; the payoff for a good early decision can arrive several steps later.
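As a hedged sketch of that delayed-reward setting (the reward sequence and discount factor here are invented for illustration), the discounted return propagates a late reward back to the earlier steps that led to it:

```python
# Discounted return: credit a delayed reward back to earlier steps.
# Only the final step pays off, as in a full multi-step RL problem.
rewards = [0.0, 0.0, 0.0, 10.0]  # reward arrives only at the end (illustrative)
gamma = 0.9                      # discount factor (illustrative)

# Return G_t = r_t + gamma * G_{t+1}, computed backward through time.
G = 0.0
returns = []
for r in reversed(rewards):
    G = r + gamma * G
    returns.append(G)
returns.reverse()
print(returns)  # earlier steps still receive (discounted) credit
```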