Solving dynamic programming problems
Finite MDPs are a simple yet fundamental framework. We will introduce the trajectories of rewards that the agent aims to optimize, define the policy and value functions used to formulate the optimization problem, and present the Bellman equations that form the basis for the solution methods.
Finite Markov decision problems
MDPs frame the agent-environment interaction as a sequential decision problem over a series of time steps t = 1, …, T that constitute an episode. Time steps are assumed to be discrete, but the framework can be extended to continuous time.
The abstraction afforded by MDPs makes them easily adaptable to many contexts. The time steps can occur at arbitrary intervals, and actions and states can take any form that can be expressed numerically. A sketch of this interaction loop follows below.
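The following is a minimal sketch of one episode in a finite MDP. The two-state environment, the reward rule, and the random placeholder policy are all hypothetical illustrations, not part of any particular problem:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

states = [0, 1]    # finite state space S (hypothetical)
actions = [0, 1]   # finite action space A (hypothetical)
T = 5              # episode length

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    next_state = rng.choice(states)           # toy transition dynamics
    reward = 1.0 if action == next_state else 0.0
    return next_state, reward

state = 0                                     # initial state s_1
trajectory = []
for t in range(1, T + 1):
    action = rng.choice(actions)              # random placeholder policy
    next_state, reward = step(state, action)
    trajectory.append((t, state, action, reward))
    state = next_state

print(trajectory)
```

Each iteration records the tuple (t, state, action, reward), so the loop produces exactly the kind of trajectory the agent aims to optimize over the episode.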
The Markov property implies that the current state completely describes the process, that is, the process has no memory. Information from past states adds no value when trying to predict the process's future evolution.
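A minimal sketch of this memoryless behavior, assuming a hypothetical three-state chain: the distribution of the next state is read from a single row of the transition matrix, indexed by the current state alone, and the earlier history is never consulted.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# P[s, s'] = probability of moving from state s to state s'
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

def next_state(current, history=None):
    """The `history` argument is deliberately unused: only the
    current state determines the transition distribution."""
    return rng.choice(3, p=P[current])

s = 0
path = [s]
for _ in range(10):
    s = next_state(s, history=path)  # history is passed but ignored
    path.append(s)
print(path)
```

Passing the full path to `next_state` changes nothing, which is precisely the Markov property: conditioning on the history beyond the current state does not alter the transition probabilities.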