An MDP is considered solved if its optimal policy is found. In this recipe, we will figure out the optimal policy for the FrozenLake environment using a value iteration algorithm.
The idea behind value iteration is quite similar to that of policy evaluation. It is also an iterative algorithm: it starts with arbitrarily initialized state values and then iteratively updates them based on the Bellman optimality equation until they converge. In each iteration, instead of taking the expectation (average) of values across all actions as policy evaluation does, it picks the action that yields the maximal value:
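The update referred to here is the Bellman optimality equation. A reconstruction based on the terms defined below (with the discount factor written as gamma, which is assumed rather than stated in this excerpt) is:

$$
V^*(s) = \max_a \left[ R(s, a) + \gamma \sum_{s'} T(s, a, s') \, V^*(s') \right]
$$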
Here, V*(s) denotes the optimal value, which is the value of the optimal policy; T(s, a, s') is the transition probability of moving from state s to state s' by taking action a; and R(s, a) is the reward received in state s by taking action a.
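To make the update concrete, the following is a minimal sketch of value iteration, not the recipe's full solution. It assumes the classic Gym API where FrozenLake exposes its transition model as env.unwrapped.P, mapping each state and action to a list of (probability, next state, reward, done) tuples; the gamma and threshold values are illustrative, and the environment ID may be 'FrozenLake-v1' in newer Gym versions.

```python
import gym
import torch

def value_iteration(env, gamma=0.99, threshold=1e-4):
    """Iteratively apply the Bellman optimality update until the values converge."""
    n_state = env.observation_space.n
    n_action = env.action_space.n
    V = torch.zeros(n_state)
    while True:
        V_new = torch.empty(n_state)
        for s in range(n_state):
            # Compute the expected return for each action, then keep the maximum
            # (this is the max over actions described above, using the Gym
            # convention where the reward is attached to each transition).
            q = torch.zeros(n_action)
            for a in range(n_action):
                for prob, s_next, reward, _ in env.unwrapped.P[s][a]:
                    q[a] += prob * (reward + gamma * V[s_next])
            V_new[s] = q.max()
        # Stop once the largest change across all states falls below the threshold
        if torch.max(torch.abs(V_new - V)) <= threshold:
            break
        V = V_new
    return V_new

env = gym.make('FrozenLake-v0')  # may be 'FrozenLake-v1' depending on the Gym version
V_optimal = value_iteration(env)
print(V_optimal)
```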
Once...