Monte Carlo prediction
In dynamic programming (DP), we solve the Markov Decision Process (MDP) using value iteration and policy iteration. Both of these techniques require the transition and reward probabilities to find the optimal policy. But how can we solve an MDP when we don't know the transition and reward probabilities? In that case, we use the Monte Carlo method. The Monte Carlo method requires only sample sequences of states, actions, and rewards, and it is applied only to episodic tasks. Since Monte Carlo doesn't require a model of the environment, it is called a model-free learning algorithm.
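To see what "sample sequences" means in practice, here is a minimal sketch of collecting one episode, assuming the classic OpenAI Gym API (env.reset/env.step with a four-value step return; newer gymnasium versions differ slightly). The environment name is only illustrative:

import gym

env = gym.make('FrozenLake-v0')   # any episodic environment works

state = env.reset()
episode = []                      # (state, action, reward) triples
done = False
while not done:
    action = env.action_space.sample()   # act without knowing the model
    next_state, reward, done, info = env.step(action)
    episode.append((state, action, reward))
    state = next_state

# 'episode' is all that Monte Carlo needs: the transition and reward
# probabilities were never queried.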
The basic idea of the Monte Carlo method is very simple. Do you recall how we defined the optimal value function and how we derived the optimal policy in Chapter 3, Markov Decision Process and Dynamic Programming?
A value function is basically the expected return from a state S under a policy π. Here, instead of the expected return, we use the mean return.
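In symbols, one common way to write this (using G_t for the return from time step t and N(s) for the number of sampled returns observed for state s, notation assumed here rather than taken from the chapter) is:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\left[\,G_t \mid S_t = s\,\right] \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G_i(s)$$

That is, we replace the expectation over all possible trajectories with a simple average over the returns actually sampled.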
Note
Thus, in Monte Carlo prediction, we approximate the value function by taking the mean return instead of the expected return.
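To make this concrete, here is a minimal sketch of first-visit Monte Carlo prediction on a toy random-walk task. The environment, the fixed random policy, and all names are illustrative stand-ins, not part of the chapter:

import random
from collections import defaultdict

GAMMA = 1.0      # discount factor
N_STATES = 5     # states 0..4; 0 and 4 are terminal

def sample_episode():
    """Roll out one episode under a random policy as (state, reward) pairs."""
    state = 2                        # start in the middle
    episode = []
    while state not in (0, N_STATES - 1):
        action = random.choice([-1, 1])               # random policy
        next_state = state + action
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        episode.append((state, reward))
        state = next_state
    return episode

def first_visit_mc_prediction(n_episodes=5000):
    returns = defaultdict(list)      # state -> list of sampled returns
    for _ in range(n_episodes):
        episode = sample_episode()
        G = 0.0
        # walk the episode backwards, accumulating the return G
        for t in reversed(range(len(episode))):
            state, reward = episode[t]
            G = GAMMA * G + reward
            # first-visit: record G only if the state does not
            # appear earlier in the episode
            if state not in (s for s, _ in episode[:t]):
                returns[state].append(G)
    # the value estimate is simply the mean return per state
    return {s: sum(g) / len(g) for s, g in returns.items()}

print(first_visit_mc_prediction())

On this task the exact values are V(1) = 0.25, V(2) = 0.5, and V(3) = 0.75, and the mean returns converge toward them as more episodes are sampled.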