Chapter 5 – Understanding Temporal Difference Learning
- Unlike the Monte Carlo method, the Temporal Difference (TD) learning method makes use of bootstrapping so that we don't have to wait until the end of the episode to compute the value of a state.
- The TD learning algorithm combines the benefits of the dynamic programming and Monte Carlo methods. Like dynamic programming, it uses bootstrapping, so we don't have to wait until the end of an episode to compute the state value or Q value; and like the Monte Carlo method, it is model-free, so it does not require the model dynamics of the environment to compute the state value or Q value.
- The TD error is the difference between the target value, $r + \gamma V(s')$, and the predicted value, $V(s)$.
- The TD learning update rule is given as $V(s) \leftarrow V(s) + \alpha \big( r + \gamma V(s') - V(s) \big)$, where $\alpha$ is the learning rate and $\gamma$ is the discount factor.
- In a TD prediction task, we are given a policy and we estimate the value function under that policy. So, we can say that in TD prediction we try to predict the state values for the given policy, as in the sketch after this list.
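A minimal TD(0) prediction sketch is shown below. It is not the chapter's exact code; it assumes the Gymnasium `FrozenLake-v1` environment and a uniformly random policy purely for illustration, and applies the TD error and update rule defined above at every step.

```python
# TD(0) prediction sketch (illustrative, assuming Gymnasium's FrozenLake-v1
# environment and a random policy; not the book's exact implementation).
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1")
alpha = 0.1        # learning rate
gamma = 0.99       # discount factor
num_episodes = 5000

# V(s): estimated value of each state under the given policy
V = np.zeros(env.observation_space.n)

def policy(state):
    # Placeholder policy for the prediction task: act uniformly at random
    return env.action_space.sample()

for _ in range(num_episodes):
    state, _ = env.reset()
    done = False
    while not done:
        action = policy(state)
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated

        # TD target: r + gamma * V(s') (zero bootstrap at terminal states)
        td_target = reward + gamma * V[next_state] * (not terminated)
        # TD error: difference between target value and predicted value
        td_error = td_target - V[state]
        # TD(0) update rule: V(s) <- V(s) + alpha * TD error
        V[state] += alpha * td_error

        state = next_state
```

Because the update happens at every step using the bootstrapped target, the value estimates improve during the episode itself, rather than only after the episode ends as in the Monte Carlo method.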