The core of policy gradient algorithms has already been covered, but one important concept remains to be explained: how action values are computed.
We already saw with formula (6.4):

$$
\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t}) \, Q_\pi(s_{i,t}, a_{i,t}) \tag{6.4}
$$

that we are able to estimate the gradient of the objective function by sampling directly from the experience collected while following the policy.
The only two terms involved are the action values $Q_\pi(s,a)$ and the derivative of the logarithm of the policy, $\nabla_\theta \log \pi_\theta(a \mid s)$, which can be obtained through modern deep learning frameworks (such as TensorFlow and PyTorch). While we defined $Q_\pi(s,a)$, we haven't yet explained how to estimate the action-value function.
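To make the role of the framework concrete, here is a minimal PyTorch sketch (not code from this book) of how the sampled gradient in (6.4) is obtained: the network architecture, batch size, dimensions, and placeholder `q_values` are illustrative assumptions, and autograd supplies $\nabla_\theta \log \pi_\theta(a \mid s)$ automatically.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 4, 2  # assumed dimensions, e.g. a CartPole-like task

# A simple policy network producing action logits (illustrative architecture)
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))

states = torch.randn(32, obs_dim)             # batch of sampled states
actions = torch.randint(0, n_actions, (32,))  # actions taken while following the policy
q_values = torch.randn(32)                    # placeholder action-value estimates

# log pi(a|s): the framework records this computation, so autograd yields
# the gradient of log pi with respect to the policy parameters
dist = torch.distributions.Categorical(logits=policy(states))
log_probs = dist.log_prob(actions)

# Sample estimate of the objective in (6.4): minimizing this loss performs
# gradient ascent on Q(s,a) * grad log pi(a|s)
loss = -(log_probs * q_values).mean()
loss.backward()  # gradients now populate policy.parameters()
```

Note that `q_values` is deliberately a placeholder here; filling it in is exactly the estimation problem discussed next.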
The simplest way, first introduced in the REINFORCE algorithm by Williams, is to estimate the return using Monte Carlo (MC) returns. For this reason, REINFORCE is considered an MC algorithm...
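As a minimal sketch of this idea, the snippet below computes MC returns for one complete episode: the action value at each step is estimated by the discounted sum of the rewards actually collected from that step until the end of the episode. The helper name and the discount factor value are illustrative assumptions.

```python
def mc_returns(rewards, gamma=0.99):
    """Compute the discounted Monte Carlo return G_t for each step of one episode."""
    returns = [0.0] * len(rewards)
    running = 0.0
    # Walk the episode backwards, using the recursion G_t = r_t + gamma * G_{t+1}
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a 4-step episode with rewards [1, 0, 0, 1]
print(mc_returns([1.0, 0.0, 0.0, 1.0]))
```

Because these estimates are built from full trajectories, they require the episode to terminate before any update can be made.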