Need for policy-based methods
We start this chapter by discussing why we need policy-based methods when we have already introduced many value-based methods. Policy-based methods i) are arguably more principled because they directly optimize the policy parameters, ii) naturally handle continuous action spaces, and iii) can learn genuinely stochastic policies. Let's now go into the details of each of these points.
A more principled approach
In Q-learning, the policy is obtained indirectly: we learn action values, which are then used to determine the best action(s). But do we really need to know the value of an action? Most of the time we don't, as action values are only proxies that get us to good policies. Policy-based methods learn function approximations that output a policy directly, without this intermediate step. This is arguably a more principled approach because we can take gradient steps that directly optimize the policy rather than the proxy action-value estimates.
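To make the contrast concrete, here is a minimal sketch (not from the text) for a small discrete problem: a policy that exists only implicitly by acting greedily on learned Q estimates, versus a directly parameterized softmax policy whose parameters we could optimize with gradient steps. The shapes, names, and the tabular softmax parameterization are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 4, 3
rng = np.random.default_rng(0)

# --- Value-based: the policy is only implicit in the action-value estimates ---
Q = rng.normal(size=(n_states, n_actions))   # stand-in for learned Q(s, a) estimates

def greedy_policy(state):
    # The "policy" is derived indirectly: act greedily w.r.t. the Q proxy.
    return int(np.argmax(Q[state]))

# --- Policy-based: the policy itself is the parameterized object ---
theta = np.zeros((n_states, n_actions))      # policy parameters we optimize directly

def softmax_policy(state):
    # pi_theta(a | s): a probability distribution over actions that is
    # differentiable in theta, so gradient steps can improve it directly.
    prefs = theta[state] - theta[state].max()
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return probs

print(greedy_policy(0))    # a single greedy action chosen via the Q proxy
print(softmax_policy(0))   # action probabilities given directly by pi_theta
```

The key difference the sketch illustrates is what gets updated during learning: the value-based agent updates Q and hopes the induced greedy policy improves, while the policy-based agent updates theta, the policy itself.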