In our deep Q-learner-based intelligent agent implementation, we used a deep neural network as the function approximator to represent the action-value function. The agent then derived its policy from this action-value function; in particular, we used the epsilon-greedy algorithm in our implementation. Ultimately, the agent has to know what actions are good to take given an observation/state. Instead of parametrizing or approximating an action-value function and then deriving a policy from it, can we not parametrize the policy directly? Yes we can! That is the exact idea behind policy gradient methods.
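To make the idea concrete, here is a minimal sketch of a directly parametrized policy trained with the simplest policy gradient estimator (REINFORCE). The two-armed bandit environment, the parameter vector `theta`, and the learning rate are illustrative assumptions, not part of any particular library: the policy is a softmax over per-action preferences, and each update ascends the gradient of the expected return, `G * ∇ log π(a)`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-armed bandit: action 0 yields reward 1, action 1 yields 0.
def reward(action):
    return 1.0 if action == 0 else 0.0

# Directly parametrized policy: softmax over per-action preferences theta.
theta = np.zeros(2)

def policy(theta):
    z = np.exp(theta - theta.max())  # subtract max for numerical stability
    return z / z.sum()

alpha = 0.1  # learning rate (illustrative choice)
for _ in range(500):
    probs = policy(theta)
    a = rng.choice(2, p=probs)
    G = reward(a)                      # return of this one-step episode
    grad_log_pi = -probs               # d log pi(a)/d theta_k = -pi(k) for k != a ...
    grad_log_pi[a] += 1.0              # ... and 1 - pi(a) for k == a
    theta += alpha * G * grad_log_pi   # REINFORCE: ascend E[G * grad log pi]

print(policy(theta))  # probability mass concentrates on the rewarding action
```

No value function is learned here; the policy parameters are adjusted directly, which is exactly what distinguishes policy gradient methods from the value-based approach of the deep Q-learner.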
In the following subsections, we will briefly look at policy gradient-based learning methods and then transition to actor-critic methods, which combine and make use of both a learned value function (the critic) and a directly parametrized policy (the actor).