REINFORCE and Actor-Critic are very intuitive methods that work well on small to medium-sized RL tasks. However, they have some problems that need to be addressed before policy gradient algorithms can scale to much larger and more complex tasks. The main problems are as follows:
- Difficulty choosing a correct step size: This stems from the non-stationary nature of RL, meaning that the distribution of the data changes continuously over time because, as the agent learns, it explores different parts of the state space. Finding a learning rate that stays stable throughout training is very tricky.
- Instability: The algorithms aren't aware of how much the policy will change after an update. This is related to the previous problem. A single, uncontrolled update can induce a substantial shift in the policy that drastically changes the action distribution, potentially moving the policy into a poor region from which it is hard to recover, as illustrated in the sketch after this list.
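To make the instability problem concrete, the following minimal sketch (not part of the algorithms above) uses a softmax policy over discrete actions and measures, via KL divergence, how far the action distribution moves when the same gradient estimate is applied with different step sizes. The policy parameters, the gradient, and the step sizes are arbitrary values chosen purely for illustration.

```python
# Minimal sketch: how step size affects the shift in the action distribution.
# All numbers here are illustrative assumptions, not values from the text.
import numpy as np

np.random.seed(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete action distributions
    return float(np.sum(p * np.log(p / q)))

n_actions = 4
theta = np.random.randn(n_actions) * 0.1   # policy parameters (logits)
grad = np.random.randn(n_actions)          # a single (noisy) policy gradient estimate

old_policy = softmax(theta)
for step_size in (0.01, 0.1, 1.0, 5.0):
    new_policy = softmax(theta + step_size * grad)
    print(f"step size {step_size:>4}: KL(old || new) = {kl(old_policy, new_policy):.4f}")
```

Running this, the KL divergence between the old and new policies grows rapidly with the step size: a small step barely perturbs the action distribution, while a large step applied to the very same gradient reshapes it entirely. This is exactly the uncontrolled behavior described in the list above, since the update rule itself gives no guarantee on how much the policy will change.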