In the REINFORCE with baseline algorithm, there are two separate components: the policy model and the value function. Since the whole point of learning the value function is to improve the policy updates, we can combine the learning of these two components. This is exactly what the actor-critic algorithm does, and it is what we will develop in this recipe.
The network for the actor-critic algorithm consists of the following two parts:
- Actor: This takes in the input state and outputs the action probabilities. Essentially, it learns the optimal policy by updating the model using information provided by the critic.
- Critic: This evaluates how good it is to be at the input state by computing the value function. This value guides the actor on how it should adjust its policy.
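The two heads described above can be sketched as a single PyTorch module. This is a minimal illustration under assumed layer sizes and names (`ActorCriticModel`, `n_hidden`, and so on are placeholders, not taken from the recipe); the key point is that both heads branch off one shared hidden layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCriticModel(nn.Module):
    """A sketch of an actor-critic network with shared layers."""
    def __init__(self, n_input, n_output, n_hidden):
        super().__init__()
        # Shared input/hidden layer used by both the actor and the critic
        self.fc = nn.Linear(n_input, n_hidden)
        # Actor head: produces logits over actions
        self.action = nn.Linear(n_hidden, n_output)
        # Critic head: produces a scalar estimate of the state value
        self.value = nn.Linear(n_hidden, 1)

    def forward(self, x):
        x = F.relu(self.fc(x))
        # Actor output: action probabilities via softmax
        action_probs = F.softmax(self.action(x), dim=-1)
        # Critic output: value of the input state
        state_value = self.value(x)
        return action_probs, state_value
```

A forward pass returns both the action distribution (used to sample an action) and the state value (used to compute the advantage that scales the policy update).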
These two components share parameters of input and hidden layers in the network, as...