Combining everything
We've now seen all of the DQN improvements mentioned in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning [1]. Let's combine them into one hybrid method. First of all, we need to define our network architecture and the three methods that contribute to it (a sketch of the combined network follows the list):
- Categorical DQN: Our network will predict the probability distribution of values for every action.
- Dueling DQN: Our network will have two separate paths: one for the distribution of the state value and one for the advantage distributions. On the output, both paths will be summed together, providing the final value probability distributions for the actions. To force the advantage distribution to have zero mean, we'll subtract the mean advantage in every atom.
- NoisyNet: Our linear layers in the value and advantage paths will be noisy variants of nn.Linear.
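The following is a minimal sketch of how these three pieces might fit together in one PyTorch module. It assumes the standard Atari convolutional stack, 51 atoms as in the original categorical DQN, and a simple independent-Gaussian NoisyLinear layer; layer sizes, the sigma initialization, and the class names here are illustrative and may differ from the full implementation.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ATOMS = 51  # number of atoms in the value distribution (assumed, as in C51)


class NoisyLinear(nn.Linear):
    """Minimal NoisyNet layer with independent Gaussian noise (a sketch)."""
    def __init__(self, in_features, out_features, sigma_init=0.017):
        super().__init__(in_features, out_features)
        self.sigma_weight = nn.Parameter(
            torch.full((out_features, in_features), sigma_init))
        self.sigma_bias = nn.Parameter(torch.full((out_features,), sigma_init))
        self.register_buffer("eps_weight", torch.zeros(out_features, in_features))
        self.register_buffer("eps_bias", torch.zeros(out_features))

    def forward(self, x):
        # Resample the noise on every forward pass
        self.eps_weight.normal_()
        self.eps_bias.normal_()
        return F.linear(x,
                        self.weight + self.sigma_weight * self.eps_weight,
                        self.bias + self.sigma_bias * self.eps_bias)


class RainbowDQN(nn.Module):
    """Sketch of the combined network: categorical output, dueling paths,
    and noisy linear layers in both fully connected heads."""
    def __init__(self, input_shape, n_actions):
        super().__init__()
        self.n_actions = n_actions
        # Standard Atari convolutional feature extractor
        self.conv = nn.Sequential(
            nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        conv_out = self._conv_out_size(input_shape)
        # Dueling paths: state-value distribution and advantage distributions
        self.fc_val = nn.Sequential(
            NoisyLinear(conv_out, 512), nn.ReLU(),
            NoisyLinear(512, N_ATOMS),
        )
        self.fc_adv = nn.Sequential(
            NoisyLinear(conv_out, 512), nn.ReLU(),
            NoisyLinear(512, n_actions * N_ATOMS),
        )

    def _conv_out_size(self, shape):
        with torch.no_grad():
            return self.conv(torch.zeros(1, *shape)).view(1, -1).size(1)

    def forward(self, x):
        batch = x.size(0)
        feats = self.conv(x).view(batch, -1)
        val = self.fc_val(feats).view(batch, 1, N_ATOMS)
        adv = self.fc_adv(feats).view(batch, self.n_actions, N_ATOMS)
        # Subtract the mean advantage in every atom to force zero-mean advantages,
        # then softmax over atoms to get per-action probability distributions
        logits = val + (adv - adv.mean(dim=1, keepdim=True))
        return F.softmax(logits, dim=2)

Note that the softmax is applied over the atom dimension, so every action ends up with a proper probability distribution over the support of the value distribution.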
In addition to the network architecture changes, we'll use a prioritized replay buffer to keep environment transitions and sample them proportionally...
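As a rough illustration of proportional sampling, here is a minimal array-based prioritized buffer sketch. It assumes the usual alpha/beta hyperparameters from the prioritized replay paper; the class name, method signatures, and default values are illustrative, and a real implementation would typically use a sum tree for efficient sampling instead of the O(N) arrays used here.

import numpy as np


class PrioritizedReplayBuffer:
    """Sketch of a proportional prioritized replay buffer."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priorities affect sampling
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float32)
        self.pos = 0

    def append(self, transition):
        # New transitions get the current max priority so they are sampled at least once
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        indices = np.random.choice(len(self.buffer), batch_size, p=probs)
        samples = [self.buffer[i] for i in indices]
        # Importance-sampling weights compensate for the bias of non-uniform sampling
        weights = (len(self.buffer) * probs[indices]) ** (-beta)
        weights /= weights.max()
        return samples, indices, weights.astype(np.float32)

    def update_priorities(self, indices, td_errors, eps=1e-5):
        # Priorities are refreshed from the magnitude of the TD errors after each update
        for idx, err in zip(indices, td_errors):
            self.priorities[idx] = abs(err) + eps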