Several environments
The first idea that we usually apply to speed up deep learning training is a larger batch size. It's applicable to deep RL as well, but you need to be careful here. In the normal supervised learning case, the simple rule "a larger batch is better" usually holds: you just increase the batch size as your GPU memory allows, and a larger batch normally means more samples processed per unit of time thanks to the enormous parallelism of GPUs.
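As a point of reference, this is what the supervised case looks like in a minimal PyTorch sketch (the toy dataset, model, and batch size of 1024 are made-up illustrations): batch_size is the only knob being turned, and raising it simply pushes more samples through the GPU per step.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy regression data and model, purely for illustration
ds = TensorDataset(torch.randn(10_000, 32), torch.randn(10_000, 1))
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# In supervised learning, enlarging batch_size (as GPU memory allows)
# usually just increases the number of samples processed per second
loader = DataLoader(ds, batch_size=1024, shuffle=True)

for x, y in loader:
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
```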
The RL case is slightly different. During training, two things happen simultaneously (a minimal sketch of such a loop follows the list):
- Your network is trained to produce better predictions on the current data
- Your agent explores the environment
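To see both processes in one place, here is a minimal sketch of such a loop. CartPole, the tiny network, the epsilon of 0.1, and all other hyperparameters are illustrative assumptions, and the classic Gym reset()/step() signatures are assumed (newer Gymnasium versions return extra values); it is a stripped-down one-step Q-learning loop without a target network, not a production agent.

```python
import collections
import random

import gym
import numpy as np
import torch
from torch import nn

env = gym.make("CartPole-v1")
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
buffer = collections.deque(maxlen=10_000)

obs = env.reset()
for step in range(1000):
    # Exploration: the agent acts (epsilon-greedily here) and fresh
    # transitions flow into the training buffer
    if random.random() < 0.1:
        action = env.action_space.sample()
    else:
        with torch.no_grad():
            q_vals = net(torch.as_tensor(obs, dtype=torch.float32))
        action = int(q_vals.argmax())
    next_obs, reward, done, _ = env.step(action)
    buffer.append((obs, action, reward, done, next_obs))
    obs = env.reset() if done else next_obs

    # Training: at the same time, the network fits the data currently in
    # the buffer -- data whose distribution shifts as the policy improves
    if len(buffer) >= 32:
        states, actions, rewards, dones, next_states = \
            zip(*random.sample(buffer, 32))
        states_t = torch.as_tensor(np.stack(states), dtype=torch.float32)
        next_t = torch.as_tensor(np.stack(next_states), dtype=torch.float32)
        actions_t = torch.as_tensor(actions)
        rewards_t = torch.as_tensor(rewards, dtype=torch.float32)
        done_mask = torch.as_tensor(dones, dtype=torch.float32)
        with torch.no_grad():
            target = rewards_t + \
                0.99 * net(next_t).max(1)[0] * (1 - done_mask)
        q = net(states_t).gather(1, actions_t.unsqueeze(-1)).squeeze(-1)
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```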
As the agent explores the environment and learns about the outcome of its actions, the training data changes. In a shooter example, your agent can run around randomly for a while, getting shot by monsters, so the training buffer holds nothing but a miserable "death is everywhere" experience. But after...