Why asynchronous methods?
The paper Asynchronous Methods for Deep Reinforcement Learning was published in June 2016 by a combined team from Google DeepMind and MILA (https://arxiv.org/pdf/1602.01783.pdf). The approach trained faster than earlier GPU-based deep reinforcement learning methods and showed good results on a multi-core CPU instead of requiring a GPU. Asynchronous methods also work on continuous as well as discrete action spaces.
If we recall the approach of the deep Q-network, we use experience replay as storage for all the experiences, and then train our deep neural network on random samples drawn from it; the network in turn predicts the Q-values used to pick the most favorable action. However, experience replay has the drawbacks of high memory usage and heavy computation per real interaction. The basic idea behind asynchronous methods is to overcome these issues. Therefore, instead of using experience replay, multiple instances of the environment are created and multiple agents asynchronously execute actions and compute updates in parallel (shown in the following diagram):
High-level diagram of the asynchronous method in deep...
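To make the parallel setup concrete, here is a minimal sketch of the idea: several worker threads, each owning its own environment instance, asynchronously apply updates to one set of shared parameters. The toy environment, the placeholder policy, and the dummy "gradient" below are illustrative assumptions, not the actual algorithm from the paper.

```python
import threading
import numpy as np

shared_weights = np.zeros(4)      # globally shared parameters
lock = threading.Lock()           # protects the shared update step

class ToyEnv:
    """Stand-in for an environment instance owned by a single worker."""
    def reset(self):
        return np.random.randn(4)

    def step(self, action):
        next_state = np.random.randn(4)
        reward = float(action == 0)          # dummy reward signal
        done = np.random.rand() < 0.1        # random episode end
        return next_state, reward, done

def worker(worker_id, n_steps=100):
    env = ToyEnv()                           # each worker gets its own environment
    state = env.reset()
    for _ in range(n_steps):
        action = np.random.randint(2)        # placeholder policy
        next_state, reward, done = env.step(action)
        grad = reward * state                # placeholder "gradient"
        with lock:                           # asynchronous update of shared parameters
            shared_weights += 0.01 * grad
        state = env.reset() if done else next_state

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared weights after asynchronous updates:", shared_weights)
```

Because each worker explores its own copy of the environment, the updates arriving at the shared parameters come from different parts of the state space, which is what lets asynchronous methods drop the experience replay buffer.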