If you're not familiar with asynchronous programming, then what we're about to talk about may seem a little confusing at first, but I promise that in practice, it's actually quite simple. All it means is performing individual computational tasks out of order, or out of sync. It lets engineers defer blocking their program's execution on a long-running task until they absolutely have to. To make this clear, let's look at an example.
Let's imagine we have a method where step A sends a request for a massive amount of data, step B performs long-running calculations locally, and step C returns the two results as a single response. If we were to read the response from our network request synchronously, then the time it takes to complete our method would be the total of...
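Here's a minimal sketch of that method using Python's asyncio. The functions `fetch_data` and `compute_locally` are hypothetical stand-ins for steps A and B, with `asyncio.sleep` simulating their running time; the point is that running them concurrently makes the total time roughly the longer of the two, rather than their sum:

```python
import asyncio

async def fetch_data() -> str:
    # Step A: hypothetical network request for a massive amount of data.
    await asyncio.sleep(0.2)  # simulate network latency
    return "remote data"

async def compute_locally() -> str:
    # Step B: hypothetical long-running local calculation.
    await asyncio.sleep(0.1)  # simulate computation time
    return "local result"

async def combined() -> tuple[str, str]:
    # Steps A and B run concurrently; we only block when we actually
    # need both results. Total time ≈ max(A, B), not A + B.
    remote, local = await asyncio.gather(fetch_data(), compute_locally())
    # Step C: return the two results as a single response.
    return remote, local
```

Run it with `asyncio.run(combined())`: the whole method finishes in about 0.2 seconds, because step B's 0.1 seconds of work overlaps with step A's wait.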