Both the network and the services you run take some time to respond, even under normal conditions. Occasionally they take longer, especially when under a bigger-than-average load. Sometimes, instead of a few milliseconds, your requests can take seconds to complete.
Try to design your system so it doesn't wait on too many fine-grained remote calls, as each such call adds to your total processing time. Even in a local network, 10,000 requests for 1 record each will be much slower than 1 request for 10,000 records. To reduce the impact of network latency, consider sending and handling requests in bulk. You can also try to hide the cost of small calls by doing other processing tasks while waiting for their results.
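The effect of batching can be illustrated with a minimal sketch. The `FakeStore` class, its methods, and the fixed per-call overhead below are all hypothetical stand-ins that simulate a remote store, so the difference in accumulated round trips is easy to see:

```python
# Hypothetical simulation: a remote store where every call pays a
# fixed round-trip cost, regardless of how much data it carries.
PER_CALL_OVERHEAD_MS = 5  # assumed network round-trip cost per call


class FakeStore:
    def __init__(self, records):
        self.records = records
        self.cost_ms = 0  # accumulated simulated latency

    def get_one(self, key):
        # one round trip per record
        self.cost_ms += PER_CALL_OVERHEAD_MS
        return self.records[key]

    def get_many(self, keys):
        # one round trip, no matter how many records are requested
        self.cost_ms += PER_CALL_OVERHEAD_MS
        return [self.records[k] for k in keys]


store = FakeStore({i: f"record-{i}" for i in range(10_000)})

# 10,000 fine-grained calls: 10,000 round trips
fine_grained = [store.get_one(i) for i in range(10_000)]
fine_cost = store.cost_ms  # 10,000 * 5 ms

store.cost_ms = 0
# 1 bulk call: a single round trip for the same data
bulk = store.get_many(range(10_000))
bulk_cost = store.cost_ms  # 5 ms

assert fine_grained == bulk  # same data, very different latency
```

Real stores expose the same trade-off through bulk endpoints such as multi-get operations or command pipelining; the simulation only makes the per-call overhead explicit.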
Other ways to deal with latency are to introduce caches, to push data to consumers in a publisher-subscriber model instead of waiting for requests, or to deploy closer to your customers, for example by using Content Delivery Networks (CDNs).
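The caching idea can be sketched in a few lines. `fetch_remote` and its call counter are hypothetical stand-ins for a slow network call; the read-through cache in front of it serves repeated requests from memory:

```python
# Hypothetical slow remote call; the counter tracks how many
# simulated round trips we actually pay for.
call_count = 0


def fetch_remote(key):
    global call_count
    call_count += 1  # each call would be a network round trip
    return f"value-for-{key}"


cache = {}


def fetch_cached(key):
    # Read-through cache: only the first request for a key goes
    # over the network; the rest are answered locally.
    if key not in cache:
        cache[key] = fetch_remote(key)
    return cache[key]


for _ in range(100):
    fetch_cached("user:42")

# 100 requests, but only 1 remote round trip
```

A production cache additionally needs an eviction policy and an invalidation strategy, since a stale entry is served just as fast as a fresh one.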