Benchmarking the cloud
Scaling up hardware resources is the standard way to address capacity and performance limits. With the help of a monitoring system, cloud operators can act proactively, adding more resources within a defined window of time to accommodate additional load. However, monitoring alone is not enough to understand the limits of the cloud. In distributed computing systems, every request that travels through the system incurs a performance cost. In the OpenStack world, tracing a load of API requests and deriving even an approximate measurement of how much a given component or service can handle is far from straightforward.
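To make the problem concrete, the following is a minimal sketch, not a substitute for a proper benchmarking tool, that fires a small concurrent load of GET requests at an OpenStack API endpoint and reports basic latency figures. The endpoint URL, authentication token, and request counts are hypothetical placeholders and must be adapted to your own deployment:

```python
#!/usr/bin/env python3
"""Rough measurement of API latency under a small concurrent load.

The endpoint, token, and request counts below are hypothetical values;
replace them with details from your own OpenStack deployment.
"""

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical: public endpoint of the service to probe and a valid token
# issued by Keystone for the requesting project.
API_ENDPOINT = "http://controller:8774/v2.1/flavors"
AUTH_TOKEN = "replace-with-a-valid-token"
CONCURRENCY = 10
TOTAL_REQUESTS = 100


def timed_request(_):
    """Issue one GET request and return its round-trip latency in seconds."""
    start = time.monotonic()
    response = requests.get(API_ENDPOINT, headers={"X-Auth-Token": AUTH_TOKEN})
    response.raise_for_status()
    return time.monotonic() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(timed_request, range(TOTAL_REQUESTS)))

    print(f"requests:        {len(latencies)}")
    print(f"mean latency:    {statistics.mean(latencies):.3f} s")
    print(f"95th percentile: {sorted(latencies)[int(0.95 * len(latencies))]:.3f} s")
    print(f"max latency:     {max(latencies):.3f} s")
```

Even a crude probe like this hints at where a service starts to degrade, but repeating it by hand across every service and load pattern does not scale, which is exactly the gap discussed next.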
From the early stages of the cloud journey, cloud operators should define a strategic approach to measuring the limits and performance metrics of their cloud. The challenging part, however, is the scarcity of efficient tools that can be integrated into the life cycle of a cloud deployment.
To address this gap in performance measurement, one key approach is to benchmark the private cloud setup under...