Load-balancing control services
When more compute services are added to the cluster, OpenStack's scheduler distributes new instances across them automatically. When new control or network services are added, however, traffic has to be deliberately directed to them: OpenStack has nothing built in to distribute traffic across its API services. A load-balancing service called HAProxy can do this for us. HAProxy can run anywhere it can reach the endpoints being balanced, either on its own node or on a node that already hosts part of OpenStack. TripleO runs HAProxy on each of the control nodes.
HAProxy has a concept of frontends and backends. A frontend is where HAProxy listens for incoming traffic, and a backend defines where that traffic is sent and balanced across. When a user makes an API call to one of the OpenStack services, the HAProxy frontend assigned to that service receives the request and passes it to one of the servers defined in the corresponding backend.
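As a rough illustration, a minimal HAProxy configuration balancing the Keystone API across two control nodes might look like the following sketch. The virtual IP, node names, and addresses are hypothetical, and a real deployment (such as one generated by TripleO) would define a similar frontend and backend pair for every OpenStack API service:

    frontend keystone-api
        # Listen on the virtual IP that clients use for the Keystone endpoint
        bind 192.168.1.10:5000
        default_backend keystone-api-nodes

    backend keystone-api-nodes
        # Distribute requests evenly across the control nodes running Keystone
        balance roundrobin
        server control1 192.168.1.11:5000 check
        server control2 192.168.1.12:5000 check

The check option tells HAProxy to health-check each backend server, so requests are only sent to control nodes whose Keystone service is actually responding.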