Introduction
Critical applications and services need to be resilient to failures and capable of handling high network traffic. One strategy for achieving this scale and high availability is to use a load balancer. A load balancer distributes incoming service requests across a pool of servers, which process the requests in parallel, thus providing higher throughput. If one of the servers in the pool fails, the load balancer removes it from the pool, and subsequent requests are distributed among the remaining servers. The load balancer acts as a frontend to a cluster of worker nodes that provide the actual service.
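The behavior described above can be sketched in a few lines of Python. This is a minimal, hypothetical round-robin illustration (the class and server names are invented for this example, not part of any OpenStack API): requests rotate through the pool, and removing a failed member simply shrinks the rotation.

```python
class LoadBalancer:
    """Minimal round-robin load balancer sketch (illustrative only)."""

    def __init__(self, servers):
        # The pool of worker nodes behind the load balancer.
        self.servers = list(servers)

    def next_server(self):
        # Distribute requests across the pool in turn: take the
        # next server and move it to the back of the rotation.
        server = self.servers.pop(0)
        self.servers.append(server)
        return server

    def remove(self, server):
        # A failed server is taken out of the pool; subsequent
        # requests go only to the remaining servers.
        self.servers.remove(server)


pool = LoadBalancer(["worker-1", "worker-2", "worker-3"])
print([pool.next_server() for _ in range(4)])
# → ['worker-1', 'worker-2', 'worker-3', 'worker-1']

pool.remove("worker-2")
print([pool.next_server() for _ in range(2)])
# → ['worker-3', 'worker-1']
```

A production load balancer (e.g., HAProxy or OpenStack's LBaaS) adds health checks, connection tracking, and other scheduling policies, but the core idea is this rotation over a mutable pool.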
To implement these recipes, we will use an OpenStack setup as shown in the following diagram:
This setup has two compute nodes and a single node hosting the controller and networking services.