We have just learned how to scale our service. The next natural step is to configure the load balancer. The good news is that OpenShift handles most of this for us automatically.
In Chapter 6, Deploying Applications on the Cloud with OpenShift, where we introduced services, we learned that a service is reached through a virtual cluster IP. To understand how load balancing works, let's look at how the cluster IP is implemented.
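As a quick refresher, a service definition ties a cluster IP to a set of pods selected by label. The following manifest is a minimal illustrative sketch; the service name, label, and ports are hypothetical, not taken from our application:

```yaml
# Illustrative Service manifest; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # pods carrying this label become the service's endpoints
  ports:
    - port: 80         # port exposed on the virtual cluster IP
      targetPort: 8080 # port the backing pods actually listen on
```

Any pod matching the selector is added to the service's endpoint list, and traffic sent to the cluster IP on port 80 is forwarded to port 8080 on one of those pods.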
As we have also learned, each node in a Kubernetes cluster runs a set of system components that allow the cluster to provide its functionality. One of those components is kube-proxy. Kube-proxy runs on every node and is, among other things, responsible for implementing services. Kube-proxy continuously monitors the object model describing the cluster and gathers information about currently active services and pods on which those services...
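The watch-and-route idea described above can be sketched conceptually. This is a simplification and an assumption on my part: real kube-proxy programs iptables or IPVS rules in the kernel rather than forwarding traffic itself, and backend selection is typically random rather than strictly round-robin. The class and method names below are purely illustrative:

```python
import itertools

class ServiceProxy:
    """Toy model of kube-proxy's bookkeeping: it tracks which pods
    back each service and spreads new connections across them."""

    def __init__(self):
        self.endpoints = {}   # service name -> list of pod addresses
        self._cycles = {}     # service name -> round-robin iterator

    def update_endpoints(self, service, pods):
        # Invoked whenever the watched cluster state changes,
        # e.g. a pod is added to or removed from the service.
        self.endpoints[service] = list(pods)
        self._cycles[service] = itertools.cycle(self.endpoints[service])

    def route(self, service):
        # Pick a backend pod for a new connection to the service's
        # cluster IP (round-robin here for simplicity).
        if not self.endpoints.get(service):
            raise LookupError(f"no endpoints for service {service!r}")
        return next(self._cycles[service])

proxy = ServiceProxy()
proxy.update_endpoints("web", ["10.128.0.11:8080", "10.128.0.12:8080"])
targets = [proxy.route("web") for _ in range(4)]
print(targets)
# → ['10.128.0.11:8080', '10.128.0.12:8080',
#    '10.128.0.11:8080', '10.128.0.12:8080']
```

The essential point the sketch captures is that the proxy layer, not the application, decides which pod serves each connection, and that the endpoint list is kept in sync with the cluster's actual state.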