Differences between using a Load Balanced Service and an Ingress in Kubernetes

23 Nov 2020


What is the difference between using a load balanced service and an ingress to access applications in Kubernetes?

Basically, they achieve the same thing: being able to access an application that’s running in Kubernetes from outside of the cluster. But there are differences!

The key difference between the two is that an ingress operates at networking layer 7 (the application layer), so it routes connections based on the HTTP host header or URL path. Load balanced services operate at layer 4 (the transport layer), so they can load balance arbitrary TCP/UDP/SCTP services.

Ok, that statement doesn’t really clear things up (for me anyway). I’m a practical person by nature…so let’s run through examples of both (running everything in Kubernetes for Docker Desktop).

What we’re going to do is spin up two nginx pages that will serve as our applications, access them first with load balanced services, and then with an ingress.

So let’s create two nginx deployments from a custom image (available on the GHCR): –

kubectl create deployment nginx-page1 --image=ghcr.io/dbafromthecold/nginx:page1
kubectl create deployment nginx-page2 --image=ghcr.io/dbafromthecold/nginx:page2

And expose those deployments with a load balanced service: –

kubectl expose deployment nginx-page1 --type=LoadBalancer --port=8000 --target-port=80
kubectl expose deployment nginx-page2 --type=LoadBalancer --port=9000 --target-port=80
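
For reference, the first of those commands generates a service roughly equivalent to this manifest (just a sketch; it assumes the default app=nginx-page1 label that kubectl create deployment applies): –

apiVersion: v1
kind: Service
metadata:
  name: nginx-page1
spec:
  type: LoadBalancer        # ask the platform for an external IP
  selector:
    app: nginx-page1        # matches the pods created by the deployment
  ports:
  - port: 8000              # port exposed by the service
    targetPort: 80          # port the nginx container listens on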

Confirm that the deployments and services have come up successfully: –

kubectl get all

[Image: kubectl get all output showing the two deployments, their pods, and the LoadBalancer services]

Ok, now let’s check that the nginx pages are working. As we’ve used load balanced services in Kubernetes for Docker Desktop, they’ll be available as localhost:PORT: –

curl localhost:8000
curl localhost:9000

[Image: curl output showing the two nginx test pages]

Great! So we’re using the external IP address (localhost in this case) and a port number to connect to our applications.

Now let’s have a look at using an ingress.

First, let’s get rid of those load balanced services: –

kubectl delete service nginx-page1 nginx-page2

And create two new cluster IP services: –

kubectl expose deployment nginx-page1 --type=ClusterIP --port=8000 --target-port=80
kubectl expose deployment nginx-page2 --type=ClusterIP --port=9000 --target-port=80

So now we have our pods running and two cluster IP services, which aren’t accessible from outside of the cluster: –
[Image: kubectl get all output showing the two ClusterIP services]
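
They are still reachable from inside the cluster though. If you want to double-check, one option (a sketch using the public curlimages/curl image to run a throwaway pod) is: –

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- curl -s http://nginx-page1:8000

(Swap in nginx-page2:9000 to hit the second service.)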

The services have no external IP, so what we need to do is deploy an ingress controller.

An ingress controller will provide us with one external IP address that we can map to a DNS entry. Once the controller is up and running, we then use an ingress resource to define routing rules that map external requests to different services within the cluster.

Kubernetes currently supports GCE and nginx controllers; we’re going to use an nginx ingress controller.

To spin up the controller run: –

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml

[Image: output of applying the ingress-nginx deployment manifest]


That creates a number of resources in their own namespace (ingress-nginx). To confirm they’re all up and running: –

kubectl get all -n ingress-nginx

[Image: kubectl get all -n ingress-nginx output showing the controller resources]

Note the external IP of “localhost” for the ingress-nginx-controller service.
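
To see just that service and its external IP: –

kubectl get service ingress-nginx-controller -n ingress-nginx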

Ok, now we can create an ingress to direct traffic to our applications. Here’s an example ingress.yaml file: –

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-testwebsite
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.testwebaddress.com
    http:
      paths:
       - path: /pageone
         pathType: Prefix
         backend:
           service:
             name: nginx-page1
             port:
               number: 8000
       - path: /pagetwo
         pathType: Prefix
         backend:
           service:
             name: nginx-page2
             port:
               number: 9000

Watch out here. In Kubernetes v1.19 ingress went GA, so the apiVersion changed. The YAML above won’t work in any version prior to v1.19.
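
For comparison, on those older clusters the same rule would use the networking.k8s.io/v1beta1 API, where the backend is flattened into serviceName/servicePort. A sketch of the first path only: –

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-testwebsite
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.testwebaddress.com
    http:
      paths:
      - path: /pageone
        backend:
          serviceName: nginx-page1   # v1beta1 uses serviceName/servicePort
          servicePort: 8000
      # /pagetwo follows the same pattern with nginx-page2 and port 9000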

Anyway, the main points in this yaml are: –

  annotations:
    kubernetes.io/ingress.class: "nginx"

Which makes this ingress resource use our ingress nginx controller.
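
As an aside, on newer Kubernetes versions this annotation is deprecated in favour of the spec.ingressClassName field. Assuming an IngressClass named nginx exists, that would look like: –

spec:
  ingressClassName: nginx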

  rules:
  - host: www.testwebaddress.com

Which sets the URL we’ll be using to access our applications to http://www.testwebaddress.com

       - path: /pageone
         pathType: Prefix
         backend:
           service:
             name: nginx-page1
             port:
               number: 8000
       - path: /pagetwo
         pathType: Prefix
         backend:
           service:
             name: nginx-page2
             port:
               number: 9000

Which routes our requests to the backend cluster IP services depending on the path (e.g. – http://www.testwebaddress.com/pageone will be directed to the nginx-page1 service)

You can create the ingress.yaml file manually and then deploy to Kubernetes or just run: –

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/a6805ca732eac278e902bbcf208aef8a/raw/e7e64375c3b1b4d01744c7d8d28c13128c09689e/testnginxingress.yaml

Confirm that the ingress is up and running (it’ll take a minute to get an address): –

kubectl get ingress

[Image: kubectl get ingress output showing the new ingress]
N.B. – Ignore the warning (if you get one like in the screenshot above); we’re using the correct API version.
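
Before touching the hosts file, you can already test the routing by sending the host header explicitly (this assumes the controller is listening on localhost, as above): –

curl -H "Host: www.testwebaddress.com" http://localhost/pageone
curl -H "Host: www.testwebaddress.com" http://localhost/pagetwo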

Finally, we now also need to add an entry for the web address into our hosts file (simulating a DNS entry): –

127.0.0.1 www.testwebaddress.com

And now we can browse to the web pages to see the ingress in action!

[Image: the two nginx pages served via http://www.testwebaddress.com/pageone and /pagetwo]

And those are the differences between using load balanced services or an ingress to connect to applications running in a Kubernetes cluster. The ingress allows us to use just one external IP address and route traffic to different backend services, whereas with load balanced services we would need a different IP address (and port, if configured that way) for each application.

Thanks for reading!
