Delivering Kubernetes-Native Applications
In the previous sections, we migrated a Docker-based application to Kubernetes and successfully accessed it from inside the Minikube VM, as well as externally. Now, let's see what other benefits Kubernetes can provide if we design our application from the ground up so that it can be deployed using Kubernetes.
As the usage of your application grows, it becomes common to run several replicas of certain pods to serve a business function. In this case, grouping different containers in a pod alone is not sufficient. We need to go further and create groups of pods that work together. Kubernetes provides several abstractions for groups of pods, such as Deployments, DaemonSets, Jobs, CronJobs, and so on. Just like the Service object, these objects can be created by using a spec defined in a YAML file.
To start understanding the benefits of Kubernetes, let's use a Deployment to demonstrate how to replicate (scale up/down) an application in multiple pods.
Abstracting groups of pods using Kubernetes gives us the following advantages:
- Creating replicas of pods for redundancy: This is the main advantage of pod-grouping abstractions such as Deployments. A Deployment can create several pods with the given spec. A Deployment will automatically ensure that the pods it creates are online, and it will automatically replace any pods that fail.
- Easy upgrades and rollbacks: Kubernetes provides different strategies that you can use to upgrade your applications, as well as to roll versions back. This is important because in modern software development, software is often developed iteratively and updates are released frequently. An upgrade can change anything in the Deployment specification: an update of labels or any other field(s), an image version upgrade, an update to its embedded containers, and so on.
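To give a flavor of what this looks like in practice, here is a minimal sketch of an upgrade and a rollback using standard kubectl commands, assuming the k8s-for-beginners Deployment created later in this section and a hypothetical v2 image tag:

# Trigger a rolling upgrade by changing the container image (the v2 tag is hypothetical)
kubectl set image deploy k8s-for-beginners k8s-for-beginners=packtworkshops/the-kubernetes-workshop:v2
# Watch the rollout until it completes
kubectl rollout status deploy k8s-for-beginners
# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deploy k8s-for-beginners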
Let's take a look at some notable aspects of the spec of a sample Deployment:
k8s-for-beginners-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-for-beginners
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: k8s-for-beginners
        image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners
In addition to wrapping the pod spec as a "template", a Deployment must also specify its kind (Deployment), as well as the API version (apps/v1).
Note
For some historical reason, the spec name apiVersion is still being used. But technically speaking, it literally means apiGroupVersion. In the preceding Deployment example, it belongs to the apps group and is version v1.
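You can verify which API group/version pairs your cluster serves by listing them; apps/v1 should appear in the output:

kubectl api-versions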
In the Deployment spec, the replicas field instructs Kubernetes to start three pods using the pod spec defined in the template field. The selector field plays the same role as we saw in the case of the Service: it aims to associate the Deployment object with specific pods in a loosely coupled manner. This is particularly useful if you want to bring any preexisting pods under the management of your new Deployment.
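As a quick sanity check, you can list exactly the pods that such a selector would match by filtering on the same label:

kubectl get pod -l tier=frontend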
The replica number defined in a Deployment or other similar API object represents the desired state: how many pods are supposed to be running continuously. If some of these pods fail for some unexpected reason, Kubernetes will automatically detect that and create a corresponding number of pods to take their place. We'll see a Deployment in action in the following exercise.
Exercise 2.04: Scaling a Kubernetes Application
In Kubernetes, it's easy to increase the number of replicas running the application by updating the replicas field of a Deployment spec. In this exercise, we'll experiment with how to scale a Kubernetes application up and down. Follow these steps to complete this exercise:
- Create a file named k8s-for-beginners-deploy.yaml using the content shown here:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-for-beginners
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: k8s-for-beginners
        image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners
If you take a closer look, you'll see that this Deployment spec is largely based on the pod spec from earlier exercises (k8s-for-beginners-pod1.yaml), which you can see under the template field.
- Next, we can use kubectl to create the Deployment:
kubectl apply -f k8s-for-beginners-deploy.yaml
You should see the following output:
deployment.apps/k8s-for-beginners created
- Given that the Deployment has been created successfully, we can use the following command to show all the Deployment's statuses, such as their names, running pods, and so on:
kubectl get deploy
You should get the following response:
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
k8s-for-beginners   3/3     3            3           41s
Note
As shown in the previous command, we are using deploy instead of deployment. Both of these will work, as deploy is an allowed short name for deployment. You can find a quick list of some commonly used short names at this link: https://kubernetes.io/docs/reference/kubectl/overview/#resource-types. You can also view the short names by running kubectl api-resources without specifying a resource type.
- The standalone k8s-for-beginners pod that we created in the previous exercise still exists. To ensure that we see only the pods being managed by the Deployment, let's delete the older pod:

kubectl delete pod k8s-for-beginners
You should see the following response:
pod "k8s-for-beginners" deleted
- Now, get a list of all the pods:
kubectl get pod
You should see three pods, each named k8s-for-beginners- followed by a randomly generated suffix, all in the Running state.
The Deployment has created three pods, and their labels (specified in the labels field in step 1) happen to match the Service we created in the previous section. So, what will happen if we try to access the Service? Will the network traffic going to the Service be smartly routed to the three new pods? Let's test this out.
- To see how the traffic is distributed to the three pods, we can simulate a number of consecutive requests to the Service endpoint by running the curl command inside a Bash for loop, as follows:

for i in $(seq 1 30); do curl <minikube vm ip>:<service node port>; done
Note
In this command, use the same IP and port that you used in the previous exercise if you are running the same instance of Minikube. If you have restarted Minikube or have made any other changes, please get the proper IP of your Minikube cluster by following step 9 of the previous exercise.
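If you need to look these values up again, and assuming the Service from the previous section is named k8s-for-beginners, the following two commands should retrieve the VM IP and the node port, respectively:

minikube ip
kubectl get service k8s-for-beginners -o jsonpath='{.spec.ports[0].nodePort}'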
Once you've run the command with the proper IP and port, you should see the following output:
From the output, we can tell that all 30 requests get the expected response.
- You can run kubectl logs <pod name> to check the log of each pod. Let's go one step further and figure out the exact number of requests each pod has responded to, which might help us find out whether the traffic was evenly distributed. To do that, we can pipe the logs of each pod into the wc command to get the number of lines:

kubectl logs <pod name> | wc -l

Run the preceding command three times, copying the pod name you obtained, as shown in Figure 2.16:
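If you would rather not repeat the command by hand, a small loop over the matching pods (a convenience sketch, relying on the tier=frontend label from step 1) achieves the same thing:

# Count log lines (one request per line) for each pod matching the label
for p in $(kubectl get pod -l tier=frontend -o name); do
  echo "$p: $(kubectl logs "$p" | wc -l)"
done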
The result shows that the three pods handled 9, 10, and 11 requests, respectively. Due to the small sample size, the distribution is not absolutely even (that is, 10 for each), but it is sufficient to indicate the default round-robin distribution strategy used by a Service.
Note
You can read more about how kube-proxy leverages iptables to perform the internal load balancing by looking at the official documentation: https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables.
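If you are curious, you can peek at these rules from inside the Minikube VM. This is a rough sketch; the exact chain names and output depend on your Kubernetes version and proxy mode:

# Open a shell inside the Minikube VM
minikube ssh
# Then, inside the VM, list the NAT rules that kube-proxy manages
sudo iptables -t nat -L KUBE-SERVICES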
- Next, let's learn how to scale up a Deployment. There are two ways of accomplishing this: one way is to modify the Deployment's YAML config, where we can set the value of replicas to another number (such as 5), while the other way is to use the kubectl scale command, as follows:

kubectl scale deploy k8s-for-beginners --replicas=5
You should see the following response:
deployment.apps/k8s-for-beginners scaled
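For completeness, the declarative alternative mentioned above would be to change replicas: 3 to replicas: 5 in k8s-for-beginners-deploy.yaml and reapply the file; Kubernetes reconciles the difference in the same way:

kubectl apply -f k8s-for-beginners-deploy.yaml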
- Let's verify whether there are five pods running:
kubectl get pod
You should see a response similar to the following:
The output shows that the existing three pods are kept and that two new pods are created.
- Similarly, you can specify a replica count that is smaller than the current number. In our example, let's say that we want to shrink the number of replicas to 2. The command for this would look as follows:

kubectl scale deploy k8s-for-beginners --replicas=2
You should see the following response:
deployment.apps/k8s-for-beginners scaled
- Now, let's verify the number of pods:
kubectl get pod
You should see a response similar to the following:
As shown in the preceding screenshot, there are two pods, and they are both running as expected. Thus, in Kubernetes' terms, we can say, "the Deployment is in its desired state".
- We can run the following command to verify this:
kubectl get deploy
You should see the following response:
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
k8s-for-beginners   2/2     2            2           19m
- Now, let's see what happens if we delete one of the two pods:
kubectl delete pod <pod name>
You should get the following response:
pod "k8s-for-beginners-66644bb776-7j9mw" deleted
- Check the status of the pods to see what has happened:
kubectl get pod
You should see the following response:
We can see that there are still two pods. From the output, it's worth noting that the first pod name is the same as the second pod in Figure 2.18 (this is the one that was not deleted), but that the highlighted pod name is different from any of the pods in Figure 2.18. This indicates that the highlighted one is the pod that was newly created to replace the deleted one. The Deployment created a new pod so that the number of running pods satisfies the desired state of the Deployment.
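To watch this self-healing behavior unfold in real time, you can stream pod changes in a second terminal while deleting the pod; the -w flag keeps the listing open and prints every state transition:

kubectl get pod -w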
In this exercise, we have learned how to scale a Deployment up and down. You can scale other similar Kubernetes objects, such as StatefulSets and ReplicaSets, in the same way (a DaemonSet, by contrast, has no replicas field, since it runs one pod per eligible node). Also, for such objects, Kubernetes will try to auto-recover failed pods.