The Kubernetes Bible
The definitive guide to deploying and managing Kubernetes across major cloud platforms

Product type: Paperback
Published in: Feb 2022
Publisher: Packt
ISBN-13: 9781838827694
Length: 680 pages
Edition: 1st Edition
Authors (3): Nassim Kebbani, Piotr Tylenda, Russ McKendrick
Table of Contents

Preface
Section 1: Introducing Kubernetes
  Chapter 1: Kubernetes Fundamentals
  Chapter 2: Kubernetes Architecture – From Docker Images to Running Pods
  Chapter 3: Installing Your First Kubernetes Cluster
Section 2: Diving into Kubernetes Core Concepts
  Chapter 4: Running Your Docker Containers
  Chapter 5: Using Multi-Container Pods and Design Patterns
  Chapter 6: Configuring Your Pods Using ConfigMaps and Secrets
  Chapter 7: Exposing Your Pods with Services
  Chapter 8: Managing Namespaces in Kubernetes
  Chapter 9: Persistent Storage in Kubernetes
Section 3: Using Managed Pods with Controllers
  Chapter 10: Running Production-Grade Kubernetes Workloads
  Chapter 11: Deployment – Deploying Stateless Applications
  Chapter 12: StatefulSet – Deploying Stateful Applications
  Chapter 13: DaemonSet – Maintaining Pod Singletons on Nodes
Section 4: Deploying Kubernetes on the Cloud
  Chapter 14: Kubernetes Clusters on Google Kubernetes Engine
  Chapter 15: Launching a Kubernetes Cluster on Amazon Web Services with Amazon Elastic Kubernetes Service
  Chapter 16: Kubernetes Clusters on Microsoft Azure with Azure Kubernetes Service
Section 5: Advanced Kubernetes
  Chapter 17: Working with Helm Charts
  Chapter 18: Authentication and Authorization on Kubernetes
  Chapter 19: Advanced Techniques for Scheduling Pods
  Chapter 20: Autoscaling Kubernetes Pods and Nodes
  Chapter 21: Advanced Traffic Routing with Ingress
Other Books You May Enjoy

Exploring the problems that Kubernetes solves

You can imagine that launching containers on your local machine or in a development environment does not require the same level of planning as launching those same containers on remote machines that could face millions of users. Problems specific to production will arise, and Kubernetes is one of the best solutions for addressing them when you use containers in production:

  • Ensuring high availability
  • Handling release management and container deployments
  • Autoscaling containers

Ensuring high availability

High availability is the central principle of production. It means that your application should always remain accessible and should never go down. Of course, this is an ideal: even the biggest companies, such as Google or Amazon, experience outages. Still, you should always bear in mind that this is the goal. A microservice architecture is one way to mitigate the risk of a total outage: when a single microservice fails, the failure does not affect the overall stability of the application. Kubernetes includes a whole battery of features to make your Docker containers highly available by replicating them on several host machines and monitoring their health on a regular and frequent basis.

When you deploy Docker containers, the accessibility of your application depends directly on the health of your containers. Imagine that, for some reason, a container running one of your microservices becomes inaccessible; how can you automatically guarantee that the container is terminated and recreated using only Docker, without Kubernetes? You can't, because by default, Docker cannot do this alone. With Kubernetes, it becomes possible. Kubernetes helps you design applications that can repair themselves by automating tasks such as health checking and container replacement.

If one machine in your cluster were to fail, all of the containers running on it would disappear. Kubernetes would immediately notice that and reschedule all of the containers on another machine. In this way, your applications will become highly available and fault-tolerant as well.
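To make this more concrete, here is a minimal sketch of how replication and health monitoring are declared in Kubernetes. The manifest is purely illustrative (the myapp names, the image, and the /healthz endpoint are placeholders, not something defined in this book's exercises):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # run three copies across the cluster's machines
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container  # placeholder container name
        image: myapp:1.0.0     # placeholder image
        livenessProbe:         # health check; a failing container is restarted
          httpGet:
            path: /healthz     # placeholder health endpoint
            port: 8080
          periodSeconds: 10

If a container stops answering its liveness probe, or the node it runs on disappears, Kubernetes replaces the affected copies to bring the count back to three.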

Release management and container deployment

Deployment management is another production-specific problem that Kubernetes addresses. Deployment is the process of updating your application in production in order to replace an old version of a given microservice with a new one.

Deployments in production are always complex because you have to update containers that are responding to requests from end users. If you get this wrong, the consequences for your application can be severe: it could become unstable or inaccessible. That is why you should always be able to quickly revert to the previous version of your application by running a rollback. The challenge of deployment is that it needs to happen in the way that is least visible to the end user, with as little friction as possible.

When using Docker, each release is preceded by a build process. Indeed, before releasing a new container, you have to build a new Docker image containing the new version. A Docker image is a kind of template used by Docker to launch containers. A container can be considered a running instance of a Docker image.

Important Note

The Docker build process has absolutely nothing to do with Kubernetes: it's pure Docker. Kubernetes comes into play later, when you have to deploy new containers based on a newly built image.

Triggering a build is straightforward. Perform the following steps:

  1. Run the docker build command:
    $ docker build .
  2. Docker reads the build instructions from the Dockerfile in the current directory (.) and starts the build process.
  3. The build completes.
  4. The resulting image is stored on the local machine where the build ran.
  5. You then push the new image to a Docker repository with a specific tag that identifies the software version included in the new image, as shown in the sketch after this list.
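In practice, steps 1 and 5 usually look something like the following; the registry.example.com/myapp image name and the 1.0.0 tag are placeholders:

$ docker build -t registry.example.com/myapp:1.0.0 .    # build and tag in one step
$ docker push registry.example.com/myapp:1.0.0          # publish to the repository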

Once the push has completed, another process starts: the deployment. To deploy a containerized Docker app, you simply pull the image on the machine where you want to run it and then run a docker run command.
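As a sketch, with the same placeholder image name as before, that deployment boils down to two commands on the target machine:

$ docker pull registry.example.com/myapp:1.0.0
$ docker run -d --name myapp registry.example.com/myapp:1.0.0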

This is what you'll need to do to release a new version of your containerized software, and this is exactly where things can become hard if you don't use an orchestrator such as Kubernetes.

The next step in achieving the release is to delete the containers running the old version and replace them with new containers created from the new image.

Without Kubernetes, you have to run a docker run command on the machine where you want to deploy the new version of the container and destroy the container running the old version. Then, you have to repeat this operation on every server that runs a copy of the container. It works, but it is extremely tedious because it is not automated. And guess what? Kubernetes can automate this for you.
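Spelled out, the manual procedure to repeat on each server would look something like this (placeholder names again, with 1.1.0 standing in for the new version):

$ docker stop myapp && docker rm myapp                          # remove the old version
$ docker pull registry.example.com/myapp:1.1.0
$ docker run -d --name myapp registry.example.com/myapp:1.1.0   # start the new one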

Kubernetes has features for managing deployments and rollbacks of Docker containers, and this will make your life a lot easier when responding to this problem. With a single command, you can ask Kubernetes to update your containers on all of your machines. Here is the command, which we'll learn about later, that allows you to do that:

$ kubectl set image deploy/myapp myapp-container=myapp:1.0.0
# Meaning of the command
# kubectl set image deploy/<deployment_name> <container_name>=<image_name>:<image_tag>

On a real Kubernetes cluster, this command updates the container called myapp-container, which runs as part of the deployment called myapp, to the image tagged 1.0.0 on every single machine where that container runs.

Whether it has to update one container running on one machine or millions spread over multiple data centers, this command works the same way. Even better, it does this while maintaining high availability.

Remember that the goal is always to meet the requirement of high availability; a deployment should not cause your application to crash or cause a service disruption. Kubernetes natively supports deployment strategies, such as rolling updates, that are aimed at avoiding service interruptions.
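For example, a Deployment's rolling update behavior can be tuned declaratively. This fragment is a sketch that would sit inside the spec of a Deployment such as the hypothetical myapp example shown earlier:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica may be down during the update
      maxSurge: 1         # at most one extra replica may be created during it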

Additionally, Kubernetes keeps a history of the revisions of a specific deployment and allows you to revert to a previous revision with just one command. It's an incredibly powerful tool that lets you update a cluster of Docker containers with a single command.
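As a sketch, still assuming the hypothetical myapp deployment, inspecting and reverting revisions looks like this:

$ kubectl rollout history deploy/myapp                 # list the recorded revisions
$ kubectl rollout undo deploy/myapp                    # revert to the previous one
$ kubectl rollout undo deploy/myapp --to-revision=2    # or to a specific revision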

Autoscaling containers

Scaling is another production-specific problem, one that has been widely democratized by public clouds such as Amazon Web Services (AWS) and Google Cloud Platform (GCP). Scaling is the ability to adapt your computing power to the load you are facing – again, to meet the requirement of high availability. Never forget that the goal is to avoid outages and downtime.

When your production machines face a traffic spike and one of your containers can no longer cope with the load, you need a way to identify the failing container and to decide whether to scale it vertically or horizontally. If you don't act and the load doesn't decrease, your container or even the host machine will eventually fail, and your application might become inaccessible:

  • Vertical scaling: Your container is allowed to use more of the computing power offered by the host machine.
  • Horizontal scaling: Your container is duplicated onto another machine, and traffic is load balanced between the copies (see the command sketch after this list).
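In Kubernetes terms, horizontal scaling means changing the number of replicas. As a sketch, assuming the hypothetical myapp deployment from earlier:

$ kubectl scale deploy/myapp --replicas=5   # scale out from 3 to 5 copies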

Again, Docker alone cannot solve this problem; however, when you manage your Docker containers with Kubernetes, it becomes possible. Kubernetes can manage both vertical and horizontal scaling automatically, either by letting your containers consume more computing power from the host or by creating additional containers that can be deployed on another node in the cluster. And if your Kubernetes cluster cannot handle more containers because all of your nodes are full, Kubernetes can even launch new virtual machines by interfacing with your cloud provider, in a fully automated and transparent manner, by using a component called the Cluster Autoscaler.
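As a sketch of horizontal autoscaling, the following command creates a HorizontalPodAutoscaler for the hypothetical myapp deployment; note that it requires a metrics source, such as metrics-server, to be installed in the cluster:

$ kubectl autoscale deploy/myapp --min=2 --max=10 --cpu-percent=80

Kubernetes then adds or removes replicas, between 2 and 10 of them, to keep average CPU usage around 80%.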

Important Note

The Cluster Autoscaler only works if the Kubernetes cluster is deployed on a cloud provider.
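For example, on Google Kubernetes Engine (covered in Chapter 14), node autoscaling can be enabled when the cluster is created. This is only a sketch with a placeholder cluster name:

$ gcloud container clusters create my-cluster \
    --enable-autoscaling --min-nodes=1 --max-nodes=5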

These goals cannot be achieved without a container orchestrator. The reason is simple: you can't afford to perform these tasks by hand. Instead, following DevOps culture and agility, you should seek to automate them so that your applications can repair themselves, be fault-tolerant, and be highly available.

Just as you scale out your containers or cluster, you must also be able to decrease the number of containers if the load starts to fall, so that your resources are adapted to the load whether it is rising or falling. Again, Kubernetes can do this, too.

When and where is Kubernetes not the solution?

Kubernetes has undeniable benefits; however, it is not always advisable to use it as a solution. Here, we have listed several cases where another solution might be more appropriate:

  • Container-less architecture: If you do not use containers at all, Kubernetes won't be of any use to you.
  • Monolithic architecture: While you can use Kubernetes to deploy containerized monoliths, Kubernetes shows its full potential when it has to manage a large number of containers. A containerized monolithic application often consists of a very small number of containers, so Kubernetes won't have much to manage, and a simpler solution might fit your use case better.
  • A very small number of microservices or applications: Kubernetes stands out when it has to manage a large number of containers. If your app consists of two to three microservices, a simpler orchestrator might be a better fit.
  • No cluster: Are you running only one machine with a single Docker installation? Kubernetes is good at managing a cluster of machines that each run a Docker daemon. If you do not plan to manage a real cluster, then Kubernetes is not for you.
You have been reading a chapter from
The Kubernetes Bible