Exploring the problems that Kubernetes solves

Now, why is Kubernetes such a good fit for DevOps teams? Here’s the connection: Kubernetes shines as a container orchestration platform, managing the deployment, scaling, and networking of containerized applications. Containers are lightweight packages that bundle an application with its dependencies, allowing faster and more reliable deployments across different environments. Users leverage Kubernetes for several reasons:

  • Automation: Kubernetes automates many manual tasks associated with deploying and managing containerized applications, freeing up time for developers to focus on innovation.
  • Scalability: Kubernetes facilitates easy scaling of applications up or down based on demand, ensuring optimal resource utilization.
  • Consistency: Kubernetes ensures consistent deployments across different environments, from development to production, minimizing configuration errors and streamlining the delivery process.
  • Flexibility: Kubernetes is compatible with various tools and technologies commonly used by DevOps teams, simplifying integration into existing workflows.

You can imagine that launching containers on your local machine or a development environment is not going to require the same level of planning as launching these same containers on remote machines, which could face millions of users. Problems specific to production will arise, and Kubernetes is a great way to address these problems when using containers in production:

  • Ensuring high availability
  • Handling release management and container deployments
  • Autoscaling containers
  • Network isolation
  • Role-Based Access Control (RBAC)
  • Stateful workloads
  • Resource management

Ensuring high availability

High availability is the central principle of production: your application should always remain accessible and should never be down. Of course, this is an ideal; even the biggest companies experience service outages. However, you should always bear in mind that this is your goal. Kubernetes includes a whole battery of features to make your containers highly available by replicating them across several host machines and monitoring their health regularly and frequently.

When you deploy containers, the accessibility of your application depends directly on the health of your containers. Let's imagine that, for some reason, a container running one of your microservices becomes inaccessible; with Docker alone, you cannot guarantee that the container is automatically terminated and recreated to restore the service. With Kubernetes, this becomes possible: Kubernetes helps you design applications that repair themselves by performing automated tasks such as health checking and container replacement.
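
For example, here is a minimal sketch of a Pod manifest with a liveness probe; the myapp name, image tag, port, and /healthz endpoint are all hypothetical. If the probe fails repeatedly, the kubelet restarts the container automatically:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp-container
      image: myapp:1.0.0          # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz          # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10   # give the app time to start
        periodSeconds: 5          # probe every 5 seconds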

If one machine in your cluster were to fail, all the containers running on it would disappear. Kubernetes would immediately notice that and reschedule all the containers on another machine. In this way, your applications will become highly available and fault tolerant as well.

Release management and container deployment

Deployment management is another of these production-specific problems that Kubernetes solves. The process of deployment consists of updating your application in production to replace an old version of a given microservice with a new version.

Deployments in production are always complex because you have to update containers that are actively responding to requests from end users. If you get this wrong, the consequences can be severe: your application could become unstable or inaccessible, which is why you should always be able to quickly revert to the previous version of your application by running a rollback. The challenge of deployment is that it must be performed in the way that is least visible to the end user, with as little friction as possible.

Whenever you release a new version of the application, there are multiple processes involved, as follows:

  1. Update the Dockerfile or Containerfile with the latest application info (if any).
  2. Build a new Docker container image with the latest version of the application.
  3. Push the new container image to the container registry.
  4. Pull the new container image from the container registry to the staging/UAT/production system (Docker host).
  5. Stop and delete the old version of the application container running on the system.
  6. Launch a new container from the newly pulled image on the staging/UAT/production system.

Refer to the following image to understand the high-level flow in a typical scenario (please note that this is an ideal scenario because, in an actual environment, you might be using different and isolated container registries for development, staging, and production environments).

Figure 1.9: High-level workflow of container management

IMPORTANT NOTE

The container build process has absolutely nothing to do with Kubernetes; it is purely part of container image management. Kubernetes comes into play later, when you have to deploy new containers based on a newly built image.

Without Kubernetes, you'll have to run all these operations, including docker pull, docker stop, docker rm, and docker run, on the machine where you want to deploy the new version of the container. Then, you will have to repeat this operation on every server that runs a copy of the container. It works, but it is extremely tedious since none of it is automated. And guess what? Kubernetes can automate this for you.
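
As a sketch, the manual routine you would repeat on every host might look like the following; the registry address and container name are hypothetical:

$ docker pull registry.example.com/myapp:1.0.0    # fetch the new image
$ docker stop myapp                               # stop the old container
$ docker rm myapp                                 # delete it
$ docker run -d --name myapp registry.example.com/myapp:1.0.0    # start the new version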

Kubernetes has features that allow it to manage deployments and rollbacks of Docker containers, and this will make your life a lot easier when responding to this problem. With a single command, you can ask Kubernetes to update your containers on all of your machines as follows:

$ kubectl set image deploy/myapp myapp_container=myapp:1.0.0

On a real Kubernetes cluster, this command updates the container called myapp_container, which runs as part of the deployment called myapp, to the 1.0.0 image tag on every single machine where myapp_container runs.

Whether it must update one container running on one machine or millions over multiple datacenters, this command works the same. Even better, it ensures high availability.

Remember that the goal is always to meet the requirement of high availability; a deployment should not cause your application to crash or cause a service disruption. Kubernetes is natively capable of managing deployment strategies such as rolling updates, which aim to prevent service interruptions.

Additionally, Kubernetes keeps a history of all the revisions of a specific deployment and allows you to revert to a previous one with just one command. It's an incredibly powerful tool that lets you update a whole cluster of Docker containers with a single command.
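
For instance, assuming the myapp deployment from the earlier example, inspecting and reverting a deployment takes one command each:

$ kubectl rollout history deploy/myapp    # list the recorded revisions
$ kubectl rollout undo deploy/myapp       # roll back to the previous revision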

Autoscaling containers

Scaling is another production-specific problem, one that public clouds such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) have made commonplace. Scaling is the ability to adapt your computing power to the load you are facing, again to meet the requirement of high availability and load balancing. Never forget that the goal is to prevent outages and downtime.

When your production machines are facing a traffic spike and one of your containers is no longer able to cope with the load, you need to find a way to scale the container workloads efficiently. There are two scaling methods:

  • Vertical scaling: This allows your container to use more computing power offered by the host machine.
  • Horizontal scaling: You can duplicate your container in the same or another machine, and you can load-balance the traffic between the multiple containers.

Docker alone cannot address this problem; however, when Docker is managed by Kubernetes, it becomes possible.

Figure 1.10: Vertical scaling versus horizontal scaling for pods

Kubernetes can manage both vertical and horizontal scaling automatically, either by letting your containers consume more computing power from the host or by creating additional containers on the same or another node in the cluster. And if your Kubernetes cluster cannot accommodate more containers because all your nodes are full, Kubernetes can even launch new virtual machines by interfacing with your cloud provider, in a fully automated and transparent manner, using a component called the cluster autoscaler.

IMPORTANT NOTE

The cluster autoscaler only works if the Kubernetes cluster is deployed on a supported cloud provider (a private or public cloud).

These goals cannot be achieved without a container orchestrator, for a simple reason: you cannot afford to perform these tasks by hand. In the spirit of DevOps culture and agility, you should seek to automate them so that your applications can repair themselves, be fault-tolerant, and be highly available.

Just as you scale out your containers or cluster, you must also be able to decrease the number of containers when the load falls, adapting your resources to the load whether it is rising or falling. Again, Kubernetes can do this, too.
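
As a sketch, a single command attaches a HorizontalPodAutoscaler to the hypothetical myapp deployment, letting it grow and shrink with CPU load:

$ kubectl autoscale deploy/myapp --min=2 --max=10 --cpu-percent=80

Kubernetes will then add replicas when average CPU utilization exceeds 80% and remove them as the load falls, never dropping below 2 or exceeding 10 replicas.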

Network isolation

In a world of millions of users, ensuring secure communication between containers is paramount. Traditional approaches can involve complex manual configuration. This is where Kubernetes shines:

  • Pod networking: Kubernetes creates a virtual network overlay for your Pods. Containers within the same Pod share a network namespace and can communicate directly, while each Pod gets its own IP address and network namespace, keeping workloads cleanly separated. Out of the box, Pods can still reach one another across the cluster, which is why the network policies described next are the tool for restricting unintended communication and enhancing security.
  • Network policies: Kubernetes allows you to define granular network policies that further restrict how pods can communicate. You can specify allowed ingress (incoming traffic) and egress (outgoing traffic) for pods, ensuring they only access the resources they need. This approach simplifies network configuration and strengthens security in production environments.
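
As an illustration, here is a minimal NetworkPolicy sketch; the app: frontend and app: backend labels are hypothetical. It allows Pods labeled app: frontend to reach Pods labeled app: backend and blocks all other ingress to the backend Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend            # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend Pods may connect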

Role-Based Access Control (RBAC)

Managing access to container resources in a production environment with multiple users is crucial. Here’s how Kubernetes empowers secure access control:

  • User roles: Kubernetes defines user roles that specify permissions for accessing and managing container resources. These roles can be assigned to individual users or groups, allowing granular control over who can perform specific actions (such as viewing pod logs and deploying new containers).
  • Service accounts: Kubernetes utilizes service accounts to provide identities for pods running within the cluster. These service accounts can be assigned roles, ensuring pods only have the access they require to function correctly.

This multi-layered approach of using user roles and service accounts strengthens security and governance in production deployments.
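
As a sketch, assuming a hypothetical user jane, a Role that only permits reading Pods and their logs, plus the RoleBinding that grants it to her, could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
  namespace: default
rules:
  - apiGroups: [""]                      # "" means the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jane-reads-logs
  namespace: default
subjects:
  - kind: User
    name: jane                           # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io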

Stateful workloads

While containers are typically stateless (their data doesn’t persist after they stop), some applications require persistent storage. Kubernetes provides solutions to manage stateful workloads: Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). Kubernetes introduces the concept of PVs, which are persistent storage resources provisioned by the administrator (e.g., host directory, cloud storage). Applications can then request storage using PVCs. This abstraction decouples storage management from the application, allowing containers to leverage persistent storage without worrying about the underlying details.
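
For example, a minimal PVC sketch requesting 1 GiB of storage could look like this; the claim name is hypothetical, and a Pod would then mount this claim as a volume without knowing where the storage actually lives:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce           # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi            # ask the cluster for 1 GiB of persistent storage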

Resource management

Efficiently allocating resources to containers becomes critical in production to optimize performance and avoid resource bottlenecks. Kubernetes provides functionalities for managing resources:

  • Resource quotas: Kubernetes allows you to set resource quotas (limits and requests) for CPU, memory, and other resources at the namespace level. This ensures fair resource allocation and prevents the workloads of one team or application from consuming excessive resources that could starve others.
  • Resource limits and requests: When defining deployments, you can specify resource requests (minimum guaranteed resources) and resource limits (maximum allowed resources) for containers. These ensure your application has the resources it needs to function properly while preventing uncontrolled resource usage.
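
For example, here is a fragment of a container spec with illustrative values; the requests are the guaranteed minimum used for scheduling, and the limits are the hard ceiling:

containers:
  - name: myapp-container
    image: myapp:1.0.0        # hypothetical image
    resources:
      requests:
        cpu: 250m             # guaranteed quarter of a CPU core
        memory: 128Mi         # guaranteed memory
      limits:
        cpu: 500m             # CPU is throttled above half a core
        memory: 256Mi         # the container is killed if it exceeds this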

We will learn about all of these features in the upcoming chapters.

Should we use Kubernetes everywhere? Let’s discuss that in the next section.

When and where is Kubernetes not the solution?

Kubernetes has undeniable benefits; however, it is not always advisable to use it as a solution. Here, we have listed several cases where another solution might be more appropriate:

  • Container-less architecture: If you do not use containers at all, Kubernetes won't be of any use to you.
  • A very small number of microservices or applications: Kubernetes stands out when it must manage many containers. If your app consists of two to three microservices, a simpler orchestrator might be a better fit.