
The Kubernetes Bible

Kubernetes Fundamentals

Welcome to The Kubernetes Bible; we are happy to accompany you on your journey with Kubernetes. If you are working in the software development industry, you have probably heard about Kubernetes, which is hardly surprising given how much its popularity has grown in recent years.

Built by Google, Kubernetes is the leading container orchestration solution in terms of popularity and adoption: it’s the tool you need if you are looking for a solution to manage containerized applications in production at scale, whether on-premises or on a public cloud. Pay close attention to the words at scale: deploying and managing containers at scale is extremely difficult because, by default, container engines such as Docker do not provide any way on their own to maintain the availability and scalability of containers at scale.

Kubernetes first emerged as a Google project, and Google has put a lot of effort into building a solution to deploy a huge number of containers on their massively distributed infrastructure. By adopting Kubernetes as part of your stack, you’ll get an open source platform that was built by one of the biggest companies on the internet, with the most critical needs in terms of stability.

Although Kubernetes can be used with a lot of different container runtimes, this book is going to focus on the Kubernetes and containers (Docker and Podman) combination.

Perhaps you are already using Docker on a daily basis, but the world of container orchestration might be completely unknown to you. It is even possible that you do not even see the benefits of using such technology because everything looks fine to you with just raw Docker. That’s why, in this first chapter, we’re not going to look at Kubernetes in detail. Instead, we will focus on explaining what Kubernetes is and how it can help you manage your application containers in production. It will be easier for you to learn a new technology if you understand why it was built.

In this chapter, we’re going to cover the following main topics:

  • Understanding monoliths and microservices
  • Understanding containers
  • How can Kubernetes help you to manage containers?
  • Understanding the history of Kubernetes
  • Exploring the problems that Kubernetes solves

You can download the latest code samples for this chapter from the official GitHub repository at https://github.com/PacktPublishing/The-Kubernetes-Bible-Second-Edition/tree/main/Chapter01

Understanding monoliths and microservices

Let’s put Kubernetes and Docker to one side for the moment, and instead, let’s talk a little bit about how the internet and software development evolved together over the past 20 years. This will help you to gain a better understanding of where Kubernetes sits and the problems it solves.

Understanding the growth of the internet since the late 1990s

Since the late 1990s, the popularity of the internet has grown rapidly. Back in the 1990s, and even in the early 2000s, the internet was used by only a small fraction of the world’s population. Today, billions of people use the internet for email, web browsing, video games, and more.

There are now a lot of people on the internet, we use it for a huge variety of needs, and those needs are addressed by countless applications deployed on countless devices.

Additionally, the number of connected devices has increased, as each person can now have several devices of a different nature connected to the internet: laptops, computers, smartphones, TVs, tablets, and more.

Today, we can use the internet to shop, work, be entertained, read, and much more. It has entered almost every part of our society and has led to a profound paradigm shift over the last 20 years. All of this has given the utmost importance to software development.

Understanding the need for more frequent software releases

To cope with this ever-increasing number of users who are always demanding more in terms of features, the software development industry had to evolve in order to make new software releases faster and more frequent.

Indeed, back in the 1990s, you could build an application, deploy it to production, and simply update it once or twice a year. Today, companies must be able to update their software in production, sometimes several times a day, whether to deploy a new feature, integrate with a social media platform, support the screen resolution of the latest popular smartphone, or release a patch for a security vulnerability identified the day before. Everything is far more complex today, and you must move faster than before.

We constantly need to update our software, and in the end, the survival of many companies directly depends on how often they can offer releases to their users. But how do we accelerate software development life cycles so that we can deliver new versions of our software to our users more frequently?

IT departments of companies had to evolve, both in an organizational sense and a technical sense. Organizationally, they changed the way they managed projects and teams in order to shift to agile methodologies, and technically, technologies such as cloud computing platforms, containers, and virtualization were adopted widely and helped a lot to align technical agility with organizational agility. All of this is to ensure more frequent software releases! So, let’s focus on this evolution next.

Understanding the organizational shift to agile methodologies

From a purely organizational point of view, agile methodologies such as Scrum, Kanban, and DevOps became the standard way to organize IT teams.

Typical IT departments that do not apply agile methodologies are often made of three different teams, each of them having a single responsibility in the development and release process life cycle.

Rest assured, even though we are currently discussing agile methodologies and the history of the internet, this book is really about Kubernetes! We just need to explain some of the problems that we have faced before introducing Kubernetes for real!

Before the adoption of agile methodologies, development and operations often worked in separate silos. This could lead to inefficiency and communication gaps. Agile methodologies helped bridge these gaps and foster collaboration. The three isolated teams are shown below.

  • The business team: They’re like the voice of the customer. Their job is to explain what features are needed in the app to meet user needs. They translate business goals into clear instructions for the developers.
  • The development team: These are the engineers who bring the app to life. They translate the business team’s feature requests into code, building the functionalities and features users will interact with. Clear communication from the business team is crucial. If the instructions aren’t well defined, it can be like a game of telephone – misunderstandings lead to delays and rework.
  • The operations team: They’re the keepers of the servers. Their main focus is keeping the app running smoothly. New features can be disruptive because they require updates, which can be risky. In the past, they weren’t always aware of what new features were coming because they weren’t involved in the planning.

These are what we call silos, as illustrated in Figure 1.1:

Figure 1.1: Isolated teams in a typical IT department

The roles are clearly defined, people from the different teams do not work together that much, and when something goes wrong, everyone loses time finding the right information from the right person.

This kind of siloed organization has led to major issues:

  • A significantly longer development time
  • Greater risk in the deployment of a release that might not work at all in production

And that’s essentially what agile methodologies and DevOps fixed: they made people work together by creating multidisciplinary teams.

DevOps is a collaborative culture and set of practices that aims to bridge the gap between development (Dev) and operations (Ops) teams. DevOps promotes collaboration and automation throughout the software lifecycle, from development and testing to deployment and maintenance.

An agile team includes a product owner who describes concrete features by writing them as user stories that are readable by the developers working in the same team. Developers should have visibility of the production environment and the ability to deploy to it, preferably using a continuous integration and continuous deployment (CI/CD) approach. Testers should also be part of agile teams in order to write tests.

With the collaborative approach, the teams will get better and clearer visibility of the full picture, as illustrated in the following diagram.

Figure 1.2: Team collaboration breaks silos

Simply understand that, by adopting agile methodologies and DevOps, these silos were broken and multidisciplinary teams capable of formalizing a need, implementing it, testing it, releasing it, and maintaining it in the production environment were created. Table 1.1 presents a shift from traditional development to agile and DevOps methodology.

Feature | Traditional Development | Agile & DevOps
--- | --- | ---
Team Structure | Siloed departments (Development, Operations) | Cross-functional, multi-disciplinary teams
Work Style | Isolated workflows, limited communication | Collaborative, iterative development cycles
Ownership | Development hands off to Operations for deployment and maintenance | “You Build It, You Run It” - teams own the entire lifecycle
Focus | Features and functionality | Business value, continuous improvement
Release Cycle | Long release cycles, infrequent deployments | Short sprints, frequent releases with feedback loops
Testing | Separate testing phase after development | Integrated testing throughout the development cycle
Infrastructure | Static, manually managed infrastructure | Automated infrastructure provisioning and management (DevOps)

Table 1.1: DevOps vs traditional development – a shift in collaboration

So, we’ve covered the organizational transition brought about by the adoption of agile methodologies. Now, let’s discuss the technical evolution that we’ve gone through over the past several years.

Understanding the shift from on-premises to the cloud

Having agile teams is very nice, but agility must also be applied to how software is built and hosted.

With the aim to always achieve faster and more recurrent releases, agile software development teams had to revise two important aspects of software development and release:

  • Hosting
  • Software architecture

Today, apps are not just for a few hundred users but potentially for millions of concurrent users. More users on the internet also means needing more computing power to handle them. Indeed, hosting an application became a very big challenge.

In the early days of web hosting, businesses primarily relied on two main approaches to housing their applications. The first of these is on-premises hosting. This method involved physically owning and managing the servers that ran their applications. There are two main ways to achieve on-premises hosting:

  1. Dedicated Servers: Renting physical servers from established data center providers. This involved leasing dedicated server hardware from a hosting company. The hosting provider would manage the physical infrastructure (power, cooling, and security), but the responsibility for server configuration, software installation, and ongoing maintenance fell to the business. This offered greater control and customization compared to shared hosting but still required significant in-house technical expertise.
  2. Building Your Own Data Center: Constructing and maintaining a private data center. This option involved a massive investment by the company to build and maintain its own physical data center facility. This included purchasing server hardware, networking equipment, and storage solutions, and implementing robust power, cooling, and security measures. While offering the highest level of control and security, this approach was very expensive and resource-intensive and was typically only undertaken by large corporations with significant IT resources.

Note that on-premises hosting also encompasses managing the operating system, security patches, backups, and disaster recovery plans for the servers. Companies often needed dedicated IT staff to manage and maintain their on-premises infrastructure, adding to the overall cost.

When your user base grows, you need more powerful machines to handle the load. The solution is either to purchase a more powerful server and reinstall your app on it, or to order and rack new hardware if you manage your own data center. Neither option is very flexible. Today, a lot of companies are still using on-premises solutions, and they are still living with this lack of flexibility.

The game-changer was the adoption of the other approach: the public cloud, which is the opposite of on-premises. The idea behind cloud computing is that big companies such as Amazon, Google, and Microsoft, which own a lot of data centers, built virtualization on top of their massive infrastructure so that the creation and management of virtual machines is accessible through APIs. In other words, you can get virtual machines with just a few clicks or a few commands.

The following table provides high-level information about why cloud computing is good for organizations.

Feature | On-Premises | Cloud
--- | --- | ---
Scalability | Limited – requires purchasing new hardware when scaling up | Highly scalable – easy to add or remove resources on demand
Flexibility | Inflexible – changes require physical hardware adjustments | Highly flexible – resources can be provisioned and de-provisioned quickly
Cost | High upfront cost for hardware, software licenses, and IT staff | Low upfront cost – pay-as-you-go model for resources used
Maintenance | Requires dedicated IT staff for maintenance and updates | Minimal maintenance required – cloud provider manages infrastructure
Security | High level of control over security, but requires significant expertise | Robust security measures implemented by cloud providers
Downtime | Recovery from hardware failures can be time-consuming | Cloud providers offer high availability and disaster recovery features
Location | Limited to the physical location of the data center | Access from anywhere with an internet connection

Table 1.2: Importance of cloud computing for organizations

We will learn how cloud computing technology has helped organizations scale their IT infrastructure in the next section.

Understanding why the cloud is well suited for scalability

Today, virtually anyone can get hundreds or thousands of servers, in just a few clicks, in the form of virtual machines or instances created on physical infrastructure maintained by cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. A lot of companies decided to migrate their workloads from on-premises to a cloud provider, and their adoption has been massive over the last few years.

Thanks to that, now, computing power is one of the simplest things you can get.

Cloud platforms are now a standard hosting solution in an agile team’s arsenal. The main reason for this is that the cloud is extremely well suited to modern development.

Virtual machine configurations, CPUs, OSes, network rules, and more are publicly displayed and fully configurable, so there are no secrets for your team in terms of what the production environment is made of. Because of the programmable nature of cloud providers, it is very easy to replicate a production environment in a development or testing environment, providing more flexibility to teams, and helping them face their challenges when developing software. That’s a useful advantage for an agile development team built around the DevOps philosophy that needs to manage the development, release, and maintenance of applications in production.

Cloud providers have provided many benefits, as follows:

  • Elasticity and scalability
  • Helping to break up silos and enforcing agile methodologies
  • Fitting well with agile methodologies and DevOps
  • Low costs and flexible billing models
  • Ensuring there is no need to manage physical servers
  • Allowing virtual machines to be destroyed and recreated at will
  • More flexible compared to renting a bare-metal machine monthly

Due to these benefits, the cloud is a wonderful asset in the arsenal of an agile development team. Essentially, you can build and replicate a production environment over and over again without the hassle of managing the physical machine by yourself. The cloud enables you to scale your app based on the number of users using it or the computing resources they are consuming. You’ll make your app highly available and fault tolerant. The result is a better experience for your end users.

IMPORTANT NOTE

Please note that Kubernetes can run both on the cloud and on-premises. Kubernetes is very versatile, and you can even run it on a Raspberry Pi. Kubernetes and the public cloud are a good match, but you are not required or forced to run it on the cloud.

Now that we have explained the changes the cloud produced, let’s move on to software architecture because, over the years, a few things have also changed there.

Exploring the monolithic architecture

In the past, applications were mostly composed of monoliths. A typical monolith application consists of a simple process, a single binary, or a single package, as shown in Figure 1.3.

This single component is responsible for implementing all of the business logic the software must address. Monoliths are a good choice if you want to develop simple applications that might not necessarily be updated frequently in production. Why? Well, because monoliths have one major drawback: if your monolith becomes unstable or crashes for some reason, your entire application becomes unavailable:

Figure 1.3: A monolith application consists of one big component that contains all your software

The monolithic architecture can save you a lot of development time, and that’s perhaps the only benefit you’ll find in choosing it. However, it also has many disadvantages. Here are a few of them:

  • A failed deployment to production can break your whole application.
  • Scaling activities become difficult to achieve; if you fail to scale, your whole application might become unavailable.
  • A failure of any kind on a monolith can lead to a complete outage of your app.

In the 2010s, these drawbacks started to cause real problems. With the increase in the frequency of deployments, it became necessary to think of a new architecture capable of supporting frequent deployments and shorter update cycles, while reducing the risk of general unavailability of the application. This is why the microservices architecture was designed.

Exploring the microservices architecture

The microservices architecture consists of developing your software application as a suite of independent micro-applications. Each of these applications, which is called a microservice, has its own versioning, life cycle, environment, and dependencies. Additionally, it can have its own deployment life cycle. Each of your microservices must only be responsible for a limited number of business rules, and all your microservices, when used together, make up the application. Think of a microservice as real full-featured software on its own, with its own life cycle and versioning process.

Since microservices are only supposed to hold a subset of all the features that the entire application has, they must be accessible in order to expose their functions. You must get data from a microservice, but you might also want to push data into it. You can make your microservice accessible through widely supported protocols such as HTTP or AMQP, and they need to be able to communicate with each other.

That’s why microservices are generally built as web services that expose their functionality through well-defined APIs. While HTTP (or HTTPS) REST APIs are a popular choice due to their simplicity and widespread adoption, other protocols, such as GraphQL, AMQP, and gRPC, are gaining traction and are commonly used.

The key requirement is that a microservice provides a well-documented and discoverable API endpoint, regardless of the chosen protocol. This allows other microservices to seamlessly interact and exchange data.

This is something that greatly differs from the monolithic architecture:

Figure 1.4: A microservice architecture where different microservices communicate via the HTTP protocol

Another key aspect of the microservice architecture is that microservices need to be decoupled: if a microservice becomes unavailable or unstable, it must not affect the other microservices or the entire application’s stability. You must be able to provision, scale, start, update, or stop each microservice independently without affecting anything else. If your microservices need to work with a database engine, bear in mind that even the database must be decoupled. Each microservice should have its own database and so on. So, if the database of microservice A crashes, it won’t affect microservice B:

Figure 1.5: A microservice architecture where different microservices communicate with each other and with a dedicated database server; this way, the microservices are isolated and have no common dependencies

The key rule is to decouple as much as possible so that your microservices are fully independent. Because they are meant to be independent, microservices can also have completely different technical environments and be implemented in different languages. You can have one microservice implemented in Go, another one in Java, and another one in PHP, and all together they form one application. In the context of a microservice architecture, this is not a problem. Because HTTP is a standard, they will be able to communicate with each other even if their underlying technologies are different.

Microservices must be decoupled from other microservices, but they must also be decoupled from the operating system running them. Microservices should not operate at the host system level but at the upper level. You should be able to provision them, at will, on different machines without needing to rely on a strong dependency on the host system; that’s why microservice architectures and containers are a good combination.

If you need to release a new feature in production, you simply deploy the microservices that are impacted by the new feature version. The others can remain the same.

As you can imagine, the microservice architecture has tremendous advantages in the context of modern application development:

  • It is easier to enforce recurring production deliveries with minimal impact on the stability of the whole application.
  • You can upgrade a single microservice at a time, rather than the whole application.
  • Scaling activities are smoother since you might only need to scale specific services.

However, on the other hand, the microservice architecture has a couple of disadvantages too:

  • The architecture requires more planning and is hard to develop.
  • There are problems in managing each microservice’s dependencies.

Microservice applications are considered harder to develop, and the approach can be difficult to grasp, especially for junior developers. Dependency management can also become complex since each microservice can potentially have different dependencies.

Choosing between monolithic and microservices architectures

Building a successful software application requires careful planning, and one of the key decisions you’ll face is which architecture to use. Two main approaches dominate the scene: monoliths and microservices:

  • Monoliths: Imagine a compact, all-in-one system. That’s the essence of a monolith. Everything exists in a single codebase, making development and initial deployment simple for small projects or teams with limited resources. Additionally, updates tend to be quick for monoliths because there’s only one system to manage.
  • Microservices: Think of a complex application broken down into independent, modular components. Each service can be built, scaled, and deployed separately. This approach shines with large, feature-rich projects and teams with diverse skillsets. Microservices provide flexibility and potentially fast development cycles. However, they also introduce additional complexity in troubleshooting and security management.

Ultimately, the choice between a monolith and microservices hinges on your specific needs. Consider your project’s size, team structure, and desired level of flexibility. Don’t be swayed by trends – pick the architecture that empowers your team to develop and manage your application efficiently.

Kubernetes provides flexibility. It caters to both fast-moving monoliths and microservices, allowing you to choose the architecture that best suits your project’s needs.

In the next section, we will learn about containers and how they help microservice software architectures.

Understanding containers

Following this comparison between monolithic and microservice architectures, you should now understand that the microservice architecture is the one that best combines agility and DevOps. It is the architecture that we will discuss throughout the book because it is the one that Kubernetes manages especially well.

Now, we will move on to discuss how Docker, which is a container engine for Linux, is a good option for managing microservices. If you already know a lot about Docker, you can skip this section. Otherwise, I suggest that you read through it carefully.

Understanding why containers are good for microservices

Recall the two important aspects of the microservice architecture:

  1. Each microservice can have its own technical environment and dependencies.
  2. At the same time, it must be decoupled from the operating system it’s running on.

Let’s put the latter point aside for the moment and discuss the first one: two microservices of the same app can be developed in two different languages, or be written in the same language but depend on two different versions of its runtime. Now, let’s say that you want to deploy these two microservices on the same Linux machine. That would be a nightmare.

The reason for this is that you would have to install all the different runtime versions, as well as the dependencies, on the same host operating system, and there might be version conflicts or overlaps between the two microservices. Now, let’s imagine you want to remove one of these two microservices from the machine to deploy it on another server and clean the former machine of all the dependencies used by that microservice. Of course, if you are a talented Linux engineer, you’ll succeed in doing this. However, for most people, the risk of conflicts between the dependencies is huge, and in the end, you might just make your app unavailable while running such a nightmarish infrastructure.

There is a solution to this: you could build a machine image for each microservice and then put each microservice on a dedicated virtual machine. In other words, you refrain from deploying multiple microservices on the same machine. However, in this example, you will need as many machines as you have microservices. Of course, with the help of AWS or GCP, it’s going to be easy to bootstrap tons of servers, each of them tasked with running one and only one microservice, but it would be a huge waste of money not to share the computing power each host provides.

Similar isolation solutions exist in the container world, although not with the default container runtimes, because they don’t guarantee complete isolation between microservices. This is where the Kata Containers runtime and the Confidential Containers project come into play. These technologies provide enhanced security and isolation for containerized applications. We’ll delve deeper into these container isolation concepts later in this book.

We will learn about how containers help with isolation in the next section.

Understanding the benefits of container isolation

Container engines such as Docker and Podman play a crucial role in managing microservices. Unlike virtual machines (VMs) that require a full guest operating system, containers are lightweight units that share the host machine’s Linux kernel. This makes them much faster to start and stop than VMs.

Container engines provide a user-friendly API to build, deploy, and manage containers. Container engines don’t introduce an additional layer of virtualization. Instead, they use the built-in capabilities of the Linux kernel for process isolation, security, and resource allocation. This efficient approach makes containerization a compelling solution for deploying microservices.

The following diagram shows how containers are different from virtual machines:

Figure 1.6: The difference between virtual machines and containers

Your microservices are going to be launched on top of this layer, not directly on the host system whose sole role will be to run your containers.

Since containers are isolated, you can run as many containers as you want and have them run applications written in different languages without any conflicts. Microservice relocation becomes as easy as stopping a running container and launching another one from the same image on another machine.
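
For instance, relocating a hypothetical payments-svc container from one host to another could look as simple as the following sketch (the container, image, and registry names are purely illustrative):

# On the old host: stop and remove the running container
docker stop payments-svc && docker rm payments-svc

# On the new host: start a new container from the same image
docker run -d --name payments-svc -p 8080:8080 registry.example.com/payments-svc:1.4.2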

The usage of containers with microservices provides three main benefits:

  • It reduces the footprint on the host system.
  • It lets different microservices share the host system without conflicts.
  • It removes the coupling between the microservice and the host system.

Once a microservice has been containerized, you can eliminate its coupling with the host operating system. The microservice will only depend on the container in which it will operate. Since a container is much lighter than a real full-featured Linux operating system, it will be easy to share and deploy on many different machines. Therefore, the container and your microservice will work on any machine that is running a container engine.

The following diagram shows a microservice architecture where each microservice is wrapped by a container:

Figure 1.7: A microservice application where all microservices are wrapped by a container; the life cycle of the app becomes tied to the container, and it is easy to deploy it on any machine that is running a container engine

Containers fit well with the DevOps methodology too. By developing locally in a container, which would later be built and deployed in production, you ensure you develop in the same environment as the one that will eventually run the application.

Container engines are capable of managing not only the life cycle of a container but also an entire ecosystem around containers. They can manage networks and the intercommunication between different containers, and all of these features align particularly well with the properties of the microservice architecture that we mentioned earlier.
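
For instance, with Docker, attaching two containers to a dedicated network so they can talk to each other takes only a few commands (the network, container, and image names below are purely illustrative):

# Create a user-defined network and attach two containers to it
docker network create app-net
docker run -d --name api --network app-net registry.example.com/api:1.0
docker run -d --name web --network app-net -p 80:8080 registry.example.com/web:1.0
# On app-net, the "web" container can reach the "api" container simply by its name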

By using the cloud and containers together, you can build a very strong infrastructure to host your microservice. The cloud will give you as many machines as you want. You simply need to install a container engine on each of them, and you’ll be able to deploy multiple containerized microservices on each of these machines.

Container engines such as Docker or Podman are very nice tools on their own. However, you’ll discover that it’s hard to run them in production alone, just as they are.

Container engines excel in development environments because of their:

  • Simplicity: Container engines are easy to install and use, allowing developers to quickly build, test, and run containerized applications.
  • Flexibility: Developers can use container engines to experiment with different container configurations and explore the world of containerization.
  • Isolation: Container engines ensure isolation between applications, preventing conflicts and simplifying debugging.

However, production environments have strict requirements. Container engines alone cannot address all of these needs:

  • Scaling: Container engines (such as Docker or Podman) don’t provide built-in auto-scaling features to dynamically adapt container deployments based on resource utilization.
  • Disaster Recovery: Container engines don’t provide comprehensive disaster recovery capabilities to ensure service availability in case of outages.
  • Security: While container engines provide basic isolation, managing security policies for large-scale containerized deployments across multiple machines can be challenging.
  • Standardization: Container engines require custom scripting or integrations for interacting with external systems, such as CI/CD pipelines or monitoring tools.

While container engines excel in development environments, production deployments demand a more robust approach. Kubernetes, a powerful container orchestration platform, tackles this challenge by providing a comprehensive suite of functionalities. It manages the entire container lifecycle, from scheduling them to run on available resources to scaling deployments up or down based on demand and distributing traffic for optimal performance (load balancing). Unlike custom scripting with container engines, Kubernetes provides a well-defined API for interacting with containerized applications, simplifying integration with other tools used in production environments. Beyond basic isolation, Kubernetes provides advanced security features such as role-based access control and network policies. This allows the efficient management of containerized workloads from multiple teams or projects on the same infrastructure, optimizing resource utilization and simplifying complex deployments.

Before we dive into the Kubernetes topics, let’s discuss the basics of containers and container engines in the next section.

Container engines

A container engine acts as the interface for end users and REST clients, managing user inputs, downloading container images from container registries, extracting downloaded images onto the disk, transforming user or REST client data for interaction with the container runtime, preparing container mount points, and facilitating communication with the container runtime. In essence, container engines serve as the user-facing layer, streamlining image and container management, while the underlying container runtimes handle the intricate low-level details of container and image management.

Docker stands out as one of the most widely adopted container engines, but it’s important to note that various alternatives exist in the containerization landscape. Some notable ones are LXD, Rkt, CRI-O, and Podman.

At its core, Docker relies on the containerd container runtime, which oversees critical aspects of container management, including the container life cycle, image transfer and storage, execution, and supervision, as well as storage and network attachments. containerd, in turn, relies on components such as runc and hcsshim. Runc is a command-line tool that facilitates creating and running containers in Linux, while hcsshim plays a crucial role in the creation and management of Windows containers.

It’s worth noting that containerd is typically not meant for direct end-user interaction. Instead, container engines, such as Docker, interact with the container runtime to facilitate the creation and management of containers. The essential role of runc is evident, serving not only containerd but also being used by Podman, CRI-O, and indirectly by Docker itself.

The basics of containers

As we learned in the previous section, Docker is a well-known and widely used container engine. Let’s learn the basic terminology related to containers in general.

Container image

A container image is a kind of template used by container engines to launch containers. A container image is a self-contained, executable package that encapsulates an application and its dependencies. It includes everything needed to run the software, such as code, runtime, libraries, and system tools. Container images are created from a Dockerfile or Containerfile, which specifies the build steps. Container images are stored in image repositories and shared through container registries such as Docker Hub, making them a fundamental component of containerization.

Container

A container can be considered a running instance of a container image. Containers are like modular shipping containers for applications. They bundle an application’s code, dependencies, and runtime environment into a single, lightweight package. Containers run consistently across different environments because they include everything needed. Each container runs independently, preventing conflicts with other applications on the same system. Containers share the host operating system’s kernel, making them faster to start and stop than virtual machines.

Container registry

A container registry is a centralized repository for storing and sharing container images. It acts as a distribution mechanism, allowing users to push and pull images to and from the registry. Popular public registries include Docker Hub, Red Hat Quay, Amazon Elastic Container Registry (ECR), Azure Container Registry, Google Container Registry, and GitHub Container Registry. Organizations often use private registries to securely store and share custom images. Registries play a crucial role in the container ecosystem, facilitating collaboration and efficient management of containerized applications.
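
As a small illustration, publishing an image to a registry typically means tagging it with the registry’s hostname and pushing it there (the private registry hostname below is hypothetical):

# Pull a public image from Docker Hub
docker pull nginx:1.27

# Tag it for a private registry and push it there
docker tag nginx:1.27 registry.example.com/web/nginx:1.27
docker push registry.example.com/web/nginx:1.27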

Dockerfile or Containerfile

A Dockerfile or Containerfile is a text document that contains a set of instructions for building a container image. It defines the base image, sets up the environment, copies the application code, installs the dependencies, and configures the runtime settings. Dockerfiles or Containerfiles provide a reproducible and automated way to create consistent images, enabling developers to version and share their application configurations.

A sample Dockerfile can be seen in the following code snippet:

# syntax=docker/dockerfile:1
 
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
EXPOSE 3000

And, here’s a line-by-line explanation of the provided Dockerfile:

  1. # syntax=docker/dockerfile:1: This line defines the Dockerfile syntax version used to build the image. In this case, it specifies version 1 of the standard Dockerfile syntax.
  2. FROM node:18-alpine: This line defines the base image for your container. It instructs the container engine to use the official Node.js 18 image with the Alpine Linux base. This provides a lightweight and efficient foundation for your application.
  3. WORKDIR /app: This line sets the working directory within the container. Here, it specifies /app as the working directory. This is where subsequent commands in the Dockerfile will be executed relative to.
  4. COPY . .: This line copies all files and directories from the current context (the directory where you have your Dockerfile) into the working directory (/app) defined in the previous step. This essentially copies your entire application codebase into the container.
  5. RUN yarn install --production: This line instructs the container engine to execute a command within the container. In this case, it runs yarn install --production. This command uses the yarn package manager to install all production dependencies listed in your package.json file. The --production flag ensures that only production dependencies are installed, excluding development dependencies.
  6. CMD ["node", "src/index.js"]: This line defines the default command to be executed when the container starts. Here, it specifies an array with two elements: “node” and “src/index.js”. This tells Docker to run the Node.js interpreter (node) and execute the application’s entry point script (src/index.js) when the container starts up.
  7. EXPOSE 3000: This line exposes a port on the container. Here, it exposes port 3000 within the container. This doesn’t map the port to the host machine by default, but it allows you to do so later when running the container with the -p flag (e.g., docker run -p 3000:3000 my-image). Exposing port 3000 suggests your application might be listening on this port for incoming connections.

    IMPORTANT NOTE

    To build the container image, you can use a supported container engine (such as Docker or Podman) or a container build tool, such as Buildah or kaniko.
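
For example, building and running the image above might look like the following (the my-node-app tag is just an illustrative name):

# Build the image from the Dockerfile in the current directory
docker build -t my-node-app:1.0 .
# (or: podman build -t my-node-app:1.0 .)

# Run it and publish the application's port 3000 on the host
docker run -d -p 3000:3000 my-node-app:1.0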

Docker Compose or Podman Compose

Docker Compose is a tool for defining and running multi-container applications. It uses a YAML file to configure the services, networks, and volumes required for an application, allowing developers to define the entire application stack in a single file. Docker Compose or Podman Compose simplifies the orchestration of complex applications, making it easy to manage multiple containers as a single application stack.

The following compose.yaml file will spin up two containers for a WordPress application stack using a single docker compose or podman compose command:

# compose.yaml
services:
  db:
    image: docker.io/library/mariadb
    command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
    networks:
      - wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - 8081:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    networks:
      - wordpress
volumes:
  db_data:
networks:
  wordpress: {}
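
With this file saved as compose.yaml, the whole stack can be brought up with a single command, for example:

docker compose up -d
# or, with Podman
podman compose up -d

The WordPress site is then reachable on port 8081 of the host, as defined in the ports mapping above.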

In the next section, we will learn how Kubernetes can efficiently orchestrate all these container operations.

How can Kubernetes help you to manage your containers?

In this section, we will focus on Kubernetes, which is the purpose of this book.

Kubernetes – designed to run workloads in production

If you open the official Kubernetes website (at https://kubernetes.io), the title you will see is Production-Grade Container Orchestration:

Figure 1.8: The Kubernetes home page showing the header and introducing Kubernetes as a production container orchestration platform

Those four words perfectly sum up what Kubernetes is: it is a container orchestration platform for production. Kubernetes does not aim to replace Docker or any of the features of Docker or other container engines; rather, it aims to manage the clusters of machines running container runtimes. When working with Kubernetes, you use both Kubernetes and the full-featured standard installations of container runtimes.

The title mentions production. Indeed, the concept of production is central to Kubernetes: it was conceived and designed to answer modern production needs. Managing production workloads is different today compared to what it was in the 2000s. Back in the 2000s, your production workload would consist of just a few bare-metal servers, if not even one on-premises. These servers mostly ran monoliths directly installed on the host Linux system. However, today, thanks to public cloud platforms such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), anyone can now get hundreds or even thousands of machines in the form of instances or virtual machines with just a few clicks. Even better, we no longer deploy our applications on the host system but as containerized microservices on top of Docker Engine instead, thereby reducing the footprint of the host system.

A problem will arise when you must manage Docker installations on each of these virtual machines on the cloud. Let’s imagine that you have 10 (or 100 or 1,000) machines launched on your preferred cloud and you want to achieve a very simple task: deploy a containerized Docker app on each of these machines.

You could do this by running the docker run command on each of your machines. It would work, but of course, there is a better way to do it. And that’s by using a container orchestrator such as Kubernetes. To give you an extremely simplified vision of Kubernetes, it is a REST API that keeps a registry of your machines executing a Docker daemon.

Again, this is an extremely simplified definition of Kubernetes. In fact, it’s not made of a single centralized REST API, because as you might have gathered, Kubernetes was built as a suite of microservices.
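To make this preview slightly more concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the image name is hypothetical); instead of running docker run on every machine, you declare the desired number of replicas once, and Kubernetes schedules the containers across the cluster for you. Deployments are covered in detail later in this book.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 10               # run ten copies of the container across the cluster
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: registry.example.com/hello-app:1.0
          ports:
            - containerPort: 8080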

Also note that while Kubernetes excels at managing containerized workloads, it doesn’t replace virtual machines (VMs) entirely. VMs can still be valuable for specific use cases, such as running legacy applications or software with complex dependencies that are difficult to containerize. However, Kubernetes is evolving to bridge the gap between containers and VMs.

KubeVirt – a bridge between containers and VMs

KubeVirt is a project that extends Kubernetes’ ability to manage virtual machines using the familiar Kubernetes API. This allows users to leverage the power and flexibility of Kubernetes for VM deployments alongside containerized applications. KubeVirt embraces Infrastructure as Code (IaC) principles, enabling users to define and manage VMs declaratively within their Kubernetes manifests. This simplifies VM management and integrates it seamlessly into existing Kubernetes workflows.

By incorporating VMs under the Kubernetes umbrella, KubeVirt provides a compelling approach for organizations that require a hybrid environment with both containers and VMs. It demonstrates the ongoing evolution of Kubernetes as a platform for managing diverse workloads, potentially leading to a more unified approach to application deployment and management.

We have learned about containers and the complications of managing and orchestrating containers at a large scale. In the next section, we will learn about the history and evolution of Kubernetes.

Understanding the history of Kubernetes

Now, let’s discuss the history of the Kubernetes project. It will be useful for you to understand the context in which the Kubernetes project started and the people who are keeping this project alive.

Understanding how and where Kubernetes started

Since its founding in 1998, Google has gained huge experience in managing highly demanding workloads at scale, especially container-based workloads. Since the mid-2000s, Google has been at the forefront of developing its applications as Linux containers. Well before Docker simplified container usage for the general public, Google recognized the advantages of containerization, giving rise to an internal project known as Borg. To enhance the architecture of Borg, making it more extensible and robust, Google initiated another container orchestrator project called Omega. Subsequently, several improvements introduced by Omega found their way into the Borg project.

Kubernetes was born as an internal project at Google, and the first commit of Kubernetes was made in 2014 by Brendan Burns, Joe Beda, and Craig McLuckie, among others. However, Google didn’t open source Kubernetes on its own. Individuals such as Clayton Coleman, who was working at Red Hat at the time, played a crucial role in championing the idea of open sourcing Kubernetes and ensuring its success as a community-driven project. Kelsey Hightower, an early Kubernetes champion at CoreOS, became a prominent voice advocating for the technology. Through his work as a speaker, writer, and co-founder of KubeCon, he significantly boosted Kubernetes’ adoption and community growth.

Today, in addition to Google, Red Hat, Amazon, Microsoft, and other companies are also contributing to the Kubernetes project actively.

IMPORTANT NOTE

Borg is not the ancestor of Kubernetes because the project is not dead and is still in use at Google. It would be more appropriate to say that a lot of ideas from Borg were reused to make Kubernetes. Bear in mind that Kubernetes is not Borg or Omega. Borg was built in C++ and Kubernetes in Go. In fact, they are two entirely different projects, but one is heavily inspired by the other. This is important to understand: Borg and Omega are two internal Google projects. They were not built for the public.

Kubernetes was developed with the experience gained by Google to manage containers in production. Most importantly, it inherited Borg’s and Omega’s ideas, concepts, and architectures. Here is a brief list of ideas and concepts taken from Borg and Omega, which have now been implemented in Kubernetes:

  • The concept of Pods to manage your containers: Kubernetes uses a logical object, called a pod, to create, update, and delete your containers.
  • Each pod has its own IP address in the cluster.
  • There are distributed components that all watch the central Kubernetes API to retrieve the cluster state.
  • There is internal load balancing across Pods via Services.
  • Labels and selectors are metadata that are used together to manage and orchestrate resources in Kubernetes.

That’s why Kubernetes is so powerful when it comes to managing containers in production at scale. In fact, the concepts you’ll learn from Kubernetes are older than Kubernetes itself. Although Kubernetes is a young project, it was built on solid foundations.
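
To give a first taste of these concepts, here is a minimal sketch of a Pod manifest carrying labels (the names and image are purely illustrative); Pods, labels, and selectors are explored in depth in later chapters:

apiVersion: v1
kind: Pod
metadata:
  name: api-pod
  labels:
    app: api          # labels like these are matched by selectors
    tier: backend
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.3

# A selector then picks Pods by their labels, for example:
# kubectl get pods -l app=api,tier=backend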

Who manages Kubernetes today?

Kubernetes is no longer maintained by Google because Google handed over operational control of the Kubernetes project to the Cloud Native Computing Foundation (CNCF) on August 29, 2018. CNCF is a non-profit organization that aims to foster and sustain an open ecosystem of cloud-native technologies.

Google is a founding member of CNCF, along with companies such as Cisco, Red Hat, and Intel. The Kubernetes source code is hosted on GitHub and is an extremely active project on the platform. The Kubernetes code is under Apache License version 2.0, which is a permissive open source license. You won’t have to pay to use Kubernetes, and if you are good at coding with Go, you can even contribute to the code.

Where is Kubernetes today?

In the realm of container orchestration, Kubernetes faces competition from various alternatives, including both open-source solutions and platform-specific offerings. Some notable contenders include:

  • Apache Mesos
  • HashiCorp Nomad
  • Docker Swarm
  • Amazon ECS

While each of these orchestrators comes with its own set of advantages and drawbacks, Kubernetes stands out as the most widely adopted and popular choice in the field.

Kubernetes has won the fight for popularity and adoption and has become the standard way of deploying container-based workloads in production. As its immense growth has made it one of the hottest topics in the IT industry, it has become crucial for cloud providers to come up with a Kubernetes offering as part of their services. Therefore, Kubernetes is supported almost everywhere now.

The following Kubernetes-based services can help you get a Kubernetes cluster up and running with just a few clicks:

  • Google Kubernetes Engine (GKE) on Google Cloud Platform
  • Amazon Elastic Kubernetes Service (EKS) on AWS
  • Azure Kubernetes Service on Microsoft Azure
  • Alibaba Cloud Container Service for Kubernetes (ACK)

It’s not just about the cloud offerings. It’s also about the Platform-as-a-Service market. Red Hat started incorporating Kubernetes into its OpenShift container platform with the release of OpenShift version 3 in 2015. This marked a significant shift in OpenShift’s architecture, moving from its original design to a Kubernetes-based container orchestration system, providing users with enhanced container management capabilities and offering a complete set of enterprise tools to build, deploy, and manage containers entirely on top of Kubernetes. In addition to this, other projects, such as Rancher, were built as Kubernetes distributions to offer a complete set of tools around the Kubernetes orchestrator, whereas projects such as Knative manage serverless workloads with the Kubernetes orchestrator.

IMPORTANT NOTE

AWS is an exception because it has two container orchestrator services. The first one is Amazon ECS, which is entirely made by AWS. The second one is Amazon EKS, which was released later than ECS and is a complete Kubernetes offering on AWS. These services are not the same, so do not be misguided by their similar names.

Where is Kubernetes going?

Kubernetes isn’t stopping at containers! It’s evolving to manage a wider range of workloads. KubeVirt extends its reach to virtual machines, while integration with AI/ML frameworks such as TensorFlow could allow Kubernetes to orchestrate even machine learning tasks. The future of Kubernetes is one of flexibility, potentially becoming a one-stop platform for managing diverse applications across containers, VMs, and even AI/ML workflows.

Learning Kubernetes today is one of the smartest decisions you can make if you want to manage cloud-native applications in production. Kubernetes is evolving rapidly, and there is no reason to believe its growth will stop.

By mastering this wonderful tool, you’ll get one of the hottest skills being searched for in the IT industry today. We hope you are now convinced!

In the next section, we will learn how Kubernetes can simplify operations.

Exploring the problems that Kubernetes solves

Now, why is Kubernetes such a good fit for DevOps teams? Here’s the connection: Kubernetes shines as a container orchestration platform, managing the deployment, scaling, and networking of containerized applications. Containers are lightweight packages that bundle an application with its dependencies, allowing faster and more reliable deployments across different environments. Users leverage Kubernetes for several reasons:

  • Automation: Kubernetes automates many manual tasks associated with deploying and managing containerized applications, freeing up time for developers to focus on innovation.
  • Scalability: Kubernetes facilitates easy scaling of applications up or down based on demand, ensuring optimal resource utilization.
  • Consistency: Kubernetes ensures consistent deployments across different environments, from development to production, minimizing configuration errors and streamlining the delivery process.
  • Flexibility: Kubernetes is compatible with various tools and technologies commonly used by DevOps teams, simplifying integration into existing workflows.

You can imagine that launching containers on your local machine or a development environment is not going to require the same level of planning as launching these same containers on remote machines, which could face millions of users. Problems specific to production will arise, and Kubernetes is a great way to address these problems when using containers in production:

  • Ensuring high availability
  • Handling release management and container deployments
  • Autoscaling containers
  • Network isolation
  • Role-Based Access Control (RBAC)
  • Stateful workloads
  • Resource management

Ensuring high availability

High availability is the central principle of production. This means that your application should always remain accessible and should never go down. Of course, this is a utopian goal; even the biggest companies experience service outages. However, you should always keep it in mind as your target. Kubernetes includes a whole battery of features to make your containers highly available by replicating them on several host machines and monitoring their health regularly and frequently.

When you deploy containers, the accessibility of your application directly depends on the health of your containers. Let’s imagine that, for some reason, a container running one of your microservices becomes inaccessible; with Docker alone, you cannot guarantee that the container is automatically terminated and recreated to restore the service. With Kubernetes, this becomes possible: Kubernetes helps you design applications that repair themselves by performing automated tasks such as health checking and container replacement.

If one machine in your cluster were to fail, all the containers running on it would disappear. Kubernetes would immediately notice that and reschedule all the containers on another machine. In this way, your applications will become highly available and fault tolerant as well.
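To make this more concrete, here is a minimal sketch of the kind of manifest you could hand to Kubernetes. The names myapp, myapp-container, and the /healthz endpoint are hypothetical placeholders; the manifest asks for three replicas of a container and declares a liveness probe so that Kubernetes replaces any container that stops responding:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                    # hypothetical application name
spec:
  replicas: 3                    # keep three copies running across the cluster
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:1.0.0       # hypothetical container image
        ports:
        - containerPort: 8080
        livenessProbe:           # Kubernetes restarts the container if this check fails
          httpGet:
            path: /healthz       # assumed health endpoint exposed by the application
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5

If a node disappears, the Deployment controller notices that fewer than three replicas are running and schedules replacements on the remaining nodes. We will cover Deployments and probes in detail in later chapters.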

Release management and container deployment

Deployment management is another of these production-specific problems that Kubernetes solves. The process of deployment consists of updating your application in production to replace an old version of a given microservice with a new version.

Deployments in production are always complex because you have to update containers that are actively serving requests from end users. If you get the update wrong, the consequences can be severe: the application may become unstable or inaccessible, which is why you should always be able to quickly revert to the previous version of your application by running a rollback. The challenge of deployment is that it needs to be performed with as little visibility to the end user, and as little friction, as possible.

Whenever you release a new version of the application, there are multiple processes involved, as follows:

  1. Update the Dockerfile or Containerfile with the latest application info (if any).
  2. Build a new Docker container image with the latest version of the application.
  3. Push the new container image to the container registry.
  4. Pull the new container image from the container registry to the staging/UAT/production system (Docker host).
  5. Stop and delete the existing (old) version of the application container running on the system.
  6. Launch the new container image with the new version of the application container image in the staging/UAT/production system.

Refer to the following image to understand the high-level flow in a typical scenario (please note that this is an ideal scenario because, in an actual environment, you might be using different and isolated container registries for development, staging, and production environments).

Figure 1.9: High-level workflow of container management

IMPORTANT NOTE

The container build process has absolutely nothing to do with Kubernetes: it is purely a container image management task. Kubernetes comes into play later, when you deploy new containers based on the newly built image.

Without Kubernetes, you’ll have to run all these operations, including docker pull, docker stop, docker rm, and docker run, on the machine where you want to deploy a new version of the container. Then, you will have to repeat this operation on each server that runs a copy of the container. It works, but it is extremely tedious because nothing is automated. And guess what? Kubernetes can automate this for you.

Kubernetes has features that allow it to manage deployments and rollbacks of Docker containers, and this will make your life a lot easier when responding to this problem. With a single command, you can ask Kubernetes to update your containers on all of your machines as follows:

$ kubectl set image deploy/myapp myapp-container=myapp:1.0.0

On a real Kubernetes cluster, this command updates the container called myapp-container, which runs as part of the Deployment called myapp, to the 1.0.0 image tag on every single machine where that container runs.

Whether it needs to update one container running on one machine or millions spread across multiple datacenters, this command works the same way. Even better, it does so while preserving high availability.

Remember that the goal is always to meet the requirement of high availability; a deployment should not cause your application to crash or cause a service disruption. Kubernetes is natively capable of managing deployment strategies such as rolling updates, which aim to prevent service interruptions.

Additionally, Kubernetes keeps a history of the revisions of a specific Deployment and allows you to revert to a previous version with just one command. It’s an incredibly powerful tool that lets you update a whole cluster of containers with a single command.
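As an illustration, the rollout behavior can be declared directly in the Deployment manifest. The following is a minimal sketch, reusing the hypothetical myapp names from the previous example, of a rolling update strategy that replaces containers a few at a time instead of all at once:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1          # at most one replica may be down during the rollout
      maxSurge: 1                # at most one extra replica may be created temporarily
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:1.0.0       # changing this tag (or running kubectl set image) triggers a rolling update

Because each rollout is recorded as a revision, reverting a bad release is as simple as running kubectl rollout undo deploy/myapp.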

Autoscaling containers

Scaling is another production-specific problem, one that public clouds such as Amazon Web Services (AWS) and Google Cloud Platform (GCP) have made widely accessible. Scaling is the ability to adapt your computing power to the load you are facing, again to meet the requirements of high availability and load balancing. Never forget that the goal is to prevent outages and downtime.

When your production machines are facing a traffic spike and one of your containers is no longer able to cope with the load, you need to find a way to scale the container workloads efficiently. There are two scaling methods:

  • Vertical scaling: This allows your container to use more computing power offered by the host machine.
  • Horizontal scaling: You can duplicate your container in the same or another machine, and you can load-balance the traffic between the multiple containers.

Docker alone cannot solve this problem; however, when you manage your containers with Kubernetes, it becomes possible.

Figure 1.10: Vertical scaling versus horizontal scaling for pods

Kubernetes can manage both vertical and horizontal scaling automatically. It does this by letting your containers consume more computing power from the host or by creating additional containers that can be deployed on the same or another node in the cluster. And if your Kubernetes cluster is not capable of handling more containers because all your nodes are full, Kubernetes will even be able to launch new virtual machines by interfacing with your cloud provider in a fully automated and transparent manner by using a component called a cluster autoscaler.

IMPORTANT NOTE

The cluster autoscaler only works if the Kubernetes cluster is deployed on a supported cloud provider (a private or public cloud).

These goals cannot be achieved without a container orchestrator. The reason is simple: you cannot afford to perform these tasks by hand. In keeping with DevOps culture and agility, you should seek to automate them so that your applications can repair themselves, be fault tolerant, and be highly available.

Conversely, as well as scaling out your containers or cluster, you must be able to decrease the number of containers when the load drops, so that your resources match demand whether it is rising or falling. Again, Kubernetes can do this, too.
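For horizontal scaling, the usual building block is the HorizontalPodAutoscaler. Here is a minimal sketch, targeting the hypothetical myapp Deployment used earlier, that adds or removes replicas to keep average CPU utilization around 70%:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa                # hypothetical autoscaler name
spec:
  scaleTargetRef:                # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2                 # never run fewer than two replicas
  maxReplicas: 10                # never run more than ten replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU usage exceeds 70%

Vertical scaling is handled by a separate add-on, the Vertical Pod Autoscaler, while adding or removing machines is the job of the cluster autoscaler mentioned in the note above.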

Network isolation

In a world of millions of users, ensuring secure communication between containers is paramount. Traditional approaches can involve complex manual configuration. This is where Kubernetes shines:

  • Pod networking: Kubernetes creates a virtual network overlay for your Pods. Containers within the same Pod share a network namespace and can communicate directly over localhost, while every Pod receives its own IP address on the cluster network. This model gives each workload a clear network identity and is the foundation on which traffic isolation is built.
  • Network policies: Kubernetes allows you to define granular network policies that restrict how Pods can communicate. You can specify the allowed ingress (incoming traffic) and egress (outgoing traffic) for Pods, ensuring they only access the resources they need, as shown in the sketch after this list. This approach simplifies network configuration and strengthens security in production environments.
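As a minimal sketch, the following NetworkPolicy (the app: backend and app: frontend labels are hypothetical) only allows Pods labeled frontend to open connections to Pods labeled backend on port 8080, and denies all other ingress traffic to the backend Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
spec:
  podSelector:                      # the Pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend             # only frontend Pods may connect
    ports:
    - protocol: TCP
      port: 8080

Note that such policies are only enforced if your cluster’s network plugin supports them, which most production-grade plugins do.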

Role-Based Access Control (RBAC)

Managing access to container resources in a production environment with multiple users is crucial. Here’s how Kubernetes empowers secure access control:

  • User roles: Kubernetes defines user roles that specify permissions for accessing and managing container resources. These roles can be assigned to individual users or groups, allowing granular control over who can perform specific actions (such as viewing pod logs and deploying new containers).
  • Service accounts: Kubernetes utilizes service accounts to provide identities for pods running within the cluster. These service accounts can be assigned roles, ensuring pods only have the access they require to function correctly.

This multi-layered approach of using user roles and service accounts strengthens security and governance in production deployments.
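To give you an idea of what this looks like, here is a minimal sketch of a Role that grants read-only access to Pods and their logs in a single namespace, bound to a hypothetical user named jane (the dev namespace is also a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev                   # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]                  # "" refers to the core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods-binding          # hypothetical binding name
subjects:
- kind: User
  name: jane                       # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

The same pattern applies to service accounts: replace the User subject with a ServiceAccount, and Pods using that account inherit exactly the permissions the Role grants.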

Stateful workloads

While containers are typically stateless (their data doesn’t persist after they stop), some applications require persistent storage. Kubernetes provides solutions to manage stateful workloads: Persistent Volumes (PVs) and Persistent Volume Claims (PVCs). Kubernetes introduces the concept of PVs, which are persistent storage resources provisioned by the administrator (e.g., host directory, cloud storage). Applications can then request storage using PVCs. This abstraction decouples storage management from the application, allowing containers to leverage persistent storage without worrying about the underlying details.
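Here is a minimal sketch of a PVC and a Pod that mounts it; the names, the 1 GiB size, and the postgres image are hypothetical examples, and in a real cluster the password would come from a Secret rather than a plain value:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data                 # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce                  # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi                 # request 1 GiB of persistent storage
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-db                   # hypothetical Pod name
spec:
  containers:
  - name: database
    image: postgres:16             # example image; any stateful workload follows the same pattern
    env:
    - name: POSTGRES_PASSWORD
      value: example-password      # placeholder; use a Secret in real deployments
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-data        # bind the Pod to the claim above

If the Pod is rescheduled to another node, the claim, and therefore the data, follows it, provided the underlying storage supports it.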

Resource management

Efficiently allocating resources to containers becomes critical in production to optimize performance and avoid resource bottlenecks. Kubernetes provides functionalities for managing resources:

  • Resource quotas: Kubernetes allows you to set resource quotas on namespaces, capping the total CPU, memory, and other resources that the workloads in a namespace can request or consume. This ensures fair resource allocation and prevents a single team or application from consuming excessive resources that could starve other applications.
  • Resource limits and requests: When defining your workloads, you can specify resource requests (minimum guaranteed resources) and resource limits (maximum allowed resources) for containers, as shown in the sketch after this list. These ensure your application has the resources it needs to function properly while preventing uncontrolled resource usage.
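As a minimal sketch (the Pod name, image, and values are hypothetical), a container declares its requests and limits like this; the scheduler uses the requests to pick a node with enough free capacity, and the limits cap what the container may consume at runtime:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                  # hypothetical Pod name
spec:
  containers:
  - name: myapp-container
    image: myapp:1.0.0             # hypothetical container image
    resources:
      requests:
        cpu: "250m"                # guaranteed quarter of a CPU core
        memory: "256Mi"            # guaranteed 256 MiB of memory
      limits:
        cpu: "500m"                # throttled above half a CPU core
        memory: "512Mi"            # killed (OOM) if it exceeds this amount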

We will learn about all of these features in the upcoming chapters.

Should we use Kubernetes everywhere? Let’s discuss that in the next section.

When and where is Kubernetes not the solution?

Kubernetes has undeniable benefits; however, it is not always advisable to use it as a solution. Here, we have listed several cases where another solution might be more appropriate:

  • Container-less architecture: If you do not use containers at all, Kubernetes won’t be of any use to you.
  • A very small number of microservices or applications: Kubernetes stands out when it must manage many containers. If your app consists of two to three microservices, a simpler orchestrator might be a better fit.

Summary

This first chapter served as a broad introduction. We covered a lot of subjects, such as monoliths, microservices, Docker containers, cloud computing, and Kubernetes, and we discussed how the project came to life. You should now have a global vision of how Kubernetes can be used to manage your containers in production. You have also learned why Kubernetes was introduced and how it became a well-known container orchestration tool.

In the next chapter, we will discuss the process Kubernetes follows to launch a container. You will discover that you can issue commands to Kubernetes, which it interprets as instructions to run containers. We will list and explain each component that makes up a Kubernetes cluster and its role in the cluster as a whole, with a focus on the distinction between master nodes, worker nodes, and control plane components.

Further reading

Join our community on Discord

Join our community’s Discord space for discussions with the authors and other readers:

https://packt.link/cloudanddevops


Key benefits

  • Comprehensive coverage of Kubernetes concepts - from deployment to cluster and resource management
  • Gain insights into the latest cloud-native trends and how they impact your Kubernetes deployments
  • Tap into the collective wisdom of acclaimed Kubernetes experts

Description

Kubernetes has become the go-to orchestration platform for containerized applications. As a Kubernetes user, you know firsthand how powerful yet complex this tool can be. The Kubernetes Bible cuts through the complexity, offering hands-on examples and expert advice to conquer containerization challenges. With this new edition, you will master cutting-edge security practices, deploy seamlessly, and scale effortlessly, ensuring unwavering service availability. You will gain the expertise to craft production-grade applications, secure development environments, navigate complex deployments with ease, and become a security maestro. You will be able to optimize network communication and data management across major cloud platforms. Additionally, this book dives deep into these challenges, offering solutions such as multi-container Pods, advanced security techniques, and expert networking guidance. You will also explore persistent storage advancements, cloud-specific cluster management updates, and best practices for traffic routing. By the end of this comprehensive guide, you will possess the skills and knowledge to orchestrate your containerized applications with precision, ensuring their optimal performance and scalability. Stop settling for basic container management. Order your copy today and orchestrate your containers to greatness.

Who is this book for?

Whether you're a software developer, DevOps engineer, or an existing Kubernetes user, this Kubernetes book is your comprehensive guide to mastering container orchestration and services in the cloud. It empowers you to overcome challenges in building secure, scalable, and cloud-native applications using Kubernetes. With a foundational understanding of Kubernetes, Docker, and leading cloud providers (AWS, Azure, GCP) recommended, this book equips you with the knowledge and skills needed to navigate complex deployments and master core Kubernetes concepts and architecture.

What you will learn

  • Secure your Kubernetes clusters with advanced techniques
  • Implement scalable deployments and autoscaling strategies
  • Design and build production-grade containerized applications
  • Manage Kubernetes effectively on major cloud platforms (GKE, EKS, AKS)
  • Utilize advanced networking and service management practices
  • Use Helm charts and Kubernetes Operators for robust security measures
  • Optimize in-cluster traffic routing with advanced configurations
  • Enhance security with techniques like Immutable ConfigMaps and RBAC
Product Details

Publication date: Nov 29, 2024
Length: 720 pages
Edition: 2nd
Language: English
ISBN-13: 9781835464717


Product Details

Publication date : Nov 29, 2024
Length: 720 pages
Edition : 2nd
Language : English
ISBN-13 : 9781835464717
Languages :
Tools :


Table of Contents

23 Chapters

  1. Kubernetes Fundamentals
  2. Kubernetes Architecture – from Container Images to Running Pods
  3. Installing Your First Kubernetes Cluster
  4. Running Your Containers in Kubernetes
  5. Using Multi-Container Pods and Design Patterns
  6. Namespaces, Quotas, and Limits for Multi-Tenancy in Kubernetes
  7. Configuring Your Pods Using ConfigMaps and Secrets
  8. Exposing Your Pods with Services
  9. Persistent Storage in Kubernetes
  10. Running Production-Grade Kubernetes Workloads
  11. Using Kubernetes Deployments for Stateless Workloads
  12. StatefulSet – Deploying Stateful Applications
  13. DaemonSet – Maintaining Pod Singletons on Nodes
  14. Working with Helm Charts and Operators
  15. Kubernetes Clusters on Google Kubernetes Engine
  16. Launching a Kubernetes Cluster on Amazon Web Services with Amazon Elastic Kubernetes Service
  17. Kubernetes Clusters on Microsoft Azure with Azure Kubernetes Service
  18. Security in Kubernetes
  19. Advanced Techniques for Scheduling Pods
  20. Autoscaling Kubernetes Pods and Nodes
  21. Advanced Kubernetes: Traffic Management, Multi-Cluster Strategies, and More
  22. Other Books You May Enjoy
  23. Index
