The Kubernetes Workshop: Learn how to build and run highly scalable workloads on Kubernetes

By Zachary Arnold, Sahil Dua, Wei Huang, Faisal Masood, Mélony Qin, Mohammed Abu Taleb +2 more

4.9 (9 Ratings) | eBook | Sep 2020 | 780 pages | 1st Edition
2. An Overview of Kubernetes

Overview

In this chapter, we will have our first hands-on introduction to Kubernetes. This chapter will give you a brief overview of the different components of Kubernetes and how they work together. We will also try our hand at working with some fundamental Kubernetes components.

By the end of this chapter, you will have a single-node Minikube environment set up where you can run many of the exercises and activities in this book. You will be able to understand the high-level architecture of Kubernetes and identify the roles of the different components. You will also learn the basics required to migrate containerized applications to a Kubernetes environment.

Introduction

We ended the previous chapter by providing a brief and abstract introduction to Kubernetes, as well as some of its advantages. In this chapter, we will provide you with a much more concrete high-level understanding of how Kubernetes works. First, we will walk you through how to install Minikube, which is a handy tool that creates a single-node cluster and provides a convenient learning environment for Kubernetes. Then, we will take a 10,000-foot view of all the components, including their responsibilities and how they interact with each other. After that, we will migrate the Docker application that we built in the previous chapter to Kubernetes and illustrate how it can enjoy the benefits that Kubernetes affords, such as running multiple replicas and performing version updates. Finally, we will explain how the application responds to external and internal traffic.

Having an overview of Kubernetes is important before we dive deeper into the different aspects of it so that when we learn more specifics about the different aspects, you will have an idea of where they fit in the big picture. Also, when we go even further and explore how to use Kubernetes to deploy applications in a production environment, you will have an idea of how everything is taken care of in the background. This will also help you with optimization and troubleshooting.

Setting up Kubernetes

Had you asked the question, "How do you easily install Kubernetes?" three years ago, it would have been hard to give a compelling answer. Embarrassing, but true. Kubernetes is a sophisticated system, and getting it installed and managing it well isn't an easy task.

However, as the Kubernetes community has expanded and matured, more and more user-friendly tools have emerged. As of today, based on your requirements, there are a lot of options to choose from:

  • If you are using physical (bare-metal) servers or virtual machines (VMs), Kubeadm is a good fit.
  • If you're running on cloud environments, Kops and Kubespray can ease Kubernetes installation, as well as integration with the cloud providers. In fact, we will teach you how to deploy Kubernetes on AWS using Kops in Chapter 11, Build Your Own HA Cluster, and we will take another look at the various options we can use to set up Kubernetes.
  • If you want to drop the burden of managing the Kubernetes control plane (which we will learn about later in this chapter), almost all cloud providers have their Kubernetes managed services, such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and IBM Kubernetes Service (IKS).
  • If you just want a playground to study Kubernetes in, Minikube and Kind can help you spin up a Kubernetes cluster in minutes.

We will use Minikube extensively throughout this book as a convenient learning environment. But before we proceed to the installation process, let's take a closer look at Minikube itself.

An Overview of Minikube

Minikube is a tool that can be used to set up a single-node cluster, and it provides handy commands and parameters to configure the cluster. It primarily aims to provide a local testing environment. It packages all the core components of Kubernetes into a VM that is installed on your host machine in one go. This allows it to support any operating system, as long as there is a virtualization tool (also known as a hypervisor) pre-installed. The following are the most common hypervisors supported by Minikube:

  • VirtualBox (works for all operating systems)
  • KVM (Linux-specific)
  • Hyperkit (macOS-specific)
  • Hyper-V (Windows-specific)

Regarding the required hardware resources, the minimum requirement is 2 GB RAM and any dual-core CPU that supports virtualization (Intel VT or AMD-V), but you will, of course, need a more powerful machine if you are trying out heavier workloads.

Just like any other modern software, Kubernetes provides a handy command-line client called kubectl that allows users to interact with the cluster conveniently. In the next exercise, we will set up Minikube and use some basic kubectl commands. We will go into more detail about kubectl in the next chapter.

Exercise 2.01: Getting Started with Minikube and Kubernetes Clusters

In this exercise, we will use Ubuntu 20.04 as the base operating system to install Minikube, which lets us start a single-node Kubernetes cluster easily. Once the Kubernetes cluster has been set up, you should be able to check its status and use kubectl to interact with it:

Note

Since this exercise deals with software installations, you will need to be logged in as root/superuser. A simple way to switch to being a root user is to run the following command: sudo su -.

In step 9 of this exercise, we will create a regular user and then switch back to it.

  1. First, ensure that VirtualBox is installed. You can confirm this by using the following command:
    which VirtualBox

    You should see the following output:

    /usr/bin/VirtualBox

    If VirtualBox has been successfully installed, the which command should show the path of the executable, as shown in the preceding output. If not, then please ensure that you have installed VirtualBox as per the instructions provided in the Preface.

  2. Download the Minikube standalone binary by using the following command:
    curl -Lo minikube https://github.com/kubernetes/minikube/releases/download/<version>/minikube-<ostype-arch> && chmod +x minikube

    In this command, <version> should be replaced with a specific version, such as v1.5.2 (which is the version we will use in this chapter) or the latest. Depending on your host operating system, <ostype-arch> should be replaced with linux-amd64 (for Ubuntu) or darwin-amd64 (for macOS).

    Note

    To ensure compatibility with the commands provided in this book, we recommend that you install Minikube version v1.5.2.

    You should see the following output:

    Figure 2.1: Downloading the Minikube binary

    The preceding command contains two parts: the first command, curl, downloads the Minikube binary, while the second command, chmod, changes the permission to make it executable.
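To make the placeholder substitution concrete, here is a small sketch that only assembles and prints the Ubuntu (linux-amd64) form of the command; it does not download anything:

```shell
# Hypothetical dry run: substitute <version> and <ostype-arch> for Ubuntu,
# then print the resulting command instead of executing it.
version="v1.5.2"
ostype_arch="linux-amd64"
url="https://github.com/kubernetes/minikube/releases/download/${version}/minikube-${ostype_arch}"
echo "curl -Lo minikube ${url} && chmod +x minikube"
```

For macOS, you would set ostype_arch to darwin-amd64 instead.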

  3. Move the binary into the system path (in the example, it's /usr/local/bin) so that we can directly run Minikube, regardless of which directory the command is run in:
    mv minikube /usr/local/bin

    When executed successfully, the move (mv) command does not give a response in the terminal.

  4. After running the move command, we need to confirm that the Minikube executable is now in the correct location:
    which minikube

    You should see the following output:

    /usr/local/bin/minikube

    Note

    If the which minikube command doesn't give you the expected result, you may need to explicitly add /usr/local/bin to your system path by running export PATH=$PATH:/usr/local/bin.
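The note above can be illustrated with a self-contained sketch of how PATH lookup works. It uses a scratch directory and a made-up tool name (/tmp/demo-bin and demo-tool are assumptions for illustration) instead of /usr/local/bin and minikube:

```shell
# Create a scratch directory with a tiny executable script in it
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello-from-demo\n' > /tmp/demo-bin/demo-tool
chmod +x /tmp/demo-bin/demo-tool

# Until the directory is on PATH, the shell cannot find the tool by name;
# after exporting PATH, both `which` and direct invocation work.
export PATH="$PATH:/tmp/demo-bin"
which demo-tool
demo-tool
```

The same mechanism is why moving minikube into /usr/local/bin (a directory that is normally already on PATH) makes it runnable from any directory.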

  5. You can check the version of Minikube using the following command:
    minikube version

    You should see the following output:

    minikube version: v1.5.2
    commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
  6. Now, let's download kubectl version v1.16.2 (so that it's compatible with the version of Kubernetes that our setup of Minikube will create later) and make it executable by using the following command:
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/<ostype>/amd64/kubectl && chmod +x kubectl

    As mentioned earlier, <ostype> should be replaced with linux (for Ubuntu) or darwin (for macOS).

    You should see the following output:

    Figure 2.2: Downloading the kubectl binary

  7. Then, move it to the system path, just like we did for the executable of Minikube earlier:
    mv kubectl /usr/local/bin
  8. Now, let's check whether the executable for kubectl is at the correct path:
    which kubectl

    You should see the following response:

    /usr/local/bin/kubectl
  9. Since we are currently logged in as the root user, let's create a regular user called k8suser by running the following command:
    useradd k8suser

    Enter your desired password when you are prompted for it. You will also be prompted to enter other details, such as your full name. You may choose to skip those details by simply pressing Enter. You should see an output similar to the following:

    Figure 2.3: Creating a new Linux user

    Enter Y and hit Enter to confirm the final prompt for creating a user, as shown at the end of the previous screenshot.

  10. Now, switch user from root to k8suser:
    su - k8suser

    You should see the following output:

    root@ubuntu:~# su - k8suser
    k8suser@ubuntu:~$
  11. Now, we can create a Kubernetes cluster using minikube start:
    minikube start --kubernetes-version=v1.16.2

    Note

    If you want to manage multiple clusters, Minikube provides a --profile <profile name> parameter so that you can assign a distinct profile name to each cluster.

    It will take a few minutes to download the VM images and get everything set up. After Minikube has started up successfully, you should see a response that looks similar to the following:

    Figure 2.4: Minikube first startup

    As we mentioned earlier, Minikube starts up a VM instance with all the components of Kubernetes inside it. By default, it uses VirtualBox, and you can use the --vm-driver flag to specify a particular hypervisor driver (such as hyperkit for macOS). Minikube also provides the --kubernetes-version flag so you can specify the Kubernetes version you want to use. If not specified, it will use the latest version that was available when the Minikube release was finalized. In this chapter, to ensure compatibility of the Kubernetes version with the kubectl version, we have specified Kubernetes version v1.16.2 explicitly.

    The following commands should help establish that the Kubernetes cluster that was started by Minikube is running properly.

  12. Use the following command to get the basic status of the various components of the cluster:
    minikube status

    You should see the following response:

    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured
  13. Now, let's look at the version of the kubectl client and Kubernetes server:
    kubectl version --short

    You should see the following response:

    Client Version: v1.16.2
    Server Version: v1.16.2
  14. Let's learn how many machines comprise the cluster and get some basic information about them:
    kubectl get node

    You should see a response similar to the following:

    NAME          STATUS          ROLES          AGE          VERSION
    minikube      Ready           master         2m41s        v1.16.2

After finishing this exercise, you should have Minikube set up with a single-node Kubernetes cluster. In the next section, we will enter the Minikube VM to take a look at how the cluster is composed and the various components of Kubernetes that make it work.

Kubernetes Components Overview

By completing the previous exercise, you have a single-node Kubernetes cluster up and running. Before playing your first concert, let's hold on a second and pull the curtains aside to take a look backstage to see how Kubernetes is architected behind the scenes, and then check how Minikube glues its various components together inside its VM.

Kubernetes has several core components that make the wheels of the machine turn. They are as follows:

  • API server
  • etcd
  • Controller manager
  • Scheduler
  • Kubelet

These components are critical for the functioning of a Kubernetes cluster.

Besides these core components, you would deploy your applications in containers, which are bundled together as pods. We will learn more about pods in Chapter 5, Pods. These pods, and several other resources, are defined by something called API objects.

An API object describes how a certain resource should be honored in Kubernetes. We usually define API objects using a human-readable manifest file, and then use a tool (such as kubectl) to parse it and hand it over to a Kubernetes API server. Kubernetes then tries to create the resource specified in the object and match its state to the desired state in the object definition, as mentioned in the manifest file. Next, we will walk you through how these components are organized and behave in a single-node cluster created by Minikube.

Minikube provides a command called minikube ssh that's used to gain SSH access from the host machine (in our case, the physical machine running Ubuntu 20.04) to the minikube virtual machine, which serves as the sole node in our Kubernetes cluster. Let's see how that works:

minikube ssh

You will see the following output:

Figure 2.5: Accessing the Minikube VM via SSH

Note

All the commands that will be shown later in this section are presumed to have been run inside the Minikube VM, after running minikube ssh.

Container technology brings the convenience of encapsulating your application. Minikube is no exception – it leverages containers to glue the Kubernetes components together. In the Minikube VM, Docker is pre-installed so that it can manage the core Kubernetes components. You can take a look at this by running docker ps; however, the result may be overwhelming as it includes all the running containers – both the core Kubernetes components and add-ons, as well as all the columns – which will output a very large table.

To simplify the output and make it easier to read, we will pipe the output from docker ps into two other Bash commands:

  1. grep -v pause: This will filter the results by not displaying the "sandbox" containers.

    Without grep -v pause, you would find that each container is "paired" with a "sandbox" container (in Kubernetes, it's implemented as a pause image). This is because, as mentioned in the previous chapter, Linux containers can be associated (or isolated) by joining the same (or different) Linux namespace. In Kubernetes, a "sandbox" container is used to bootstrap a Linux namespace, and then the containers that run the real application are able to join that namespace. Finer details about how all this works under the hood have been left out of scope for the sake of brevity.

    Note

    If not specified explicitly, the term "namespace" is used interchangeably with "Kubernetes namespace" throughout this book. When we mean a Linux namespace, we will always say "Linux namespace" explicitly to avoid confusion.

  2. awk '{print $NF}': This will only print the last column with a container name.

    Thus, the final command is as follows:

    docker ps | grep -v pause | awk '{print $NF}'

    You should see the following output:

    Figure 2.6: Getting the list of containers by running the Minikube VM

The highlighted containers shown in the preceding screenshot are basically the core components of Kubernetes. We'll discuss each of these in detail in the following sections.
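To see exactly what the grep -v pause | awk '{print $NF}' pipeline does, here is a self-contained sketch that runs it against a simulated, trimmed-down docker ps listing (the container IDs and names below are made up, not captured from a real Minikube VM):

```shell
# Simulated `docker ps`-style output: one real container, one pause (sandbox)
# container, and one more real container.
sample='CONTAINER_ID IMAGE COMMAND NAMES
1a2b3c4d5e6f k8s.gcr.io/kube-apiserver kube-apiserver k8s_kube-apiserver
2b3c4d5e6f7a k8s.gcr.io/pause:3.1 /pause k8s_POD_kube-apiserver
3c4d5e6f7a8b k8s.gcr.io/etcd etcd k8s_etcd'

# grep -v pause drops the sandbox row; awk keeps only the last column (NAMES)
echo "$sample" | grep -v pause | awk '{print $NF}'
```

Only the header and the two "real" container names survive the filter, which is what makes the actual output in the Minikube VM readable.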

etcd

A distributed system may face various kinds of failures (network, storage, and so on) at any moment. To ensure it still works properly when failures arise, critical cluster metadata and state must be stored in a reliable way.

Kubernetes abstracts the cluster metadata and state as a series of API objects. For example, the node API object represents a Kubernetes worker node's specification, as well as its latest status.

Kubernetes uses etcd as the backend key-value database to persist the API objects during the life cycle of a Kubernetes cluster. It is important to note that nothing (internal cluster resources or external clients) is allowed to talk to etcd without going through the API server. Any updates to or requests from etcd are made only via calls to the API server.

In practice, etcd is usually deployed with multiple instances to ensure the data is persisted in a secure and fault-tolerant manner.

API Server

The API server provides standard APIs for accessing Kubernetes API objects. It is the only component that talks to the backend storage (etcd).

Additionally, by leveraging the fact that it is the single point of contact for communicating to etcd, it provides a convenient interface for clients to "watch" any API objects that they may be interested in. Once the API object has been created, updated, or deleted, the client that is "watching" will get instant notifications so they can act upon those changes. The "watching" client is also known as the "controller", which has become a very popular entity that's used in both built-in Kubernetes objects and Kubernetes extensions.

Note

You will learn more about the API server in Chapter 4, How to Communicate with Kubernetes (API Server), and about controllers in Chapter 7, Kubernetes Controllers.

Scheduler

The scheduler is responsible for distributing the incoming workloads to the most suitable node. The distribution decision is based on the scheduler's understanding of the whole cluster, as well as a series of scheduling algorithms.

Note

You will learn more about the scheduler in Chapter 17, Advanced Scheduling in Kubernetes.

Controller Manager

As we mentioned earlier in the API Server subsection, the API server exposes ways to "watch" almost any API object and notify the watchers about the changes in the API objects being watched.

It works pretty much like a Publisher-Subscriber pattern. The controller manager acts as a typical subscriber, watching only the API objects that it is interested in, and then attempts to make appropriate changes to move the current state toward the desired state described in the object.

For example, if it gets an update from the API server saying that an application claims two replicas, but right now there is only one living in the cluster, it will create the second one to make the application adhere to its desired replica number. This reconciliation process keeps running throughout the controller manager's life cycle to ensure that all applications stay in their expected state.
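The replica example above can be sketched as a toy loop. This is purely illustrative: a real controller reacts to watch events from the API server and creates pods through API calls, rather than polling local variables:

```shell
# Toy reconciliation: converge the current replica count toward the desired one.
desired=2
current=1
while [ "$current" -lt "$desired" ]; do
  current=$((current + 1))
  echo "creating replica $current to reach desired state"
done
echo "reconciled: current=$current desired=$desired"
```

The essential idea carries over: the controller never "remembers" what it did; it repeatedly compares desired state against observed state and acts on the difference.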

The controller manager aggregates various kinds of controllers to honor the semantics of API objects, such as Deployments and Services, which we will introduce later in this chapter.

Where Is the kubelet?

Note that etcd, the API server, the scheduler, and the controller manager comprise the control plane of Kubernetes. A machine that runs these components is called a master node. The kubelet, on the other hand, is deployed on each worker machine.

In our single-node Minikube cluster, the kubelet is deployed on the same node that carries the control plane components. However, in most production environments, it is not deployed on any of the master nodes. We will learn more about production environments when we deploy a multi-node cluster in Chapter 11, Build Your Own HA Cluster.

The kubelet primarily aims at talking to the underlying container runtime (for example, Docker, containerd, or cri-o) to bring up the containers and ensure that the containers are running as expected. Also, it's responsible for sending the status update back to the API server.

However, as shown in the preceding screenshot, the docker ps command doesn't show anything named kubelet. To start, stop, or restart any software and make it auto-restart upon failure, we usually need a tool to manage its life cycle. In Linux, systemd has that responsibility. In Minikube, the kubelet is managed by systemd and runs as a native binary instead of a Docker container. We can run the following command to check its status:

systemctl status kubelet

You should see an output similar to the following:

Figure 2.7: Status of kubelet

By default, the kubelet has the configuration for staticPodPath in its config file (which is stored at /var/lib/kubelet/config.yaml). kubelet is instructed to continuously watch the changes in files under that path, and each file under that path represents a Kubernetes component. Let's understand what this means by first finding staticPodPath in the kubelet's config file:

grep "staticPodPath" /var/lib/kubelet/config.yaml

You should see the following output:

staticPodPath: /etc/kubernetes/manifests

Now, let's see the contents of this path:

ls /etc/kubernetes/manifests

You should see the following output:

addon-manager.yaml.tmpl kube-apiserver.yaml      kube-scheduler.yaml
etcd.yaml               kube-controller-manager.yaml

As shown in the list of files, the core components of Kubernetes are defined by objects that have a definition specified in YAML files. In the Minikube environment, in addition to managing the user-created pods, the kubelet also serves as a systemd equivalent in order to manage the life cycle of Kubernetes system-level components, such as the API server, the scheduler, the controller manager, and other add-ons. Once any of these YAML files is changed, the kubelet auto-detects that and updates the state of the cluster so that it matches the desired state defined in the updated YAML configuration.
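As a sketch of what such a static pod manifest looks like, consider the following hypothetical example (this is not one of the actual files listed above; the name and image are made up). Dropping a file like this under /etc/kubernetes/manifests would cause the kubelet to start the pod automatically and restart it whenever the file changes:

```yaml
# Hypothetical static pod: the kubelet watches the staticPodPath directory and
# reconciles the running pod against this file, without the API server's help.
kind: Pod
apiVersion: v1
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
```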

We will stop here without diving deeper into the design of Minikube. In addition to managing these "static components", the kubelet is also the manager of "regular applications", ensuring that their pods are running as expected on the node and evicting pods according to the API specification or upon resource shortage.

kube-proxy

kube-proxy appears in the output of the docker ps command, but it was not present at /etc/kubernetes/manifests when we explored that directory in the previous subsection. This implies its role – it's positioned more as an add-on component instead of a core one.

kube-proxy is designed as a distributed network router that runs on every node. Its ultimate goal is to ensure that inbound traffic to a Service (this is an API object that we will introduce later) endpoint can be routed properly. Moreover, if multiple containers are serving one application, it is able to balance the traffic in a round-robin manner by leveraging the underlying Linux iptables/IPVS technology.

There are also some other add-ons such as CoreDNS, though we will skip those so that we can focus on the core components and get a high-level picture.

Note

Sometimes, kube-proxy and CoreDNS are also considered core components of a Kubernetes installation. To some extent, that's technically true as they're mandatory in most cases; otherwise, the Service API object won't work. However, in this book, we're leaning more toward categorizing them as "add-ons" as they focus on the implementation of one particular Kubernetes API resource instead of general workflow. Also, kube-proxy and CoreDNS are defined in addon-manager.yaml.tmpl instead of being portrayed on the same level as the other core Kubernetes components.

Kubernetes Architecture

In the previous section, we gained a first impression of the core Kubernetes components: etcd, the API server, the scheduler, the controller manager, and the kubelet. These components, plus other add-ons, comprise the Kubernetes architecture, which can be seen in the following diagram:

Figure 2.8: Kubernetes architecture

At this point, we won't look at each component in too much detail. However, at a high-level view, it's critical to understand how the components communicate with each other and why they're designed in that way.

The first thing to understand is which components the API server can interact with. From the preceding diagram, we can easily tell that the API server can talk to almost every other component (except the container runtime, which is handled by the kubelet) and that it also serves to interact with end-users directly. This design makes the API server act as the "heart" of Kubernetes. Additionally, the API server scrutinizes incoming requests and writes API objects into the backend storage (etcd). In other words, the API server is the choke point where security controls such as authentication, authorization, and auditing are enforced.

The second thing to understand is how the different Kubernetes components (except for the API server) interact with each other. It turns out that there is no explicit connection among them – the controller manager doesn't talk to the scheduler, nor does the kubelet talk to kube-proxy.

You read that right – they do need to work in coordination with each other to accomplish many functionalities, but they never directly talk to each other. Instead, they communicate implicitly via the API server. More precisely, they communicate by watching, creating, updating, or deleting corresponding API objects. This is also known as the controller/operator pattern.

Container Network Interface

There are several networking aspects to take into consideration, such as how a pod communicates with its host machine's network interface, how a node communicates with other nodes, and, eventually, how a pod communicates with any pod across different nodes. As the network infrastructure differs vastly in the cloud or on-premises environments, Kubernetes chooses to solve those problems by defining a specification called the Container Network Interface (CNI). Different CNI providers can follow the same interface and implement their logic that adheres to the Kubernetes standards to ensure that the whole Kubernetes network works. We will revisit the idea of the CNI in Chapter 11, Build Your Own HA Cluster. For now, let's return to our discussion of how the different Kubernetes components work.

Later in this chapter, Exercise 2.05, How Kubernetes Manages a Pod's Life Cycle, will help you consolidate your understanding of this and clarify a few things, such as how the different Kubernetes components operate synchronously or asynchronously to ensure a typical Kubernetes workflow, and what would happen if one or more of these components malfunctions. The exercise will help you better understand the overall Kubernetes architecture. But before that, let's introduce our containerized application from the previous chapter to the Kubernetes world and explore a few benefits of Kubernetes.

Migrating Containerized Application to Kubernetes

In the previous chapter, we built a simple HTTP server called k8s-for-beginners, and it runs as a Docker container. It works perfectly for a sample application. However, what if you have to manage thousands of containers, and coordinate and schedule them properly? How can you upgrade a service without downtime? How do you keep a service healthy upon unexpected failure? These problems exceed the abilities of a system that simply uses containers alone. What we need is a platform that can orchestrate, as well as manage, our containers.

We have told you that Kubernetes is the solution that we need. Next, we will walk you through a series of exercises regarding how to orchestrate and run containers in Kubernetes using a Kubernetes native approach.

Pod Specification

A natural first question is: what is the equivalent API call or command to run a container in Kubernetes? As explained in Chapter 1, Introduction to Kubernetes and Containers, a container can join another container's namespace so that they can access each other's resources (for example, network, storage, and so on) without additional overhead. In the real world, some applications may need several containers working closely, either in parallel or in a particular order (the output of one will be processed by another). Also, some generic containers (for example, logging agent, network throttling agent, and so on) may need to work closely with their target containers.

Since an application may often need several containers, a container is not the minimum operational unit in Kubernetes; instead, Kubernetes introduces a concept called pods to bundle one or more containers. Kubernetes provides a series of specifications to describe what this pod should look like, including several specifics such as images, resource requests, startup commands, and more. To send this pod spec to Kubernetes, particularly to the Kubernetes API server, we're going to use kubectl.

Note

We will learn more about pods in Chapter 5, Pods, but we will use them in this chapter for the purpose of simple demonstrations. You can refer to the complete list of available pod specifications at this link: https://godoc.org/k8s.io/api/core/v1#PodSpec.

Next, let's learn how to run a single container in Kubernetes by composing the pod spec file (also called the specification, manifest, config, or configuration file). In Kubernetes, you can use YAML or JSON to write this specification file, though YAML is commonly used since it is more human-readable and editable.

Consider the following YAML spec for a very simple pod:

kind: Pod
apiVersion: v1
metadata:
  name: k8s-for-beginners
spec:
  containers:
  - name: k8s-for-beginners
    image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners

Let's go through the different fields briefly:

  • kind tells Kubernetes which type of object you want to create. Here, we are creating a Pod. In later chapters, you will see many other kinds, such as Deployment, StatefulSet, ConfigMap, and so on.
  • apiVersion specifies a particular version of an API object. Different versions may behave a bit differently.
  • metadata includes some attributes that can be used to uniquely identify the pod, such as name and namespace. If we don't specify a namespace, it goes in the default namespace.
  • spec contains a series of fields describing the pod. In this example, there is one container that has its image URL and name specified.
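For reference, since the API server accepts JSON as well, the same pod could equivalently be written as the following JSON manifest (functionally identical to the YAML version above):

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "k8s-for-beginners"
  },
  "spec": {
    "containers": [
      {
        "name": "k8s-for-beginners",
        "image": "packtworkshops/the-kubernetes-workshop:k8s-for-beginners"
      }
    ]
  }
}
```

Either form can be passed to kubectl; in this book, we will stick with YAML.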

Pods are one of the simplest Kubernetes objects to deploy, so we will use them to learn how to deploy objects using YAML manifests in the following exercise.

Applying a YAML Manifest

Once we have a YAML manifest ready, we can use kubectl apply -f <yaml file> or kubectl create -f <yaml file> to instruct the API server to persist the API resources defined in this manifest. When you create a pod from scratch for the first time, it doesn't make much difference which of the two commands you use. However, we may often need to modify the YAML (let's say, for example, if we want to upgrade the image version) and reapply it. If we use the kubectl create command, we have to delete and recreate it. However, with the kubectl apply command, we can rerun the same command and the delta change will be calculated and applied automatically by Kubernetes.

This is very convenient from an operational point of view. For example, if we use some form of automation, it is much simpler to repeat the same command. So, we will use kubectl apply in the following exercises, regardless of whether it's the first time the manifest is being applied.
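To build an intuition for what "applying the delta" means, here is a toy sketch in Python. This is only an illustration of the idea; kubectl actually performs a more sophisticated three-way merge involving the last-applied configuration, the live object, and the new manifest:

```python
# Toy illustration of "apply" semantics: compute which fields changed
# between a stored spec and a newly submitted spec, so that only those
# fields would need to be patched.

def diff(old: dict, new: dict) -> dict:
    """Return the fields of `new` that differ from `old` (recursive)."""
    delta = {}
    for key, value in new.items():
        if key not in old:
            delta[key] = value
        elif isinstance(value, dict) and isinstance(old[key], dict):
            nested = diff(old[key], value)
            if nested:
                delta[key] = nested
        elif old[key] != value:
            delta[key] = value
    return delta

stored = {"metadata": {"name": "k8s-for-beginners"},
          "spec": {"image": "repo/app:v1", "replicas": 3}}
submitted = {"metadata": {"name": "k8s-for-beginners"},
             "spec": {"image": "repo/app:v2", "replicas": 3}}

# Only the changed image field needs to be patched.
print(diff(stored, submitted))  # {'spec': {'image': 'repo/app:v2'}}
```

With create semantics, by contrast, the whole object would have to be deleted and resubmitted.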

Note

A detailed discussion of kubectl can be found in Chapter 4, How to Communicate with Kubernetes (API Server).

Exercise 2.02: Running a Pod in Kubernetes

In the previous exercise, we started up Minikube and looked at the various Kubernetes components running as pods. Now, in this exercise, we shall deploy our pod. Follow these steps to complete this exercise:

Note

If you have been trying out the commands from the Kubernetes Components Overview section, don't forget to leave the SSH session by using the exit command before beginning this exercise. Unless otherwise specified, all commands using kubectl should run on the host machine and not inside the Minikube VM.

  1. In Kubernetes, we use a spec file to describe an API object such as a pod. As mentioned earlier, we will stick to YAML as it is more human-readable and easier to edit. Create a file named k8s-for-beginners-pod.yaml (using any text editor of your choice) with the following content:
    kind: Pod
    apiVersion: v1
    metadata:
      name: k8s-for-beginners
    spec:
      containers:
      - name: k8s-for-beginners
        image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners

    Note

    Please replace the image path in the last line of the preceding YAML file with the path to your image that you created in the previous chapter.

  2. On the host machine, run the following command to create this pod:
    kubectl apply -f k8s-for-beginners-pod.yaml

    You should see the following output:

    pod/k8s-for-beginners created
  3. Now, we can use the following command to check the pod's status:
    kubectl get pod

    You should see the following response:

    NAME                   READY     STATUS      RESTARTS       AGE
    k8s-for-beginners      1/1       Running     0              7s

    By default, kubectl get pod will list all the pods using a table format. In the preceding output, we can see the k8s-for-beginners pod is running properly and that it has one container that is ready (1/1). Moreover, kubectl provides an additional flag called -o so we can adjust the output format. For example, -o yaml or -o json will return the full output of the pod API object in YAML or JSON format, respectively, matching its stored version in Kubernetes' backend storage (etcd).

  4. You can use the following command to get more information about the pod:
    kubectl get pod -o wide

    You should see the following output:

    Figure 2.9: Getting more information about pods

    As you can see, the output is still in the table format and we get additional information such as IP (the internal pod IP) and NODE (which node the pod is running on).

  5. You can get the list of nodes in our cluster by running the following command:
    kubectl get node

    You should see the following response:

    NAME          STATUS          ROLES          AGE          VERSION
    minikube      Ready           master         30h          v1.16.2
  6. The IP listed in Figure 2.9 refers to the internal IP Kubernetes assigned for this pod, and it's used for pod-to-pod communication, not for routing external traffic to pods. Hence, if you try to access this IP from outside the cluster, you will get nothing. You can try that using the following command from the host machine, which will fail:
    curl 172.17.0.4:8080

    Note

    Remember to change 172.17.0.4 to the value you get for your environment in step 4, as seen in Figure 2.9.

    The curl command will just hang and return nothing, as shown here:

    k8suser@ubuntu:~$ curl 172.17.0.4:8080
    ^C

    You will need to press Ctrl + C to abort it.

  7. In most cases, end-users don't need to interact with the internal pod IP. However, just for observation purposes, let's SSH into the Minikube VM:
    minikube ssh

    You will see the following response in the terminal:

    Figure 2.10: Accessing the Minikube VM via SSH

  8. Now, try calling the IP from inside the Minikube VM to verify that it works:
    curl 172.17.0.4:8080

    You should get a successful response:

    Hello Kubernetes Beginners!

With this, we have successfully deployed our application in a pod on the Kubernetes cluster. We can confirm that it is working since we get a response when we call the application from inside the cluster. Now, you may end the Minikube SSH session using the exit command.

Service Specification

The last part of the previous section demonstrates that network communication works among different components inside the cluster. But in the real world, you would not expect users of your application to gain SSH access into your cluster just to use your applications. So, you want your application to be accessible externally.

To facilitate just that, Kubernetes provides a concept called a Service to abstract the network access to your application's pods. A Service acts as a network proxy to accept network traffic from external users and then distributes it to internal pods. However, there should be a way to describe the association rule between the Service and the corresponding pods. Kubernetes uses labels, which are defined in the pod definitions, and label selectors, which are defined in the Service definition, to describe this relationship.

Note

You will learn more about labels and label selectors in Chapter 6, Labels and Annotations.

Let's consider the following sample spec for a Service:

kind: Service
apiVersion: v1
metadata:
  name: k8s-for-beginners
spec:
  selector:
    tier: frontend
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080

Similar to a pod spec, here, we define kind and apiVersion, while name is defined under the metadata field. Under the spec field, there are several critical fields to take note of:

  • selector defines the labels to be selected to match a relationship with the corresponding pods, which, as you will see in the following exercise, are supposed to be labeled properly.
  • type defines the type of Service. If not specified, the default type is ClusterIP, which means it's only used within the cluster, that is, internally. Here, we specify it as NodePort. This means the Service will expose a port in each node of the cluster and associate the port with the corresponding pods. Another well-known type is called LoadBalancer, which is typically not implemented in a vanilla Kubernetes offering. Instead, Kubernetes delegates the implementation to each cloud provider, such as GKE, EKS, and so on.
  • ports is a list of port mappings, each of which can include a targetPort. The targetPort is the actual port exposed by the destination pod.

    Thus, the Service can be accessed internally via <service ip>:<port>. Now, for example, if you have an NGINX pod running internally and listening on port 8080, then you should define targetPort as 8080. You can specify any arbitrary number for the port field, such as 80 in this case. Kubernetes will set up and maintain the mapping between <service IP>:<port> and <pod IP>:<targetPort>. In the following exercise, we will learn how to access the Service from outside the cluster and bring external traffic inside the cluster via the Service.

In the following exercise, we will define Service manifests and create them using kubectl apply commands. You will learn that the common pattern for resolving problems in Kubernetes is to find out the proper API objects, then compose the detailed specs using YAML manifests, and finally create the objects to bring them into effect.

Exercise 2.03: Accessing a Pod via a Service

In the previous exercise, we observed that an internal pod IP doesn't work for anyone outside the cluster. In this exercise, we will create Services that will act as connectors to map the external requests to the destination pods so that we can access the pods externally without entering the cluster. Follow these steps to complete this exercise:

  1. Firstly, let's tweak the pod spec from Exercise 2.02, Running a Pod in Kubernetes, to apply some labels. Create a file named k8s-for-beginners-pod1.yaml (based on the earlier pod spec) with the following contents:
    kind: Pod
    apiVersion: v1
    metadata:
      name: k8s-for-beginners
      labels:
        tier: frontend
    spec:
      containers:
      - name: k8s-for-beginners
        image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners

    Here, we added a label pair, tier: frontend, under the labels field.

  2. Because the pod name remains the same, let's rerun the apply command so that Kubernetes knows that we're trying to update the pod's spec, instead of creating a new pod:
    kubectl apply -f k8s-for-beginners-pod1.yaml

    You should see the following response:

    pod/k8s-for-beginners configured

    Behind the scenes, for the kubectl apply command, kubectl generates the difference between the specified YAML and the version stored in Kubernetes' server-side storage (that is, etcd). If the request is valid (that is, we have not made any errors in the specification format or the command), kubectl will send an HTTP PATCH request to the Kubernetes API server, so only the delta changes are applied. If you look at the message that's returned, you'll see it says pod/k8s-for-beginners configured instead of created, so we can be sure it's applying the delta changes and not creating a new pod.

  3. You can use the following command to explicitly display the labels that have been applied to existing pods:
    kubectl get pod --show-labels

    You should see the following response:

    NAME              READY  STATUS   RESTARTS   AGE  LABELS
    k8s-for-beginners 1/1    Running  0          16m  tier=frontend

    Now that the pod has the tier: frontend attribute, we're ready to create a Service and link it to the pods.

  4. Create a file named k8s-for-beginners-svc.yaml with the following content:
    kind: Service
    apiVersion: v1
    metadata:
      name: k8s-for-beginners
    spec:
      selector:
        tier: frontend
      type: NodePort
      ports:
      - port: 80
        targetPort: 8080
  5. Now, let's create the Service using the following command:
    kubectl apply -f k8s-for-beginners-svc.yaml

    You should see the following response:

    service/k8s-for-beginners created
  6. Use the get command to return the list of created Services and confirm whether our Service is online:
    kubectl get service

    You should see the following response:

    Figure 2.11: Getting the list of Services

    So, you may have noticed that the PORT(S) column outputs 80:32571/TCP. Port 32571 is an auto-generated port that's exposed on every node, which is done intentionally so that external users can access it.

  7. Now, we have the "external port" as 32571, but we still need to find the external IP. Minikube provides a utility we can use to easily access the k8s-for-beginners Service:
    minikube service k8s-for-beginners

    You should see a response that looks similar to the following:

    Figure 2.12: Getting the URL and port to access the NodePort Service

    Depending on your environment, this may also automatically open a browser web page so you can access the Service. From the URL, you will be able to see that the Service port is 32571. The external IP is actually the IP of the Minikube VM.

  8. You can also access our application from outside the cluster via the command line:
    curl http://192.168.99.100:32571

    You should see the following response:

    Hello Kubernetes Beginners!

As a summary, in this exercise, we created a NodePort Service to enable external users to access the internal pods without entering the cluster. Under the hood, there are several layers of traffic transitions that make this happen:

  • The first layer is from the external user to the machine IP at the auto-generated random port (3XXXX).
  • The second layer is from the random port (3XXXX) to the Service IP (10.X.X.X) at port 80.
  • The third layer is from the Service IP (10.X.X.X) ultimately to the pod IP at port 8080.
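These three layers can be modeled as a simple chain of destination rewrites. The following Python sketch is purely illustrative (the IPs and ports are made-up examples, and the real forwarding rules are programmed by kube-proxy, not lookup tables like these):

```python
# Illustrative model of NodePort routing: each hop rewrites the
# destination (IP, port) pair until traffic reaches a pod.

NODE_PORT_TABLE = {("192.168.99.100", 32571): ("10.0.0.5", 80)}  # node -> Service
SERVICE_TABLE = {("10.0.0.5", 80): ("172.17.0.4", 8080)}         # Service -> pod

def route(dest):
    dest = NODE_PORT_TABLE.get(dest, dest)  # layer 1: node port -> Service IP:port
    dest = SERVICE_TABLE.get(dest, dest)    # layers 2/3: Service -> pod IP:targetPort
    return dest

print(route(("192.168.99.100", 32571)))  # ('172.17.0.4', 8080)
```

An external request addressed to the node's IP at the random port thus ends up at the pod's targetPort without the client ever knowing the pod's internal IP.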

The following is a diagram illustrating these interactions:

Figure 2.13: Routing traffic from a user outside the cluster to the pod running our application

Services and Pods

In step 3 of the previous exercise, you may have noticed that the Service tries to match pods by labels (the selector field under the spec section) instead of using a fixed pod name or something similar. From a pod's perspective, it doesn't need to know which Service is bringing traffic to it. (In some rare cases, it can even be mapped to multiple Services; that is, multiple Services may be sending traffic to a pod.)

This label-based matching mechanism is widely used in Kubernetes. It enables the API objects to be loosely coupled at runtime. For example, you can specify tier: frontend as the label selector, which will, in turn, be associated with the pods that are labeled as tier: frontend.

Due to this, by the time the Service is created, it doesn't matter whether the backing pods exist or not. It's totally acceptable for the backing pods to be created later; after they are created, the Service object will become associated with the correct pods. Internally, the mapping between a Service and its pods is maintained by the endpoints controller, which is part of the controller manager component. It's also possible that a Service has two matching pods at one moment, and later a third pod is created with matching labels, or one of the existing pods gets deleted. In either case, the controller detects such changes and ensures that users can always access their application via the Service endpoint.
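The matching rule itself is simple: a pod matches a Service when every key/value pair in the Service's selector is present in the pod's labels. A minimal sketch (the pod names other than k8s-for-beginners are hypothetical):

```python
def matches(selector: dict, labels: dict) -> bool:
    """A pod matches when all selector pairs are present in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"tier": "frontend"}
pods = {
    "k8s-for-beginners": {"tier": "frontend"},
    "backend-worker": {"tier": "backend"},          # hypothetical, no match
    "frontend-canary": {"tier": "frontend",
                        "track": "canary"},         # extra labels are fine
}

matched = [name for name, labels in pods.items() if matches(selector, labels)]
print(matched)  # ['k8s-for-beginners', 'frontend-canary']
```

Note that a pod with additional labels still matches; the selector only requires its own pairs to be present.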

It's a very commonly used pattern in Kubernetes to orchestrate your application using different kinds of API objects and then glue them together by using labels or other loosely coupled conventions. It's also the key part of container orchestration.

Delivering Kubernetes-Native Applications

In the previous sections, we migrated a Docker-based application to Kubernetes and successfully accessed it from inside the Minikube VM, as well as externally. Now, let's see what other benefits Kubernetes can provide if we design our application from the ground up so that it can be deployed using Kubernetes.

Along with the increasing usage of your application, it may be common to run several replicas of certain pods to serve a business functionality. In this case, grouping different containers in a pod alone is not sufficient. We need to go ahead and create groups of pods that are working together. Kubernetes provides several abstractions for groups of pods, such as Deployments, DaemonSets, Jobs, CronJobs, and so on. Just like the Service object, these objects can also be created by using a spec that's been defined in a YAML file.

To start understanding the benefits of Kubernetes, let's use a Deployment to demonstrate how to replicate (scale up/down) an application in multiple pods.

Abstracting groups of pods using Kubernetes gives us the following advantages:

  • Creating replicas of pods for redundancy: This is the main advantage of abstractions of groups of pods such as Deployments. A Deployment can create several pods with the given spec. A Deployment will automatically ensure that the pods that it creates are online, and it will automatically replace any pods that fail.
  • Easy upgrades and rollbacks: Kubernetes provides different strategies that you can use to upgrade your applications, as well as rolling versions back. This is important because in modern software development, the software is often developed iteratively, and updates are released frequently. An upgrade can change anything in the Deployment specification. It can be an update of labels or any other field(s), an image version upgrade, an update on its embedded containers, and so on.

Let's take a look at some notable aspects of the spec of a sample Deployment:

k8s-for-beginners-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-for-beginners
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: k8s-for-beginners
        image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners

In addition to wrapping the pod spec as a "template", a Deployment must also specify its kind (Deployment), as well as the API version (apps/v1).

Note

For historical reasons, the field is still named apiVersion, but technically speaking, it denotes an API group plus a version. In the preceding Deployment example, the object belongs to the apps group and is version v1.
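The group/version convention can be seen by splitting a few apiVersion strings, keeping in mind that core-group objects such as Pod and Service use a bare version with no group prefix:

```python
def parse_api_version(api_version: str):
    """Split an apiVersion string into (group, version).

    Core-group objects (Pod, Service, ...) have no group prefix,
    so a bare "v1" maps to the empty (core) group.
    """
    group, sep, version = api_version.rpartition("/")
    return (group, version) if sep else ("", api_version)

print(parse_api_version("v1"))       # ('', 'v1')     -> core group (Pod, Service)
print(parse_api_version("apps/v1"))  # ('apps', 'v1') -> Deployment, StatefulSet
```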

In the Deployment spec, the replicas field instructs Kubernetes to start three pods using the pod spec defined in the template field. The selector field plays the same role as we saw in the case of the Service – it aims to associate the Deployment object with specific pods in a loosely coupled manner. This is particularly useful if you want to bring any preexisting pods under the management of your new Deployment.

The replica number defined in a Deployment or other similar API object represents the desired state of how many pods are supposed to be running continuously. If some of these pods fail for some unexpected reason, Kubernetes will automatically detect that and create a corresponding number of pods to take their place. We will explore that in the following exercise.
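This desired-state behavior can be sketched as a tiny reconciliation loop. This is a gross simplification of what the Deployment machinery actually does (the generated pod names are made up for illustration):

```python
import itertools

_ids = itertools.count(1)  # generator for made-up pod name suffixes

def reconcile(current: list, desired: int) -> list:
    """Converge the running pod list toward the desired replica count."""
    current = list(current)
    while len(current) < desired:                     # too few: create replacements
        current.append(f"k8s-for-beginners-{next(_ids)}")
    while len(current) > desired:                     # too many: remove extras
        current.pop()
    return current

pods = reconcile([], 3)    # initial rollout brings up three replicas
print(len(pods))           # 3
pods.pop(0)                # simulate an unexpected pod failure
pods = reconcile(pods, 3)  # the controller detects the gap and backfills
print(len(pods))           # 3
```

The real controllers run such a loop continuously, comparing the spec (desired state) against the cluster's observed state.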

We'll see a Deployment in action in the following exercise.

Exercise 2.04: Scaling a Kubernetes Application

In Kubernetes, it's easy to increase the number of replicas running the application by updating the replicas field of a Deployment spec. In this exercise, we'll experiment with how to scale a Kubernetes application up and down. Follow these steps to complete this exercise:

  1. Create a file named k8s-for-beginners-deploy.yaml using the content shown here:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k8s-for-beginners
    spec:
      replicas: 3
      selector:
        matchLabels:
          tier: frontend
      template:
        metadata:
          labels:
            tier: frontend
        spec:
          containers:
          - name: k8s-for-beginners
            image: packtworkshops/the-kubernetes-workshop:k8s-for-beginners

    If you take a closer look, you'll see that this Deployment spec is largely based on the pod spec from earlier exercises (k8s-for-beginners-pod1.yaml), which you can see under the template field.

  2. Next, we can use kubectl to create the Deployment:
    kubectl apply -f k8s-for-beginners-deploy.yaml

    You should see the following output:

    deployment.apps/k8s-for-beginners created
  3. Given that the Deployment has been created successfully, we can use the following command to show all the Deployment's statuses, such as their names, running pods, and so on:
    kubectl get deploy

    You should get the following response:

    NAME                   READY   UP-TO-DATE   AVAILABLE    AGE
    k8s-for-beginners      3/3     3            3            41s

    Note

    As shown in the previous command, we are using deploy instead of deployment. Both of these will work and deploy is an allowed short name for deployment. You can find a quick list of some commonly used short names at this link: https://kubernetes.io/docs/reference/kubectl/overview/#resource-types.

    You can also view the short names by running kubectl api-resources, without specifying the resource type.

  4. A pod called k8s-for-beginners exists that we created in the previous exercise. To ensure that we see only the pods being managed by the Deployment, let's delete the older pod:
    kubectl delete pod k8s-for-beginners

    You should see the following response:

    pod "k8s-for-beginners" deleted
  5. Now, get a list of all the pods:
    kubectl get pod

    You should see the following response:

    Figure 2.14: Getting the list of pods

    The Deployment has created three pods, and their labels (specified in the labels field in step 1) happen to match the Service we created in the previous section. So, what will happen if we try to access the Service? Will the network traffic going to the Service be smartly routed to the new three pods? Let's test this out.

  6. To see how the traffic is distributed to the three pods, we can simulate a number of consecutive requests to the Service endpoint by running the curl command inside a Bash for loop, as follows:
    for i in $(seq 1 30); do curl <minikube vm ip>:<service node port>; done

    Note

    In this command, use the same IP and port that you used in the previous exercise if you are running the same instance of Minikube. If you have restarted Minikube or have made any other changes, please get the proper IP and port of your Minikube cluster by following step 7 of the previous exercise.

    Once you've run the command with the proper IP and port, you should see the following output:

    Figure 2.15: Repeatedly accessing our application

    From the output, we can tell that all 30 requests get the expected response.

  7. You can run kubectl logs <pod name> to check the log of each pod. Let's go one step further and figure out the exact number of requests each pod has responded to, which might help us find out whether the traffic was evenly distributed. To do that, we can pipe the logs of each pod into the wc command to get the number of lines:
    kubectl logs <pod name> | wc -l

    Run the preceding command three times, copying the pod name you obtained, as shown in Figure 2.16:

    Figure 2.16: Getting the logs of each of the three pod replicas running our application

    The result shows that the three pods handled 9, 10, and 11 requests, respectively. Due to the small sample size, the distribution is not perfectly even (that is, 10 each), but it is sufficient to show that the Service spreads traffic roughly evenly across its backing pods.

    Note

    You can read more about how kube-proxy leverages iptables to perform the internal load balancing by looking at the official documentation: https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables.

  8. Next, let's learn how to scale up a Deployment. There are two ways of accomplishing this: one way is to modify the Deployment's YAML config, where we can set the value of replicas to another number (such as 5), while the other way is to use the kubectl scale command, as follows:
    kubectl scale deploy k8s-for-beginners --replicas=5

    You should see the following response:

    deployment.apps/k8s-for-beginners scaled
  9. Let's verify whether there are five pods running:
    kubectl get pod

    You should see a response similar to the following:

    Figure 2.17: Getting the list of pods

    The output shows that the existing three pods are kept and that two new pods are created.

  10. Similarly, you can specify replicas that are smaller than the current number. In our example, let's say that we want to shrink the replica's number to 2. The command for this would look as follows:
    kubectl scale deploy k8s-for-beginners --replicas=2

    You should see the following response:

    deployment.apps/k8s-for-beginners scaled
  11. Now, let's verify the number of pods:
    kubectl get pod

    You should see a response similar to the following:

    Figure 2.18: Getting the list of pods

    As shown in the preceding screenshot, there are two pods, and they are both running as expected. Thus, in Kubernetes' terms, we can say, "the Deployment is in its desired state".

  12. We can run the following command to verify this:
    kubectl get deploy

    You should see the following response:

    NAME                   READY    UP-TO-DATE   AVAILABLE    AGE
    k8s-for-beginners      2/2      2            2           19m
  13. Now, let's see what happens if we delete one of the two pods:
    kubectl delete pod <pod name>

    You should get the following response:

    pod "k8s-for-beginners-66644bb776-7j9mw" deleted
  14. Check the status of the pods to see what has happened:
    kubectl get pod

    You should see the following response:

    Figure 2.19: Getting the list of pods

We can see that there are still two pods. From the output, it's worth noting that the first pod name is the same as the second pod in Figure 2.18 (this is the one that was not deleted), but that the highlighted pod name is different from any of the pods in Figure 2.18. This indicates that the highlighted one is the pod that was newly created to replace the deleted one. The Deployment created a new pod so that the number of running pods satisfies the desired state of the Deployment.

In this exercise, we have learned how to scale a deployment up and down. You can scale other similar Kubernetes objects, such as DaemonSets and StatefulSets, in the same way. Also, for such objects, Kubernetes will try to auto-recover the failed pods.
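Looking back at step 7 of Exercise 2.04, an idealized perfectly even spread of the 30 requests over three pods can be sketched as follows (the pod names are hypothetical, and real kube-proxy balancing is only statistically even, which is why we observed a 9/10/11 split):

```python
from collections import Counter
from itertools import cycle, islice

pods = ["pod-a", "pod-b", "pod-c"]  # hypothetical pod names

# Send 30 requests, picking a backend in strict rotation.
counts = Counter(islice(cycle(pods), 30))
print(dict(counts))  # {'pod-a': 10, 'pod-b': 10, 'pod-c': 10}
```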


Key benefits

  • Explore the Kubernetes environment and understand how containers are managed
  • Learn how to build, maintain, and deploy cloud-native applications using Kubernetes
  • Get to grips with using Kubernetes primitives to manage the life cycle of a full application stack

Description

Thanks to its extensive support for managing hundreds of containers that run cloud-native applications, Kubernetes is the most popular open source container orchestration platform that makes cluster management easy. This workshop adopts a practical approach to get you acquainted with the Kubernetes environment and its applications. Starting with an introduction to the fundamentals of Kubernetes, you’ll install and set up your Kubernetes environment. You’ll understand how to write YAML files and deploy your first simple web application container using Pod. You’ll then assign human-friendly names to Pods, explore various Kubernetes entities and functions, and discover when to use them. As you work through the chapters, this Kubernetes book will show you how you can make full-scale use of Kubernetes by applying a variety of techniques for designing components and deploying clusters. You’ll also get to grips with security policies for limiting access to certain functions inside the cluster. Toward the end of the book, you’ll get a rundown of Kubernetes advanced features for building your own controller and upgrading to a Kubernetes cluster without downtime. By the end of this workshop, you’ll be able to manage containers and run cloud-based applications efficiently using Kubernetes.

Who is this book for?

Whether you are new to the world of web programming or are an experienced developer or software engineer looking to use Kubernetes for managing and scaling containerized applications, you’ll find this workshop useful. A basic understanding of Docker and containerization is necessary to make the most of this book.

What you will learn

  • Get to grips with the fundamentals of Kubernetes and its terminology
  • Share or store data in different containers running in the same pod
  • Create a container image from an image definition manifest
  • Construct a Kubernetes-aware continuous integration (CI) pipeline for deployments
  • Attract traffic to your app using Kubernetes ingress
  • Build and deploy your own admission controller

Product Details

Publication date : Sep 24, 2020
Length : 780 pages
Edition : 1st
Language : English
ISBN-13 : 9781838644284




Frequently bought together


The Kubernetes Workshop: Can$55.99
Kubernetes and Docker - An Enterprise Guide: Can$69.99
Mastering Kubernetes: Can$101.99
Total: Can$227.97

Table of Contents

19 Chapters
1. Introduction to Kubernetes and Containers
2. An Overview of Kubernetes
3. kubectl – Kubernetes Command Center
4. How to Communicate with Kubernetes (API Server)
5. Pods
6. Labels and Annotations
7. Kubernetes Controllers
8. Service Discovery
9. Storing and Reading Data on Disk
10. ConfigMaps and Secrets
11. Build Your Own HA Cluster
12. Your Application and HA
13. Runtime and Network Security in Kubernetes
14. Running Stateful Components in Kubernetes
15. Monitoring and Autoscaling in Kubernetes
16. Kubernetes Admission Controllers
17. Advanced Scheduling in Kubernetes
18. Upgrading Your Cluster without Downtime
19. Custom Resource Definitions in Kubernetes

Customer reviews

Top Reviews

Rating distribution: 4.9 (9 Ratings)
5 star: 88.9%
4 star: 11.1%
3 star: 0%
2 star: 0%
1 star: 0%

agus Jun 22, 2024
5 stars
Good
Subscriber review Packt
William Kpabitey Kwabla Oct 22, 2020
5 stars
If you want to learn everything about how to containerize applications, specifically with Kubernetes, from the basics to advanced topics like networking and security, I recommend this book for you. The authors took their time to really explain the concepts for everyone to understand. I prefer understanding concepts to memorizing them, and the authors did an amazing job explaining them.
Amazon Verified review Amazon
austbot Sep 28, 2020
5 stars
This book, while easy to use, also provides real, non-trivial examples that help you get past "hello world" on Kubernetes. The authors have done an outstanding job with the content; they pinpoint the exact problems and solutions you will need to make the most of Kubernetes. It's also full of helpful cloud and application design guides.
Amazon Verified review Amazon
chandra shekhar Feb 15, 2021
5 stars
I am not an expert but I got what I was looking for. It provides practical examples.
Amazon Verified review Amazon
dawood Mar 04, 2022
5 stars
I highly recommend this book if you are looking to go down the path of learning Kubernetes. The authors do a great job breaking everything down to a level that even someone like me, who doesn't have any experience with this technology, can digest. A big game changer; extremely helpful and a very important guide.
Amazon Verified review Amazon
Get free access to the Packt library of over 7,500 books and video courses for 7 days!
Start Free Trial

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, clicking on the link will download and open the PDF file directly. If you don't, save the PDF file to your machine and download Adobe Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the print book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the page for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print editions
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend that you download and install the free Adobe Reader, version 9 or later.