Kubernetes runtimes

Kubernetes originally only supported Docker as a container runtime engine. But that is no longer the case. Kubernetes now supports several different runtimes:

  • Docker (through a CRI shim)
  • rkt (direct integration, to be replaced with rktlet)
  • CRI-O
  • Frakti (Kubernetes on the hypervisor, previously Hypernetes)
  • rktlet (CRI implementation for rkt)
  • cri-containerd

A major design policy is that Kubernetes itself should be completely decoupled from specific runtimes. The Container Runtime Interface (CRI) enables this.

In this section, you'll get a closer look at the CRI and get to know the individual runtime engines. At the end of this section, you'll be able to make a well-informed decision about which runtime engine is appropriate for your use case, and under what circumstances you may switch or even combine multiple runtimes in the same system.

The Container Runtime Interface (CRI)

The CRI is a gRPC API, containing specifications/requirements and libraries for container runtimes to integrate with the kubelet on a node. In Kubernetes 1.7, the internal Docker integration in Kubernetes was replaced with a CRI-based integration. This is a big deal, because it opened the door to multiple implementations that can take advantage of advances in the field of container runtimes. The kubelet doesn't need to interface directly with multiple runtimes; instead, it talks gRPC to whichever CRI-compliant container runtime (or shim) is installed on the node.
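In practice, pointing a node at a CRI runtime comes down to two kubelet switches. The following is a minimal sketch: the flag names are those used by the kubelet in roughly the Kubernetes 1.7-1.10 timeframe, and the socket path is only a placeholder that depends on which runtime you install:

# Tell the kubelet to use an out-of-process CRI runtime and where its socket lives
# (the socket path here is illustrative)
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/cri-runtime.sock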

There are two gRPC service interfaces—ImageService and RuntimeService—that CRI container runtimes (or shims) must implement. The ImageService is responsible for managing images. Here is the gRPC/protobuf interface (this is not Go):

service ImageService { 
    rpc ListImages(ListImagesRequest) returns (ListImagesResponse) {} 
    rpc ImageStatus(ImageStatusRequest) returns (ImageStatusResponse) {} 
    rpc PullImage(PullImageRequest) returns (PullImageResponse) {} 
    rpc RemoveImage(RemoveImageRequest) returns (RemoveImageResponse) {} 
    rpc ImageFsInfo(ImageFsInfoRequest) returns (ImageFsInfoResponse) {} 
} 
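If you want to poke at this API from a shell, the crictl tool from the Kubernetes cri-tools project speaks the CRI directly. Here is a rough sketch; the socket path is a placeholder, and each command corresponds to one of the ImageService RPCs above:

# Point crictl at whichever CRI runtime is running on the node
crictl --runtime-endpoint unix:///var/run/cri-runtime.sock pull busybox   # PullImage
crictl --runtime-endpoint unix:///var/run/cri-runtime.sock images         # ListImages
crictl --runtime-endpoint unix:///var/run/cri-runtime.sock rmi busybox    # RemoveImage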

The RuntimeService is responsible for managing pods and containers. Here is the gRPC/protobuf interface:

service RuntimeService { 
    rpc Version(VersionRequest) returns (VersionResponse) {} 
    rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {} 
    rpc StopPodSandbox(StopPodSandboxRequest) returns (StopPodSandboxResponse) {} 
    rpc RemovePodSandbox(RemovePodSandboxRequest) returns (RemovePodSandboxResponse) {} 
    rpc PodSandboxStatus(PodSandboxStatusRequest) returns (PodSandboxStatusResponse) {} 
    rpc ListPodSandbox(ListPodSandboxRequest) returns (ListPodSandboxResponse) {} 
    rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {} 
    rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {} 
    rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {} 
    rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {} 
    rpc ListContainers(ListContainersRequest) returns (ListContainersResponse) {} 
    rpc ContainerStatus(ContainerStatusRequest) returns (ContainerStatusResponse) {} 
    rpc UpdateContainerResources(UpdateContainerResourcesRequest) returns (UpdateContainerResourcesResponse) {} 
    rpc ExecSync(ExecSyncRequest) returns (ExecSyncResponse) {} 
    rpc Exec(ExecRequest) returns (ExecResponse) {} 
    rpc Attach(AttachRequest) returns (AttachResponse) {} 
    rpc PortForward(PortForwardRequest) returns (PortForwardResponse) {} 
    rpc ContainerStats(ContainerStatsRequest) returns (ContainerStatsResponse) {} 
    rpc ListContainerStats(ListContainerStatsRequest) returns (ListContainerStatsResponse) {} 
    rpc UpdateRuntimeConfig(UpdateRuntimeConfigRequest) returns (UpdateRuntimeConfigResponse) {} 
    rpc Status(StatusRequest) returns (StatusResponse) {} 
} 
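The same tool exercises the RuntimeService side, and running the commands in order makes the pod sandbox/container split obvious. This is only a sketch: the JSON config files are placeholders whose schema is documented with cri-tools, and the IDs come from the preceding commands:

crictl runp pod-config.json                                            # RunPodSandbox
crictl create <pod-sandbox-id> container-config.json pod-config.json   # CreateContainer
crictl start <container-id>                                            # StartContainer
crictl ps                                                              # ListContainers
crictl exec -i -t <container-id> sh                                    # Exec
crictl stopp <pod-sandbox-id>                                          # StopPodSandbox
crictl rmp <pod-sandbox-id>                                            # RemovePodSandbox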

The data types used as arguments and return types are called messages, and are also defined as part of the API. Here is one of them:

message CreateContainerRequest { 
    string pod_sandbox_id = 1; 
    ContainerConfig config = 2; 
    PodSandboxConfig sandbox_config = 3; 
} 

As you can see, messages can be embedded inside each other. The CreateContainerRequest message has one string field and two other fields, which are themselves messages: ContainerConfig and PodSandboxConfig.
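To see how such a message is used from code, here is a minimal Go sketch that dials a runtime's socket and issues a CreateContainer call with exactly the three fields shown above. Treat it as an illustration under assumptions: the import path is where the generated CRI types lived in the Kubernetes tree around the 1.10 release, and the socket path, sandbox ID, and mostly empty configs are placeholders:

package main

import (
    "context"
    "log"
    "net"
    "time"

    "google.golang.org/grpc"

    // Assumed import path for the generated CRI types (Kubernetes ~1.10 era)
    runtimeapi "k8s.io/kubernetes/pkg/kubelet/apis/cri/runtime/v1alpha2"
)

func main() {
    // Dial the runtime's UNIX socket; the path is illustrative
    conn, err := grpc.Dial("/var/run/cri-runtime.sock",
        grpc.WithInsecure(),
        grpc.WithDialer(func(addr string, timeout time.Duration) (net.Conn, error) {
            return net.DialTimeout("unix", addr, timeout)
        }))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := runtimeapi.NewRuntimeServiceClient(conn)

    // The pod sandbox ID would come from an earlier RunPodSandbox call
    req := &runtimeapi.CreateContainerRequest{
        PodSandboxId:  "<pod-sandbox-id>",
        Config:        &runtimeapi.ContainerConfig{ /* image, command, mounts, ... */ },
        SandboxConfig: &runtimeapi.PodSandboxConfig{ /* pod-level settings shared with RunPodSandbox */ },
    }

    resp, err := client.CreateContainer(context.Background(), req)
    if err != nil {
        log.Fatal(err)
    }
    log.Println("created container:", resp.ContainerId)
}

In real life, the kubelet is the client making these calls; the snippet just shows that any gRPC client that understands the messages can drive a CRI runtime.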

Now that you are familiar with the CRI at the code level, let's take a brief look at the individual runtime engines.

Docker

Docker is, of course, the 800-pound gorilla of containers. Kubernetes was originally designed to manage only Docker containers; multi-runtime capability was first introduced in Kubernetes 1.3, and the CRI in Kubernetes 1.5.

If you are reading this book, I assume you're very familiar with Docker and what it brings to the table. Docker is enjoying tremendous popularity and growth, but there is also a lot of criticism being directed toward it. Critics often mention the following concerns:

  • Security
  • Difficulty setting up multi-container applications (in particular, networking)
  • Development, monitoring, and logging
  • Limitations of Docker containers running one command
  • Releasing half-baked features too fast

Docker is aware of the criticisms and has addressed some of these concerns. In particular, it has invested in its Docker Swarm product, a Docker-native orchestration solution that competes with Kubernetes. It is simpler to use than Kubernetes, but not as powerful or mature.

Since Docker 1.12, swarm mode has been included natively in the Docker daemon, which upset some people because of the bloat and scope creep. That, in turn, drove more people to CoreOS's rkt as an alternative.

Since Docker 1.11, released in April 2016, Docker has changed the way it runs containers: the Docker engine now delegates to containerd and runC to run Open Container Initiative (OCI) images in containers.

Rkt

Rkt is a container manager from CoreOS (the developers of the CoreOS Linux distro, etcd, flannel, and more). The rkt runtime prides itself on its simplicity and its strong emphasis on security and isolation. It doesn't have a daemon like the Docker engine; instead, it relies on the OS init system, such as systemd, to launch the rkt executable. rkt can download images (both app container (appc) images and OCI images), verify them, and run them in containers. Its architecture is much simpler than Docker's.
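As a quick sketch of that workflow (the image name and version tag are only examples; docker:// images are unsigned, hence the --insecure-options flag):

# Trust a signing key for an appc image prefix, then fetch and run the image
rkt trust --prefix=coreos.com/etcd
rkt fetch coreos.com/etcd:v3.1.7
rkt run coreos.com/etcd:v3.1.7

# rkt can also run Docker/OCI images pulled straight from a registry
rkt run --insecure-options=image docker://nginx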

App container

CoreOS started a standardization effort in December 2014 called appc. This included a standard image format (ACI), runtime, signing, and discovery. A few months later, Docker started its own standardization effort with the OCI. At this point, it seems these efforts will converge. This is a great thing, as tools, images, and runtimes will be able to interoperate freely. We're not there yet.

CRI-O

CRI-O is a Kubernetes incubator project. It is designed to provide an integration path between Kubernetes and OCI-compliant container runtimes, such as Docker. The idea is that CRI-O will provide the following capabilities:

  • Support multiple image formats, including the existing Docker image format
  • Support multiple means of downloading images, including trust and image verification
  • Container image management (managing image layers, overlaying filesystems, and so on)
  • Container process life cycle management
  • The monitoring and logging required to satisfy the CRI
  • Resource isolation as required by the CRI

Then any OCI-compliant container runtime can be plugged in and will be integrated with Kubernetes.
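Wiring CRI-O into a cluster then looks like any other remote CRI runtime. Here is a sketch, assuming CRI-O's conventional socket path (verify it against your installation):

kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock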

Rktnetes

Rktnetes is Kubernetes plus rkt as the runtime engine. Kubernetes is still in the process of abstracting away the runtime engine, so Rktnetes is not really a separate product; from the outside, all it takes is running the kubelet on each node with a couple of command-line switches, as shown below.
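For the record, those switches looked roughly like the following in the pre-CRI integration (the Kubernetes 1.3-1.6 era). I'm quoting the flag names and the API service port from that era's kubelet reference, so double-check them against the documentation for your version:

# rkt's gRPC API service has to be running on the node
rkt api-service &

# Legacy (pre-CRI) kubelet switches selecting rkt as the runtime
kubelet --container-runtime=rkt \
        --rkt-path=/usr/bin/rkt \
        --rkt-api-endpoint=localhost:15441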

Is rkt ready for use in production?

I don't have a lot of hands-on experience with rkt. However, it is used by Tectonic, the commercial CoreOS-based Kubernetes distribution. If you run a different type of cluster, I would suggest that you wait until rkt is integrated with Kubernetes through the CRI/rktlet. There are some known issues you need to be aware of when using rkt as opposed to Docker with Kubernetes; for example, missing volumes are not created automatically, kubectl attach and kubectl logs don't work, and init containers are not supported, among other issues.

Hyper containers

Hyper containers are another option. A Hyper container has a lightweight VM (its own guest kernel) and runs on bare metal. Instead of relying on Linux cgroups for isolation, it relies on a hypervisor. This approach presents an interesting middle ground between standard bare-metal clusters, which are difficult to set up, and public clouds, where containers are deployed on heavyweight VMs.

Stackube

Stackube (previously called Hypernetes) is a multitenant distribution that uses Hyper containers as well as some OpenStack components for authentication, persistent storage, and networking. Since containers don't share the host kernel, it is safe to run containers of different tenants on the same physical host. Stackube uses Frakti as its container runtime, of course.

In this section, we've covered the various runtime engines that Kubernetes supports, as well as the trend toward standardization and convergence. In the next section, we'll take a step back and look at the big picture, as well as how Kubernetes fits into the CI/CD pipeline.
