Kubernetes runtimes
Kubernetes originally supported only Docker as a container runtime engine, but that is no longer the case. Rkt is another supported runtime engine, and there are interesting attempts to work with Hyper.sh containers via Hypernetes. A major design policy is that Kubernetes itself should be completely decoupled from specific runtimes. Kubernetes interacts with a runtime through a relatively generic interface that runtime engines must implement. Most of the communication is in terms of pods and containers and the operations that can be performed on a container. Each runtime engine is responsible for implementing the Kubernetes runtime interface in order to be compatible.
In this section, you'll get a closer look at the runtime interface and get to know the individual runtime engines. At the end of this section, you'll be able to make a well-informed decision about which runtime engine is appropriate for your use case and under what circumstances you may switch or even combine multiple runtimes in the same system.
The runtime interface
The runtime interface for containers is specified in the Kubernetes project on GitHub. Kubernetes is open source, so we can look at it at the following URL:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/container/runtime.go.
I'll present here snippets from this file without the elaborate comments. Even if you're not a full-fledged programmer and know nothing about the Go language, you should be able to grasp the scope and responsibilities of a runtime engine from the viewpoint of Kubernetes:
Note
A quick note about Go to help you parse the code: the method name comes first, followed by the method's parameters in parentheses. Each parameter is a pair consisting of a name followed by its type. Finally, the return values are specified. Go allows multiple return values, and it is very common to return an error object in addition to the actual result. If everything is OK, the error object will be nil.
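To make this concrete, here is a minimal, self-contained Go example (my own illustration, not from the Kubernetes source) demonstrating the multiple-return-value convention:

package main

import (
    "errors"
    "fmt"
)

// divide returns both a result and an error, following the common
// Go convention used throughout the runtime interface.
func divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    result, err := divide(10, 2)
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println("result:", result) // prints: result: 5
}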
type Runtime interface {
    Type() string
    Version() (Version, error)
    APIVersion() (Version, error)
    Status() error
    GetPods(all bool) ([]*Pod, error)
}
The fact that it is an interface means that Kubernetes doesn't provide an implementation. The first group of methods provides general information about the runtime: Type, Version, APIVersion, and Status. You can also get all the pods via GetPods. Here is the next group of methods:
SyncPod(pod *api.Pod, apiPodStatus api.PodStatus, podStatus *PodStatus,
    pullSecrets []api.Secret, backOff *flowcontrol.Backoff) PodSyncResult
KillPod(pod *api.Pod, runningPod Pod, gracePeriodOverride *int64) error
GetPodStatus(uid types.UID, name, namespace string) (*PodStatus, error)
GetNetNS(containerID ContainerID) (string, error)
GetPodContainerID(*Pod) (ContainerID, error)
GetContainerLogs(pod *api.Pod, containerID ContainerID,
    logOptions *api.PodLogOptions, stdout, stderr io.Writer) (err error)
DeleteContainer(containerID ContainerID) error
The next group of methods deals mostly with pods, which is appropriate since the pod is the main abstraction in the Kubernetes conceptual model. Then there is GetPodContainerID(), which gets you from a pod to a container, and a few more container-related methods:
ContainerCommandRunner
ContainerAttacher
ImageService
The last three items, ContainerCommandRunner, ContainerAttacher, and ImageService, are interfaces that the runtime interface embeds. This means that whoever implements the runtime interface must also implement the methods of these interfaces, which are defined in the same file. The interface names alone tell you a lot about what they do: Kubernetes needs to run commands in containers, attach to running containers, and pull container images. I encourage you to peruse this file and get familiar with the code.
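To give a feel for what these embedded interfaces contain, here is a simplified sketch in the style of the snippets above, reusing the same types (ContainerID, api.Secret, io.Reader, io.Writer). The method names and signatures are my own illustrative simplifications, not the actual Kubernetes definitions, so consult runtime.go for the real ones:

// Illustrative simplifications of the embedded interfaces.
type ContainerCommandRunner interface {
    // Run a command inside a running container and return its output.
    RunInContainer(id ContainerID, cmd []string) ([]byte, error)
}

type ContainerAttacher interface {
    // Attach the given streams to a running container.
    AttachContainer(id ContainerID, stdin io.Reader,
        stdout, stderr io.Writer) error
}

type ImageService interface {
    // Pull an image and make it available locally, using the
    // given secrets to authenticate against the registry.
    PullImage(image string, pullSecrets []api.Secret) error
    IsImagePresent(image string) (bool, error)
    RemoveImage(image string) error
}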
Now that you are familiar at the code level with what Kubernetes considers a runtime engine, let's briefly look at the individual runtime engines.
Docker
Docker is, of course, the 800-pound gorilla of containers. Kubernetes was originally designed to manage only Docker containers; the multi-runtime capability was first introduced in Kubernetes 1.3.
I assume you're very familiar with Docker and what it brings to the table if you are reading this book. Docker enjoys tremendous popularity and growth, but there is also a lot of criticism toward it. Critics often mention the following concerns:
- Security
- Difficulty setting up multi-container applications (in particular, networking)
- Development, monitoring, and logging
- The limitation of Docker containers to running a single command
- Releasing half-baked features too fast
Docker is aware of the criticisms and has addressed some of these concerns. In particular, it has invested in its Docker Swarm product. Docker Swarm is a Docker-native orchestration solution that competes with Kubernetes. It is simpler to use than Kubernetes, but it's not as powerful or mature.
Note
Starting with Docker 1.12, swarm mode is included natively in the Docker daemon, which upset some people due to bloat and scope creep. That, in turn, has driven more people toward CoreOS rkt as an alternative solution.
Starting with Docker 1.11, released in April 2016, Docker changed the way it runs containers. The runtime now uses containerd and runC to run Open Container Initiative (OCI) images in containers.
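You can observe this delegation on a host running Docker 1.11 or later. A minimal sketch, assuming the binary names that Docker 1.11 shipped with (docker-containerd, docker-containerd-shim, and docker-runc; later versions renamed them):

# List the Docker-related processes to see the daemon delegating
# container execution to containerd and runC:
ps ax | grep -E 'docker-containerd|docker-runc'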
Rkt
Rkt is a new container manager from CoreOS (the developers of the CoreOS Linux distro, etcd, flannel, and more). The rkt runtime prides itself on its simplicity and on a strong emphasis on security and isolation. It doesn't have a daemon like the Docker engine; instead, it relies on the OS init system, such as systemd, to launch the rkt executable. Rkt can download images (both App Container (appc) images and OCI images), verify them, and run them in containers. Its architecture is much simpler.
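For a feel of the workflow, here is an illustrative rkt session based on the etcd example from the rkt documentation (the image name and version are just examples, and exact flags vary by rkt version):

# Trust the signing key for images under this prefix:
rkt trust --prefix=coreos.com/etcd
# Download and verify the image:
rkt fetch coreos.com/etcd:v2.0.10
# Run it in a container:
rkt run coreos.com/etcd:v2.0.10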
App container
CoreOS started a standardization effort, called appc, in December 2014. This includes a standard image format (ACI), runtime, signing, and discovery. A few months later, Docker started its own standardization effort with OCI. At this point, it seems these efforts will converge. This is a great thing, as tools, images, and runtimes will be able to interoperate freely. We're not there yet.
Rktnetes
Rktnetes is Kubernetes plus rkt as the runtime engine. Kubernetes is still in the process of abstracting away the runtime engine, so rktnetes is not really a separate product. From the outside, all it takes is running the kubelet on each node with a couple of command-line switches, as shown in the sketch below. But since there are fundamental differences between Docker and rkt, you may run into a variety of issues.
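At the time of writing, the switches look something like the following (flag names have changed across Kubernetes releases, so treat this as an illustrative sketch rather than an exact recipe):

# Tell the kubelet to use rkt instead of Docker:
kubelet --container-runtime=rkt --rkt-path=/usr/bin/rkt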
Is rkt ready for production usage?
The integration between rkt and Kubernetes is not totally seamless; there are still some rough spots. My recommendation at this stage (late 2016) is to prefer Docker unless you have a very specific reason to use rkt. If you decide that rkt is important for your use case, then you should base your cluster on CoreOS: you are most likely to find the best integration with CoreOS clusters, as well as the best documentation and online support.
Hyper containers
Hyper containers are another option. A Hyper container has a lightweight VM (with its own guest kernel) and runs on bare metal. Instead of relying on Linux cgroups for isolation, it relies on a hypervisor. This approach presents an interesting middle ground between standard bare-metal clusters, which are difficult to set up, and public clouds, where containers are deployed on heavyweight VMs.
Hypernetes
Hypernetes is a multi-tenant Kubernetes distribution that uses Hyper containers as well as some OpenStack components for authentication, persistent storage, and networking. Since containers don't share the host kernel, it is safe to run containers of different tenants on the same physical host.
In this section, we've covered the various runtime engines that Kubernetes supports as well as the trend toward standardization and convergence. In the next section, we'll take a step back and look at the big picture, and how Kubernetes fits into the CI/CD pipeline.