Traditional virtualization arrived on the Linux kernel in the form of hypervisors such as Xen and KVM. These allowed users to isolate their runtime environments as virtual machines (VMs), each running its own operating system kernel. Users tried to utilize the host machine's resources as fully as possible, but high densities were difficult to achieve with this form of virtualization, especially when a deployed application was small compared to a kernel: most of the host's memory was consumed by the multiple copies of kernels running on it. Hence, for such high-density workloads, machines were divided using technologies such as chroot jails, which provided imperfect workload isolation and carried security implications.
In 2001, operating system virtualization arrived in the form of Linux vServer, introduced as a series of kernel patches.
This led to an early form of container virtualization. In this form of virtualization, the kernel groups and isolates processes belonging to different tenants, all of which share the same kernel.
Here is a table that explains the various developments that took place to enable operating system virtualization:
| Year and development | Description |
| --- | --- |
| 1979: chroot | The concept of containers dates back to 1979 with UNIX chroot; in 1982, it was incorporated into BSD. With chroot, users can change the root directory of a running process and its children, separating it from the main OS directory tree (a minimal sketch of this call follows the table). |
| 2000: FreeBSD Jails | FreeBSD Jails were introduced by Derrick T. Woolworth at R&D Associates in 2000 for FreeBSD. Jails are built around a system call similar to chroot, with additional process sandboxing features for isolating the filesystem, users, networking, and so on. |
| 2001: Linux vServer | Another jail mechanism that can securely partition resources on a computer system (filesystem, CPU time, network addresses, and memory). |
| 2004: Solaris Containers | Solaris Containers were introduced for x86 and SPARC systems and first released publicly in February 2004. They are a combination of system resource controls and the boundary separation provided by zones. |
| 2005: OpenVZ | OpenVZ is similar to Solaris Containers and makes use of a patched Linux kernel to provide virtualization, isolation, resource management, and checkpointing. |
| 2006: Process containers | Process containers were implemented at Google in 2006 for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. |
| 2007: Control groups | Control groups, also known as cgroups, were implemented by Google and added to the Linux kernel in 2007. Cgroups help in limiting, accounting for, and isolating the resource usage (memory, CPU, disk, network, and so on) of a collection of processes (a minimal cgroup sketch also follows the table). |
| 2008: LXC | LXC stands for Linux Containers and was implemented using cgroups and Linux namespaces. Unlike earlier container technologies, LXC works on the vanilla Linux kernel. |
| 2011: Warden | Warden was implemented by Cloud Foundry in 2011, initially using LXC; this was later replaced with their own implementation. |
| 2013: LMCTFY | LMCTFY stands for Let Me Contain That For You. It is the open source version of Google's container stack, which provides Linux application containers. |
| 2013: Docker | Docker was started in 2013. Today, it is the most widely used container management tool. |
| 2014: Rocket | Rocket (rkt) is another container runtime, from CoreOS. It emerged to address security vulnerabilities in early versions of Docker and is an alternative to Docker with a stronger focus on security, composability, speed, and production requirements. |
| 2016: Windows Containers | Microsoft added container support (Windows Containers) to the Microsoft Windows Server operating system (Windows Server 2016) for Windows-based applications. With this implementation, Docker can run containers natively on Windows without having to run a virtual machine. |
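
To make the chroot row in the preceding table concrete, here is a minimal sketch of changing a process's root directory from Python. It assumes a prepared directory tree at a hypothetical path, /srv/jail, and must be run as root; it illustrates the idea rather than providing a hardened jail.

```python
import os

# Minimal chroot sketch (run as root; /srv/jail is a hypothetical directory
# that already contains whatever files the process will need).
new_root = "/srv/jail"

os.chroot(new_root)   # from now on, "/" for this process and its children is /srv/jail
os.chdir("/")         # move into the new root so relative lookups stay inside it

# Only the contents of /srv/jail are visible from here; the rest of the host
# filesystem is no longer reachable through normal path resolution.
print(os.listdir("/"))
```

Note that chroot alone only remaps the filesystem view; it does not restrict privileges, devices, or networking, which is why later mechanisms such as jails, zones, and namespaces were introduced.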
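
Similarly, the control groups row can be illustrated through the cgroup filesystem interface. The sketch below assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup (the usual location on modern distributions), uses a hypothetical group name, demo, and requires root.

```python
import os

# Minimal cgroup v2 sketch (run as root; the group name "demo" is hypothetical).
cg = "/sys/fs/cgroup/demo"
os.makedirs(cg, exist_ok=True)          # creating a directory creates a new control group

with open(os.path.join(cg, "memory.max"), "w") as f:
    f.write("104857600")                # cap the group's memory usage at 100 MiB

with open(os.path.join(cg, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))           # move the current process into the group
```

From this point on, memory allocated by the process (and its children) is accounted against the group's limit, which is the same limiting and accounting role cgroups play underneath LXC and Docker.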