Containers
Containers are also a virtualization technology; however, they do not virtualize a server. Instead, containers are a form of operating system–level virtualization: all containers running on a host (physical or virtual) share that host's operating system kernel rather than each having a dedicated kernel of its own.
Containers are completely isolated from their host and from other containers running on the same host. Windows containers use Windows storage filter drivers and session isolation to virtualize operating system services such as the file system, registry, processes, and networks. Linux containers achieve the same result using Linux namespaces, control groups, and a union file system to virtualize the host operating system.
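Both properties can be observed directly. The following is a minimal sketch using the Docker SDK for Python (docker-py), rather than the Docker client discussed later in this section; it assumes a Linux host with a running Docker daemon and uses the public alpine image purely as an example:

```python
# Minimal sketch, assuming a Linux host, a running Docker daemon, and docker-py.
import platform

import docker

client = docker.from_env()

# Kernel sharing: the container reports the same kernel release as the host,
# because it has no kernel of its own.
container_kernel = client.containers.run("alpine:3.19", "uname -r", remove=True)
print("host kernel:     ", platform.release())
print("container kernel:", container_kernel.decode().strip())

# Process isolation: inside its own PID namespace, the container sees only its
# own processes, not those of the host or of other containers.
process_list = client.containers.run("alpine:3.19", "ps", remove=True)
print(process_list.decode())
```

Because the container has no kernel of its own, the two kernel versions printed are identical, while the process listing taken from inside the container shows only the container's own processes.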
To the applications inside it, a container appears to have a completely new and untouched operating system and its own resources. This arrangement provides many benefits, such as the following:
- Containers are faster to provision than virtual machines, because most of the operating system services in a container are provided by the host operating system.
- Containers are lightweight and require fewer compute resources than VMs, since they carry no per-instance operating system overhead.
- Containers are much smaller than VMs.
- Containers help manage multiple application dependencies in an intuitive, automated, and simple manner.
- Containers provide the infrastructure to define all application dependencies in a single place, as the sketch after this list illustrates.
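The following sketch illustrates the last point. It assumes a local Docker daemon and the Docker SDK for Python; the Dockerfile content, base image, and packages are illustrative rather than taken from this chapter. The point is simply that every dependency lives in one declarative file:

```python
# Minimal sketch: all application dependencies declared in a single Dockerfile,
# built through the Docker SDK for Python. Tag and packages are assumptions.
import io

import docker

dockerfile = b"""
FROM python:3.12-slim
# Every application dependency is declared here, in one place.
RUN pip install --no-cache-dir flask requests
CMD ["python", "-c", "import flask, requests; print('dependencies present')"]
"""

client = docker.from_env()
image, _build_logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="demo-app:1.0")
print(image.tags)  # ['demo-app:1.0']
```

Anyone with the same Dockerfile (or the built image) gets exactly the same dependency set, which is what makes dependency management automated and repeatable.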
Containers are an inherent feature of Windows Server 2016 and Windows 10; however, they are managed and accessed using a Docker client and a Docker daemon. Containers can be created on Azure using a Windows Server 2016 SKU as the image. Each container has a single main process that must be running for the container to exist; the container stops when this process ends. Additionally, a container can run either in interactive mode or in detached mode, like a service.
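The two run modes can be sketched as follows with the Docker SDK for Python, again assuming a local Docker daemon; the image and container names are examples only:

```python
import docker

client = docker.from_env()

# Foreground run: the call blocks until the container's main process exits and
# then returns its output. The container stops as soon as that process ends.
output = client.containers.run("alpine:3.19", "echo hello from a container", remove=True)
print(output.decode().strip())

# Detached run: the call returns immediately and the container keeps running in
# the background like a service, for as long as its main process (nginx) lives.
web = client.containers.run("nginx:1.25", detach=True, name="demo-web")
print(web.status)

web.stop()    # stopping the main process stops the container
web.remove()
```

With the Docker CLI, the same two modes correspond to docker run -it (interactive) and docker run -d (detached).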
Figure 1.3: Container architecture
Figure 1.3 shows all the technical layers that enable containers. The bottom-most layer provides the core infrastructure in terms of network, storage, load balancers, and network cards. On top of the infrastructure sits the compute layer, consisting of either a physical server alone or virtual servers running on top of a physical server. This layer contains the operating system with the ability to host containers. The operating system provides the execution driver that the layers above use to call the kernel code and objects that execute containers. Microsoft created the Host Compute Service Shim (HCSShim) for creating and managing containers and uses Windows storage filter drivers for image and file management.
Container environment isolation on Windows is provided through session isolation. Windows Server 2016 and Nano Server provide the operating system, enable the container features, and run the user-level Docker client and Docker Engine. Docker Engine uses the services of HCSShim, storage filter drivers, and sessions to spawn multiple containers on the server, each running a service, application, or database, as sketched below.
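The same pattern holds whether Docker Engine creates Windows containers through HCSShim or Linux containers through namespaces; the following sketch uses Linux images for brevity, assumes a local Docker daemon, and all image and container names are illustrative:

```python
import docker

client = docker.from_env()

# One Docker Engine on one host, several containers, each hosting one service.
services = {
    "demo-web": ("nginx:1.25", {}),                                # web server
    "demo-cache": ("redis:7", {}),                                 # in-memory cache
    "demo-db": ("postgres:16", {"POSTGRES_PASSWORD": "example"}),  # database
}

running = [
    client.containers.run(image, detach=True, name=name, environment=env)
    for name, (image, env) in services.items()
]

# Every container shares the host kernel but runs its own isolated workload.
for container in client.containers.list():
    print(container.name, container.image.tags, container.status)

# Clean up the demo containers.
for container in running:
    container.stop()
    container.remove()
```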