Open source virtualization projects
The following table lists open source virtualization projects in Linux:

| Project | Virtualization Type | Project URL |
| --- | --- | --- |
| KVM (Kernel-based Virtual Machine) | Full virtualization | http://www.linux-kvm.org/ |
| VirtualBox | Full virtualization | https://www.virtualbox.org/ |
| Xen | Full and paravirtualization | http://www.xenproject.org/ |
| Lguest | Paravirtualization | http://lguest.ozlabs.org/ |
| UML (User Mode Linux) | Paravirtualization | http://user-mode-linux.sourceforge.net/ |
| Linux-VServer | Operating system-level virtualization | http://linux-vserver.org/ |
In upcoming sections, we will discuss Xen and KVM, which are the leading open source virtualization solutions in Linux.
Xen
Xen originated at the University of Cambridge as a research project. The first public release of Xen was made in 2003. Later, Ian Pratt, the leader of the project at the University of Cambridge, co-founded a company called XenSource with Simon Crosby (also of the University of Cambridge), which continued to develop the project in an open source fashion. On 15 April 2013, the Xen project was moved to the Linux Foundation as a collaborative project. The Linux Foundation launched a new trademark for the Xen Project to differentiate the project from any commercial use of the older Xen trademark. More details about this can be found at the xenproject.org website.
The Xen hypervisor has been ported to a number of processor families, for example, Intel IA-32/64, x86_64, PowerPC, ARM, MIPS, and so on.
Xen can operate in both paravirtualization and hardware-assisted or full virtualization (HVM) modes, the latter of which allows unmodified guests. A Xen hypervisor runs guest operating systems called domains. There are mainly two types of domains in Xen:
- Dom 0
- Dom U
Dom Us are the unprivileged domains or guest systems. Dom 0 is also known as the privileged domain or the special guest and has extended capabilities. The Dom Us or guest systems are controlled by Dom 0. Dom 0 contains the drivers for all the devices in the system, as well as a control stack to manage virtual machine creation, destruction, and configuration. Dom 0 also has the privilege to access the hardware directly; it handles all access to the system's I/O functions and can interact with the other virtual machines. Dom 0 sets up the Dom Us' communication paths with hardware devices using virtual drivers. It also exposes a control interface to the outside world, through which the system is controlled. Dom 0 is the first VM started by the system and is a must-have domain for a Xen Project hypervisor.
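As an illustration, on a running Xen host the xl command (part of Dom 0's control stack) lists the domains known to the hypervisor. The following is a minimal sketch with illustrative names and numbers, but Domain-0 is always present:

```
# List all domains known to the Xen hypervisor; Domain-0 (Dom 0) always
# appears first, followed by any running guests (Dom Us).
$ xl list
Name                    ID   Mem VCPUs      State   Time(s)
Domain-0                 0  2048     4     r-----     150.2
guest1                   1  4096     2     -b----      42.7
```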
Note
If you want to know more about the Xen project, please refer to http://wiki.xenproject.org/wiki/Xen_Overview or http://xenproject.org
Introducing KVM
Kernel-based Virtual Machine (KVM) represents the latest generation of open source virtualization. The goal of the project was to create a modern hypervisor that builds on the experience of previous generations of technologies and leverages the modern hardware available today (VT-x, AMD-V).
KVM simply turns the Linux kernel into a hypervisor when you install the KVM kernel module. Because the standard Linux kernel is the hypervisor, it benefits from changes to the mainline kernel (memory support, scheduler, and so on). Optimizations to these Linux components (such as the new scheduler in the 3.1 kernel) benefit both the hypervisor (the host operating system) and the Linux guest operating systems. For I/O emulation, KVM uses QEMU, a userland program that performs hardware emulation.
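As a quick illustration, both the hardware extensions and the module status can be checked from a shell. This is a minimal sketch with illustrative output; AMD hosts use the kvm_amd module instead of kvm_intel:

```
# Check for hardware virtualization extensions (vmx = Intel VT-x, svm = AMD-V);
# a count greater than zero means the CPU supports them.
$ grep -Ec '(vmx|svm)' /proc/cpuinfo
4

# Load the KVM modules on an Intel host (use kvm_amd on AMD hosts).
$ sudo modprobe kvm kvm_intel

# Verify that the modules are loaded and that /dev/kvm exists; QEMU uses
# this device node to reach the in-kernel hypervisor.
$ lsmod | grep kvm
kvm_intel             204800  0
kvm                   593920  1 kvm_intel
$ ls -l /dev/kvm
crw-rw-rw- 1 root kvm 10, 232 Mar  1 10:00 /dev/kvm
```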
QEMU emulates the processor and a long list of peripheral devices: disk, network, VGA, PCI, USB, serial/parallel ports, and so on, building the complete virtual hardware on which the guest operating system is installed; this emulation is accelerated by KVM.
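For example, QEMU can be launched directly with KVM acceleration. The following is a minimal sketch; the disk image name guest1.img and the sizes are illustrative, and management stacks such as libvirt generate much longer argument lists, as shown later in this section:

```
# Boot a guest from an existing disk image with KVM acceleration,
# 1 GB of RAM, 2 vCPUs, and a virtio NIC on user-mode networking.
$ qemu-system-x86_64 \
    -machine accel=kvm \
    -m 1024 -smp 2 \
    -drive file=guest1.img,format=qcow2,if=virtio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0
```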
High-level overview of KVM
The following figure gives us a high-level overview of the user mode and kernel mode components of KVM:
A separate qemu-kvm process is launched for each virtual machine by libvirtd at the request of system management utilities, such as virsh and virt-manager. The properties of the virtual machines (number of CPUs, memory size, I/O device configuration) are defined in separate XML files, which are located in the directory /etc/libvirt/qemu. libvirtd uses the details from these XML files to derive the argument list that is passed to the qemu-kvm process.
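To see what such a definition contains, the XML for a guest can be dumped with virsh. The following is a hedged sketch of illustrative output for the guest1 machine used in the example below; the values mirror the qemu-kvm arguments shown there, and a real dump contains many more elements:

```
# Dump the XML definition that libvirtd uses to build the qemu-kvm
# argument list for this guest (output abridged and illustrative).
$ virsh dumpxml guest1
<domain type='kvm'>
  <name>guest1</name>
  <uuid>7a615914-ea0d-7dab-e709-0533c00b921f</uuid>
  <memory unit='MiB'>5000</memory>
  <vcpu>4</vcpu>
  <cpu>
    <topology sockets='4' cores='1' threads='1'/>
  </cpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
  </os>
  <devices>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vms/hypervisor2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <mac address='52:54:00:5d:be:06'/>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```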
Here is an example of the resulting qemu-kvm process, as it appears in the ps output:
qemu 14644 9.8 6.8 6138068 1078400 ? Sl 03:14 97:29 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest1 -S -machine pc -m 5000 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 7a615914-ea0d-7dab-e709-0533c00b921f -no-user-config -nodefaults -chardev socket,id=charmonitor -drive file=/dev/vms/hypervisor2,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native -device id=net0,mac=52:54:00:5d:be:06
Here, the argument -m 5000 assigns 5,000 MB (roughly 5 GB) of memory to the virtual machine, and -smp 4,sockets=4,cores=1,threads=1 configures 4 vCPUs with a topology of four virtual sockets, each with one core.
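These values can be cross-checked against libvirt's view of the guest with virsh. This is a sketch with illustrative output, assuming the guest1 domain from the preceding example:

```
# Query libvirt for the guest's resource allocation; the CPU and memory
# figures correspond to the -smp and -m arguments shown above
# (5,000 MB = 5,120,000 KiB).
$ virsh dominfo guest1
Name:           guest1
UUID:           7a615914-ea0d-7dab-e709-0533c00b921f
State:          running
CPU(s):         4
Max memory:     5120000 KiB
Used memory:    5120000 KiB
```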
Details about what libvirt and qemu are, and how they communicate with each other to provide virtualization, are explained in Chapter 2, KVM Internals.