In this article by Charbel Nemnom and Patrick Lownds, the authors of the book Windows Server 2016 Hyper-V Cookbook, Second Edition, we will look at the Hyper-V architecture, its most important components, and the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.
Virtualization is not a new technology that everyone decided to adopt overnight; it is actually quite old. Computers in the mid-1960s, such as the IBM M44/44X, were already using virtualization to run multiple VMs through hardware and software abstraction. The M44/44X is regarded as the first virtualization system and the origin of the term virtual machine.
Although Hyper-V is only in its fifth version, Microsoft's virtualization technology is very mature. It all started in 1988 with a company named Connectix, which had innovative products such as Connectix Virtual PC and Virtual Server, x86 software emulation solutions for Mac, Windows, and OS/2.
In 2003, Microsoft acquired Connectix and, a year later, released Microsoft Virtual PC and Microsoft Virtual Server 2005. After many architectural improvements during project Viridian, Microsoft released Hyper-V in 2008, the second version in 2009 (Windows Server 2008 R2), the third in 2012 (Windows Server 2012), the fourth a year later in 2013 (Windows Server 2012 R2), and the current, fifth version in 2016 (Windows Server 2016).
Over the past years, Microsoft has proven that Hyper-V is a strong and competitive solution for server virtualization, providing scalability, a flexible infrastructure, high availability, and resiliency. To better understand the different virtualization models and how VMs are created and managed by Hyper-V, it is very important to know its core, architecture, and components. By doing so, you will understand how it works, be able to compare it with other solutions, and troubleshoot problems more easily.
Microsoft has long told customers that Azure datacenters are powered by Microsoft Hyper-V, and the forthcoming Azure Stack will actually allow us to run Azure in our own datacenters on top of Windows Server 2016 Hyper-V as well.
For more information about Azure Stack, please refer to the following link:
https://azure.microsoft.com/en-us/overview/azure-stack/
Microsoft Hyper-V has proven over the years that it is a very scalable platform for virtualizing any and every workload without exception.
This article includes well-explained topics covering the most important Hyper-V architecture components, compared with other versions.
The Virtual Machine Monitor (VMM), also known as the Hypervisor, is the software layer responsible for running multiple VMs on a single system. It is also responsible for the creation, preservation, partitioning, system access, and management of the VMs running on the Hypervisor layer.
There are three types of Hypervisors:
VMM Type 2 runs the Hypervisor on top of an OS, as shown in the following diagram: the hardware is at the bottom, the OS sits above it, and the Hypervisor runs on top.
Microsoft Virtual PC and VMware Workstation are examples of software that uses VMM Type 2.
VMs pass hardware requests to the Hypervisor, then to the host OS, and finally to the hardware. This leads to performance and management limitations imposed by the host OS.
Type 2 is common in test environments, where VMs with hardware restrictions run inside software applications installed on the host OS.
When using the VMM Hybrid type, the Hypervisor runs at the same level as the OS, as shown in the following diagram. Because the Hypervisor and the OS share the same access to the hardware with the same priority, this type is not as fast and safe as it could be. This is the type used by the Hyper-V predecessor, Microsoft Virtual Server 2005:
With VMM Type 1, the Hypervisor runs in a tiny software layer between the hardware and the partitions, managing and orchestrating hardware access. The host OS, known as the Parent Partition, runs at the same level as the Child Partitions, known as VMs, as shown in the next diagram. Due to the privileged access that the Hypervisor has to the hardware, it provides better security, performance, and control over the partitions. This is the type used by Hyper-V since its first release:
Knowing how Hyper-V works and how its architecture is constructed will make it easier to understand its concepts and operations. The following sections will explore the most important components in Hyper-V.
Before we dive into the Hyper-V architecture details, it will be easier to understand what happens after Hyper-V is installed by first looking at Windows without Hyper-V, as shown in the following diagram:
In a normal Windows installation, instruction access is divided into four privilege levels in the processor, called Rings. The most privileged level is Ring 0, which has direct access to the hardware and is where the Windows kernel sits. Ring 3 hosts the user level, where most common applications run with the least privileged access.
When Hyper-V is installed, it needs a privilege higher than Ring 0 and must have dedicated access to the hardware. This is possible due to the hardware virtualization extensions created by Intel and AMD, called Intel VT-x and AMD-V respectively, which allow the creation of a fifth ring called Ring -1. Hyper-V uses this ring to place its Hypervisor, which runs with a higher privilege below Ring 0 and controls all access to the physical components, as shown in the following diagram:
The OS architecture undergoes several changes after Hyper-V is installed. Right after the first boot, the Operating System Boot Loader file (winload.exe) checks which processor is being used and loads the Hypervisor image into Ring -1 (using the file Hvix64.exe for Intel processors and Hvax64.exe for AMD processors). Then, Windows Server starts running on top of the Hypervisor, alongside every VM that runs beside it.
After Hyper-V installation, Windows Server has the same privilege level as a VM and is responsible for managing VMs using several components.
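To quickly confirm on a given host that the Hypervisor has been loaded into Ring -1 and that the processor exposes the required virtualization extensions, you can use a short PowerShell check. This is a minimal sketch, assuming Windows PowerShell 5.1 (where Get-ComputerInfo is available); the properties are queried with a wildcard rather than listed individually:

```powershell
# Minimal sketch: inspect the Hyper-V related details of the local machine.
# When the Hypervisor is already running, HyperVisorPresent reports True and the
# individual requirement fields (SLAT, VT-x/AMD-V, DEP) may no longer be listed.
Get-ComputerInfo -Property "HyperV*"

# The classic alternative; look for the "Hyper-V Requirements" lines in the output.
systeminfo.exe | Select-String "Hyper-V"
```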
There are four different versions of Hyper-V—the role that is installed on Windows Server 2016 (Core or Full Server), the role that can be installed on a Nano Server, its free version called Hyper-V Server and the Hyper-V that comes in Windows 10 called Hyper-V Client. The following sections will explain the differences between all the versions and a comparison between Hyper-V and its competitor, VMware.
Hyper-V is one of the most fascinating and improved role on Windows Server 2016. Its fifth version goes beyond virtualization and helps us deliver the correct infrastructure to host your cloud environment.
Hyper-V can be installed as a role in both Windows Server Standard and Datacenter editions.
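As a quick reference, the role can be added with a single PowerShell command. This is a minimal sketch run from an elevated prompt, rather than the full installation procedure covered in the book:

```powershell
# Install the Hyper-V role together with the management tools (Hyper-V Manager
# and the Hyper-V PowerShell module), then restart the host to load the Hypervisor.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```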
In Windows Server 2012 and 2012 R2, the only difference was that the Standard edition licensed two Windows Server guest OSes, whereas the Datacenter edition licensed an unlimited number.
However, in Windows Server 2016 there are significant changes between the two editions.
The following table shows the differences between the Windows Server 2016 Standard and Datacenter editions:
| Resource | Windows Server 2016 Datacenter edition | Windows Server 2016 Standard edition |
| --- | --- | --- |
| Core functionality of Windows Server | Yes | Yes |
| OSes/Hyper-V Containers | Unlimited | 2 |
| Windows Server Containers | Unlimited | Unlimited |
| Nano Server | Yes | Yes |
| Storage features for software-defined datacenter, including Storage Spaces Direct and Storage Replica | Yes | N/A |
| Shielded VMs | Yes | N/A |
| Networking stack for software-defined datacenter | Yes | N/A |
| Licensing model | Core + CAL | Core + CAL |
As you can see in the preceding table, the Datacenter edition is designed for highly virtualized private and hybrid cloud environments, while the Standard edition is intended for low-density or non-virtualized (physical) environments.
In Windows Server 2016, Microsoft is also changing the licensing model from per-processor to per-core licensing for the Standard and Datacenter editions.
The following points will guide you when licensing the Windows Server 2016 Standard and Datacenter editions:
The 2-core pack for each edition is one-eighth the price of a 2-processor license for corresponding Windows Server 2012 R2 editions.
The following table illustrates the new licensing model based on the number of 2-core pack licenses:
Legend:
Windows Server 2016 Standard edition may need additional licensing.
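As a hedged worked example of the core-based model (confirm the exact terms against Microsoft's current licensing documentation): Windows Server 2016 requires a minimum of 8 core licenses per physical processor and 16 core licenses per server, sold in 2-core packs. A two-processor host with 16 cores per processor therefore needs 32 core licenses, that is, sixteen 2-core packs. With the Standard edition, that grant covers only two OSes/Hyper-V Containers, so running additional VMs on the same host means licensing all of its cores again.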
Nano Server is a new headless, 64-bit-only installation option that installs "just enough OS", resulting in a dramatically smaller footprint, more uptime, and a smaller attack surface. Users can choose to add server roles as needed, including the Hyper-V, Scale-Out File Server, DNS Server, and IIS server roles. Users can also choose to install features, including Container support, Defender, Clustering, Desired State Configuration (DSC), and Shielded VM support.
Nano Server is available in Windows Server 2016 for roles such as Hyper-V, Scale-Out File Server, DNS Server, and IIS, and it supports inbox optional roles and features such as Clustering, Defender, DSC, and Shielded VM support.
The Windows Server 2016 Hyper-V role can be installed on Nano Server; this is a key Nano Server role, shrinking the OS footprint and minimizing the reboots required when Hyper-V is used to run virtualization hosts. Nano Server can be clustered, including in Hyper-V failover clusters.
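To illustrate how a Nano Server virtualization host can be built with the Hyper-V and clustering packages included, here is a minimal sketch using the NanoServerImageGenerator module that ships on the Windows Server 2016 media; the paths, edition, and computer name are illustrative assumptions:

```powershell
# Minimal sketch: build a Nano Server VHDX for a physical Hyper-V host.
Import-Module .\NanoServerImageGenerator\NanoServerImageGenerator.psd1

$params = @{
    DeploymentType = 'Host'                   # physical Nano Server host
    Edition        = 'Datacenter'
    MediaPath      = 'D:\'                    # mounted Windows Server 2016 ISO
    BasePath       = 'C:\NanoBase'
    TargetPath     = 'C:\Nano\NanoHV01.vhdx'  # hypothetical target image
    ComputerName   = 'NanoHV01'               # hypothetical host name
    Compute        = $true                    # adds the Hyper-V role
    Clustering     = $true                    # adds Failover Clustering support
}
New-NanoServerImage @params
```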
Hyper-V on Nano Server works the same, including all features, as it does in Windows Server 2016, aside from a few caveats:
Hyper-V Server 2016, the free virtualization solution from Microsoft, has all the features included in Windows Server 2016 Hyper-V.
The only differences are that Microsoft Hyper-V Server does not include guest VM licenses or a graphical interface. Management can be done remotely using PowerShell, or with Hyper-V Manager from another Windows Server 2016 or Windows 10 machine.
All the other Hyper-V features and limits in Windows Server 2016, including Failover Clustering, Shared Nothing Live Migration, RemoteFX, Discrete Device Assignment, and Hyper-V Replica, are included in the free Hyper-V version.
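For example, a host running the free Hyper-V Server can be administered entirely from another machine. The following sketch assumes a management machine with the Hyper-V PowerShell module installed and uses a hypothetical host name:

```powershell
# List the VMs on the free Hyper-V Server host remotely.
Get-VM -ComputerName HVSERVER01

# Or open an interactive remote PowerShell session on the host.
Enter-PSSession -ComputerName HVSERVER01
```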
In Windows 8, Microsoft introduced the first Hyper-V Client version; it is now in its third version with Windows 10. Users can have the same experience as Windows Server 2016 Hyper-V on their desktops or tablets, making test and development virtualization scenarios much easier.
Hyper-V Client in Windows 10 goes beyond virtualization alone and helps Windows developers use containers by bringing Hyper-V Containers natively into Windows 10. This further empowers developers to build cloud applications that benefit from native container capabilities right in Windows.
Since Hyper-V Containers utilize their own instance of the Windows kernel, the container is truly a server container all the way down to the kernel. Plus, with the flexibility of Windows container runtimes (Windows Server Containers or Hyper-V Containers), containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers.
Because Windows 10 only supports Hyper-V containers, the Hyper-V feature must also be enabled.
Hyper-V Client is present only in the Windows 10 Pro and Enterprise editions and requires the same CPU feature as Windows Server 2016, called Second Level Address Translation (SLAT).
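On a suitable Windows 10 machine, both the Hyper-V and Containers features can be turned on from an elevated PowerShell prompt. This is a minimal sketch using the in-box optional feature names:

```powershell
# Enable Hyper-V (required for Hyper-V Containers) and the Containers feature,
# then reboot when prompted so the Hypervisor can be loaded.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Enable-WindowsOptionalFeature -Online -FeatureName Containers
```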
Although Hyper-V Client is very similar to the server version, there are some components that are only present in Windows Server 2016 Hyper-V. Here is a list of components you will find only on the server version:
Even with these limitations, Hyper-V Client has very interesting features such as Storage Migration, VHDX, VMs running on SMB 3.1 file shares, PowerShell integration, Hyper-V Manager, the Hyper-V Extensible Switch, Quality of Service, Production Checkpoints, the same VM hardware limits as Windows Server 2016 Hyper-V, Dynamic Memory, Runtime Memory Resize, Nested Virtualization, DHCP Guard, Port Mirroring, NIC Device Naming, and much more.
VMware is Hyper-V's main competitor, and the current version, 6.0, is offered as the free standalone VMware vSphere Hypervisor and as the vSphere Standard, Enterprise, and Enterprise Plus editions.
The following table compares the features of Windows Server 2012 R2 and Windows Server 2016 Hyper-V with the free VMware vSphere Hypervisor and vSphere Enterprise Plus:
| Feature | Windows Server 2012 R2 | Windows Server 2016 | VMware vSphere 6.0 | VMware vSphere 6.0 Enterprise Plus |
| --- | --- | --- | --- | --- |
| Logical Processors | 320 | 512 | 480 | 480 |
| Physical Memory | 4TB | 24TB | 6TB | 6TB/12TB |
| Virtual CPUs per Host | 2,048 | 2,048 | 4,096 | 4,096 |
| Virtual CPUs per VM | 64 | 240 | 8 | 128 |
| Memory per VM | 1TB | 12TB | 4TB | 4TB |
| Active VMs per Host | 1,024 | 1,024 | 1,024 | 1,024 |
| Guest NUMA | Yes | Yes | Yes | Yes |
| Maximum Nodes | 64 | 64 | N/A | 64 |
| Maximum VMs per Cluster | 8,000 | 8,000 | N/A | 8,000 |
| VM Live Migration | Yes | Yes | No | Yes |
| VM Live Migration with Compression | Yes | Yes | N/A | No |
| VM Live Migration using RDMA | Yes | Yes | N/A | No |
| 1GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 4 |
| 10GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 8 |
| Live Storage Migration | Yes | Yes | No | Yes |
| Shared Nothing Live Migration | Yes | Yes | No | Yes |
| Cluster Rolling Upgrades | Yes | Yes | N/A | Yes |
| VM Replica Hot/Add Virtual Disk | Yes | Yes | Yes | Yes |
| Native 4-KB Disk Support | Yes | Yes | No | No |
| Maximum Virtual Disk Size | 64TB | 64TB | 2TB | 62TB |
| Maximum Pass-Through Disk Size | 256TB or more | 256TB or more | 64TB | 64TB |
| Extensible Network Switch | Yes | Yes | No | Third-party vendors |
| Network Virtualization | Yes | Yes | No | Requires vCloud Networking and Security |
| IPsec Task Offload | Yes | Yes | No | No |
| SR-IOV | Yes | Yes | N/A | Yes |
| Virtual NICs per VM | 12 | 12 | 10 | 10 |
| VM NIC Device Naming | No | Yes | N/A | No |
| Guest OS Application Monitoring | Yes | Yes | No | No |
| Guest Clustering with Live Migration | Yes | Yes | N/A | No |
| Guest Clustering with Dynamic Memory | Yes | Yes | N/A | No |
| Shielded VMs | No | Yes | N/A | No |
In this article, we covered the Hyper-V architecture along with its most important components, as well as the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware.