Now let's move to the cloud journey with Microsoft Azure. A large server installation with a lot of software installed requires patching, and the resulting reboots interrupt service delivery. Azure doesn't use live migration and doesn't use failover clustering, so when a host in an Azure data center has to be taken down, the virtual machines running on it have to be taken down and restarted as well. With a large number of servers and heavy OS resource consumption, this generates a lot of cost of goods sold (COGS) for Microsoft, that is, the direct costs attributable to the production of the services sold by Microsoft Azure. Provisioning is affected too: large host images compete for network resources. As mentioned in the Business impact section earlier in this chapter, deploying all those hosts and then re-imaging all of them when a new patch comes out requires a lot of network bandwidth. Many service providers (not only Microsoft Azure) over-provision their networks so that they have enough capacity for live migration or for re-provisioning servers.
Back in October 2014, Microsoft released the first version of its cloud-in-a-box solution, called Cloud Platform System (CPS), which runs on top of Windows Server Core, System Center, and Windows Azure Pack. Building a CPS system takes a lot of time: installing all that software is time-consuming, and patching impacts the network allocation. Since CPS is an on-premises solution, it does use live migration for the virtual machines. A fully loaded four-rack CPS configuration supports up to 8,000 virtual machines, so if each VM is configured with 2 GB of RAM, that is 16 TB of memory to live migrate over the network. You therefore need enough capacity to handle that network traffic instead of using it for the business itself. This is not to say that live migration in CPS isn't optimized; it uses live migration over the Server Message Block (SMB) protocol to offload the network traffic to Remote Direct Memory Access (RDMA) NICs, which is really fast. However, it still takes time to migrate 16 TB of data, and, as mentioned earlier, server reboots result in service disruption. A reboot of a compute (Hyper-V) host in CPS takes around 2 minutes, and a storage host takes around 5 minutes to complete.
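I won't reproduce the exact network tuning used in CPS here, but the following sketch shows the generally documented way to enable live migration over SMB on a Hyper-V host so that the traffic can be offloaded to RDMA NICs (SMB Direct), plus the back-of-the-envelope math from the preceding paragraph. These are standard Hyper-V and networking cmdlets; treat the snippet as illustrative, not as the CPS configuration itself:

```powershell
# Allow this Hyper-V host to send and receive live migrations
Enable-VMMigration

# Prefer the SMB transport so migration traffic can use SMB Direct (RDMA)
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# List the NICs that are RDMA capable and enabled, so SMB Direct can offload the traffic
Get-NetAdapterRdma | Where-Object Enabled

# Back-of-the-envelope figure from the text: 8,000 VMs x 2 GB of RAM each
"{0:N1} TB of memory to migrate" -f (8000 * 2 / 1024)
```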
From both Azure and the experience of building the CPS solution, Microsoft determined that it needed a server configuration optimized for the cloud, and one that would also benefit all of its customers, whether you are deploying a cloud configuration in your own data center, using Windows Server simply as your virtualization platform, or leveraging a public cloud that runs on top of Windows Server.
The next step in the journey is Nano Server, a new headless, 64-bit-only deployment option for Windows Server, as you can see in Figure 3. It is quite different from the Windows Server 2012 R2 options shown in Figure 1. Nano Server follows the Server Core pattern as a separate installation option: you install Nano Server and then there is a subset of roles and features that you can add on top. The installation options in Windows Server 2016 are Nano Server, Server Core, and Server with Desktop Experience. Microsoft made a significant change in Windows Server 2016: you can no longer convert between installation options as you could in Windows Server 2012 R2, because of some of the changes that had to be made in order to implement Nano Server and Server with Desktop Experience:
Figure 3: Nano Server journey (image source: Microsoft)
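If you need to confirm which installation option a given server is running, one simple and commonly used check is the InstallationType value in the registry, which reports Server, Server Core, or Nano Server. This is only a read-out for illustration; it is not a way to switch options, since Windows Server 2016 no longer supports converting between them:

```powershell
# Read the installation option of the local server from the registry
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').InstallationType
```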
Nano Server is a deep refactoring of Windows Server, initially focused on the CloudOS infrastructure. With Nano Server, you can deploy Hyper-V hosts as a compute platform. You can deploy Scale-Out File Server storage nodes and clusters, so you can build clustered storage servers or clustered Hyper-V hosts and live migrate across nodes. The Nano Server team is continuously working on supporting born-in-the-cloud applications, that is, applications written with cloud patterns that allow them to run on top of Nano Server. Nano Server can be installed on your physical machines or as a guest virtual machine, and it will also serve as the base OS for Hyper-V containers. Please refer to Chapter 8, Running Windows Server Containers and Hyper-V Containers on Nano Server, for more details about Windows Server containers and Hyper-V containers running on top of Nano Server.
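As a rough illustration of what deploying a Nano Server compute node looks like, the following sketch builds a Hyper-V host image with the NanoServerImageGenerator module that ships on the Windows Server 2016 media. The drive letters, paths, and computer name are placeholders I've assumed for the example:

```powershell
# Import the image builder module from the installation media (assumed to be D:\)
Import-Module 'D:\NanoServer\NanoServerImageGenerator' -Verbose

# Build a Datacenter-edition Nano Server VHDX for a physical Hyper-V host,
# with the compute (Hyper-V) and clustering packages included
New-NanoServerImage -Edition Datacenter -DeploymentType Host `
    -MediaPath 'D:\' `
    -BasePath 'C:\NanoBase' `
    -TargetPath 'C:\NanoImages\NanoHost01.vhdx' `
    -ComputerName 'NanoHost01' `
    -Compute -Clustering `
    -AdministratorPassword (Read-Host -Prompt 'Administrator password' -AsSecureString)
```

For a guest virtual machine, the same cmdlet is used with -DeploymentType Guest; the container scenarios are covered separately in Chapter 8.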
Nano Server is a separate installation option. It's a self-contained operating system that has everything you need. The major difference between Nano Server and Server Core is that none of the roles or features are available in the image, as they are in Server Core and the full server. Thanks to the side-by-side store, when you add or install additional roles and features on Windows Server, you are never prompted for the media, because the required binaries already exist on your hard disk within the OS. In Nano Server, however, all the infrastructure roles (Hyper-V, storage, clustering, DNS, IIS, and so on) live in a series of separate packages, so you have to add them to the image. As a result, your base Nano Server image always stays very small. As you add roles and features to Nano Server, each role becomes an additional package; the Hyper-V role, for example, requires only the Nano Server base OS, so the image stays small and tight. If you add another role that requires 500 MB of files, that is another 500 MB added to the Nano Server image as a separate package. Nano Server has full driver support, so any driver that works for Windows Server 2016 will work with Nano Server as well.
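To make the package model concrete, here is a minimal sketch of injecting a single role package into an existing Nano Server image with Edit-NanoServerImage. The paths reuse the placeholders from the previous example, and Microsoft-NanoServer-DNS-Package is the DNS role package that ships on the Windows Server 2016 media:

```powershell
# Add the DNS Server role package to an existing Nano Server image
Edit-NanoServerImage -BasePath 'C:\NanoBase' `
    -TargetPath 'C:\NanoImages\NanoHost01.vhdx' `
    -Packages 'Microsoft-NanoServer-DNS-Package'

# The image only grows by the size of the packages you add; check it if you are curious
'{0:N0} MB' -f ((Get-Item 'C:\NanoImages\NanoHost01.vhdx').Length / 1MB)
```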
As of the first release of Nano Server in Windows Server 2016, these are the key roles and features supported to run on Nano Server (a quick way to list the corresponding packages on the installation media is sketched after this list):
- Hyper-V, clustering, storage, DNS, IIS, DCB, PowerShell DSC, shielded VMs, Windows Defender, and software inventory logging
- Core CLR, ASP.NET 5, and PaaSv2
- Windows Server containers and Hyper-V containers
- System Center Virtual Machine Manager (SCVMM) and System Center Operations Manager (SCOM)
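As promised above, a quick way to see which role packages your installation media actually carries is simply to list the \NanoServer\Packages folder; the drive letter is again an assumption for illustration:

```powershell
# List the Nano Server role packages shipped on the Windows Server 2016 media
Get-ChildItem 'D:\NanoServer\Packages' -Filter 'Microsoft-NanoServer-*.cab' |
    Select-Object -ExpandProperty Name
```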