Designing the VMM server, database, and console implementation

When planning a VMM 2016 design for deployment, consider the different VMM roles, keeping in mind that VMM is part of the Microsoft private cloud solution. If you are considering a private cloud, you will need to integrate VMM with the other System Center family components.

You can create application profiles that will provide instructions for installing Microsoft Web Deploy applications and Microsoft SQL Server data-tier applications (DACs), and for running scripts when deploying a virtual machine as part of a service.

In VMM, you can add the hardware, guest operating system, SQL Server, and application profiles that will be used in a template to deploy virtual machines. These profiles are essentially answer files used to configure the application or SQL Server during setup.
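
As a rough illustration of how these profiles fit together, the following PowerShell sketch creates a hardware profile, a guest OS profile, and an application profile, and combines them into a VM template. The names, sizing values, and the VHDX lookup are assumptions for the example; run the cmdlets from a session connected to your VMM management server.

    # Connect to the VMM management server (server name is an assumption)
    Get-SCVMMServer -ComputerName 'vmm-mgmt01' | Out-Null

    # Hardware and guest OS profiles act as reusable answer files for the template
    $hw = New-SCHardwareProfile -Name 'HW-2vCPU-4GB' -CPUCount 2 -MemoryMB 4096
    $os = New-SCGuestOSProfile -Name 'OS-WS2016' -ComputerName 'VM-###' `
            -OperatingSystem (Get-SCOperatingSystem | Where-Object Name -Match 'Windows Server 2016' | Select-Object -First 1)

    # Application profile for Web Deploy / SQL DAC packages and scripts (contents added later)
    $app = New-SCApplicationProfile -Name 'App-WebDeploy'

    # Combine the profiles with a sysprepped VHDX from the library into a template
    $vhdx = Get-SCVirtualHardDisk -Name 'WS2016-sysprepped.vhdx'
    New-SCVMTemplate -Name 'Template-WS2016' -HardwareProfile $hw -GuestOSProfile $os -VirtualHardDisk $vhdx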

Getting ready

You can create a private cloud by combining hosts, even from different hypervisors
(for example, Hyper-V and VMware), with networking, storage, and library resources.

To start deploying VMs and services, you first need to configure the fabric.

How to do it...

Create a spreadsheet with the server name and IP settings of every System Center component you plan to deploy, as shown in the following table. This will help you manage and integrate the solution (a quick validation sketch follows the table):

Server name      Role                        IP settings
vmm-mgmt01       VMM Management Server 01    IP: 10.16.254.20/24; GW: 10.16.254.1; DNS: 10.16.254.2
vmm-mgmt02       VMM Management Server 02    IP: 10.16.254.22/24; GW: 10.16.254.1; DNS: 10.16.254.1
vmm-console01    VMM Console                 IP: 10.16.254.50/24; GW: 10.16.254.1; DNS: 10.16.254.2
vmm-lib01        VMM Library                 IP: 10.16.254.25/24; GW: 10.16.254.1; DNS: 10.16.254.2
w2016-sql01      SQL Server 2016             IP: 10.16.254.40/24; GW: 10.16.254.1; DNS: 10.16.254.2
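
Once the spreadsheet is exported to CSV, a short PowerShell loop can sanity-check the plan before deployment by confirming name resolution and WinRM reachability for each server. The file name and column headers below are assumptions that mirror the table above.

    # Columns assumed: ServerName,Role,IPAddress,Gateway,DNS
    $plan = Import-Csv -Path '.\vmm-ip-plan.csv'
    foreach ($entry in $plan) {
        # Does the name resolve to the planned address?
        Resolve-DnsName -Name $entry.ServerName -Type A -ErrorAction SilentlyContinue

        # Is the server reachable on WinRM (TCP 5985), which VMM relies on?
        Test-NetConnection -ComputerName $entry.ServerName -Port 5985 |
            Select-Object ComputerName, TcpTestSucceeded
    }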

How it works...

The following rules need to be considered when planning a VMM 2016 deployment (a quick validation sketch for some of them follows the list):

  • The computer name cannot contain the character string SCVMM (for example,
    srv-scvmm-01) and cannot exceed 15 characters.
  • Your VMM database must use a supported version of SQL Server to perform a VMM 2016 deployment. Express editions of Microsoft SQL Server are no longer supported for the VMM database. For more information, check the system requirements specified in the Specifying the correct system requirements for a real-world scenario recipe in this chapter.
  • For a fully highly available VMM, not only must VMM be deployed on a failover cluster (minimum two servers), but SQL Server must be deployed on a cluster as well (minimum two servers).
  • VMM 2016 does not support a library server on a computer that is running Windows Server 2012; it now requires Windows Server 2012 R2 as a minimum, but for consistency and standardization, I recommend that you install it on Windows Server 2016.
  • VMM 2016 no longer supports creating and importing templates with the Server App-V packages. If you are upgrading from a previous version of VMM that has templates with such applications, you will continue to manage them with VMM, but you will not be able to upgrade the application.
  • Hosts running the following versions of VMware ESXi and VMware vCenter Server are supported:
    • ESXi 5.1
    • ESXi 5.5
    • ESXi 6.0
    • vCenter 5.1
    • vCenter 5.5
    • vCenter 6.0
  • Upgrading a previous version of VMM to a highly available VMM 2016 requires additional preparation. See Chapter 2, Upgrading from Previous Versions, for details.
  • If you're planning for high availability of VMM 2016, install SQL Server on a separate cluster; it cannot be co-located on the same servers as your VMM 2016 management server. In addition, AlwaysOn availability groups can be used for the VMM database.
  • The VMM management server must be a member of a domain. (This rule does not apply to the managed hosts, which can be on a workgroup.)
  • The startup RAM for the VMM management server (if running on a VM with dynamic memory enabled) must be at least 2048 MB.
  • The VMM library does not support DFS Namespaces (DFSN) or DFS Replication (DFSR). This support is being planned.
  • VMM does not support file servers configured with the case-insensitive option for Windows Services for UNIX, because the network filesystem case control is set to ignore. Refer to the Windows Services for UNIX 2.0 NFS Case Control article available at http://go.microsoft.com/fwlink/p/?LinkId=102944 to learn more.
  • The VMM console machine must be a member of a domain.
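
The naming, domain, and memory rules above are easy to verify up front. The following sketch, run on the machine you intend to use as the VMM management server, checks the ones that most commonly trip up a deployment.

    # Planned VMM management server name (using the local machine for the example)
    $name = $env:COMPUTERNAME
    if ($name -match 'SCVMM') { Write-Warning 'The computer name must not contain the string SCVMM.' }
    if ($name.Length -gt 15)  { Write-Warning 'The computer name must not exceed 15 characters.' }

    # Domain membership and available memory
    $cs = Get-CimInstance -ClassName Win32_ComputerSystem
    if (-not $cs.PartOfDomain) { Write-Warning 'The VMM management server must be a domain member.' }
    if (($cs.TotalPhysicalMemory / 1MB) -lt 2048) { Write-Warning 'At least 2048 MB of RAM is required.' }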

There's more...

For a complete design solution, there are more items you need to consider.

Storage providers – SMI-S and SMP

VMM supports both block-level storage (Fibre Channel, iSCSI, and Serial Attached SCSI (SAS) connections) and file storage (SMB 3.0 network shares residing on a Windows file server or on a NAS device).

By using storage providers, VMM enables storage discovery, provisioning, classification, allocation, and decommissioning.

Storage classifications enable you to assign user-defined storage classifications to discovered storage pools for Quality of Service (QoS) or chargeback purposes.

You can, for example, assign a classification of Gold to storage pools that have the highest performance and availability, Silver for high performance, and Bronze for low performance.

In order to use this feature, you will need the SMI-S provider.

VMM 2016 can discover and communicate with SAN arrays through a Storage Management Initiative Specification (SMI-S) provider or a Storage Management Provider (SMP) provider.

If your storage is SMI-S compatible, install the storage provider on a separate, available server (do not install it on the VMM management server) and then add the provider to VMM. Some devices come with a built-in SMI-S provider, and no extra tasks are required in that case. If your storage is SMP-compatible, it does not require a provider installation either.

Each vendor has its own SMI-S setup process. My recommendation is to contact the storage vendor and ask for a storage provider compatible with VMM 2016. A list of officially supported storage arrays is available at https://docs.microsoft.com/en-us/system-center/vmm/supported-arrays.
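
As a hedged sketch of what that looks like in practice, the cmdlets below register an SMI-S (CIM-XML) provider with VMM and create a Gold classification for a discovered pool. The provider URL, port, Run As account, and pool name are assumptions; your vendor's documentation will give the exact provider address and credentials.

    # Run As account holding the SMI-S provider credentials (assumed to exist already)
    $ra = Get-SCRunAsAccount -Name 'SMIS-Admin'

    # Register the SMI-S CIM-XML provider with VMM (address and port are examples)
    Add-SCStorageProvider -Name 'SAN-SMIS' -NetworkDeviceName 'https://smis01.contoso.com' `
        -TCPPort 5989 -RunAsAccount $ra

    # Classify a discovered pool for QoS/chargeback purposes
    $gold = New-SCStorageClassification -Name 'GOLD' -Description 'Highest performance and availability'
    $pool = Get-SCStoragePool -Name 'Pool01'
    Set-SCStoragePool -StoragePool $pool -StorageClassification $gold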

VMM uses CIM-XML to communicate with the underlying SMI-S providers; VMM never communicates with the SAN arrays directly.

By using the storage provider to integrate with the storage, VMM can create LUNs (both GPT and MBR) and assign storage to hosts or clusters.

VMM 2016 also supports the SAN snapshot and clone feature, allowing you to duplicate a LUN through a SAN Copy-capable template to provision new VMs, provided you are hosting them on a Hyper-V platform. For VMs hosted elsewhere, for example on VMware hosts, you will need to provision outside of VMM.

Bare metal

This capability enables VMM 2016 to identify the hardware, install the operating system (OS), enable the Hyper-V or file server role, and add the machine to a target host group in a streamlined, automated process.

As of SC 2016, deploying a bare metal Hyper-V cluster is now a single step. Furthermore, additional cluster hosts can be added to an existing Hyper-V or SOFS cluster using bare metal deployment.

PXE capability is required and is an integral component of the server pool. The target server will need to have a baseboard management controller (BMC) supporting one of the following management protocols:

  • Data Center Management Interface (DCMI) 1.0
  • Systems Management Architecture for Server Hardware (SMASH) 1.0
  • Intelligent Platform Management Interface (IPMI) 1.5 or 2.0
  • Custom protocols such as HPE Integrated Lights-Out (iLO) or Integrated Dell Remote Access Controller (iDRAC)

Enterprise and hosting companies will benefit from the ability to provision new Hyper-V servers without having to install the operating system manually on each machine. By using the BMC and integrating with Windows Deployment Services (WDS), VMM deploys the OS to designated hosts through the boot-from-VHD(X) feature. A correct BMC configuration is also a requirement for one of the most interesting features, called OS Rolling Upgrade, which will be discussed in detail later.
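
To give an idea of how the BMC fits into the workflow, the following sketch stores the BMC credentials as a Run As account and then discovers a physical server out of band; the account name, BMC address, and protocol are assumptions for the example.

    # BMC credentials stored as a Run As account
    $bmcRA = New-SCRunAsAccount -Name 'BMC-Admin' -Credential (Get-Credential)

    # Out-of-band discovery of the target server through its BMC (IPMI in this example)
    Find-SCComputer -BMCAddress '10.16.254.100' -BMCRunAsAccount $bmcRA -BMCProtocol 'IPMI'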

Configuring security

To ensure that users can perform only assigned actions on selected resources, you can create tenants, self-service users, delegated administrators, and read-only administrators in VMM using the VMM console. You will also need to create Run As accounts to provide the credentials necessary for performing operations in VMM (for example, for adding hosts).

Run As accounts in VMM

Run As accounts are very useful additions to enterprise environments. These accounts are used to store credentials that allow you to delegate tasks to other administrators and self-service users without exposing sensitive credentials.

By using the Windows Data Protection API (DPAPI), VMM provides OS-level data protection when storing and retrieving Run As account credentials.

There are several different categories of Run As accounts:

  • Host computer: This is used to provide access to Hyper-V and VMware ESXi hosts
  • BMC: This is used to communicate with BMC on the host computer,
    for out-of-band management or power optimization
  • Network device: This is used to connect to network load balancers
  • Profile: This is used for service creation in the OS and application profiles, as well as in SQL Server and host profiles
  • External: This is to be used for external systems such as System Center
    Operations Manager

Only administrators or delegated administrators can create and manage Run As accounts.
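
For example, an administrator might store the host-management credentials once and reuse them when adding hosts; the account and host names below are assumptions.

    # Domain account with local administrator rights on the Hyper-V hosts
    $cred = Get-Credential -Message 'Host management account'
    $hostRA = New-SCRunAsAccount -Name 'HyperV-HostAdmin' -Credential $cred

    # Reuse the Run As account instead of retyping credentials when adding a host
    Add-SCVMHost -ComputerName 'hyperv01.contoso.com' `
        -VMHostGroup (Get-SCVMHostGroup -Name 'All Hosts') -Credential $hostRA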

During the installation of the VMM management server, you will be requested to use distributed key management (DKM) to store encryption keys in Active Directory Domain Services (AD DS).

Communication ports and protocols for firewall configuration

When designing the VMM implementation, you need to plan which ports you are going to use for communication and file transfers between VMM components. Based on the chosen ports, you will also need to configure your host and external firewalls. See the Configuring ports and protocols on the host firewall for each SCVMM component recipe in Chapter 3, Installing VMM 2016.

Not all of the ports can be changed through VMM. Hosts and library servers must have access to the VMM management server on the ports specified during setup. This means that all firewalls, whether software-based or hardware-based, must be configured beforehand.
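
As a minimal sketch, assuming you kept the setup defaults (TCP 8100 for the console, 5985/5986 for WinRM agent communication, and 443 for BITS file transfers), the inbound rules on the VMM management server could be opened as follows; adjust the ports to match whatever you selected during setup.

    New-NetFirewallRule -DisplayName 'VMM console (WCF)' -Direction Inbound -Protocol TCP -LocalPort 8100 -Action Allow
    New-NetFirewallRule -DisplayName 'VMM WinRM' -Direction Inbound -Protocol TCP -LocalPort 5985,5986 -Action Allow
    New-NetFirewallRule -DisplayName 'VMM BITS transfers' -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow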

VM storage placement

The recommendation is to create one big CSV volume that spreads across multiple disk spindles; this gives great storage performance for VMs, as opposed to creating separate volumes based on the VHD's purpose (for example, OS, data, and logs).

If Storage Spaces Direct is used, it's recommended to make the number of volumes a multiple of the number of servers in your cluster. For example, if you have 4 servers, you will experience more consistent performance with 8 total volumes than with 7 or 9.
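
A minimal sketch for a four-node S2D cluster, creating eight equally sized CSV volumes (two per node); the pool name pattern, volume size, and naming are assumptions.

    # Run on one of the cluster nodes after Enable-ClusterStorageSpacesDirect has completed
    1..8 | ForEach-Object {
        New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName ('CSV{0:D2}' -f $_) `
            -FileSystem CSVFS_ReFS -Size 2TB
    }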

Management cluster

VMM 2016 supports managing up to 1,000 physical hosts and 25,000 VMs. Therefore, the best practice is to have a separate management cluster running the VMM components to manage the production, test, and development clusters.

In addition, although you can virtualize domain controllers with Windows Server 2016, it is not best practice to have all the domain controllers running on the management cluster, as the cluster and the System Center components depend heavily on the domain controllers. If possible, place one or more DCs on physical hosts, or on VMs in a location or fault domain different from the management cluster.

The following figure shows a two-node hyper-converged management cluster, with the System Center 2016 components installed in separate VMs to manage the production cluster. All hosts run Windows Server 2016 with Storage Spaces Direct enabled to provide a hyper-converged solution, which helps maximize server efficiency and reduce overall costs:

Small environment

In a small environment, you can have all the VMM components located on the same server. A small business may or may not put high availability in place, although VMM 2016 is now a critical component of your private cloud deployment.

Start by selecting the VMM server's location, which could be a physical server or a virtual machine.

You can install SQL Server on the VMM server as well, but as VMM 2016 does not support SQL Server Express editions, you will need to install SQL Server first and then proceed with the VMM installation.

If you are managing more than 10 hosts in the production environment, my recommendation would be to have SQL Server running on a separate machine.

It is important to understand that when deploying VMM in production environments (real-world scenarios), the business will require a reliable system that it can trust.

The following figure illustrates a real-world deployment where all VMM 2016 components are installed on the same VM and SQL Server is running on a separate VM.

Note, though, that this deployment won't allow for converged networking if no dedicated network adapter is provided for VMM management.

Lab environments

In a lab environment, I would recommend up to 50 hosts, with SQL Server and all VMM components installed on a single VM. It will work well, but I would not recommend this installation in a production environment.

Alternatively, you can leverage the nested virtualization feature in Windows Server 2016. With nested virtualization, a Hyper-V host can itself be virtualized, so you can build your lab on a single host. Using VMM 2016, you can add a virtualized Hyper-V host to the fabric and manage the VMs running on it. However, full support for nested virtualization (for example, enabling and disabling nested virtualization on a VM through the VMM console) is available only in the VMM 1801 semi-annual channel release.
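
A minimal sketch for preparing a lab VM for nested virtualization on a Windows Server 2016 Hyper-V host; the VM name is an assumption, and the VM must be powered off.

    # Expose the virtualization extensions to the lab VM
    Set-VMProcessor -VMName 'lab-hyperv01' -ExposeVirtualizationExtensions $true

    # Dynamic memory is not supported with nesting; MAC spoofing lets nested VMs reach the network
    Set-VMMemory -VMName 'lab-hyperv01' -DynamicMemoryEnabled $false
    Get-VMNetworkAdapter -VMName 'lab-hyperv01' | Set-VMNetworkAdapter -MacAddressSpoofing On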

Medium and enterprise environments

In a medium-scale or large-scale environment, the best practice is to split the roles across multiple servers or virtual machines. By splitting the components, you can scale out and introduce high availability to the System Center environment.

In the following design, you can see each component and what role it performs in the System Center Virtual Machine Manager environment:

When designing an enterprise private cloud infrastructure, you should take into consideration some key factors such as business requirements, company policies, applications, services, workloads, current hardware, network infrastructure, storage, security, and users.

Private cloud sample infrastructure

The following is a sample of a real-world infrastructure that can support up to 3,000 VMs on 64 server nodes running Windows Server 2016 Hyper-V.

The number of VMs you can run on an implementation like this depends on some key factors, so do not take the following configuration as a blueprint for your deployment, but as a starting point. My recommendation is to start by understanding the environment and then run a capacity planner such as the MAP Toolkit; it will help you gather the information you need to design your private cloud.

I am assuming a ratio of 50 VMs per cluster node, each with 3 GB of RAM and configured to use Dynamic Memory (DM); a quick RAM sanity check follows the list:

  • Servers
    • 64 servers (4 clusters x 16 nodes)
    • Dual processors with 6 cores each: 12 cores in total
    • 192 GB RAM
    • 2 x 146 GB local HDDs (ideally SSDs) in RAID 1
  • Storage
    • Switch and host redundancy
    • Fibre Channel, iSCSI, or S2D (converged)
    • Array with the capacity to support customer workloads
    • Switch with connectivity to all hosts
  • Network
    • A switch with redundancy and sufficient port density and connectivity to all hosts
    • Support for VLAN tagging and trunking
    • NIC teaming and VLANs are recommended for better network availability, security, and performance
  • Storage connectivity
    • If using Fibre Channel: two 4 Gbps HBAs
    • If using iSCSI: two dedicated NICs (10 GbE recommended)
    • If using S2D: two dedicated 10 GbE NICs (RDMA-capable adapters recommended)
  • Network connectivity
    • For 1 GbE connectivity: six dedicated 1 GbE NICs (live migration, CSV, management, and virtual machine traffic)
    • For 10 GbE connectivity: three dedicated 10 GbE NICs (live migration, CSV, management, and virtual machine traffic)
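
The RAM sanity check behind the assumed ratio, so you can adjust it to your own workloads, looks roughly like this:

    # 50 VMs per node at 3 GB each against 192 GB of physical RAM per node
    $vmRamGB   = 50 * 3                 # 150 GB committed to tenant VMs
    $reserveGB = 192 - $vmRamGB         # ~42 GB left for the parent partition, CSV cache, and DM growth
    $totalVMs  = 50 * 64                # 3,200 VMs of capacity across 64 nodes, headroom over the 3,000 target
    "{0} GB reserved per node; capacity for {1} VMs" -f $reserveGB, $totalVMs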
Another way to build a private cloud infrastructure is to use a hyper-converged solution, in which Storage Spaces Direct, Hyper-V, Failover Clustering, and the other components are all configured on the same cluster hosts. In this model, storage and compute resources cannot be scaled separately (adding one more host to an existing cluster extends both compute and storage resources). It also places extra demands on the IT staff, who have to carefully plan any management tasks on the storage and compute subsystems to avoid possible downtime. To avoid these disadvantages, and for larger deployments, I'd recommend a converged solution with separate clusters for SOFS and Hyper-V workloads.

Hosting environments

System Center 2012 SP1 VMM introduced multi-tenancy. This is one of the most important features for hosting companies, as they only need to install a single copy of System Center VMM and can then centralize their customer management, with each customer running in a controlled environment in their own domain. Hosters always want to maximize their compute capacity, but VLANs segment the hardware so that you can't maximize its capacity. Network virtualization moves the isolation up into the software stack, enabling the hoster to maximize all capacity and isolate customers via software-defined networking.
Taking advantage of Windows Server 2012 R2 features, VMM 2012 R2 delivers a site-to-site NVGRE gateway for Hyper-V network virtualization. This capability enables you to use network virtualization to support multiple site-to-site tunnels and direct access through a NAT firewall. Network virtualization (NV) uses the NVGRE protocol, allowing network load balancers to act as NV gateways. In addition, switch extensions can make use of NV policies to interpret the IP information in the packets being sent, enabling communication between, for example, Cisco switches and VMM 2012 R2.

New networking features in VMM 2016

VMM 2016 and Windows Server 2016 continue to improve Hyper-V Network Virtualization (HNV) and help you move to an efficient SDDC solution. VMM 2016 introduces flexible encapsulation, which supports both NVGRE (HNVv1) and the new VXLAN (HNVv2) to create overlay networks in which the original packets from VMs, with their MAC addresses, IP addresses, and other data (the Customer Address network), are placed inside an IP packet on the underlying physical network (the Provider Address network) for transport. VXLAN is the default in VMM 2016 and works in MAC distribution mode. It uses the new Network Controller (NC) as a central management point that communicates with Hyper-V hosts and pushes network policies down to NC host agents running on each host. In short, the NC is responsible for address mapping, and the host agents maintain the mapping database. The NC also integrates with the Software Load Balancer (L3 and L4), the network-layer datacenter firewall, and the RAS gateways that are also included in Windows Server 2016. Consequently, the NC is the heart of SDN in VMM 2016 and should always be deployed in a cluster configuration.

Thanks to nested virtualization in Windows Server 2016 (the ability to run a Hyper-V server inside a VM), you can evaluate SDN and other scenarios using just one physical machine. A good example of an SDN evaluation is available at https://blogs.msdn.microsoft.com/excellentsge/2016/10/06/deploying-sdn-on-one-single-physical-host-using-vmm/.

There is also a new way of deploying converged networking, introduced in Windows Server 2016 and VMM 2016, that eases and improves SDN deployment. Switch-Embedded Teaming (SET) allows you to group up to eight identical adapters into one or more software-based virtual adapters. Prior to VMM 2016, you needed two different sets of adapters: one for traditional teaming and one for RDMA, because of RDMA's incompatibility with teaming and the virtual switch. SET eliminates this limitation and supports RDMA convergence as well as QoS, RSS, VMQ, and both versions of HNV noted earlier. Furthermore, creating a general Hyper-V virtual switch with RDMA NICs is also supported.
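
A minimal SET sketch, assuming two identical RDMA-capable NICs named NIC1 and NIC2; the switch and vNIC names are assumptions.

    # One converged virtual switch over the teamed (SET) physical adapters
    New-VMSwitch -Name 'SETswitch' -NetAdapterName 'NIC1','NIC2' `
        -EnableEmbeddedTeaming $true -AllowManagementOS $false

    # Host vNICs for the converged traffic classes; RDMA is re-enabled on the SMB vNIC
    Add-VMNetworkAdapter -ManagementOS -SwitchName 'SETswitch' -Name 'Management'
    Add-VMNetworkAdapter -ManagementOS -SwitchName 'SETswitch' -Name 'SMB01'
    Enable-NetAdapterRdma -Name 'vEthernet (SMB01)'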

New storage features in VMM 2016

When we discussed possible architectures for management clusters, I referred to a new feature in Windows Server 2016 and VMM 2016: Storage Spaces Direct (S2D). S2D uses industry-standard servers with local storage, which could be direct-attached enclosures or internal disks. S2D provides shared storage pools across cluster nodes by leveraging Cluster Shared Volumes, Storage Spaces, Failover Clustering, and the SMB3 protocol for file access (SOFS). Hyper-converged and converged solutions can now be based on software-defined storage running on Windows Server 2016. So you have a choice: buy an external enterprise SAN or use S2D. If your goal is a software-defined datacenter, the answer is clear: S2D and an SDN implementation. The main competitor to S2D is the well-known VMware Virtual SAN (vSAN), which was first released with vSphere 5.5 and is still present in the newest vSAN 6.6 release. S2D, just like vSAN, has special licensing requirements.

S2D is not available in the Windows Server 2016 Standard edition and requires the most expensive Datacenter edition.

Furthermore, the improved Storage QoS in VMM 2016 provides a way to centrally monitor and manage storage performance for virtual machines residing on S2D or another device. Storage QoS was first introduced in the 2012 R2 release, where you could set maximum and minimum IOPS thresholds for virtual hard disks (excluding shared virtual hard disks). It worked well on standalone Hyper-V hosts, but in a cluster with a lot of virtual machines, or even tenants, it could be complicated to achieve the right QoS for all cluster resources. In 2016, the feature automatically improves storage resource fairness between multiple virtual machines using the same file server cluster. In other words, QoS for storage is distributed between a group of virtual machines and virtual hard disks.
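
A minimal sketch of the policy model, assuming an S2D or SOFS cluster; the policy name, IOPS limits, and VM name are assumptions.

    # On the storage (S2D/SOFS) cluster: a Dedicated policy gives each assigned disk its own limits
    New-StorageQosPolicy -Name 'Gold' -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

    # On a Hyper-V node: attach the policy to a VM's virtual hard disk
    $policy = Get-StorageQosPolicy -Name 'Gold'
    Get-VM -Name 'vm01' | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId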

Another feature available only in the Windows Server 2016 Datacenter edition is Storage Replica (SR). Previously, we needed third-party solutions for SAN-to-SAN replication, and building stretched clusters required a huge amount of money. Windows Server 2016 and VMM 2016 can significantly reduce costs and enhance unification in such scenarios. SR is the main component of multi-site clusters and disaster recovery solutions, supporting both asynchronous and synchronous replication between any storage devices, including Storage Spaces Direct. Also, you are not required to have identical devices on both sides. However, at the time of writing, only synchronous replication is supported in the VMM fabric, and deployment is limited to PowerShell.
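
Since deployment is PowerShell-only at the time of writing, here is a hedged sketch of validating and creating a synchronous server-to-server partnership; the computer names, replication group names, and volume letters are assumptions.

    # Validate the topology first (data volume D:, log volume E: on both sides)
    Test-SRTopology -SourceComputerName 'sr-src01' -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
        -DestinationComputerName 'sr-dst01' -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:' `
        -DurationInMinutes 10 -ResultPath 'C:\Temp'

    # Create the synchronous partnership
    New-SRPartnership -SourceComputerName 'sr-src01' -SourceRGName 'rg01' `
        -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
        -DestinationComputerName 'sr-dst01' -DestinationRGName 'rg02' `
        -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:'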

Undoubtedly, this is not the final list of new features. Since VMM 2016 is compatible with Windows Server 2016, which brings a lot of major and minor updates in Hyper-V, Failover Clustering, and security, these updates are also covered in later chapters. The new features of the VMM 1801 semi-annual channel release will also be briefly covered in the following chapters.

See also

For more information, see the following references:
