When planning a VMM 2012 deployment design, consider the different VMM roles and keep in mind that VMM is part of the Microsoft private cloud solution. If you are planning a private cloud, you will need to integrate VMM with the other System Center family components.
In VMM, you can create hardware, guest operating system, SQL Server, and application profiles to be used in templates for deploying virtual machines. These profiles work much like answer files, configuring the operating system, SQL Server, or application during setup.
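To make this concrete, here is a minimal sketch using the VMM PowerShell module (all names, sizes, and the sysprepped VHDX are placeholders, and the exact set of required parameters can vary between VMM releases):

# Minimal sketch; assumes the VMM console and its PowerShell module
# are installed and connected to the management server
Import-Module virtualmachinemanager

# Hardware profile: 2 vCPUs, Dynamic Memory from 1 GB startup to 4 GB
$hw = New-SCHardwareProfile -Name "HW-2vCPU-4GB" -CPUCount 2 `
    -MemoryMB 1024 -DynamicMemoryEnabled $true -DynamicMemoryMaximumMB 4096

# Guest OS profile: acts like an answer file for the guest OS settings
$os = New-SCGuestOSProfile -Name "OS-WS2012R2" -ComputerName "VM##"

# Combine both with a sysprepped VHDX from the library into a VM template
$vhd = Get-SCVirtualHardDisk -Name "WS2012R2-Sysprep.vhdx"
New-SCVMTemplate -Name "Tmpl-WS2012R2" -HardwareProfile $hw `
    -GuestOSProfile $os -VirtualHardDisk $vhd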
For a complete design solution, there are more items you need to consider.
Storage providers – SMI-S and SMP
VMM provides support for both block-level storage (Fibre Channel, iSCSI, and Serial Attached SCSI (SAS) connections) and file storage (SMB 3.0 network shares residing on a Windows file server or on a NAS device).
By using storage providers, VMM enables the discovery, provisioning, classification, allocation, and decommissioning of storage.
Storage classification enables you to assign user-defined classifications to discovered storage pools, for example, for Quality of Service (QoS) or chargeback purposes.
Tip
You can, for example, assign a classification of GOLD to storage pools that have the highest performance and availability, SILVER for high performance, and BRONZE for low performance.
In order to use this feature, you will need the SMI-S provider.
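Once the provider has been added and VMM has discovered the storage pools, a classification can be created and assigned with the VMM cmdlets; a small sketch follows (the classification and pool names are examples):

# Create a user-defined storage classification (names are examples)
$gold = New-SCStorageClassification -Name "GOLD" `
    -Description "Highest performance and availability"

# Assign the classification to a discovered storage pool
$pool = Get-SCStoragePool -Name "Pool01"
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold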
VMM 2012 R2 can discover and communicate with Storage Area Network (SAN) arrays through a Storage Management Initiative Specification (SMI-S) provider or a Storage Management Provider (SMP) provider.
If your storage is SMI-S compatible, you must install the vendor's storage provider on a separate, available server (do not install it on the VMM management server) and then add the provider to VMM. If your storage is SMP compatible, no provider installation is required.
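As an illustration, adding an SMI-S (CIM-XML) provider could look like the following sketch (the server name, port, and Run As account are placeholders, and switch names such as -AddSmisCimXml may vary slightly between VMM releases):

# Run As account holding the credentials for the SMI-S provider
$runAs = Get-SCRunAsAccount -Name "SMI-S Provider Account"

# Add the SMI-S (CIM-XML) provider; 5989 is the usual CIM-XML HTTPS port
Add-SCStorageProvider -Name "SAN01-Provider" -RunAsAccount $runAs `
    -NetworkDeviceName "https://smis01.contoso.com" -TCPPort 5989 -AddSmisCimXml

# Refresh the provider so VMM discovers the arrays behind it
$provider = Get-SCStorageProvider -Name "SAN01-Provider"
Read-SCStorageProvider -StorageProvider $provider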
Tip
Each vendor has its own SMI-S setup process. My recommendation is that you contact the storage vendor to ask for a storage provider compatible with VMM 2012 R2.
VMM uses CIM-XML to communicate with the underlying SMI-S providers; VMM never communicates with the SAN arrays itself.
By using the storage provider to integrate with the storage, VMM can create LUNs (both GPT and MBR) and assign storage to hosts or clusters.
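For example, creating a LUN from a discovered pool and registering it to a host might look like this sketch (pool, size, and host names are placeholders):

# Create a 500 GB LUN in a discovered storage pool
$pool = Get-SCStoragePool -Name "Pool01"
$lun = New-SCStorageLogicalUnit -StoragePool $pool `
    -Name "LUN-VMStore01" -DiskSizeMB 512000

# Register (unmask) the new LUN to a Hyper-V host
$vmHost = Get-SCVMHost -ComputerName "hyperv01.contoso.com"
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHost $vmHost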
Note
Do not install storage providers on the VMM management server; the only supported exceptions are the WMI SMP providers from Dell EqualLogic and Nexsan.
VMM 2012 also supports the SAN snapshot and clone feature, allowing you to duplicate a Logical Unit Number (LUN) through a SAN copy-capable template in order to rapidly provision new VMs, as long as they are hosted on the Hyper-V platform. For VMs hosted on VMware or Citrix hosts, you will need to provision the LUNs outside of VMM.
Another capability of VMM 2012 is bare metal deployment: VMM can identify the hardware, install the operating system (OS), enable the Hyper-V role, and add the machine to a target host group, all in a streamlined, automated process.
Note
You can now also deploy bare metal file servers (and file server clusters), a capability that is new to System Center 2012 R2.
The PXE capability is required and is an integral component of the server pool. The target server will need to have a Baseboard Management Controller (BMC) that supports one of the following management protocols:
- Data Center Management Interface (DCMI) 1.0
- Systems Management Architecture for Server Hardware (SMASH) 1.0
- Intelligent Platform Management Interface (IPMI) 1.5 or 2.0
- HP Integrated Lights-Out (iLO) 2.0
Enterprise and hosting companies will benefit from the ability to provision new Hyper-V servers without having to install the operating system manually on each machine. By using the BMC and integrating with Windows Deployment Services (WDS), VMM deploys the OS to the designated hosts through the boot from VHD(X) feature.
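A hedged outline of this flow with the VMM cmdlets follows (the BMC address, Run As account, host profile, and host group are all placeholders; in VMM 2012 R2, host profiles are being superseded by physical computer profiles, so check the cmdlets available in your console):

# Run As account with rights to the BMC (out-of-band management)
$bmcRunAs = Get-SCRunAsAccount -Name "BMC Admin"

# Discover the physical server through its BMC using IPMI
Find-SCComputer -BMCAddress "10.0.0.50" `
    -BMCRunAsAccount $bmcRunAs -BMCProtocol "IPMI"

# Deploy the OS via a host profile and add the host to a host group
$hostProfile = Get-SCVMHostProfile -Name "HyperV-HostProfile"
$group = Get-SCVMHostGroup -Name "Production"
New-SCVMHost -ComputerName "hyperv02" -VMHostProfile $hostProfile `
    -VMHostGroup $group -BMCAddress "10.0.0.50" `
    -BMCRunAsAccount $bmcRunAs -BMCProtocol "IPMI"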
To ensure that users can perform only assigned actions on selected resources, create tenants, self-service users, delegated administrators, and read-only administrators in VMM using the VMM console. You will need to create Run As accounts to provide necessary credentials for performing operations in VMM (for example, adding hosts).
Run As accounts are a very useful addition to enterprise environments. They store credentials that allow you to delegate tasks to other administrators and self-service users without exposing sensitive credentials.
Note
By using the Windows Data Protection API (DPAPI), VMM provides OS-level data protection when storing and retrieving Run As account credentials.
There are several different categories of Run As accounts, which are listed as follows:
- Host computer: This is used to provide access to Hyper-V, VMware ESX, and Citrix XenServer hosts
- BMC: This is used to communicate with BMC on the host computer for out-of-band management
- Network device: This is used to connect to network load balancers
- Profile: This is used for service creation in OS and application profiles, as well as in SQL Server and host profiles
- External: This is to be used for external systems such as System Center Operations Manager
Only administrators or delegated administrators can create and manage Run As accounts.
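Creating one takes only a couple of commands; for example (the account name and description are placeholders):

# Prompt for the credentials to be stored; VMM encrypts them at rest
$cred = Get-Credential

# Create a Run As account, for example, for adding Hyper-V hosts
New-SCRunAsAccount -Name "Host Management" -Credential $cred `
    -Description "Domain account used to add and manage Hyper-V hosts"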
Note
During the installation of the VMM management server, you will be asked to use Distributed Key Management (DKM) to store encryption keys in Active Directory Domain Services (AD DS).
Port communications and protocols for firewall configuration
When designing the VMM implementation, you need to plan which ports you are going to use for communication and file transfers between VMM components. Based on the chosen ports, you will also need to configure your host and external firewalls. Refer to the Configuring ports and protocols on the host firewall for each VMM component recipe in Chapter 3, Installing VMM 2012 R2.
Note
Not all of the ports can be changed through VMM. Hosts and library servers must have access to the VMM management server on the ports specified during setup. This means that all firewalls, whether software-based or hardware-based, must be configured beforehand.
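For reference, the following sketch opens the common VMM 2012 R2 defaults on a Windows host firewall (adjust the ports if you changed them during setup; the rule names are examples):

# Common VMM defaults: 8100 (console to management server),
# 5985 (WinRM agent communication), and 443 (BITS file transfers)
New-NetFirewallRule -DisplayName "VMM Console (TCP 8100)" `
    -Direction Inbound -Protocol TCP -LocalPort 8100 -Action Allow
New-NetFirewallRule -DisplayName "VMM WinRM (TCP 5985)" `
    -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow
New-NetFirewallRule -DisplayName "VMM BITS (TCP 443)" `
    -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow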
My recommendation is to create a large CSV volume spread across multiple disk spindles rather than creating volumes based on each VHD's purpose (for example, OS, data, and logs), as the aggregated spindles will give better storage performance for the VMs.
The best practice is to have a separate management cluster to manage the production, test, and development clusters.
In addition, although you can virtualize domain controllers with Windows Server 2012, it is not a best practice to have all the domain controllers running on the management cluster, as the cluster and the System Center components depend heavily on the domain controllers.
The following figure shows you a two-node management cluster with System Center 2012 and SQL Server cluster installed in separate VMs to manage the production cluster:
In a small environment, you can have all the VMM components located on the same server. A small business may not have High Availability in place, but keep in mind that VMM 2012 is now a critical component of your private cloud deployment.
Start by selecting the VMM server's location, which could be a physical server or a virtual machine.
You can install SQL Server on the VMM server as well; however, as VMM 2012 does not support SQL Server Express editions, you will need to install a full SQL Server edition first and then proceed with the VMM installation.
If you are managing more than 10 hosts in the production environment, my recommendation would be that you have SQL Server running on a separate machine.
It is important to understand that when deploying VMM in production environments (real-world scenarios), the business will require a reliable system that it can trust.
The following figure illustrates a real-world deployment where all VMM 2012 components are installed on the same VM and SQL is running on a separate VM:
Tip
This deployment won't allow for a converged network unless a dedicated network adapter is provided for VMM management.
In a lab environment, a single VM with SQL Server and all VMM components installed will work well for up to 50 hosts, but I would not recommend this installation in a production environment.
Medium and enterprise environments
In a medium- or large-scale environment, the best practice is to split the roles across multiple servers or virtual machines. By splitting the components, you can scale out and introduce High Availability to the System Center environment.
In the following design, you can see each component and what role it performs in the System Center Virtual Machine Manager environment:
When designing an enterprise private cloud infrastructure, you should take into consideration some key factors such as business requirements, company policies, applications, services, workloads, current hardware, network infrastructure, storage, security, and users.
Private cloud sample infrastructure
The following is a sample of a real-world infrastructure that can support up to 3,000 VMs on 64 server nodes running Windows Server 2012 R2 Hyper-V.
The number of VMs you can run on an implementation such as this one depends on some key factors. Do not take the following configuration as a mirror for your deployment but as a starting point. My recommendation is that you start by understanding the environment and then run a capacity planner such as the Microsoft Assessment and Planning (MAP) Toolkit. It will help you gather the information you need to design your private cloud.
I am assuming a ratio of 50 VMs per cluster node, each with 3 GB of RAM configured to use Dynamic Memory (DM).
- Servers
- 64 servers (4 clusters × 16 nodes)
- Dual 6-core processors (12 cores in total)
- 192 GB RAM
- 2 × 146 GB local HDDs (ideally SSDs) in RAID 1
- Storage (with switch and host redundancy)
- Fibre Channel or iSCSI connectivity
- An array with the capacity to support customer workloads
- A switch with connectivity for all hosts
- Network
A switch that provides switch redundancy and sufficient port density, with connectivity to all hosts and support for VLAN tagging and trunking. NIC teaming and VLANs are recommended for better network availability, security, and performance.
- Storage connectivity
- If using Fibre Channel: 2 × 4 Gb HBAs
- If using iSCSI: 2 dedicated NICs (10 GbE recommended)
- Network connectivity
- For 1 GbE connectivity: 6 dedicated 1 GbE NICs (live migration, CSV, management, and virtual machine traffic)
- For 10 GbE connectivity: 3 dedicated 10 GbE NICs (live migration, CSV, management, and virtual machine traffic)
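To sanity-check the memory math behind these assumptions, here is a small sketch (the failover reserve of one node per cluster is my own reading of how the 3,000 VM figure is reached, not a vendor statement):

# Sample capacity check for the configuration above
$vmsPerNode = 50
$gbPerVM    = 3
$nodeRamGB  = 192
$vmRamGB    = $vmsPerNode * $gbPerVM    # 150 GB committed to VMs
$headroom   = $nodeRamGB - $vmRamGB     # ~42 GB left for the host OS and DM growth
"{0} GB for VMs, {1} GB headroom per node" -f $vmRamGB, $headroom

# 64 nodes x 50 VMs = 3,200 raw slots; reserving one node per 16-node
# cluster for failover leaves 60 active nodes, or about 3,000 VMs
$activeNodes = 64 - 4
$activeNodes * $vmsPerNode              # 3000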
System Center 2012 SP1 VMM introduced multi-tenancy. This is one of the most important features for hosting companies, as they only need to install a single copy of System Center VMM and can then centralize their customer management, with each customer running in a controlled environment in its own domain. Hosting companies want to maximize their compute capacity, and VLANs segmented on hardware won't allow for that. Network virtualization moves the isolation up into the software stack, enabling the hoster to use all of the capacity and isolate customers via software-defined networking.
New networking features in VMM 2012 R2
VMM 2012 R2 brings a new networking feature: network virtualization. Taking advantage of Windows Server 2012 R2's new features, VMM now delivers a site-to-site NVGRE gateway for Hyper-V network virtualization. This new capability enables you to use network virtualization to support multiple site-to-site tunnels and direct access through a NAT firewall. Network Virtualization (NV) now uses the NVGRE protocol, allowing network load balancers to act as NV gateways. In addition, switch extensions can make use of NV policies to interpret the IP information in the packets being sent, and the communication between Cisco switches and VMM has been expanded to support Hyper-V NV.
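To give a flavor of how this is wired up in VMM, here is a hedged sketch (the logical network and VM network names are placeholders, and parameter names such as -UseGRE may vary slightly between releases):

# Logical network with Hyper-V Network Virtualization (NVGRE) enabled
$ln = New-SCLogicalNetwork -Name "Tenants" `
    -EnableNetworkVirtualization $true -UseGRE $true

# Each tenant gets an isolated VM network on the same physical fabric
New-SCVMNetwork -Name "TenantA-Net" -LogicalNetwork $ln `
    -IsolationType "WindowsNetworkVirtualization"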