Microsoft Hyper-V Cluster Design

Chapter 1. Hyper-V Cluster Orientation

Microsoft Hyper-V Server 2012 or its R2 successor in a Microsoft Failover Cluster configuration is one of the most powerful infrastructure tools available for system administrators. It provides an inexpensive solution that combines all the benefits of virtualization with the resiliency and resource-spreading capabilities of clustering. While the technologies can provide substantial advantages, designing and implementing a Hyper-V Server failover cluster is not a trivial undertaking.

Before you can begin designing your cluster, it's important to orient yourself to the scope of the task that you're committing to. It is imperative that you master the fundamentals of the technologies involved in a Hyper-V Server cluster. You must also thoroughly understand the problem that you are using a Hyper-V Server cluster to solve. Based on that problem, you must define a clear set of goals that such a system must achieve in order to serve as a proper solution. You will then design a cluster that can meet those goals. After that, you can build, test, and deploy your Hyper-V cluster.

By the end of this chapter, you will have learned about:

  • The proper terminology related to Hyper-V Server
  • The specific meaning of clustering in a Microsoft environment
  • How to begin a cluster project planning document
  • The options available within a Hyper-V Server cluster
  • The physical components that are necessary for a Hyper-V Server cluster
  • The knowledge you'll need to begin designing your Hyper-V Server cluster

Terminology

Due to an overlap of terms and some misperceptions about the ways that the Hyper-V Server product is delivered, terminology is a common sticking point even for people who have been working with the technology for some time. The following table provides accurate but short definitions for the most commonly misunderstood terms in relation to Hyper-V Server and clusters. These definitions will be expanded upon in detail throughout the course of this book, so don't worry if they are confusing at first:

Hypervisor: An operating system that manages other operating systems. The primary responsibility of a typical operating system is to allocate and manage resources for applications; in the context of a hypervisor, the "applications" are guest operating systems.

Microsoft Hyper-V Server: A full-featured hypervisor from Microsoft. It is available as a standalone, no-charge product that includes a reduced-functionality image of Windows Server for the purpose of managing the hypervisor, and also as a role within Microsoft Windows Server.

Server Core: A specific installation method of Windows Server from version 2008 onward. This mode does not include a graphical interface, but it is a fully licensed copy of Windows Server. Hyper-V is available as a role within Server Core.

Host: A physical computer with a hypervisor installed.

Guest: Another term for virtual machine, although it is commonly used to refer to the operating system within a virtual machine.

Management Operating System: The operating system that is allowed to control the hypervisor installation. In any Hyper-V Server installation, Hyper-V Server is always the hypervisor and Windows Server is always the management operating system. This is also sometimes called the parent partition or host operating system, although those terms are falling out of favor.

Cluster: In the context of Hyper-V Server, hosts joined together using Microsoft Failover Clustering to provide Hyper-V Server services.

Node: A single physical computer that is a member of a cluster.

Live Migration: A Hyper-V-specific implementation of relocating a running virtual machine from one node to another without detectable downtime for the services on that virtual machine or for external consumers of those services.

Quick Migration: A Hyper-V-specific implementation of relocating a virtual machine from one node to another by gracefully stopping the virtual machine in some fashion and starting it again at the destination.

Saved State: A condition in which a virtual machine's operations have been paused, the contents of its system memory have been copied to a disk file, and the virtual machine has been placed in a non-running condition.

Note

The term Hyper-V Core and its variants should not be used. Hyper-V Server is one product and Server Core is a particular mode for the Windows Server product. Combining their labels leads to confusion and should be avoided.

Clustering in a Microsoft environment

In computing, the generic term clustering refers to any method of grouping multiple computers together to provide a particular service, commonly to introduce high availability and/or distribution of resources. For the purposes of this book, clustering leverages multiple physical computers to provide a hosting service for virtual machines. All of this is transparent to the consumers, both technological and human; the machines themselves and the clients that rely on them operate as though the cluster and virtualization components were non-existent. Users employ the exposed services no differently than they would if the services were installed directly on a traditional physical deployment. An example of a user accessing a website hosted on a virtual machine is shown in the following image:

Clustering in a Microsoft environment

If you're coming to Hyper-V Server clustering with experience in another hypervisor technology, there are substantial differences right from the start. Chief among these is that a Hyper-V Server cluster is composed of two major technologies: Hyper-V Server itself and Microsoft Failover Clustering. There is significant interplay and cooperation between the two, but they are distinct. This duality can confuse newcomers and lead unaware users to draw false conclusions and fall into traps based on incorrect assumptions.

Microsoft clusters are always considered failover clusters. A single virtual machine does not coexist across cluster nodes. All of the resources belonging to any given virtual machine are contained in or accessed through only one node at a time. The cluster system handles system failures by automatically moving—failing over—virtual machines from an ailing node to others that are still running. This does not necessarily mean that other cluster nodes are idle; they can run other virtual machines.

The basic process by which Microsoft Failover Clustering operates is somewhat node-centric. Each node is responsible for three basic resource types: roles, storage, and networks. A clustered role is a service being presented and protected by the cluster. Each virtual machine (and accompanying resources) is considered a role. A virtual machine and its details must be stored in a location common to all nodes; each node is responsible for maintaining connectivity to that storage. Finally, each node must have access to the same networks as the other nodes.

Because of the failover nature of the cluster, roles and storage have owners. At any given point in time, one node is responsible for each individual instance of these resource types. A virtual machine's owner is the physical host it is currently running on (or, if the virtual machine is offline, the host that would be responsible for starting it). A storage location's owner is the physical host that is currently responsible for I/O to that location. A special storage type that will be discussed in much more detail later is the Cluster Shared Volume (CSV). Multiple nodes can communicate with a CSV simultaneously; however, it is still owned by only one node at a time (called the coordinator node). Networks do not have owners.

A failure does not always automatically result in a failover event. If a node has difficulty accessing storage or networks, there are various mitigation strategies it can take. If connectivity to a CSV is lost, it can reroute I/O through the coordinator node. If it is the coordinator node, it can transfer ownership to another node that can still reach the CSV. If a node loses connectivity on a cluster network but can still use others, it may be able to use those for cluster-related traffic.
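
The mitigation decisions described above can be sketched as a toy Python model. This is purely illustrative: the node names, return values, and the `storage_link_up` map are inventions for the sketch, not part of the Failover Clustering API.

```python
def csv_io_path(node, coordinator, storage_link_up):
    """Return how `node` should reach a Cluster Shared Volume.

    storage_link_up maps each node name to whether that node's own
    direct connection to the shared storage is still working.
    """
    if storage_link_up[node]:
        return "direct"
    if node != coordinator:
        # A non-coordinator that loses its storage path reroutes its
        # I/O over the cluster network through the coordinator node.
        return "redirected"
    # The coordinator itself lost storage: ownership of the CSV should
    # move to a node that can still reach it.
    return "transfer-ownership"
```

For example, a node with a healthy storage link does direct I/O; a node that loses its link while another node coordinates the CSV falls back to redirected I/O.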

If a failure that requires a node to stop participating in the cluster does occur, there are a few things that happen. First, if the node is still functional but detects a problem, it determines whether or not it can continue participating in the cluster. The primary failure that triggers this condition is loss of communications with the other nodes. If a node can no longer communicate with enough other nodes to maintain quorum (a concept that will be thoroughly discussed in Chapter 11, High Availability), it attempts to gracefully shut down its virtual machines so that their files can be accessed by other nodes. Ordinarily, quorum is achieved by 50 percent of the nodes plus one tiebreaker being active. The nodes that still have quorum may not be aware of why the node is missing, but they will notice that it is no longer reachable. They will begin attempting to start virtual machines from missing nodes almost immediately upon loss of connectivity. If there is no way for sufficient nodes to form a quorum, the entire cluster will stop and all clustered virtual machines will shut down.
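
The node-majority arithmetic can be sketched in a few lines of Python. This is a simplified model of the quorum concept only; real quorum configuration (covered in Chapter 11) involves witness types and other features not shown here.

```python
def has_quorum(online_votes, total_votes):
    """A cluster keeps quorum while more than half of all configured
    votes (one per node, plus an optional witness) remain online."""
    return online_votes > total_votes // 2
```

A four-node cluster with a witness has five votes, so it survives the loss of two nodes (`has_quorum(3, 5)` is true), while a four-node cluster with no witness that splits evenly loses quorum (`has_quorum(2, 4)` is false).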

Create a project document

Before you jump into the technology, build a document to outline your project. Even in a small environment, there is great benefit in using a project planning document. Otherwise, your cluster's initial build and growth may be organic in nature, meaning that it will grow and change in response to immediate needs and concerns rather than following a predictable path. Such a cluster may not be appropriate for the loads you'd like it to handle. A planning document can be a simple free-form text sheet that you create in Notepad and use like a scratch pad, or it can be a formally defined organizational document built in Microsoft Word. Among other uses, this document gives the project a focus, so that as you work, you can more easily stay true to the initial vision.

If you don't have a formal process in place for design documents, a suggested structure is three parts: an Overview section, a Purposes section, and a Goals section. The Overview should contain a very brief explanation of what Hyper-V and Failover Clustering are and what your organization can expect to achieve by implementing them. This portion can help you clarify the involved technologies for yourself and others. The Purposes section indicates the specific reasons that your organization is undertaking the project. The Goals section delineates the ways that the project is expected to meet those purposes. In a smaller environment, combining these two sections may be preferable. Unless a strict formal organizational structure for the document precludes it, it's also a good idea to include a Notes section. You can use that section to track ideas and links for subjects you discover as the project progresses but cannot immediately investigate. If your organization's formatting policy doesn't allow these notes directly in the master planning document, create your own document and keep it in close proximity to the formal work.
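
Pulled together, the suggested sections might look like the following skeleton. The wording here is illustrative only, not a required template:

```
Hyper-V Server Cluster Project Plan
-----------------------------------
1. Overview  - what Hyper-V and Failover Clustering are; expected benefits
2. Purposes  - why this organization is undertaking the project
3. Goals     - how the project is expected to meet those purposes
4. Notes     - ideas and links to follow up on as the project progresses
```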

Purposes for a Hyper-V Server cluster

There are several common reasons to group Hyper-V Server hosts into a cluster, but each situation is unique. Your particular purpose(s) will be the primary determinant in the goals you set for your cluster project. The following subsections will talk about the common purposes for building a cluster. Those that you include in your document should be specific to your environment. In a planning document, generic topics belong in the overview portion.

As you consider the technologies and solutions available in Hyper-V Server and Failover Clustering, remember that you are not required to use all of them. It's easy to get caught up in the flash and glamor of exciting possibilities and design a system that solves problems your organization doesn't actually have, usually with equally unrealistic price tags and time demands. This practice, infamously known as over-architecting, is a non-trivial concern. Whatever you build must be maintained in perpetuity, so don't saddle yourself, your co-workers, and your organization with complexity without a clear and demonstrable need.

High availability

One of the most common reasons to build a Hyper-V Server cluster is to provide high availability to virtual machines. High availability is a term that is often misunderstood, so ensure that you take the time to truly understand what it means and make sure you can explain it to anyone else who will be a stakeholder or otherwise involved in your cluster project. High availability is often confused with fault tolerance. A truly fault tolerant solution can handle the failure of any single component without perceptible downtime to the consumers of the service it is providing. Hyper-V Server has no built-in method to make the product completely fault tolerant. It is possible to leverage supporting technologies to provide fault tolerance at most levels, and it is possible to use Hyper-V Server as a component to provide fault tolerance for other scale-out technologies, but Hyper-V Server alone is not a fault tolerant solution.

In contrast to fault tolerance, Hyper-V Server provides high availability. This term means a few things. First, to directly compare it to fault tolerance, it provides for very rapid recovery after a major fault. Second, it grants the ability for planned moves of services without perceptible downtime for those services. The primary reason for this is planned maintenance of underlying hardware and supporting software. Of course, it can also be leveraged when a fault occurs but the system is able to continue functioning, such as when an internal drive in a RAID 1 array fails.

Note

Hyper-V's high availability features do not grant virtual machines immunity against downtime. Approaches that may provide application-level immunity will be covered in Chapter 11, High Availability.

The most important distinction between fault tolerance and high availability in Hyper-V Server is that if a failure causes a Hyper-V Server host computer to fail without warning, such as a blue screen error, all of its virtual machines will crash. The Failover Cluster component will immediately begin bringing the failed virtual machines back online on surviving cluster nodes. Those virtual machines will act the same way a physical installation would if its power had been removed without warning.

The following image is a visualization of Hyper-V Server in a cluster layered on fault tolerant subsystems. Despite the fact that the cluster's constituent components have suffered a number of failures, the virtual machines are still running (although likely with reduced performance):

High availability

The subject of high availability will be explored more thoroughly later.

High Availability Printing

In Windows Server versions prior to 2012, you could create clusters specifically for the Windows print spooler service. While functional, this solution was never particularly stable. It was quite complicated and required a significant amount of hands-on maintenance. Print drivers provided by the hardware manufacturer needed to be specifically designed to support clustering, certain uses required administrative scripting, and problems were difficult to isolate and solve. In Windows Server 2012, Microsoft now defines High Availability Printing as a print spooler service running on a highly available virtual machine. You can no longer establish the print spooler itself as a clustered resource.

Balancing resources

The second most common reason to employ a Hyper-V Server cluster is to distribute resources across multiple physical computers. When all of the nodes in a Hyper-V Server cluster are operational, the combined physical resources of every node are at your disposal. Even though the involved technology specifically mentions the word failover, it is possible to design the system in such a fashion that not all hosted resources can be successfully failed over. Virtual machines can be configured to prioritize the way they'll supersede each other when there is contention for limited resources.

When designing your cluster for resource balancing, there are two extremes. For complete high availability, you must have enough nodes to run all virtual machines on the smallest number of nodes that constitute a majority. For the highest degree of resource distribution, you must maximize the utilization of each node. In most cases, you'll gauge and select an acceptable middle ground between these two extremes. Your chosen philosophy should appear in the Goals section of your planning document. It's also wise to plan for additional virtual machines beyond those that will exist at initial deployment.
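
The "smallest majority" extreme can be turned into a rough capacity check, sketched here in Python. The memory-only model and the function names are assumptions made for illustration; real placement decisions also weigh CPU, storage, and network.

```python
def smallest_majority(node_count):
    # The fewest nodes that still constitute a majority of the cluster.
    return node_count // 2 + 1

def fits_after_failures(node_count, memory_per_node_gb, vm_memory_gb):
    """True if the total VM memory demand still fits when the cluster
    shrinks to its smallest possible majority of nodes."""
    surviving = smallest_majority(node_count)
    return sum(vm_memory_gb) <= surviving * memory_per_node_gb
```

For a four-node cluster of 64 GB hosts, the smallest majority is three nodes (192 GB): five 32 GB virtual machines (160 GB) still fit, but seven (224 GB) do not.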

Geographic dispersion

With the increased availability of high speed public networking solutions across geographically dispersed regions, implementations of multi-site clusters are becoming more feasible. With a multi-site cluster, you can provide high availability even in the event of the loss of an entire physical site. These types of solutions are still relatively young and uncommon. Hyper-V Server does require a substantial amount of expensive supporting technology to make this possible, so ensure that you know all the requirements prior to trying to create such a system. These requirements will be discussed in greater depth in Chapter 9, Special Cases.

Natural replacement for aging infrastructure

Traditionally, organizations will purchase server hardware and software on an as-needed basis and keep it until it can no longer serve the purpose for which it was designed. A Hyper-V Server cluster is a natural place for their replacements to be created. Instead of provisioning new hardware to replace old equipment on a one-to-one basis, new hardware is only purchased when the capacity of an existing cluster is no longer sufficient.

Not only does employing a Hyper-V Server cluster slip nicely into the current hardware replacement stream, it can also completely reshape the way hardware refreshes are handled. By decoupling hardware upgrades and replacements from software roles, an organization can upgrade software without waiting for a hardware refresh cycle. Freed from software dependencies, the hardware can be replaced on any schedule the organization desires; in some cases, with careful planning, hardware can be upgraded without impacting hosted services at all. Even when a service impact is unavoidable, it is still likely to be substantially less intrusive than a normal physical-to-physical transition.

Test, development, and training systems

One of the defining features of a virtualized environment is isolation. Virtual machines are very effectively walled off from each other and from the rest of your network unless you intentionally go through the steps to connect them. The ease of deploying new systems and destroying them once they've outlived their purpose is another key characteristic. Taken together, these traits facilitate a variety of uses that would be far more difficult in a physical environment. Of course, Hyper-V Server can provide this type of environment without a cluster, and for some organizations, one or more of these roles are significant enough to be just as demanding as production. For organizations at the opposite end, who could never justify an entire cluster for such a purpose, the spare capacity present in most clusters will almost certainly provide enough room for a small test environment.

If you've never been in a position to be able to consider these uses before, you can create environments to test trial software releases without placing them on production systems. You can examine a suspicious application in an ephemeral sandbox environment where it can do no lasting harm. You can also duplicate a production system to safely test a software upgrade. You can even copy or emulate an end-user computer to train new users on a line-of-business application. Since all of these systems are as isolated from your live systems as you make them, the benefits provided by a testing environment and the ease with which a virtualization system can deliver it make this more of a strong point than might be obvious at first.

Cloud hosting

A term that has grown more rapidly in popularity than in comprehensibility is cloud. This term has so many unique definitions that they could be collected into a cloud of their own. With the 2012 release of server software products, Microsoft is pushing forward with the term and seems to be attempting to satisfy as many of the definitions as it can. One of the core technologies that they are pushing as a major component of their "cloud" solution is Hyper-V Server, especially when used in conjunction with Failover Clustering. Narrowing the scope of cloud to Hyper-V Server and Failover Clustering, what it means is that you can design an environment in which you can quickly create and destroy complete operating system environments as needed without being concerned with the underlying support structure. In order to create a true cloud environment using Microsoft technologies, you must also use System Center Virtual Machine Manager 2012 (SCVMM) with Service Pack 1 for a Hyper-V Server 2012 deployment or SCVMM 2012 R2 for a Hyper-V Server 2012 R2 deployment. With this tool, you'll be able to create these virtual machines without even being involved in which cluster node they begin life on. This nebulous provisioning of resources in an on-demand fashion and conceptually loose coupling of software and hardware resources is what qualifies Hyper-V Server as a component of a cloud solution.

Another aspect that allows Hyper-V Server to be considered a cloud solution is its ability to mix hardware in the cluster. As a general rule, this is not a recommended approach. You should strive to use the same hardware and software levels on every host in your cluster to ensure compatibility and smooth transitions of virtual machines. However, an organically growing cluster that is intended to function as a cloud environment can mix equipment if necessary. It is not possible to perform Live Migrations of virtual machines between physical hosts that do not have CPUs from the same manufacturer. Migrations between hosts that have CPUs that are from the same vendor but are otherwise mismatched may also present challenges to seamless migration. If your goals and requirements stipulate that extra computing resources be made available and some possible downtime is acceptable for a virtual machine that is being migrated, heterogeneous cluster configurations are both possible and useful.

There are two major strategies for using Hyper-V Server to provide a cloud solution: public clouds and private clouds. You can create or expand your own hosting service that sells computing resources, software service availability, and storage space to end users outside your organization. You can provide a generic service that allows end users to exploit the available system as they see fit, or you can attempt to provide a niche service with one or more specialized pre-built environments deployed from templates. The more common usage of a Hyper-V Server cloud, however, will be for private consumption of resources. Either usage gives you the ability to track who is using the available resources, which will be discussed in the following section.

Resource metering

A common need in hosted environments is the ability to meter resources. This should not be confused with measuring performance. The purpose of resource metering is to determine and track who is using which resources. This is commonly of importance in pay-as-you-go hosting models in which customers are only billed for what they actually use. However, resource metering has value even in private deployments. In a purely physical environment, it's not uncommon for individual departments to be responsible for paying for the hardware and software that is specific to their needs. One of the initial resistances to virtualization was the loss of the ability to determine resource utilization. Specifically in a Hyper-V Server cluster where the guest machines can travel between physical units at any time and share resources with other guests, it's no longer a simple matter of having a department pay for a physical unit. It also may not fit well with the organization's accounting practices to have just a single fund devoted to providing server hardware resources regardless of usage. Resource metering is an answer to that problem; usage can be tracked in whatever way the organization needs. These results can be periodically recorded and the individual departments or users can be matched to the quantity of resources that they consumed. This enables a practice commonly known as chargeback, in which costs can be precisely assigned to the consumer.

Hyper-V Server allows for metering of CPU usage, memory usage, disk space consumption, and network traffic. Third-party application vendors also provide extensions and enhancements to the basic metering package.
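
The chargeback idea can be illustrated with a small Python sketch. The department names, resource labels, and rates below are invented for the example; Hyper-V's actual metering data is gathered through its own management tooling, and this sketch only models the pricing step.

```python
from collections import defaultdict

def chargeback(samples, rates):
    """Price metered usage per department.

    samples: iterable of (department, resource, amount) tuples
    rates:   cost per unit for each metered resource
    """
    totals = defaultdict(float)
    for department, resource, amount in samples:
        totals[department] += amount * rates[resource]
    return dict(totals)
```

For example, pricing CPU hours at 2.0 and disk gigabytes at 0.5, a department that consumed 10 CPU hours and 100 GB of disk would be billed 70.0, matching the practice of assigning costs precisely to the consumer.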

VDI and RemoteFX

Virtual Desktop Infrastructure (VDI) is a generic term that encompasses the various ways that desktop operating systems (such as Windows 8) are virtualized and made accessible to end-users in some fashion. VDI in Hyper-V Server is enhanced by the features of RemoteFX. This technology was introduced in the 2008 R2 version and provided superior video services to virtual desktops. RemoteFX was greatly expanded in Hyper-V Server 2012, especially when combined with Remote Desktop Services. A full discussion of these technologies is not included in this book, but if you intend to use them, they and their requirements must form a critical part of your planning and design. The hardware requirements and configuration steps are well-documented in a TechNet wiki article viewable at:

http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx

Be open to other purposes

The preceding sections outlined some of the most common reasons to build a Hyper-V Server cluster, but the list is by no means all-inclusive. Skim through the remainder of this book for additional ideas. Look through community forums for ways that others are leveraging Hyper-V Server clusters to address their issues.

Goals for a Hyper-V Server cluster

Once you have outlined the reasons to build a Hyper-V Server cluster, the next step is to identify how your organization intends to benefit from the technology. Before you can fully flesh this portion out, you need to identify how the technology can and cannot be applied to your specific environment. The driving factor behind the work that builds this section is ensuring that expectations of the system are realistic. This is an exploratory process that includes a substantial number of activities.

Identify the resources that cannot be virtualized

Not every application can run in a virtualized environment. A very common reason is a dependence upon a piece of hardware that cannot be virtualized. Anything that requires access to a PCIe slot, a serial device, or a specific piece of parallel-connected equipment will likely be difficult or impossible to virtualize. Within the Microsoft paradigm, a virtualized operating system needs to be portable between hosts without specific ancillary configuration requirements from one host to the next; these types of devices preclude that portability. Fortunately, there are ways to accommodate some types of hardware, such as USB devices. There will be a section on that later.

Consult with application vendors

Application vendors, even Microsoft, may require a specific environment for their software in order to continue providing support. There are many reasons why a software company may not wish to certify their applications on a Hyper-V Server cluster, so you'll need to contact those whose products you use to ensure that you'll continue to have access to the needed support lines.

Even if your vendors have already certified their applications on a standalone Hyper-V Server host, it does not necessarily follow that they will extend that support to a cluster environment. One such example is Microsoft Lync Server. Technologies like Live Migration appear to have no downtime because the normal disconnect time is less than the standard TCP/IP timeout. Even though it's brief, there is a break in service. This can cause problems for some applications.

Microsoft's application-specific support policy in regards to virtualization is viewable on their knowledgebase at http://support.microsoft.com/kb/957006.

Involve internal stakeholders

There are some non-technical investigations to be made. Clustering of physical resources expands the reach of your hardware that is dedicated to virtualization. An argument can be made that the traditional reasons to segregate resources onto separate hardware platforms are no longer relevant. You might be able to inspire other departments to join in on the project. This could bring additional resources and an interest in some of the more advanced technologies that Hyper-V Server and Failover Clustering have to offer. There may also be internal reasons to bring others onboard.

Define phases and timelines

Like any other major project, a Hyper-V Server cluster deployment is performed in phases. Each phase is composed of a number of subsections and steps. Setting timelines for these phases helps to frame the project for others, establishes reasonable expectations, and adds another dimension of focus to the project that can help keep it from falling to the wayside or getting mired in side projects. Typical phases for this type of project include planning, design, initial setup, pre-deployment testing, deployment, resource creation and migration, post-deployment testing, and maintenance. Each of these phases should be clearly indicated in the project document with an outline of what events each phase will include. Each phase outline should also include some rough dating for expected completion.

Perform further research

One phase that typically doesn't appear in the project document is the one that you may be in right now: the Discovery phase. This may appear in a different organizational document, perhaps one intended to track activity in the Information Technology department. The Discovery phase is essentially where the Purposes and Goals sections of the project document are developed. A formal description of it might be a feasibility study, in which you attempt to determine whether a cluster of Hyper-V Servers is the right solution for your organization. Use the Discovery phase to address the problem of, "We don't know what we don't know."

To start, at least skim through the other chapters of this book and familiarize yourself with the concepts. It is highly recommended that you obtain a copy of Hyper-V Server or a trial of Windows Server and install and cluster it on a group of test computers. Look to Internet sources and forums for ways that others are exploiting these technologies in their organizations. Ask questions. Watch for any issues that others have had to determine if there are any pitfalls you need to be aware of before starting your own project. Don't restrict yourself to Hyper-V and Windows Server resources; forums and user groups for software you intend to virtualize can also be invaluable sources of insight.

Define success metrics

With a project of this scale, it is rarely sufficient to declare that the project has been successful based on any single factor. Use the project document to list the events that must occur in order for the project to be considered a success. It is advisable to break these up into Critical and Desirable groups. Normally, a project is considered to be successfully completed when all items marked Critical have been satisfied and any unfulfilled Desirable items do not impede progress.

Success metrics should be very specific in nature. Don't simply use entries like, "High availability is functional." Instead, use entries such as, "All virtual machines can be Live Migrated from Host 1 to Host 2." Also, define metrics that cover all aspects of the installation, such as, "Can transfer ownership of all Cluster Shared Volumes from Host 2 to Host 4." Depending upon your organizational needs and processes, it is acceptable to use a shorter list of generic success metrics in your official document that refers to a more specific set of metrics kept separately.
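The completion rule described above (every Critical item satisfied, with unmet Desirable items noted but non-blocking) can be captured in a simple checklist evaluator. This is an illustrative sketch only; the metric names and the `project_succeeded` helper are invented for the example.

```python
# Hypothetical checklist evaluator for project success metrics.
# severity is "Critical" or "Desirable", mirroring the grouping above.

def project_succeeded(metrics):
    """metrics: list of (name, severity, satisfied) tuples."""
    critical_ok = all(ok for _, sev, ok in metrics if sev == "Critical")
    deferred = [name for name, sev, ok in metrics
                if sev == "Desirable" and not ok]
    # Success requires every Critical item; unmet Desirable items are
    # reported for follow-up but do not block completion.
    return critical_ok, deferred

checklist = [
    ("All virtual machines can be Live Migrated from Host 1 to Host 2",
     "Critical", True),
    ("Can transfer ownership of all CSVs from Host 2 to Host 4",
     "Critical", True),
    ("Self-service VM deployment available to the training department",
     "Desirable", False),
]

ok, deferred = project_succeeded(checklist)
```

Unfinished Desirable items surface in `deferred`, which maps naturally onto the follow-up project suggested later in this chapter for scope items cut during planning.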

Measure and predict your workload

You should determine as early as possible what sort of computing resources will be required by the applications that you'll be placing in your cluster. How thoroughly you need to plan for this depends somewhat on the resources you have available. If you have the financial and technical resources to add new nodes on demand, you can quickly scale a Hyper-V Server cluster out to handle new or unforeseen demands. In all other cases, proper advance planning helps ensure that you neither underpower nor overpower your systems.

If you're going to be virtualizing an existing physical workload or converting from another hypervisor deployment, you can gather performance metrics to help you determine how to build out your new systems. Chapter 8, Performance Testing and Load Balancing, covers how to track performance for your cluster, and the same concepts and techniques can be applied to standalone computing systems. The most useful information will concern CPU consumption, memory usage, disk space, and disk IOPS (input/output operations per second). While it is tempting to simply add up all currently dedicated resources (such as CPU counts and total RAM), these numbers are almost always artificially high because few computer systems fully utilize their hardware. Also keep in mind that if you will be relocating some systems from older hardware, advances in technology may mean that fewer resources are needed to provide comparable performance. Track resource utilization over a period of time that includes a typical workload. Of course, since you are using virtual machines, you'll have the ability to add or remove CPU and memory resources and expand disk space with very little impact, so mis-provisioning is usually not a serious risk.

If your new cluster will include a new software deployment for which you have no existing implementation and therefore you cannot track live performance metrics, consult with the software vendor. Keep in mind that it is normal for software vendors to overestimate the actual amount of hardware that their systems require, but they may not support their applications on anything less.
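As a rough illustration of the sizing approach above, the following sketch sums measured peak utilization (rather than raw installed hardware) and adds a headroom factor. All figures, the workload names, and the 25 percent headroom value are assumptions for the example.

```python
# Illustrative capacity estimate from measured peaks, not installed hardware.
# Each tuple: (peak vCPU-equivalents, peak RAM in GB, peak disk IOPS).
measured_peaks = [
    (1.5, 6, 300),   # hypothetical web server
    (0.8, 4, 150),   # hypothetical utility server
    (2.2, 12, 900),  # hypothetical database server
]

headroom = 1.25  # 25% allowance for growth and bursts (an assumption)

need_cpu = sum(p[0] for p in measured_peaks) * headroom
need_ram = sum(p[1] for p in measured_peaks) * headroom
need_iops = sum(p[2] for p in measured_peaks) * headroom
# Compare these totals against what the summed vendor spec sheets would
# have demanded; the measured figures are almost always far lower.
```

The same arithmetic exposes vendor overestimates: if the spec sheets for these three hypothetical servers called for 12 cores and 48 GB of RAM, the measured totals above suggest a small fraction of that is actually consumed.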

Only allow changes during the planning phase

As you and other stakeholders learn more about the technologies and how they can apply to your environment, your goals and purposes will no doubt be expanded. Set a definite end point at which changes to the project's scope will no longer be accepted. Otherwise, you'll run the risk of scope creep, in which a project continually grows until it is no longer manageable. If further changes are desired but not required for the success of the project, they can be placed into a separate project to be completed after successful completion of the current endeavor. If you have no official guidelines, a logical point at which to cease allowing project changes is at the halfway mark of the time allotted to the Design phase.

Looking forward to the Design phase

Once you have set reasonable purposes and goals for your cluster project, the next phase involves designing the system that will achieve them. At a high level, this is little different from designing a system that is intended to host a single-purpose service in a non-virtualized environment. You first identify the expected load and then architect a solution that can comfortably bear it. You no doubt already have some idea of the volume of computing resources that will be demanded of your cluster. However, the nature of clustering does require some more understanding before you can begin outlining components to purchase.

Many of these concepts may seem obvious to you as a computing professional, but the early phases of a project will usually require involvement, and sometimes oversight, from less technically proficient members of the organization. It is certainly not required that they become subject-matter experts, but they must be made aware of the general needs of the project so that they are not surprised when the requests for resources, time, and capital expenditures begin.

Several items will need attention drawn to them in the early phases. Specific inclusion of those items in the project planning document is optional based on the needs of your organization and the overall size of your project. You might consider a Solution Summary section that briefly itemizes the components of the solution without providing any particular details. If your project is small enough, or if there won't be many reviewers of the document itself, you may choose to skip this section in favor of the more detailed list that will inevitably be included in the Design portion. However, the simpler layout may need to be built for presentations, and it can even serve as a basic checklist for the Design phase.

Host computers

A cluster involves multiple physical computer systems. As mentioned in the cloud discussion earlier, it's not absolutely required that each host be identical to the others, but it is certainly desirable. Virtual machines that move across differing hardware may suffer a noticeable performance degradation if the target doesn't have the same capabilities or configuration as the source. Where possible, these hosts should be purchased together prior to initial implementation. Adding nodes to a cluster requires more effort after that cluster has gone into production. Unlike a typical single-server physical deployment, it is common for the combined power of a cluster to provide significantly more computing resources than are actually required to provide the included services. This is because part of the purpose of a cluster is to provide failover capability.

Also, a Hyper-V Server host by nature needs to run more than one operating system concurrently, so these systems may require more CPU cores and RAM than your organization is accustomed to purchasing for a single system. If possible, modify your organization's existing provisioning standards to accommodate the differences for virtualization hosts.

Storage

An element that clustering introduces is the need for shared storage. While it is technically possible to build a cluster that does not use shared storage, it is not practical. Out of the three main components of a virtual machine, the CPU threads and memory contents can only exist on one node at a time, but they can be rapidly transferred to another node. In the event of a host crash, these contents are irretrievably lost just as they would be if the machine were not virtualized. In a high availability solution, these are considered acceptable losses. However, the long-term data component, which includes configuration data about the virtual machine in addition to the contents of its virtual hard drives, is a protected resource that is expected to survive a host crash—just as it would be in a non-virtualized environment. If that data is kept on internal storage in a host that fails, there will be no way for another host to access it without substantial effort on the part of an administrator.

The files that comprise a highly available virtual machine must be placed in a location that all cluster nodes can access. There are some special-case uses in which only a subset of the nodes are allowed to access a particular storage location, but a virtual machine cannot be truly considered to be highly available unless it can run on more than one cluster node.

Cluster Shared Volumes

Shared storage involves both physical devices and logical components. The preferred way to logically establish shared storage for clustered Hyper-V Server computers is by using Cluster Shared Volumes (CSV). The name more or less explains what it does: it allows volumes to be shared across the nodes of a cluster. Contrast this with a traditional volume, which can only be accessed by one computer at a time. In the term CSV, Volumes specifically refers to NTFS volumes. You cannot use any other format type (FAT, NFS, and so on) with a CSV (the newer ReFS format is acceptable in 2012 R2, as will be discussed in Chapter 4, Storage Design).

In more technical terms, CSV is powered by a filter driver that a node uses to communicate with NTFS volumes that might also be accessed by other nodes simultaneously. The technical details of CSVs will be examined in much more depth in later chapters.

SMB shares

A powerful feature introduced with Windows Server 2012 is version 3.0 of Microsoft's Server Message Block (SMB) protocol. Because it is typically used for file shares, SMB is usually thought of in terms of storage; in actuality, it is a networking protocol, and its applications to storage are why it is mentioned in this section. For one thing, Cluster Shared Volume communications between nodes are encapsulated in SMB. However, you can now create a regular SMB share on any computer running Windows Server 2012 or later and use it to host the files for a Hyper-V Server virtual machine. Hardware vendors are also working to design systems that provide SMB 3.0 shares. Many will use an embedded installation of Windows Storage Server; others will follow Microsoft's specification and design their own systems.

Mixing SMB 3.0 and CSV

You will settle on the specific method(s) of provisioning and using storage during the Design phase, but the possibilities and applications need to be made clear as early as possible. Unless they're on a clustered file server, you cannot create a CSV on an SMB 3.0 share point, and creating an SMB 3.0 share on a CSV does not expose the existence of that CSV in a way that Hyper-V Server can properly utilize. However, a Hyper-V Server cluster can run some virtual machines from CSVs while running others on SMB 3.0 shares. The initial impact this has on planning is that if you have complex needs and/or a restrictive budget, there is no requirement to decide between a storage area network (SAN) and less expensive methods of storage: you can have both. If any of these concepts or terms are new to you, read through Chapter 4, Storage Design, before making any storage decisions.

The following image shows a sample concept diagram of a cluster that mixes storage connectivity methods:


Networking

The networking needs of a Hyper-V Server cluster node are substantially different from those of a standalone system. A cluster node running Hyper-V Server must carry three distinct types of network traffic:

  • Management
  • Cluster and Cluster Shared Volume communications
  • Live Migration traffic

Management

Management traffic involves regular communications to and from the management operating system of the variety that any Windows Server system would use. Examples include connections for Remote Desktop Connection clients, remote management consoles, monitoring tools, and backups that operate within the context of the management operating system. This connection is used as the host's identifier within the cluster and will be the target for cluster management software. Usually, the events that will generate the most bandwidth on this connection are file transfers to and from the host (such as .ISO files to be connected to virtual machines) and backup traffic moving from the hypervisor to a backup server on another computer.

Cluster and Cluster Shared Volumes

The individual nodes of a cluster need to communicate with each other directly, preferably over a network dedicated to inter-node communications. The traffic consists of "heartbeat" information in which the nodes continually verify that they can see each other. Information about cluster resources, specifically virtual machines in the case of a cluster of Hyper-V Server computers, is synchronized across this network.

Communications related to Cluster Shared Volumes also utilize this network. In normal operations, this is nothing more than basic metadata information, such as ownership changes of a CSV or a virtual machine. However, some conditions can trigger what is called Redirected Access Mode, in which all the disk operations for the virtual machines on a particular node involving one or more CSVs are routed through the node(s) that own the affected CSV(s). This mode and its triggers will be looked at in greater detail in later chapters. At this stage, the important information is that if you will be using CSVs, you need to prepare for the possibility that cluster communications may need access to a significant amount of bandwidth.

Live Migration

A Live Migration involves the transfer of the active state of a virtual machine from one node to another. There is a small amount of configuration data that goes along, but the vast majority of the information in this transfer is the active contents of the virtual machine's memory. The amount of bandwidth you make available for this network translates directly into how quickly these transfers occur. The considerations for this will be thoroughly examined later. For now, understand that this network needs access to a substantial amount of bandwidth.
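To make the bandwidth point concrete, here is a back-of-the-envelope estimate of how long the memory copy takes over the Live Migration network. The 85 percent efficiency figure is an assumption to account for protocol overhead, and the sketch ignores the re-copy passes performed for memory pages dirtied during the transfer, so treat the results as lower bounds.

```python
# Rough wall-clock estimate for copying a VM's active memory between nodes.

def migration_seconds(active_memory_gb, link_gbps, efficiency=0.85):
    """Time to move a guest's memory across the Live Migration network.
    efficiency discounts protocol overhead (assumed value)."""
    gigabits = active_memory_gb * 8       # GB -> gigabits
    usable_gbps = link_gbps * efficiency
    return gigabits / usable_gbps

t_1g = migration_seconds(16, 1)    # 16 GB guest over 1 GbE: roughly 2.5 minutes
t_10g = migration_seconds(16, 10)  # same guest over 10 GbE: roughly 15 seconds
```

The order-of-magnitude gap between the two results is why this network, more than any other cluster role, rewards extra bandwidth.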

Subnetting

Each of these traffic types must be isolated from the others on their own subnets with the possible exception of cluster communications. This is a requirement of Microsoft Failover Clustering and, for the most part, cannot be circumvented. In some organizations, this will involve calling upon a dedicated networking team to prepare the necessary resources for you. Until you enter the actual Design phase, you won't be able to tell them much beyond the fact that you'll need at least two, and probably more, subnets to satisfy the requirements. However, unless you intend to isolate your Hyper-V Server hosts and/or you expect your cluster to have enough nodes that it might overwhelm currently allocated ranges, the subnet that contains your management traffic can be an existing IP infrastructure. Depending on the capability of your networking equipment and organizational practices, you may also choose to place your IP networks into distinct virtual LANs (VLANs).

The VLAN is a networking concept that has been in widespread use for quite some time, and it is not related to hypervisors or virtual machines. Windows Server's networking stack and Hyper-V's virtual switch are fully capable of handling traffic in separate VLANs. This book will explain how to configure Hyper-V accordingly, but your network equipment will have its own configuration needs. Work with your networking team or provider if you need guidance.
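As an illustration of the per-role subnet requirement, the following sketch carves one subnet per traffic type out of a private parent range using Python's standard `ipaddress` module. The parent range, prefix lengths, and role list are assumptions for the example; substitute your organization's own allocations.

```python
import ipaddress

# Hypothetical parent range reserved for cluster networking.
parent = ipaddress.ip_network("172.16.0.0/22")
roles = ["Management", "Cluster/CSV", "Live Migration", "iSCSI"]

# One /24 per traffic type; Failover Clustering distinguishes cluster
# networks by subnet, so each role needs its own.
plan = dict(zip(roles, parent.subnets(new_prefix=24)))

for role, subnet in plan.items():
    print(f"{role}: {subnet}")
```

A plan like this is exactly the artifact to hand to a dedicated networking team during the Design phase, optionally with one VLAN ID noted per subnet.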

Virtual machine traffic

A fourth traffic type you must design for is that used by the virtual machines. Unlike the traffic types mentioned previously, this is not a cluster-defined network. In fact, Microsoft Failover Clustering in 2012 is not at all aware of the existence of your virtual machine network setup. R2 adds visibility for protection purposes, but it is not a true cluster network. Virtual machine traffic is controlled entirely by Hyper-V Server via the virtual switch. It is recommended that you use at least a one gigabit network adapter for this role, but it is possible for it to share with a cluster role if necessary. If using gigabit adapters, Microsoft only supports this sharing with the management role and only in a particular configuration. The actual amount of bandwidth required will depend on how much your virtual machines need. You will revisit this during the Design phase.

Virtual machine traffic does not require a dedicated subnet. Any virtual machine can access any subnet or VLAN that you wish.

Storage traffic

iSCSI is a commonly used approach to providing access to shared storage for a Hyper-V Server cluster environment. If you're not familiar with the term, iSCSI is a method of encapsulating traditional Small Computer Systems Interface (SCSI) commands into IP packets. SCSI in this sense refers to a standardized command set used for communications with storage devices. If you will be using iSCSI, it is recommended that this traffic be given its own subnet. Doing so reduces the impact of broadcast traffic on I/O operations and provides a measure of security against intruders.

If your storage system employs multiple paths (MPIO) or you have multiple storage devices available, you will occasionally see recommendations that you further divide the separate paths into their own subnets as well. Testing for the true impact of this setup has not produced conclusive results, so it is likely to require more effort than it's worth. Unless you have a very large iSCSI environment or a specific use case that clearly illustrates the rationale for multiple iSCSI networks, a single subnet should suffice.

SMB 3.0 traffic should also be given its own subnet. Like iSCSI, SMB 3.0 can take advantage of multiple network adapters. Unlike iSCSI, using multiple paths to SMB 3.0 storage requires one subnet per path.

Physical adapter considerations

It is recommended that you provide each traffic type with its own gigabit adapter. If necessary, it is possible for the roles to share fewer adapters, all the way down to a single gigabit network interface card. This can cause severe bottlenecks and Microsoft will only support such role-sharing in specific configurations. If you will be using ten-gigabit adapters, the recommendations are much more relaxed. These are important considerations early on as it's not uncommon for a Hyper-V Server host to have more than six network adapters. Many organizations are not accustomed to purchasing hardware with that sort of configuration, so this may require a break from standardized provisioning processes.
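A quick tally shows how the adapter-per-role recommendation drives port counts upward. The role-to-NIC mapping below is an illustrative assumption, not a prescribed configuration.

```python
# Hypothetical gigabit NIC allocation for a single cluster node.
adapters_per_role = {
    "Management": 1,
    "Cluster/CSV": 1,
    "Live Migration": 1,
    "Virtual machine traffic": 2,  # teamed pair for redundancy (assumed)
    "iSCSI": 2,                    # two paths for MPIO (assumed)
}

total_nics = sum(adapters_per_role.values())
# Seven adapters in this layout, which is why hosts with six or more
# network ports are unremarkable in Hyper-V cluster deployments.
```

Running the same tally against your own role assignments early in planning avoids discovering at order time that your standard server chassis lacks the ports.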

Not all physical adapters are created equal. While the only base requirement is a gigabit adapter, other features are available that can provide enhanced network performance. One of these features is Virtual Machine Queue (VMQ), which allows the adapter to queue incoming traffic for a specific virtual machine, bypassing some of the hypervisor's processing. More recent technologies that Hyper-V Server can take advantage of are remote direct memory access (RDMA) and single-root input/output virtualization (SR-IOV).

These technologies are becoming increasingly common, but they are currently only available on higher-end adapters. Chapter 6, Network Traffic Shaping and Performance Enhancements, is devoted to these and other advanced networking technologies.

Adapter teaming

Windows Server 2012 introduced the ability to form teams of network adapters natively within the operating system. In previous Windows versions, teaming required specific support from hardware manufacturers. It was usually not possible to create a single team across adapters of different hardware revisions or from different manufacturers. The quality of teaming could vary significantly from one driver set to the next. As a result, teams sometimes caused more problems than they solved. Microsoft's official policy has always been to support Windows networking only when no third-party teaming solution is present.

With built-in support for adapter teaming, many new possibilities are available for Hyper-V Server cluster nodes. These will be discussed in great detail in later chapters. What is important to know now is that the technology is available and directly supported by Microsoft. One major misconception about this technology concerns bandwidth aggregation: a team of several gigabit adapters does not behave as one large pipe for a single traffic stream, although it can balance multiple streams across its members.

If you or other interested parties have particular expectations of this feature, you may benefit from reading ahead through Chapter 5, Network Design. In simple terms, the primary benefits of adapter teaming are load balancing and failover. Teaming also paves the way for converged fabric, which is also explained in Chapter 5, Network Design.

Active Directory

Microsoft Failover Clustering requires the presence of an Active Directory domain. The foundational reason is that the nodes of a cluster need to be able to trust that the other member computers are who they say they are, and the definitive tool that Microsoft technology relies on to make that determination is Active Directory. A Microsoft Failover Cluster also creates an Active Directory computer object that represents the entire cluster to other computers and some services. This object isn't quite as meaningful for a cluster of Hyper-V Server machines as it is for other clustered services, such as Microsoft SQL Server, but the object must exist. Other supporting technologies, such as Cluster Shared Volumes and SMB 3.0 shares that host virtual machines, are also dependent on Active Directory.

The requirement for Active Directory needs to be made obvious prior to the Design phase, as it may come as a surprise to some. Hyper-V Server itself does not require a domain, and as such, it is not uncommon to find organizations that configure standalone Hyper-V Server hosts in workgroup mode to host publicly-accessible services in an untrusted perimeter or isolation network. With a cluster, those hosts must be domain-joined, but the same isolation can still be achieved through the natural isolation of virtual machines provided by Hyper-V Server and a better understanding of the virtual switch.

Virtualized domain controllers

Virtualizing domain controllers is an issue that is not without controversy. There are some very important pros and cons involved. Windows Server 2012 eliminated the more serious problems, and planned placement of virtualized domain controllers can address most of the rest. It is not necessary to make any decisions about this subject at this point of the design; in fact, unless you don't have a domain environment yet, it can wait until after the virtualization project is complete. However, it should be brought up early, so you may wish to make yourself aware of the challenges now. This topic will be fully explored in Chapter 9, Special Cases.

Supporting software

A Microsoft Hyper-V Server and a Microsoft Failover Cluster can both be managed using tools built into Windows Server and freely downloadable for Windows 8/8.1. However, there are many other applications available that go beyond what the basic tools can offer. You should begin looking into these products early on to determine what their feature sets are and if those features are of sufficient value to your organization to justify the added expenditure.

Management tools

Multiple tools exist that can aid you in maintaining and manipulating Hyper-V Server and Failover Clustering. The Remote Server Administration Tools, which are part of the previously mentioned tools built into Windows Server and downloadable for Windows 8/8.1, include Hyper-V Manager and Failover Cluster Manager. There is also a plethora of PowerShell cmdlets available for managing these technologies. It is entirely possible to manage all aspects of even a large Hyper-V Server cluster using only these tools. However, the larger your cluster or the less time you have available, the more likely it is that you'll want to employ more powerful software assistants.

Foremost among these tools is Microsoft System Center Virtual Machine Manager (SCVMM). This tool adds a number of capabilities, especially if it is used in conjunction with the larger System Center family of products. Be aware that you must be using at least Service Pack 1 of the 2012 release of this product in order to manage a Hyper-V Server 2012 system and at least version 2012 R2 in order to manage Hyper-V Server 2012 R2.

Third-party management products exist for Hyper-V Server and the market continues to grow. Take some time to learn about them, and if possible, test them out.

To aid you in defining your criteria, there are some commonly-asked-for features that the free Hyper-V Manager and Failover Cluster Manager tools don't provide:

  • Conversion of physical machines to virtual machines (often called P2V)
  • Templates—stored copies of virtual machines that serve as basic pre-built images that can be deployed as needed
  • Cloning of virtual machines
  • Automated balancing of virtual machines across nodes
  • Centralized repositories for CD and DVD image files that can be attached to virtual machines on any node on-demand
  • "Self-service" capabilities in which non-administrators can deploy their own virtual machines as needed
  • Extensions to the Hyper-V virtual switch

You don't necessarily need all of these features, nor is it imperative that a single product provide all of them. What's important is identifying the features that are meaningful to your organization, what package(s) provide those features, and, if necessary, what you are willing to pay for them.

Backup

Backup is a critical component of any major infrastructure deployment. Unfortunately, it is often not considered until a late stage of virtualization projects. Virtualization adds options that aren't available in physical deployment. Clustered virtual machines add challenges that aren't present in other implementations.

The topic of backup will be more thoroughly examined in Chapter 12, Backup and Disaster Recovery, but the basic discussion about it can't wait. Begin collecting the names of applications that are candidates. Windows Server, including Hyper-V Server, includes Windows Server Backup. This tool can be made to work with a cluster, but it is generally insufficient for all but the smallest deployments. Ensure that the products you select for consideration are certified for the backup method you intend to perform. If your plan will be to back up some or all virtual machines from within the hypervisor, your backup application will need to provide specific support for Hyper-V Server in a Microsoft Failover Clustering Environment.

Training

Depending upon the size of your deployment and your staff, you may need to consider seeking out training resources for your systems administrator(s). Hyper-V Server and Failover Clustering are not particularly difficult to use after a successful implementation, but the initial learning curve can be fairly steep. It is entirely possible to learn them both through a strictly hands-on approach with books such as this one. Microsoft provides a great deal of free introductory material through the Microsoft Virtual Academy at http://www.microsoftvirtualacademy.com and in-depth documentation on TechNet at http://technet.microsoft.com. However, some of your staff may require formal classroom training or other methods of knowledge acquisition.

A sample Hyper-V Cluster planning document

To help you get started, the following is a sample document for a fictional company called Techstra. Techstra is a medium-sized company that provides technical training on a wide array of subjects. Due to inefficiencies in resource allocation and the hardware lifecycle, their Director of Operations, who also holds the role of Chief Technology Officer, has decided to pilot a program in which a single cluster of computers running Hyper-V Server will host a variety of virtual machines. Traditionally, Techstra has grouped its computer resources by the roles that they provide, but the vision for this project is that a single large cluster will eventually run all of Techstra's server systems. There is also some talk about adding in desktop systems and creating a virtual desktop infrastructure, but there are no firm plans.

Techstra is not large enough to have dedicated technology project managers, but it is large enough to handle a project of this magnitude in-house. With the preceding information in hand, a senior systems administrator has been tasked with performing the necessary research and drawing up project documentation for review. What follows is an excerpt from such a document.

Sample project title – Techstra Hyper-V Cluster Project

Sample project overview: Techstra is faced with the challenges of managing a multitude of hardware platforms that are not consistently synchronized, maintained, utilized, or retired. To address these problems, Microsoft Hyper-V Server and Microsoft Failover Clustering will be implemented. Microsoft Hyper-V Server is a virtualization platform that allows for multiple operating systems to run on a single computer system inside virtual machines. Microsoft Failover Clustering will be used to group several physical computer systems running Microsoft Hyper-V Server together to provide redundancy and resource distribution for these virtual machines.

Key personnel for this project are the Information Technology Department Manager, Senior Systems Administrator, and Senior Network Administrator.

Personnel to keep updated on project progress are the Director of Operations, Education Department Manager, Internet Presence Department Manager, and Marketing Department Manager.

Sample project – purposes

The specific purposes of this project are as follows:

  • Hardware consolidation
  • Hardware lifecycle management
  • Isolation of test and training systems
  • Rapid turnover for training systems
  • Provisioning of systems by the training department without involving systems administrative staff
  • Embodiment of corporate We Use What We Teach philosophy
  • Migration path for a number of physical servers that are reaching end-of-life
  • Longevity protection for two line-of-business applications that cannot be upgraded or replaced and that require operating systems that are no longer being sold

Sample project – goals

The goals for this project are as follows:

  • Deployment of three physical hosts running Hyper-V Server
  • Deployment of one internally-redundant SAN device for high-performance workloads
  • Deployment of two general-purpose server-class computers running Windows Server 2012 with a file share for workloads with low performance needs but high capacity requirements
  • Conversion of seven physical server deployments to the virtual environment
  • Expansion of existing System Center 2012 deployment to include Virtual Machine Manager
  • Systems administrators trained on Hyper-V Server, Hyper-V Manager, Failover Cluster Manager, and System Center Virtual Machine Manager
  • Virtual machines backed up in accordance with corporate data protection and retention policy

Sample project – success metrics (subsection of goals)

For this project to be considered successfully completed, all of the following conditions must be demonstrably satisfied:

  • All virtual machines expected to provide services to other computers must be available and reachable outside of planned downtime windows
  • On initial deployment, the cluster will be operating at no more than 70 percent of the resource capacity of two nodes under probable demand conditions
  • The Hyper-V Server cluster must be able to survive the complete failure of any one node
  • All cluster nodes can communicate with each other on all designated paths
  • The cluster can survive the failure of any single physical networking component
  • Virtual machines that were running on a failed or isolated node must be available within 10 minutes
  • All high availability virtual machines can be successfully live migrated from any host to any other host
  • If any node is manually shut down or restarted, its high availability virtual machines are gracefully moved to other nodes
  • All cluster nodes are being patched according to the corporate standard
  • All virtual machines are being backed up according to the corporate standard
  • Backups of virtual machines can be successfully restored
  • Systems administrators responsible for supporting the Hyper-V Server cluster demonstrate reasonable competence with and comprehension of its components, according to their level, as follows:
    • Help desk personnel can identify a failover event
    • Help desk personnel can identify a failed node
    • Help desk personnel can make reasonable predictions of service restoration for virtual machines on failed nodes
    • Junior systems administrators can satisfy all expectations of help desk personnel
    • Junior systems administrators can deploy new virtual machines from templates
    • Junior systems administrators can verify proper operation of and correct minor issues within patching systems
    • Junior systems administrators can verify proper operation of and correct minor issues within backup systems
    • Junior systems administrators demonstrate an understanding of resource allocation and load balancing including CPU, memory, and hard disk space
    • Junior systems administrators understand the monitoring systems and are familiar with the procedures for event handling
    • Senior systems administrators can satisfy all expectations of help desk personnel and junior systems administrators
    • Senior systems administrators can make changes to the infrastructure
    • Senior systems administrators can restore virtual machines
    • Senior systems administrators can add, remove, and replace cluster nodes
  • Non-IT staff that have been granted the ability to provision their own virtual machines are demonstrably able to do so
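Several of these metrics are quantitative and lend themselves to a quick back-of-the-envelope check. The sketch below is not from the book; the node and VM figures are invented for illustration. It tests the 70-percent-of-two-nodes condition against the worst-case single-node failure:

```python
# Hypothetical capacity check for the success metrics above.
# Capacities and demands share one unit (e.g. GB of RAM);
# all figures below are invented for illustration.

def meets_capacity_metric(node_capacities, vm_demands, threshold=0.70):
    """Check that total VM demand stays within `threshold` of the
    capacity remaining after the worst-case single-node failure
    (losing the largest node). Returns (ok, utilization)."""
    surviving = sum(node_capacities) - max(node_capacities)
    utilization = sum(vm_demands) / surviving
    return utilization <= threshold, utilization

# Three 128 GB nodes and seven VMs demanding 160 GB in total:
# utilization is 160 / 256 = 0.625, which satisfies the 70% metric.
ok, util = meets_capacity_metric([128, 128, 128], [40, 32, 24, 24, 20, 12, 8])
print(ok, util)  # True 0.625
```

A real assessment would run this arithmetic separately for CPU, memory, storage, and network, since a cluster can be memory-bound long before it is CPU-bound.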

Review the sample project

Take some time to review the sample project and compare it to the stated parameters in its introduction and to the guidance provided earlier in the chapter. Take notice both of what is there and what isn't.

The Success Metrics portion is easily the longest section, and in an actual project it would be much longer. It is intentionally specific. Filling this portion with seemingly minute details helps ensure that no stone is left unturned and no eventuality goes unplanned. If this section is properly laid out and all of its conditions are met, you are virtually guaranteed a successful deployment free of surprises.

Even though the introductory material discussed virtual desktops, there is no mention of them in the Goals or Purposes sections. While not directly stated, the implication is that VDI is a nice-to-have feature, not a primary driver. This is a prime example of an opportunity to set limits on the scope of the project. As you can see, each item in the project goals translates directly into a large number of success metrics, so there is a definite benefit in restricting how much you take on. In this case, the director who initiated the project has indicated that this is a pilot, which implies that, if the deployment is successful, it is expected to be expanded at a later date. The fictitious systems administrator tasked with writing this document has elected to hold off on a VDI implementation until a later expansion project.

Even though the formal document skips over VDI, the project notes should contain a reference to it. The director did indicate his desire to have a single large cluster handle anything that the company chooses to virtualize. If a VDI deployment has any special requirements that the initial hardware cannot satisfy, it may be difficult to meet that desire. A decision will need to be made: either expend resources to ensure that the initial hardware can handle any load that will ever be expected of it, or assess the feasibility of a single cluster against two (or more) clusters, and against a single cluster augmented by one or more standalone Hyper-V Server systems. One way to bring this into the formal document would be to introduce it as a "Desirable" goal.
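The same capacity arithmetic can inform the single-cluster-versus-expansion question. This sketch is an assumption-laden illustration (the 60 GB VDI projection and all other figures are invented, not from the book): it asks whether a projected additional workload would still leave the cluster its single-node-failure headroom.

```python
# Rough feasibility test for absorbing a future workload (e.g. a VDI
# pilot) into an existing cluster; all figures are illustrative.

def fits_after_expansion(node_capacities, current_demand,
                         projected_demand, threshold=0.70):
    """Does demand after expansion stay within `threshold` of the
    capacity left by the worst-case single-node failure?"""
    surviving = sum(node_capacities) - max(node_capacities)
    return (current_demand + projected_demand) <= threshold * surviving

# 160 GB of current demand plus a projected 60 GB VDI workload on
# three 128 GB nodes: 220 exceeds 0.70 * 256 = 179.2, so the initial
# hardware could not absorb the pilot without losing its headroom.
print(fits_after_expansion([128, 128, 128], 160, 60))  # False
```

An outcome like this is one concrete way to justify deferring VDI to a later expansion project, or to size a standalone-host alternative.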

As it stands, this project document would be considered to be in draft form. The author was able to make some practical decisions regarding its contents and layout, but encountered at least one decision point that will need to be handled at a higher level and/or by group discussion. This should only be considered a beginning point for the planning phase, not the end.

Once these initial portions of the plan have been approved, the next step is to outline the remaining phases and the timelines they will be completed in. Those are procedural processes whose execution will depend upon your organization's operations.

Summary

Microsoft Hyper-V Server and Microsoft Failover Clustering are two powerful technologies that, when combined, provide great opportunities to protect your computing workloads and to more fully exploit your hardware resources. These technologies encompass a large number of concepts with an attendant terminology bank. Mastery of these concepts and terminology is critical to properly utilizing the technology.

Another vital component of a successful deployment is planning. A document that codifies the constituent steps of the project is a simple way to guide its progress and keep it on target. The success of a project can almost always be measured by the quality of the planning that went into it.

To understand a Hyper-V Server cluster and to properly plan to deploy one, you must possess an awareness of the components and resources that it will require.

Once you have successfully defined the parameters of your project, you are ready to move on to designing a cluster that fulfills them. This will be the focus of the next chapter. If you are building a project document, it is not necessary—in fact, it is not recommended—that you finalize the goals and purposes portions prior to moving on to design. These sections should be fairly firm at this point, but you should also allow for situations that you encounter during design to influence these earlier parts.


Description

This book is written in a friendly and practical style with numerous tutorials centred on common as well as atypical Hyper-V cluster designs. It also features a sample cluster design throughout to help you learn how to design a Hyper-V cluster in a real-world scenario. Microsoft Hyper-V Cluster Design is perfect for the systems administrator who has a good understanding of Windows Server in an Active Directory domain and is ready to expand into a highly available virtualized environment. It only expects that you will be familiar with basic hypervisor terminology.

Product Details

Publication date: Oct 22, 2013
Length: 462 pages
Edition: 1st
Language: English
ISBN-13: 9781782177692
Vendor: Microsoft





Table of Contents

13 Chapters
1. Hyper-V Cluster Orientation
2. Cluster Design and Planning
3. Constructing a Hyper-V Server Cluster
4. Storage Design
5. Network Design
6. Network Traffic Shaping and Performance Enhancements
7. Memory Planning and Management
8. Performance Testing and Load Balancing
9. Special Cases
10. Maintaining and Monitoring a Hyper-V Server Cluster
11. High Availability
12. Backup and Disaster Recovery
Index

Customer reviews

Rating distribution: 5 out of 5 (3 ratings); 5 star: 100%, 4 star: 0%, 3 star: 0%, 2 star: 0%, 1 star: 0%

BDoubleU (Jul 06, 2016), 5 stars, Amazon verified review:
"Great book that covers all design concepts required for a product cluster including converged infrustructure. A+"

Johary G (May 09, 2015), 5 stars, Amazon verified review:
"Without understanding this book, you won't build your cluster. Period."

dc (Sep 15, 2014), 5 stars, Amazon verified review:
"In my opinion this is one of the best Hyper-V / Failover Cluster books out there. The content is great and I would highly recommend it for any engineer wanting to understand their environment, or any consultant that has to work with Hyper-V."
