
How-To Tutorials - Virtualization

115 Articles

Introduction to Veeam® Backup & Replication for VMware

Packt
16 Apr 2014
9 min read
Veeam Backup & Replication v7 for VMware is a modern solution for data protection and disaster recovery in virtualized VMware vSphere environments of any size. It supports VMware vSphere and VMware Infrastructure 3 (VI3), including the latest versions, VMware vSphere 5.5 and Microsoft Windows Server 2012 R2, as the management server(s). Its modular approach and scalability make it an obvious choice regardless of the environment's size or complexity. As your data center grows, Veeam Backup & Replication grows with it to provide complete protection for your environment. Remember, your backups aren't really that important, but your restores are!

Backup strategies

A common approach when dealing with backups is to follow the 3-2-1 rule:

- 3: Keep three copies of your data: one primary and two backups
- 2: Store the data on two different media types
- 1: Store at least one copy offsite

This simple approach ensures that, no matter what happens, you will have a recoverable copy of your data. Veeam Backup & Replication lets you accomplish this goal by utilizing backup copy jobs. Back up your production environment once, then use backup copy jobs to copy the backed-up data to a secondary location, utilizing the built-in WAN acceleration features, and to tape for long-term archival. You can even "daisy-chain" these jobs to each other, which ensures that as soon as the backup job is finished, the copy jobs are fired automatically. This allows you to accomplish the 3-2-1 rule without the need for complex configurations that make it hard to manage.

Combining this with a Grandfather-Father-Son (GFS) backup media rotation scheme for tape-based archiving ensures that you always have recoverable media available. In such a scheme, there are three or more backup cycles: daily, weekly, and monthly. The following table shows how you might create a GFS rotation schedule:

Monday | Tuesday | Wednesday | Thursday | Friday
-------|---------|-----------|----------|--------
       |         |           |          | WEEK 1
MON    | TUE     | WED       | THU      | WEEK 2
MON    | TUE     | WED       | THU      | WEEK 3
MON    | TUE     | WED       | THU      | WEEK 4
MON    | TUE     | WED       | THU      | MONTH 1

"Grandfather" tapes are kept for a year, "Father" tapes for a month, and "Son" tapes for a week. In addition, quarterly, half-yearly, and/or annual backups could also be retained separately if required.

Recovery point objective and recovery time objective

Both of these terms come into play when defining your backup strategy. The recovery point objective (RPO) defines how much data you can afford to lose. If you run backups every 24 hours, you have, in effect, defined that you can afford to lose up to a day's worth of data for a given application or infrastructure. If that is not the case, you need to look at how often you back up that particular application. The recovery time objective (RTO) is a measure of the amount of time it should take to restore your data and return the application to a steady state. How long can your business afford to be without a given application? Two hours? Twenty-four hours? A week? It all depends, and it is very important that you, as a backup administrator, have a clear understanding of the business you are supporting so that you can evaluate these important parameters.

Basically, it boils down to this: if there is a disaster, how much downtime can your business afford? If you don't know, talk to the people in your organization who do. Gather information from the various business units to help determine what they consider acceptable. Odds are that your views as an IT professional will not always coincide with the views of the business units; determine their RPO and RTO values, and build a backup strategy based on them.
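To make the RPO discussion concrete, here is a minimal Python sketch. It is illustrative only and not part of Veeam's tooling; the application names, RPO values, and backup intervals are made up. It simply compares each application's backup interval against its agreed RPO and flags the ones that cannot meet it, since the worst-case data loss equals the time since the last successful backup.

```python
from datetime import timedelta

# Hypothetical per-application requirements and backup schedules.
apps = {
    "file-server":  {"rpo": timedelta(hours=24), "backup_every": timedelta(hours=24)},
    "erp-database": {"rpo": timedelta(hours=2),  "backup_every": timedelta(hours=24)},
    "web-frontend": {"rpo": timedelta(hours=12), "backup_every": timedelta(hours=6)},
}

for name, cfg in apps.items():
    worst_case_loss = cfg["backup_every"]      # data written since the last run
    ok = worst_case_loss <= cfg["rpo"]
    status = "OK" if ok else "RPO VIOLATION - back up more often"
    print(f"{name:13s} RPO={cfg['rpo']}  interval={cfg['backup_every']}  -> {status}")
```

Running this prints a violation for the hypothetical ERP database, which would need backups at least every two hours to meet its stated RPO.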
Native tape support

By popular demand, native tape support was introduced in Veeam Backup & Replication v7. While the most effective method of backup might be disk based, many customers still want to make use of their existing investment in tape technology. Standalone drives, tape libraries, and Virtual Tape Libraries (VTLs) are all supported, making it possible to use tape-based solutions for long-term archival of backup data. Basically, any tape device recognized by the Microsoft Windows server on which Backup & Replication is installed is also supported by Veeam: if Microsoft Windows recognizes the tape device, so will Backup & Replication. It is recommended that customers check the user guide and the Veeam Forums (http://forums.veeam.com) for more information on native tape support.

Veeam Backup & Replication architecture

Veeam Backup & Replication consists of several components that together make up the complete architecture required to protect your environment. This distributed backup architecture leaves you in full control of the deployment, and the licensing options make it easy to scale the solution to fit your needs.

Since it works at the VM layer, it uses advanced technologies such as VMware vSphere Changed Block Tracking (CBT) to ensure that only the data blocks that have changed since the last run are backed up. This ensures that the backup is performed as quickly as possible and that the least amount of data needs to be transferred each time. By talking directly to the VMware vStorage APIs for Data Protection (VADP), Veeam Backup & Replication can back up VMs without the need to install agents or otherwise touch the VMs directly. It simply tells the vSphere environment that it wants to take a backup of a given VM; vSphere then creates a snapshot of the VM, and the VM is read from the snapshot to create the backup. Once the backup is finished, the snapshot is removed, and the changes that happened to the VM while it was being backed up are merged back into the production VM. By integrating with VMware Tools and Microsoft Windows VSS, application-consistent backups are provided where available in the VMs being backed up. For Linux-based VMs, VMware Tools is required and its native quiescence option is used.

Not only does Veeam Backup & Replication let you back up your VMs and restore them if required, but you can also use it to replicate your production environment to a secondary location. If your secondary location has a different network topology, it helps you remap and re-IP your VMs in case there is a need to fail over a specific VM or even an entire datacenter. Of course, failback is also available once the reason for the failover is rectified and normal operations can resume.
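The changed-block idea mentioned above can be illustrated with a small, self-contained Python sketch. This is a conceptual toy, not how vSphere CBT or Veeam implements the feature: it hashes fixed-size blocks of an in-memory disk image and "transfers" only the blocks whose hashes differ from the previous run.

```python
import hashlib

BLOCK_SIZE = 4096  # toy block size; real changed-block tracking works on larger extents

def block_hashes(disk_image: bytes) -> list[str]:
    """Hash every fixed-size block of the (in-memory) disk image."""
    return [
        hashlib.sha256(disk_image[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(disk_image), BLOCK_SIZE)
    ]

def incremental_backup(disk_image: bytes, previous_hashes: list[str]):
    """Return (changed_blocks, new_hashes); only changed blocks are 'transferred'."""
    current = block_hashes(disk_image)
    changed = {
        idx: disk_image[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
        for idx, h in enumerate(current)
        if idx >= len(previous_hashes) or h != previous_hashes[idx]
    }
    return changed, current

# First run: every block is "changed", which amounts to a full backup.
disk = bytearray(b"A" * BLOCK_SIZE * 4)
full, state = incremental_backup(bytes(disk), [])
print("full backup blocks:", sorted(full))        # [0, 1, 2, 3]

# Modify one block; the next run transfers only that block.
disk[BLOCK_SIZE:BLOCK_SIZE + 5] = b"HELLO"
incr, state = incremental_backup(bytes(disk), state)
print("incremental blocks:", sorted(incr))        # [1]
```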
Veeam Backup & Replication components

The Veeam Backup & Replication suite consists of several components which, in combination, make up the backup and replication architecture.

Veeam backup server: This is installed on a physical or virtual Microsoft Windows server. The Veeam backup server is the core component of an implementation; it acts as the configuration and control center that coordinates backup, replication, recovery verification, and restore tasks. It also controls job scheduling and resource allocation, and it is the main entry point for configuring the global settings of the backup infrastructure.

The backup server uses the following services and components:

- Veeam Backup Service: This is the main component that coordinates all operations, such as backup, replication, recovery verification, and restore tasks.
- Veeam Backup Shell: This is the application user interface.
- Veeam Backup SQL Database: This is used by the other components to store data about the backup infrastructure, backup and restore jobs, and component configuration. This database instance can be installed locally or on a remote server.
- Veeam Backup PowerShell Snap-in: These are extensions to Microsoft Windows PowerShell that add a set of cmdlets for managing backup, replication, and recovery tasks through automation.

Backup proxy

Backup proxies are used to offload the Veeam backup server and are essential as you scale your environment. Backup proxies can be seen as data movers, physical or virtual, that run a subset of the components required on the Veeam backup server. These components, which include the Veeam transport service, can be installed in a matter of seconds, and deployment is fully automated from the Veeam backup server. You can deploy and remove proxy servers as you see fit, and Veeam Backup & Replication will distribute the backup workload between the available backup proxies, reducing the load on the backup server itself and increasing the number of simultaneous backup jobs that can be performed.

Backup repository

A backup repository is simply a location where Veeam Backup & Replication can store backup files, copies of VMs, and metadata. Simply put, it is nothing more than a folder on the assigned disk-based backup storage. Just as you can offload the backup server with multiple proxies, you can add multiple repositories to your infrastructure and direct backup jobs to them to balance the load. The following repository types are supported:

- Microsoft Windows or Linux server with local or directly attached storage: Any storage that is seen as local or directly attached storage on a Microsoft Windows or Linux server can be used as a repository. This means there is great flexibility when it comes to selecting repository storage; it can be locally installed storage, iSCSI/FC SAN LUNs, or even locally attached USB drives. When a server is added as a repository, Veeam Backup & Replication deploys and starts the Veeam transport service, which takes care of the communication between the source-side transport service on the Veeam backup server (or proxy) and the repository. This ensures efficient data transfer over both LAN and WAN connections.
- Common Internet File System (CIFS) shares: CIFS (also known as Server Message Block (SMB)) shares are a bit different, as Veeam cannot deploy transport services to a network share directly. To work around this, the transport service installed on a Microsoft Windows proxy server handles the connection between the repository and the CIFS share.

Summary

In this article, we learned about various backup strategies and also went through some of the components of Veeam® Backup & Replication.


Networking

Packt
20 Mar 2014
8 min read
Working with vSphere Distributed Switches

A vSphere Distributed Switch (vDS) is similar to a standard switch, but a vDS spans multiple hosts instead of an individual switch being created on each host. The vDS is created at the vCenter level, and its configuration is stored in the vCenter database. A cached copy of the vDS configuration is also stored on each host in case of a vCenter outage.

Getting ready

Log in to the vCenter Server using the vSphere Web Client.

How to do it...

In this section, you will learn how to create a vDS and a distributed port group (dvPortGroup), and how to manage an ESXi host using the vDS.

First, we will create a vSphere Distributed Switch. The steps involved in creating a vDS are as follows:

1. Select the datacenter on which the vDS has to be created.
2. Navigate to Actions | New Distributed Switch.
3. Enter the Name and location for the vDS and click on Next.
4. Select the version for the vDS and click on Next.
5. In the Edit settings page, provide the following details, then click on Next when finished:
   - Number of uplinks: This specifies the number of physical NICs of the host that will be part of the vDS.
   - Network I/O Control: This option controls the input/output to the network and can be set to either Enabled or Disabled.
   - Default port group: This option lets you create a default port group. To create one, enable the checkbox and provide the Port group name.
6. In the Ready to complete screen, review the settings and click on Finish.

The next step after creating a vDS is to create a new port group, if one has not been created as part of the vDS. The following steps will create a new distributed port group:

1. Select the vDS and click on Actions | New Distributed Port Group.
2. Provide the name, select the location for the port group, and click on Next.
3. In the Configure settings screen, set the following general properties for the port group:
   - Port binding: This provides three options, namely Static, Dynamic, and Ephemeral (no binding).
     - Static binding: A port is assigned and reserved for a VM when it is connected to the port group. The port is freed up only when the VM is deleted.
     - Ephemeral binding: A port is created and assigned to the VM by the host when the VM is powered on, and the port is deleted when the VM is powered off.
     - Dynamic binding: This is deprecated in ESXi 5.x and is no longer used, but the option is still available in the vSphere Client.
   - Port allocation: This can be set to either Elastic or Fixed (a short sketch after these steps illustrates the difference).
     - Elastic: The default number of ports is 8, and when all ports are used, a new set of ports is created automatically.
     - Fixed: The number of ports is fixed at 8, and no additional ports are created when all ports are used up.
   - Number of ports: This option is set to 8 by default.
   - Network resource pool: This option is enabled only if a user-defined network pool has been created; it can be set even after creating the port group.
   - VLAN type: The available options are None, VLAN, VLAN trunking, and Private VLAN.
     - None: No VLAN is used.
     - VLAN: A VLAN is used and its ID has to be specified.
     - VLAN trunking: A group of VLANs is trunked and their respective IDs have to be specified.
     - Private VLAN: This menu is empty if a private VLAN does not exist.
4. In the Ready to complete screen, review the settings and click on Finish.
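To make the Elastic versus Fixed behavior easier to picture, here is a minimal Python sketch. It only models the idea described in the steps above (an elastic port group grows by another batch of ports when the current ones are exhausted, while a fixed port group refuses further connections) and is not based on the actual vSphere implementation.

```python
class DistributedPortGroup:
    """Toy model of Elastic vs Fixed port allocation on a distributed port group."""

    def __init__(self, name: str, allocation: str = "elastic", ports: int = 8):
        self.name = name
        self.allocation = allocation      # "elastic" or "fixed"
        self.total_ports = ports          # starts at 8 by default
        self.used_ports = 0

    def connect_vm(self, vm: str) -> bool:
        if self.used_ports == self.total_ports:
            if self.allocation == "elastic":
                # Elastic: grow by another batch of 8 ports automatically.
                self.total_ports += 8
                print(f"{self.name}: expanded to {self.total_ports} ports")
            else:
                # Fixed: no more ports are created.
                print(f"{self.name}: no free ports, cannot connect {vm}")
                return False
        self.used_ports += 1
        print(f"{self.name}: {vm} connected ({self.used_ports}/{self.total_ports})")
        return True


elastic = DistributedPortGroup("dvPG-Elastic", allocation="elastic")
fixed = DistributedPortGroup("dvPG-Fixed", allocation="fixed")
for i in range(9):                 # the ninth VM shows the difference
    elastic.connect_vm(f"vm-{i}")
    fixed.connect_vm(f"vm-{i}")
```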
The next step after creating a distributed port group is to add an ESXi host to the vDS. While the host is being added, it is possible to migrate the VMkernel and VM port groups from the vSS to the vDS, or this can be done later. Now, let's see the steps involved:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Add hosts and click on Next.
4. Click on the + icon to select the hosts to be added and click on OK. Click on Next in the Select new hosts screen.
5. Select the physical network adapters that will be used as uplinks for the vDS and click on Next.
6. In the Select virtual network adapters screen, you have the option to migrate the VMkernel interface to the vDS port group; select the appropriate option and click on Next.
7. Review any dependencies on the validation page and click on Next.
8. Optionally, you can migrate the VM network to the vDS port group in the Select VM network adapters screen by selecting the appropriate option and clicking on Next.
9. In the Ready to complete screen, review the settings and click on Finish.

An ESXi host can be removed from the vDS only if no VM is still connected to the vDS. Make sure the VMs are migrated either to the standard switch or to another vDS. The following steps will remove an ESXi host from the Distributed Switch:

1. Browse to the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Add and Manage Hosts.
3. In the Select task screen, select Remove hosts and click on Next.
4. Click on the + icon to select the hosts to be removed and click on OK. Click on Next in the Select hosts screen.
5. In the Ready to complete screen, review the settings and click on Finish.

Once the host has been added to the vDS, you can start to migrate resources from the vSS to the vDS. The following steps will help you migrate from a Standard to a Distributed Switch:

1. Select the Distributed Switch in the vSphere Web Client.
2. Navigate to Actions | Migrate VM to Another Network.
3. In the Select source and destination networks screen, you have the option to browse to a specific network or no network for the source network migration. These options are described as follows:
   - Specific network: This option allows you to select the VMs residing on a particular port group.
   - No network: This option selects for migration the VMs that are not connected to any network.
4. In the Destination network option, browse to and select the distributed port group for the VM network and click on Next.
5. Select the VMs to migrate and click on Next.
6. In the Ready to complete screen, review the settings and click on Finish.

How it works...

vSphere Distributed Switches extend the capabilities of virtual networking. A vDS can be broken into two logical sections: the data plane and the management plane.

- Data plane: This is also called the I/O plane, and it takes care of the actual packet switching, filtering, tagging, and all other networking-related activities.
- Management plane: This is also known as the control plane. It is the centralized control used to manage and configure the data plane functionality.

There's more...

It is possible to preserve the vSphere Distributed Switch configuration information to a file. You can use these configurations for other deployments and also as a backup, and you can restore the port group configuration in case of any misconfiguration.
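The idea of exporting a switch configuration to a file and restoring it later can be sketched generically in Python. This is not the vSphere export format (the Web Client produces its own backup file); it is only a save-and-restore pattern with a made-up configuration dictionary, shown to clarify why keeping an exported copy is useful before making changes.

```python
import json
from pathlib import Path

# Made-up vDS and port group settings, standing in for an exported configuration.
vds_config = {
    "name": "dvSwitch-Prod",
    "uplinks": 2,
    "port_groups": [
        {"name": "dvPG-VM", "vlan": 100, "binding": "static"},
        {"name": "dvPG-vMotion", "vlan": 200, "binding": "static"},
    ],
}

backup_file = Path("dvSwitch-Prod-backup.json")

def export_config(config: dict, path: Path) -> None:
    """Save the configuration to a file so it can be restored later."""
    path.write_text(json.dumps(config, indent=2))

def restore_config(path: Path) -> dict:
    """Read the previously exported configuration back from disk."""
    return json.loads(path.read_text())

export_config(vds_config, backup_file)
vds_config["port_groups"][0]["vlan"] = 999        # simulate a misconfiguration
vds_config = restore_config(backup_file)          # roll back to the saved state
print(vds_config["port_groups"][0]["vlan"])       # 100
```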
The following steps will export the vSphere Distributed Switch configuration:

1. Select the vSphere Distributed Switch from the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Export Configurations.
3. In Configuration to export, you will have the following two options; select the appropriate one:
   - Distributed Switch and all port groups
   - Distributed Switch only
4. Click on OK. The export will begin, and once it is done, you will be asked to save the configuration. Click on Yes and provide the path where the file should be stored.

The import configuration function can be used to create a copy of the exported vDS from the existing configuration file. The following steps will import the vSphere Distributed Switch configuration file:

1. Select the Distributed Switch from the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Import distributed port group.
3. In the Import Port Group Configuration option, browse to the backup file and click on Next.
4. Review the import settings and click on Finish.

The following steps will restore the vSphere distributed port group configuration:

1. Select the distributed port group from the vSphere Web Client.
2. Navigate to Actions | All vCenter Actions | Restore Configuration.
3. Select one of the following options and click on OK:
   - Restore to a previous configuration: This allows you to restore the port group configuration to your previous snapshot.
   - Restore configuration from a file: This allows you to restore the configuration from a file saved on your local system.
4. In the Ready to complete screen, review the settings and click on Finish.

Summary

In this article, we covered vSphere networking concepts and how to work with vSphere Distributed Switches. We also discussed some of the more advanced networking configurations available in the Distributed Switch.


Virtual Machine Concepts

Packt
19 Mar 2014
18 min read
The multiple instances of Windows or Linux systems that run on an ESXi host are commonly referred to as virtual machines (VMs). Any reference to a guest operating system (OS) means an instance of Linux, Windows, or any other supported operating system that is installed in a VM.

vSphere virtual machines

At the heart of virtualization lies the virtual machine. A virtual machine is a set of virtual hardware whose characteristics are determined by a set of files; it is this virtual hardware that a guest operating system is installed on. A virtual machine runs an operating system and a set of applications just like a physical server. A virtual machine comprises a set of configuration files and is backed by the physical resources of an ESXi host. An ESXi host is the physical server on which the VMware hypervisor, known as ESXi, is installed. Each virtual machine is equipped with virtual hardware and devices that provide the same functionality as physical hardware.

Virtual machines are created within a virtualization layer, such as ESXi running on a physical server. This virtualization layer manages requests from the virtual machine for resources such as CPU or memory, and it is responsible for translating these requests to the underlying physical hardware. Each virtual machine is granted a portion of the physical hardware. All VMs have their own virtual hardware; the most important devices, often called the primary four, are CPU, memory, disk, and network. Each VM is isolated from the others, and each interacts with the underlying hardware through a thin software layer known as the hypervisor. This is different from a physical architecture, in which the installed operating system interacts with the installed hardware directly. With virtualization, there are many benefits in relation to portability, security, and manageability that aren't available in an environment that uses a traditional physical infrastructure. However, once provisioned, virtual machines follow many of the same principles that apply to physical servers.

A traditional physical architecture typically has a single application and a single operating system using the physical resources. A virtual architecture, by contrast, has multiple virtual machines running on a single physical server, accessing the hardware through the thin hypervisor layer.

Virtual machine components

When a virtual machine is created, a default set of virtual hardware is assigned to it. VMware provides devices and resources that can be added to and configured on the virtual machine. Not all virtual hardware devices will be available to every virtual machine; both the physical hardware of the ESXi host and the VM's guest OS must support these configurations. For example, a virtual machine cannot be configured with more vCPUs than the ESXi host has logical CPU cores.
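As a tiny illustration of that constraint, the Python sketch below checks a requested vCPU count against the host's logical processor count (sockets multiplied by cores per socket, doubled when hyperthreading is enabled). The host values are made up for the example.

```python
def logical_processors(sockets: int, cores_per_socket: int, hyperthreading: bool) -> int:
    """Logical CPUs the ESXi host exposes to the scheduler."""
    threads_per_core = 2 if hyperthreading else 1
    return sockets * cores_per_socket * threads_per_core

def can_configure_vcpus(requested_vcpus: int, host_logical_cpus: int) -> bool:
    """A VM cannot be given more vCPUs than the host has logical CPUs."""
    return requested_vcpus <= host_logical_cpus

host_lcpus = logical_processors(sockets=2, cores_per_socket=8, hyperthreading=True)
print(host_lcpus)                           # 32
print(can_configure_vcpus(24, host_lcpus))  # True
print(can_configure_vcpus(40, host_lcpus))  # False
```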
The virtual hardware available includes:

- BIOS: Phoenix Technologies 6.00, which functions like a physical server BIOS. Virtual machine administrators are able to enable or disable I/O devices, configure the boot order, and so on.
- DVD/CD-ROM: NEC VMware IDE CDR10, which is installed by default in new virtual machines created in vSphere. The DVD/CD-ROM can be configured to connect to the client workstation's DVD/CD-ROM, an ESXi host DVD/CD-ROM, or even an .iso file located on a datastore. DVD/CD-ROM devices can be added to or removed from a virtual machine.
- Floppy drive: This is installed by default in new virtual machines created in vSphere. The floppy drive can be configured to connect to the client device's floppy drive, a floppy device located on the ESXi host, or even a floppy image (.flp) located on a datastore. Floppy devices can be added to or removed from a virtual machine.
- Hard disk: This stores the guest operating system, program files, and any other data associated with a virtual machine. The virtual disk is a large file, or potentially a set of files, that can be easily copied, moved, and backed up.
- IDE controller: Intel 82371 AB/EB PCI Bus Master IDE Controller, which presents two Integrated Drive Electronics (IDE) interfaces to the virtual machine by default. This IDE controller is a standard way for storage devices, such as floppy drives and CD-ROM drives, to connect to the virtual machine.
- Keyboard: This mirrors the keyboard that is first connected to the virtual machine console upon the initial console connection.
- Memory: This is the virtual memory size configured for the virtual machine, which determines the guest operating system's memory size.
- Motherboard/Chipset: The motherboard uses VMware proprietary devices that are based on the following chips: Intel 440BX AGPset 82443BX Host Bridge/Controller, Intel 82093 AA I/O Advanced Programmable Interrupt Controller, Intel 82371 AB (PIIX4) PCI ISA IDE Xcelerator, and National Semiconductor PC87338 ACPI 1.0 and PC98/99 Compliant Super I/O.
- Network adapter: ESXi networking features provide communication between virtual machines residing on the same ESXi host, between VMs residing on different ESXi hosts, and between VMs and physical machines. When configuring a VM, network adapters (NICs) can be added and the adapter type can be specified.
- Parallel port: This is an interface for connecting peripherals to the virtual machine. Virtual parallel ports can be added to or removed from the virtual machine.
- PCI controller: This is a bus located on the virtual machine motherboard that communicates with components such as a hard disk. A single PCI controller is presented to the virtual machine; it cannot be configured or removed.
- PCI device: DirectPath devices can be added to a virtual machine. The devices must be reserved for PCI pass-through on the ESXi host that the virtual machine runs on. Keep in mind that snapshots are not supported with the DirectPath I/O pass-through device configuration. For more information on virtual machine snapshots, see http://vmware.com/kb/1015180.
- Pointing device: This mirrors the pointing device that is first connected to the virtual machine console upon the initial console connection.
- Processor: This specifies the number of sockets and cores for the virtual processor. It will appear as AMD or Intel to the virtual machine's guest operating system, depending on the physical hardware.
- Serial port: This is an interface for connecting peripherals to the virtual machine. The virtual machine can be configured to connect to a physical serial port, to a file on the host, or over the network. The serial port can also be used to establish a direct connection between two VMs. Virtual serial ports can be added to or removed from the virtual machine.
- SCSI controller: This provides access to virtual disks. The virtual SCSI controller may appear as one of several different types of controllers to a virtual machine, depending on the guest operating system of the VM. Editing the VM configuration can modify the SCSI controller type, a SCSI controller can be added, and a virtual controller can be configured for bus sharing.
- SCSI device: A SCSI device interface is available to the virtual machine by default. This interface is a typical way to connect storage devices (hard drives, floppy drives, CD-ROMs, and so on) to a VM. SCSI devices can be added to or removed from a virtual machine.
- SIO controller: The Super I/O controller provides serial and parallel ports and floppy devices, and performs system management activities. A single SIO controller is presented to the virtual machine; it cannot be configured or removed.
- USB controller: This provides USB functionality to the USB ports it manages. The virtual USB controller is a software virtualization of the USB host controller function in a VM.
- USB device: Multiple USB devices, such as mass storage devices or security dongles, may be added to a virtual machine. The USB devices can be connected to a client workstation or to an ESXi host.
- Video controller: This is a VMware Standard VGA II Graphics Adapter with 128 MB of video memory.
- VMCI: The Virtual Machine Communication Interface provides high-speed communication between the hypervisor and a virtual machine. VMCI can also be enabled for communication between VMs. VMCI devices cannot be added or removed.

Uses of virtual machines

In any infrastructure, there are many business processes that have applications supporting them. These applications typically have certain requirements, such as security or performance requirements, which may limit the application to being the only thing installed on a given machine. Without virtualization, there is typically a 1:1:1 ratio of server hardware to operating system to application. This type of architecture is not flexible and is inefficient, because many applications use only a small percentage of the physical resources dedicated to them, effectively leaving the physical servers vastly underutilized. As hardware continues to get better and better, the gap between the abundant resources and the often small application requirements widens. Also, consider the overhead needed to support the entire infrastructure, such as power, cooling, cabling, manpower, and provisioning time. A large server sprawl costs more money for the space and power needed to house and cool these systems.

Virtual infrastructures are able to do more with less: fewer physical servers are needed due to higher consolidation ratios. Virtualization provides a safe way of putting more than one operating system (or virtual machine) on a single piece of server hardware by isolating each VM running on the ESXi host from the others. Migrating physical servers to virtual machines and consolidating them onto far fewer physical servers lowers monthly power and cooling costs in the datacenter. Fewer physical servers can help reduce the datacenter footprint; fewer servers means less networking equipment, fewer server racks, and eventually less datacenter floor space required. Virtualization also changes the way a server is provisioned: initially it took hours to build and cable a server and install the OS; now it takes only seconds to deploy a new virtual machine using templates and cloning.
VMware offers a number of advanced features that aren't found in a strictly physical infrastructure. These features, such as High Availability, Fault Tolerance, and Distributed Resource Scheduler, help with increased uptime and overall availability. These technologies keep the VMs running or provide the ability to quickly recover from unplanned outages. The ability to quickly and easily relocate a VM from one ESXi host to another is one of the greatest benefits of using vSphere virtual machines.

In the end, virtualizing the infrastructure and using virtual machines will help save time, space, and money. However, keep in mind that there are some upfront costs to be aware of. Server hardware may need to be upgraded, or new hardware purchased, to ensure compliance with the VMware Hardware Compatibility List (HCL). Another cost that should be taken into account is licensing for VMware and the guest operating systems; each tier of licensing allows for more features but drives up the price to license all of the server hardware.

The primary virtual machine resources

Virtualization decouples the physical hardware from the operating system. Each virtual machine contains its own set of virtual hardware, and there are four primary resources that a virtual machine needs in order to function correctly: CPU, memory, network, and hard disk. These four resources look like physical hardware to the guest operating systems and applications. The virtual machine is granted access to a portion of the resources at creation and can be reconfigured at any time thereafter. If a virtual machine experiences constraint, one of the four primary resources is generally where a bottleneck will occur.

In a traditional architecture, the operating system interacts directly with the server's physical hardware without virtualization. It is the operating system that allocates memory to applications, schedules processes to run, reads from and writes to attached storage, and sends and receives data on the network. This is not the case with a virtualized architecture. The virtual machine's guest operating system still performs these tasks, but it interacts with virtual hardware presented by the hypervisor. In a virtualized environment, a virtual machine interacts with the physical hardware through a thin layer of software known as the virtualization layer or the hypervisor; in this case, the hypervisor is ESXi. The hypervisor allows the VM to function with a degree of independence from the underlying physical hardware, and this independence is what enables vMotion and Storage vMotion functionality. This section provides an overview of each of the "primary four" resources.

CPU

The virtualization layer runs CPU instructions so that the virtual machines run as though they were accessing the physical processor on the ESXi host. Performance is paramount for CPU virtualization, so the ESXi host's physical resources are used whenever possible. A virtual machine can be configured with up to 64 virtual CPUs (vCPUs) as of vSphere 5.5. The maximum number of vCPUs that can be allocated depends on the number of logical cores the physical hardware has. Another factor in the maximum vCPU count is the tier of vSphere licensing; only Enterprise Plus licensing allows for 64 vCPUs. The VMkernel includes a CPU scheduler that dynamically schedules vCPUs on the ESXi host's physical processors.
The VMkernel scheduler considers socket-core-thread topology when making scheduling decisions. A socket is a single integrated circuit package that has one or more physical processor cores. Each core has one or more logical processors, also known as threads. If hyperthreading is enabled on the host, ESXi is capable of executing two threads, or sets of instructions, simultaneously. Effectively, hyperthreading provides more logical CPUs on which vCPUs can be scheduled, providing more scheduler throughput; however, keep in mind that hyperthreading does not double a core's power. During times of CPU contention, when VMs are competing for resources, the VMkernel timeslices the physical processors across all virtual machines to ensure that each VM runs as if it had the specified number of vCPUs.

VMware vSphere Virtual Symmetric Multiprocessing (SMP) is what allows virtual machines to be configured with up to 64 virtual CPUs, which allows larger CPU workloads to run on an ESXi host. Though most supported guest operating systems are multiprocessor aware, many guest OSes and applications do not need, and are not enhanced by, multiple vCPUs. Check vendor documentation for operating system and application requirements before configuring SMP virtual machines.

Memory

In a physical architecture, an operating system assumes that it owns all of the physical memory in the server, which is a correct assumption. A guest operating system in a virtual architecture also makes this assumption, but it does not, in fact, own all of the physical memory. A guest operating system in a virtual machine uses a contiguous virtual address space, created by ESXi, as its configured memory.

Virtual memory is a well-known technique that creates this contiguous virtual address space, allowing the hardware and operating system to handle the address translation between the physical and virtual address spaces. Since each virtual machine has its own contiguous virtual address space, ESXi can run more than one virtual machine at the same time, and each virtual machine's memory is protected against access from other virtual machines. This effectively results in three layers of memory in ESXi: host physical memory, guest operating system physical memory, and guest operating system virtual memory. The VMkernel presents a portion of the host's physical memory to the virtual machine as its guest operating system physical memory, and the guest operating system presents virtual memory to the applications.

The virtual machine is configured with a memory size; this is the amount that the guest OS is told it has available. A virtual machine will not necessarily use the entire memory size; it only uses what is needed at the time by the guest OS and applications. However, a VM cannot access more memory than its configured memory size. A default memory size is provided by vSphere when creating the virtual machine. It is important to know the memory needs of the application and guest operating system being virtualized so that the virtual machine's memory can be sized accordingly.
Network

There are two key components in virtual networking: the virtual switch and virtual Ethernet adapters. A virtual machine can be configured with up to ten virtual Ethernet adapters, called vNICs.

Virtual network switching is done in software, between virtual machines at the vSwitch level, until the frames hit an uplink (a physical adapter), exiting the ESXi host and entering the physical network. Virtual networks exist for virtual devices; all communication between the virtual machines and the external world (the physical network) goes through vNetwork standard switches or vNetwork distributed switches. Virtual networks operate at layer 2, the data link layer, of the OSI model.

A virtual switch is similar to a physical Ethernet switch in many ways. For example, virtual switches support the standard VLAN (802.1Q) implementation and have a forwarding table, like a physical switch. An ESXi host may contain more than one virtual switch. Each virtual switch is capable of binding multiple vmnics together in a network interface card (NIC) team, which offers greater availability to the virtual machines using the virtual switch.

There are two connection types available on a virtual switch: port groups and VMkernel ports. Virtual machines are connected to port groups on a virtual switch, allowing access to network resources. VMkernel ports provide network services to the ESXi host, including IP storage, management, vMotion, and so on. Each VMkernel port must be configured with its own IP address and network mask. The port groups and VMkernel ports reside on a virtual switch and connect to the physical network through the physical Ethernet adapters, known as vmnics. If uplinks (vmnics) are associated with a virtual switch, the virtual machines connected to a port group on that virtual switch will be able to access the physical network.
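Since the text notes that a virtual switch keeps a forwarding table much like a physical layer 2 switch, here is a minimal, generic Python sketch of MAC learning and forwarding. It is a teaching toy, not VMware's implementation; the port names and MAC addresses are made up.

```python
class ToyL2Switch:
    """Minimal MAC-learning switch: learn source MACs, forward or flood frames."""

    def __init__(self, ports):
        self.ports = ports
        self.forwarding_table = {}          # MAC address -> port

    def receive(self, in_port: str, src_mac: str, dst_mac: str) -> list[str]:
        # Learn which port the source MAC lives behind.
        self.forwarding_table[src_mac] = in_port
        if dst_mac in self.forwarding_table:
            return [self.forwarding_table[dst_mac]]           # known: forward to one port
        return [p for p in self.ports if p != in_port]        # unknown: flood

vswitch = ToyL2Switch(ports=["vm-port-1", "vm-port-2", "uplink-vmnic0"])
# First frame from VM1 to VM2: destination unknown, so it is flooded.
print(vswitch.receive("vm-port-1", "00:50:56:aa:aa:01", "00:50:56:aa:aa:02"))
# VM2 replies; both MACs are now learned, so traffic goes to a single port.
print(vswitch.receive("vm-port-2", "00:50:56:aa:aa:02", "00:50:56:aa:aa:01"))
print(vswitch.receive("vm-port-1", "00:50:56:aa:aa:01", "00:50:56:aa:aa:02"))
```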
Disk

In a non-virtualized environment, physical servers connect directly to storage, either to an external storage array or to internal hard disk arrays in the server chassis. The issue with this configuration is that a single server expects total ownership of the physical device, tying an entire disk drive to one server. Sharing storage resources in non-virtualized environments can require complex filesystems or migration to file-based Network Attached Storage (NAS) or Storage Area Networks (SANs).

Shared storage is a foundational technology that enables many features in a virtual environment (High Availability, Distributed Resource Scheduler, and so on). Virtual machines are encapsulated in a set of discrete files stored on a datastore. This encapsulation makes the VMs portable and easy to clone or back up. For each virtual machine, there is a directory on the datastore that contains all of the VM's files.

A datastore is a generic term for a container that holds files, including .iso images and floppy images. It can be formatted with VMware's Virtual Machine File System (VMFS) or it can use NFS, and both datastore types can be accessed by multiple ESXi hosts. VMFS is a high-performance, clustered filesystem devised for virtual machines that allows a virtualization-based architecture of multiple physical servers to read and write to the same storage simultaneously. VMFS is designed, constructed, and optimized for virtualization. The newest version, VMFS-5, exclusively uses a 1 MB block size, which is good for large files, while also having an 8 KB subblock allocation for writing small files such as logs. VMFS-5 can have datastores as large as 64 TB.

The ESXi hosts use a locking mechanism to prevent other ESXi hosts accessing the same storage from writing to a VM's files; this helps prevent corruption. Several storage protocols can be used to access and interface with VMFS datastores, including Fibre Channel, Fibre Channel over Ethernet, iSCSI, and direct attached storage. NFS can also be used to create a datastore. A VMFS datastore can be dynamically expanded, allowing the shared storage pool to grow with no downtime.

vSphere significantly simplifies access to storage from the guest OS of the VM. The virtual hardware presented to the guest operating system includes a set of familiar SCSI and IDE controllers, so the guest OS sees a simple physical disk attached via a common controller. Presenting a virtualized storage view to the virtual machine's guest OS has advantages such as expanded support and access, improved efficiency, and easier storage management.
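As a back-of-the-envelope illustration of the 1 MB block and 8 KB subblock sizes mentioned above, the Python sketch below estimates how much space a few files would occupy if small files were placed in a single 8 KB subblock and everything else were rounded up to whole 1 MB blocks. The allocation policy here is a simplification for illustration, not the actual VMFS-5 on-disk logic, and the file names and sizes are made up.

```python
import math

BLOCK = 1 * 1024 * 1024   # VMFS-5 block size: 1 MB
SUBBLOCK = 8 * 1024       # subblock for small files: 8 KB

def allocated_bytes(file_size: int) -> int:
    """Simplified allocation: a tiny file occupies one 8 KB subblock,
    anything larger is rounded up to whole 1 MB blocks."""
    if file_size <= SUBBLOCK:
        return SUBBLOCK
    return math.ceil(file_size / BLOCK) * BLOCK

files = {
    "vm.vmx": 3_500,               # small configuration file -> one subblock
    "vmware.log": 6_000,           # small log file -> one subblock
    "vm-flat.vmdk": 40 * 1024**3,  # 40 GB virtual disk -> 40,960 full blocks
}

for name, size in files.items():
    print(f"{name:14s} {size:>14,d} B -> {allocated_bytes(size):>14,d} B allocated")
```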


Design, Install, and Configure

Packt
18 Mar 2014
4 min read
In this article, we will cover the following key subjects:

- Horizon Workspace architecture overview
- Designing a solution
- Sizing guidelines
- vApp deployment
- Step-by-step configuration
- Installing certificates
- Setting up Kerberos Single Sign-On (SSO)

Reading this article will give you an introduction to the solution and also provide useful reference points that will help you install, configure, and manage a Horizon Workspace deployment. A few things are out of scope for this article, such as setting up vSphere, configuring HA, and using certificates; we will assume that the core infrastructure is already in place.

We start by looking at the solution architecture and then at how to size a deployment, based on best practice and suited to the requirements of your end users. Next, we will cover the preparation steps in vCenter and then deploy the Horizon Workspace vApp. There are then two steps to installation and configuration: first we will guide you through the initial command-line-based setup, and then finally the web-based Setup Wizard. Each section is described in easy-to-follow steps and shown in detail using actual screenshots of our lab deployment. So let's get started with the architecture overview.

The Horizon Workspace architecture

[Diagram: a detailed view of how the Horizon Workspace architecture fits together]

The Horizon Workspace sizing guide

We have already discussed that Horizon Workspace is made up of five virtual appliances. However, for production environments, you will need to deploy multiple instances to provide high availability, offer load balancing, and support the number of users in your environment. For a Proof of Concept (POC) or pilot deployment, this is of less importance.

Sizing the Horizon Workspace virtual appliances

[Diagram: the maximum number of users that each appliance can accommodate]

Using these maximum values, you can calculate the number of appliances that you need to deploy (a small sizing sketch follows this section). For example, if you had 6,000 users in your environment, you would need to deploy a single connector-va appliance, three gateway-va appliances, one service-va appliance, seven data-va appliances, and a single configurator-va appliance. Please note that data-va should be sized using N+1; the first data-va appliance should never contain any user data. For high availability, you may want to use two connector-va appliances and two service-va appliances.

Sizing for Preview services

If you plan to use a Microsoft Preview Server, it needs to be sized based on the documented requirements.

[Diagram: Microsoft Preview Server sizing requirements]

If we use our previous example of 6,000 users, then to use Microsoft Preview you would require a total of six Microsoft Preview Servers.

The Horizon Workspace database

You have a few options for the database. For a POC or pilot environment, you can use the internal database functionality. In a production deployment, you would use an external database, either VMware PostgreSQL or Oracle 11g, which allows you to have a highly available and protected database. The VMware recommendation is PostgreSQL.

[Diagram: sizing information for the Horizon Workspace database]
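A rough sizing helper can make the appliance math explicit. The per-appliance user capacities below are assumptions reverse-engineered from the 6,000-user example in the text (2,000 users per gateway-va, 1,000 per data-va plus an N+1 appliance, and a single connector-va, service-va, and configurator-va at this scale); they are illustrative only and should not be treated as VMware's published maximums.

```python
import math

# Assumed per-appliance capacities, inferred only from the 6,000-user example.
GATEWAY_USERS_PER_VA = 2000
DATA_USERS_PER_VA = 1000

def size_horizon_workspace(users: int) -> dict:
    """Rough appliance-count estimate for a Horizon Workspace deployment."""
    return {
        "connector-va": 1,                                    # 2 for high availability
        "gateway-va": math.ceil(users / GATEWAY_USERS_PER_VA),
        "service-va": 1,                                      # 2 for high availability
        "data-va": math.ceil(users / DATA_USERS_PER_VA) + 1,  # N+1, first holds no user data
        "configurator-va": 1,
    }

print(size_horizon_workspace(6000))
# {'connector-va': 1, 'gateway-va': 3, 'service-va': 1, 'data-va': 7, 'configurator-va': 1}
```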
External access and load balancing considerations

In a production environment, high availability, redundancy, and external access are core requirements, and they need planning and configuration. For a POC or pilot deployment, this is usually of less importance, but it is something to be aware of. To achieve high availability and redundancy, a load balancer is required in front of the gateway-va appliances and the connector-va appliances that are used for Kerberos (Windows authentication). If external access is required, then typically you will also need a load balancer in the Demilitarized Zone (DMZ), as detailed in the architecture diagram at the beginning of this article. It is not supported to place gateway-va appliances in the DMZ. For more detailed information about load balancing, please see the following guide: https://communities.vmware.com/docs/DOC-24577

Summary

In this article, we had an overview of the Horizon Workspace architecture and made sure that all the prerequisites are in place before deploying the Horizon Workspace vApp. The article covered the basic sizing, configuration, and installation of Horizon Workspace 1.5.


Snapshots

Packt
25 Feb 2014
8 min read
Much ado about snapshots (Intermediate)

Snapshots are a fantastic feature of VMware Fusion because they allow you to roll a VM back in time to a previously saved state. Using snapshots is easy, but understanding how they work is important.

Now, first things first: a snapshot is not a backup, but rather a way to either safely roll back in time or to keep multiple configurations of an OS that share the same basic configuration. The latter is very handy when building websites. For example, you can have one snapshot with IE7, another with IE8, another with IE9, another with IE10, and so on. A backup is a separate copy of the entire VM and/or its contents ("Your VM" and "Your Data") on a different disk or backup service; a snapshot is about rolling back in time on the same machine. If you took a snapshot when we finished installing Windows 7 but before upgrading to Windows 8, you can easily switch back and forth between Windows 7 and 8 by simply restoring that state. Let's see how.

Getting ready

Firstly, the VM doesn't have to be running, but it can be. The snapshot feature is powerful enough to work even when the VM is still running, but it goes much faster if the virtual machine is powered off or suspended. We can use the snapshot we took when we finished installing Windows 7. If you didn't take a snapshot at that time, you can take a new snapshot now by clicking on the Take button in the snapshot window.

How to do it...

Snapshots are best taken when a VM is powered off. It doesn't have to be, but your computer will complete the "Take Snapshot" operation much faster if the VM is powered off or suspended. Both the fully powered-off and suspended cases are faster because the VM isn't in motion when the snapshot is taken, allowing the operation to finish in a single stroke. Otherwise, when the VM is running, the snapshot mechanism has to catch up on the data that changed while the snapshot was being taken: if the snapshot took five minutes, it then has to gather those five minutes of changes, which might take a minute; after that, it has to gather that last minute, which might take 20 seconds; then those 20 seconds again, and so on. This is made worse the more you are doing within the virtual machine. So, get it done in one motion by suspending or powering off the VM first.

Launching the snapshot window and examining the tree

The following sequence of steps initiates the snapshot process. Click on the Snapshots button in the VM window and have a look at the snapshot interface. In my example, the "tree" showed a single snapshot taken right after we finished installing Windows 7. When I finished upgrading to Windows 8, I took another snapshot. This allows me to go back in time to both a fresh Windows 7 and a fresh Windows 8 installation.

Restoring a snapshot

Having a TARDIS or DeLorean might be more fun, but for the rest of us, we can go back in time by restoring a snapshot. Let's go back to Windows 7 from our Windows 8 VM. Follow these steps:

1. In the Snapshot Manager window, simply double-click on the base disk at the top of the tree to restore it. It will ask about saving the current state; choose Save when prompted. You can rename a snapshot at any time from this window by right-clicking on its name and clicking on Get Info.
2. After a few seconds, depending on the speed of your Mac, the older version should now show as the Current State. If the VM was running, it should now simply show up as Windows 7. If you see a spinning wheel in the upper-right corner, that's the "disk cleanup" activity working in the background. You can use the VM while it's doing this; however, disk access might be a bit slow while it's cleaning up the disks. If the VM is suspended or powered off when restoring, the operation is much faster because the VM isn't changing or running.

With this technique, you can switch between Windows 7 and Windows 8 with ease.

How it works...

In Fusion, all of a VM's files are stored under Documents | Virtual Machines by default. Your C: drive in Windows is actually a series of files on the Mac, named in sequence, with a .vmdk extension inside the Virtual Machines folder. You can view the files by right-clicking on the VM in the Virtual Machines folder in the Finder and clicking on Show Package Contents.

When you create a VM, it starts with one virtual disk (called the base disk). This virtual disk, or VMDK, is broken up into 2 GB "chunks" by default, but it can be one big chunk if desired. So, for a 20 GB disk, you end up with about 10 or 11 .vmdk files. This is for easy transport on drives that don't support large files (such as MS-DOS/FAT32-formatted drives), and it may also have a performance benefit in certain cases.

When you take a snapshot, the currently active VMDK goes into read-only mode, and a new VMDK is created. All writes go to the new VMDK, and reads come from the original VMDK when the bits are there. Fusion is smart enough to keep track of which files hold what; so, when the VM is running, Fusion reads all of the snapshots in the current state's chain.

A .vmdk file is thus named <disk_name>-<snapshot_number>-<slice>.vmdk. So, in my example, Virtual Disk is my disk name. (I could have customized and specified something different by performing a Custom Virtual Machine operation at the beginning.) I have three disks: Virtual Disk, Virtual Disk-000001, and Virtual Disk-000003. This means I have two snapshots and a base disk. (I took one snapshot and deleted it, which is why there is no Virtual Disk-000002.) Each of those disks has a 60 GB capacity, so there are 31 slices (s001 to s031). Each file starts at around 300 KB and can grow to just over 2 GB.

You can see where things can start to get confusing, and it gets even better when you have snapshots that are based on snapshots. You can have multiple snapshots with a common parent, which introduces a new concept in Fusion: snapshot trees. Snapshots are also a great way to make sure something new isn't going to destroy your VM. So, if you are about to install some software that might be risky, take a snapshot; it's easy to roll back if something goes wrong.
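To see how that naming scheme plays out, here is a small Python sketch that lists the slice file names you would expect for a split (2 GB chunk) disk and for one of its snapshot deltas. The slice-count rule (one slice per 2 GB plus one extra, chosen only so that a 60 GB disk yields the 31 slices quoted above) is an illustrative assumption, not taken from VMware documentation.

```python
def slice_names(disk_name: str, snapshot_number: int, size_gb: int) -> list[str]:
    """Expected .vmdk slice file names for a split (2 GB chunk) virtual disk.
    The '+ 1' is an assumption chosen to match the 31 slices quoted for a
    60 GB disk, not Fusion's exact rule."""
    slices = size_gb // 2 + 1
    prefix = disk_name if snapshot_number == 0 else f"{disk_name}-{snapshot_number:06d}"
    return [f"{prefix}-s{n:03d}.vmdk" for n in range(1, slices + 1)]

base = slice_names("Virtual Disk", 0, 60)
snap = slice_names("Virtual Disk", 1, 60)
print(len(base), base[0], base[-1])   # 31 Virtual Disk-s001.vmdk Virtual Disk-s031.vmdk
print(snap[0])                        # Virtual Disk-000001-s001.vmdk
```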
There's more...

Snapshots are complicated, but there is great material out there from the gurus behind Fusion itself about how they work on a more technical level.

More information

You can read more about snapshots from Eric Tung, one of the original developers of Fusion (the blog is a bit old, but still completely accurate with respect to how snapshots work), at http://blogs.vmware.com/teamfusion/2008/11/vmware-fusion-3.html. A great article by Eric that dispels some of the confusion around snapshots and how to use them is available at http://blogs.vmware.com/teamfusion/2008/11/bonustip-snaps.html.

One thing to note is that the more snapshots you have, the more effort the Mac has to make to "glue" them all together when you're running your virtual machine. As a rule of thumb, don't take snapshots and keep them around forever if you don't intend to roll back to them regularly. Also, each snapshot can grow to the size of the entire C: drive in Windows. Use them when necessary, but be aware of their performance and disk-usage costs.

Summary

In this article, we studied snapshots and their usage, and saw how to use them to keep both Windows 7 and Windows 8 in a single virtual machine.


Understanding Citrix® Provisioning Services 7.0

Packt
27 Jan 2014
5 min read
[Diagram: a high-level view of the basic Provisioning Services infrastructure and how its components might appear within the datacenter after installation and implementation]

Provisioning Services License Server

The License Server should either be installed within the shared infrastructure, or an existing Citrix License Server can be selected; however, we have to ensure that the Provisioning Services license is configured on the existing Citrix Enterprise License Server. A License Server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the License Server.

Provisioning Services Database Server

The database stores all system configuration settings that exist within a farm, and only one database can exist within a Provisioning Services farm. We can choose an existing SQL Server database or install SQL Server in a cluster for high availability, from a redundancy and business continuity perspective. The Database Server can be selected when the Provisioning Services Configuration Wizard is run on a planned server. All Provisioning Servers within the farm must be able to communicate with the Database Server.

Provisioning Services Admin Console

The Citrix Provisioning Services Admin Console is the tool used to control your Provisioning Services implementation. After logging on to the console, we can select the farm that we want to connect to. Our role determines what we can see in the console and what we can do in the Provisioning Services farm.

Shared storage service

Citrix Provisioning Services requires shared storage for vDisks that is accessible by all of the users in a network. It is intended for file storage, allowing simultaneous access by multiple users without the need to replicate files to their machines' vDisks. The supported shared storage types are SAN, NAS, iSCSI, and CIFS.

Active Directory Server

Citrix Provisioning Services requires Microsoft Active Directory. It provides authentication and authorization mechanisms as well as a framework within which other related services can be deployed. Microsoft Active Directory is an LDAP-compliant database that contains objects; the most commonly used objects are users, computers, and groups.

Network services

Dynamic Host Configuration Protocol (DHCP) is used to assign IP addresses to servers and systems. Trivial File Transfer Protocol (TFTP) is used for the automated transfer of boot configuration files between servers and systems in a network. Preboot Execution Environment (PXE) is a standard client/server interface that allows networked computers to boot remotely over the network before an operating system is loaded locally.
System requirements
Citrix Provisioning Services can be installed with the following requirements.
Citrix Provisioning Server requirements:
Operating system: Windows 2012: Standard, Essential, and Datacenter editions; Windows 2008 R2; Windows 2008 R2 SP1: Standard, Enterprise, and Datacenter editions; and all editions of Windows 2008 (32-bit or 64-bit)
Processor: Intel or AMD x86 or x64 compatible, 2 GHz minimum, 3 GHz preferred, or 3.5 GHz Dual Core/HT (or equivalent) to allow for capacity growth
Memory: 2 GB RAM; 4 GB when serving more than 250 vDisks
Hard disk: To determine the IOPS needed for a given RAID level, plan your sizing based on the following formulas: Total Raw IOPS = Disk Speed IOPS x Number of Disks; Functional IOPS = ((Total Raw IOPS x Write %) / RAID Penalty) + (Total Raw IOPS x Read %). For more details, please refer to http://support.citrix.com/servlet/KbServlet/download/24559-102-647931/
Network adapter: IP assignment to servers should be static. 1 Gb is recommended for fewer than 250 target devices; if you are planning for more than 250 devices, dual 1 Gb adapters are recommended. For High Availability, use two NICs for redundancy.
Pre-requisite software components: Microsoft .NET 4.0 and Microsoft PowerShell 3.0 loaded on a fresh OS
The infrastructure components required are described as follows:
Supported database: Microsoft SQL 2008, Microsoft SQL 2008 R2, and Microsoft SQL 2012 Server (32-bit or 64-bit editions) databases can be used for the Provisioning Services database. For database sizing, please refer to http://msdn.microsoft.com/en-us/library/ms187445.aspx. For HA planning, please refer to http://support.citrix.com/proddocs/topic/provisioning-7/pvs-installtask1-plan-6-0.html.
Supported hypervisor: XenServer 6.0; Microsoft SCVMM 2012 SP1 with Hyper-V 3.0; SCVMM 2012 with Hyper-V 2.0; VMware ESX 4.1, ESX 5, or ESX 5 Update 1; vSphere 5.0, 5.1, and 5.1 Update 1; along with physical devices for 3D Pro graphics (blade servers, Windows Server OS machines, and Windows Desktop OS machines with the XenDesktop VDA installed).
Provisioning Console: Hardware requirements: Processor 2 GHz, Memory 2 GB, Hard disk 500 MB. Supported operating systems: all editions of Windows Server 2008 (32-bit or 64-bit); Windows Server 2008 R2: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions; Windows 7 (32-bit or 64-bit); Windows XP Professional (32-bit or 64-bit); Windows Vista (32-bit or 64-bit): Business, Enterprise, and Ultimate (retail licensing); and all editions of Windows 8 (32-bit or 64-bit). Pre-requisite software: MMC 3.0, Microsoft .NET 4.0, and Windows PowerShell 2.0. If the console is used with XenDesktop, .NET 3.5 SP1 is also required; if it is used with SCVMM 2012 SP1, PowerShell 3.0 is required.
Supported ESD: Applies only when vDisk Update Management is used; the supported ESD systems are WSUS Server 3.0 SP2 and Microsoft System Center Configuration Manager 2007 SP2, 2012, and 2012 SP1.
Supported target device: Supported operating systems: all editions of Windows 8 (32-bit or 64-bit); Windows 7 SP1 (32-bit or 64-bit): Enterprise, Professional, and Ultimate (supported in Private Mode only); Windows XP Professional SP3 32-bit and Windows XP Professional SP2 64-bit; Windows Server 2008 R2 SP1: Standard, Datacenter, and Enterprise editions; Windows Server 2012: Standard, Essential, and Datacenter editions.
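To make the Hard disk sizing formula above easier to apply, the following Python sketch simply evaluates it for a hypothetical disk group; the disk count, per-disk IOPS, read/write split, and RAID penalty are example values, not recommendations.

def functional_iops(disk_speed_iops, num_disks, write_pct, raid_penalty):
    """Apply the sizing formula from the requirements above.

    Total Raw IOPS = Disk Speed IOPS x number of disks
    Functional IOPS = ((Total Raw IOPS * Write %) / RAID Penalty)
                      + (Total Raw IOPS * Read %)
    """
    total_raw = disk_speed_iops * num_disks
    read_pct = 1.0 - write_pct
    return (total_raw * write_pct) / raid_penalty + total_raw * read_pct

# Example: eight 15k SAS disks (~175 IOPS each), 40% writes, RAID 10 write penalty of 2.
print(functional_iops(disk_speed_iops=175, num_disks=8, write_pct=0.4, raid_penalty=2))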
Summary
This article has covered the components that make up a Citrix Provisioning Services farm and the system requirements that must be met to run the software.
Resources for Article:
Further resources on this subject: Introduction to XenConvert [article], Citrix XenApp Performance Essentials [article], Getting Started with XenApp 6 [article]

The Design Documentation

Packt
23 Jan 2014
19 min read
(For more resources related to this topic, see here.) The design documentation provides written documentation of the design factors and the choices the architect has made in the design to satisfy the business and technical requirements. The design documentation also aids in the implementation of the design. In many cases where the design architect is not responsible for the implementation, the design documents ensure the successful implementation of the design by the implementation engineer. Once you have created the documentation for a few designs, you will be able to develop standard processes and templates to aid in the creation of design documentation. Documentation can vary from project to project. Many consulting companies and resellers have standard documentation templates that they use when designing solutions. A properly documented design should include the following information: Architecture design Implementation plan Installation guide Validation test plan Operational procedures This information can be included in a single document or separated into different documents. VMware provides Service Delivery Kits to VMware partners. These kits can be found on the VMware Partner University portal at http://www.vmware.com/go/partneruniversity, which provides documentation templates that can be used as a foundation for creating design documents. If you do not have access to these templates, example outlines are provided in this article to assist you in developing your own design documentation templates. The final steps of the design process include gaining customer approval to begin implementation of the design and the implementation of the design. Creating the architecture design document The architecture design document is a technical document describing the components and specifications required to support the solution and ensure that the specific business and technical requirements of the design are satisfied. An excellent example of an architecture design document is the Cloud Infrastructure Architecture Case Study White Paper article that can be found at http://www.vmware.com/files/pdf/techpaper/cloud-infrastructure-achitecture-case-study.pdf. The architect creates the architecture design document to document the design factors and the specific choices that have been made to satisfy those factors. The document serves as a way for the architect to show his work when making design decisions. The architecture design document includes the conceptual, logical, and physical designs. How to do it... The architecture design document should include the following information: Purpose and overview Executive summary Design methodology Conceptual design Logical management, storage, compute, and network design Physical management, storage, compute, and network design How it works... The Purpose and Overview section of the architecture design includes the Executive Summary section. The Executive Summary section provides a high-level overview of the design and the goals the design will accomplish, and defines the purpose and scope of the architecture design document. The following is an example executive summary in the Cloud Infrastructure Architecture Case Study White Paper : Executive Summary: This architecture design was developed to support a virtualization project to consolidate 100 existing physical servers on to a VMware vSphere 5.x virtual infrastructure. 
The primary goals this design will accomplish are to increase operational efficiency and to provide high availability of customer-facing applications. This document details the recommended implementation of a VMware virtualization architecture based on specific business requirements and VMware recommended practices. The document provides both logical and physical design considerations for all related infrastructure components including servers, storage, networking, management, and virtual machines. The scope of this document is specific to the design of the virtual infrastructure and the supporting components. The purpose and overview section should also include details of the design methodology the architect has used in creating the architecture design. This should include the processes followed to determine the business and technical requirements along with definitions of the infrastructure qualities that influenced the design decisions. Design factors, requirements, constraints, and assumptions are documented as part of the conceptual design. To document the design factors, use a table to organize them and associate them with an ID that can be easily referenced. The following table illustrates an example of how to document the design requirements: ID Requirement R001 Consolidate the existing 100 physical application servers down to five servers R002 Provide capacity to support growth for 25 additional application servers over the next five years R003 Server hardware maintenance should not affect application uptime R004 Provide N+2 redundancy to support a hardware failure during normal and maintenance operations The conceptual design should also include tables documenting any constraints and assumptions. A high-level diagram of the conceptual design can also be included. Details of the logical design are documented in the architecture design document. The logical design of management, storage, network, and compute resources should be included. When documenting the logical design document, any recommended practices that were followed should be included. Also include references to the requirements, constraints, and assumptions that influenced the design decisions. When documenting the logical design, show your work to support your design decisions. Include any formulas used for resource calculations and provide detailed explanations of why design decisions were made. An example table outlining the logical design of compute resource requirements is as follows: Parameter Specification Current CPU resources required 100 GHz *CPU growth 25 GHz CPU required (75 percent utilization) 157 GHz Current memory resources required 525 GB *Memory growth 131 GB Memory required (75 percent utilization) 821 GB Memory required (25 percent TPS savings) 616 GB *CPU and memory growth of 25 additional application servers (R002) Similar tables will be created to document the logical design for storage, network, and management resources. The physical design documents have the details of the physical hardware chosen along with the configurations of both the physical and virtual hardware. Details of vendors and hardware models chosen and the reasons for decisions made should be included as part of the physical design. The configuration of the physical hardware is documented along with the details of why specific configuration options were chosen. The physical design should also include diagrams that document the configuration of physical resources, such as physical network connectivity and storage layout. 
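Showing your work can be as simple as scripting the calculations behind the compute resource table above. The following Python sketch only illustrates the approach (add projected growth, add headroom so that normal utilization stays around 75 to 80 percent, then apply the assumed 25 percent transparent page sharing saving to memory); the exact figures in a design document may differ slightly depending on how intermediate values are rounded.

import math

def required_capacity(current, growth, headroom=1.25):
    """Projected demand plus headroom so normal utilization stays around 75-80%."""
    return math.ceil((current + growth) * headroom)

cpu_ghz = required_capacity(current=100, growth=25)      # roughly the 157 GHz in the example
memory_gb = required_capacity(current=525, growth=131)   # roughly the 821 GB in the example
memory_after_tps_gb = math.ceil(memory_gb * 0.75)        # assume 25% TPS savings

print(cpu_ghz, memory_gb, memory_after_tps_gb)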
A sample outline of the architecture design document is as follows: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of the document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose and overview: This section consists of an executive summary to provide an overview of the design and the design methodology followed in creating the design Conceptual design: It is the documentation of the design factors: requirements, constraints, and assumptions Logical design: It has the details of the logical management, storage, network, and compute design Physical design: It contains the details of the selected hardware and the configuration of the physical and virtual hardware Writing an implementation plan The implementation plan documents the requirements necessary to complete the implementation of the design. The implementation plan defines the project roles and defines what is expected of the customer and what they can expect during the implementation of the design. This document is sometimes referred to as the statement of work. It defines the key points of contact, the requirements that must be satisfied to start the implementation, any project documentation deliverables, and how changes to the design and implementation will be handled. How to do it... The implementation plan should include the following information: Purpose statement Project contacts Implementation requirements Overview of implementation steps Definition of project documentation deliverables Implementation of change management How it works... The purpose statement defines the purpose and scope of the document. The purpose statement of the implementation plan should define what is included in the document and provide a brief overview of the goals of the project. The purpose statement is simply an introduction so that someone reading the document can gain a quick understanding of what the document contains. The following is an example purpose statement: This document serves as the implementation plan and defines the scope of the virtualization project. This document identifies points of contact for the project, lists implementation requirements, provides a brief description of each of the document deliverables, deliverables, and provides an overview of the implementation process for the data-center virtualization project. The scope of this document is specific to the implementation of the virtual data-center implementation and the supporting components as defined in the Architecture Design. Key project contacts, their roles, and their contact information should be included as part of the implementation plan document. These contacts include customer stakeholders, project managers, project architects, and implementation engineers. 
The following is a sample table that can be used to document project contacts for the implementation plan: Role Name Contact information Customer project sponsor     Customer technical resource     Project manager     Design architect     Implementation engineer     QA engineer     Support contacts for hardware and software used in the implementation plan may also be included in the table, for example, contact numbers for VMware support or other vendor support. Implementation requirements contain the implementation dependencies to include the access and facility requirements. Any hardware, software, and licensing that must be available to implement the design is also documented here. Access requirements include the following: Physical access to the site. Credentials necessary for access to resources. These include active directory credentials and VPN credentials (if remote access is required). Facility requirements include the following: Power and cooling to support the equipment that will be deployed as part of the design Rack space requirements Hardware, software, and licensing requirements include the following: vSphere licensing Windows or other operating system licensing Other third-party application licensing Software (ISO, physical media, and so on) Physical hardware (hosts, array, network switches, cables, and so on) A high-level overview of the steps required to complete the implementation is also documented. The details of each step are not a part of this document; only the steps that need to be performed will be included. For example: Procurement of hardware, software, and licensing. Scheduling of engineering resources. Verification of access and facility requirements. Performance of an inventory check for the required hardware, software, and licensing. Installation and configuration of storage array. Rack, cable, and burn-in of physical server hardware. Installation of ESXi on physical servers. Installation of vCenter Server. Configuration of ESXi and vCenter. Testing and verification of implementation plan. Migration of physical workloads to virtual machines. Operational verification of the implementation plan. The implementation overview may also include an implementation timeline documenting the time required to complete each of the steps. Project documentation deliverables are defined as part of the implementation plan. Any documentation that will be delivered to the customer once the implementation has been completed should be detailed here. Details include the name of the document and a brief description of the purpose of the document. The following table provides example descriptions of the project documentation deliverables: Document Description Architecture design This is a technical document describing the vSphere components and specifications required to achieve a solution that addresses the specific business and technical requirements of the design. Implementation plan This identifies implementation roles and requirements. It provides a high-level map of the implementation and deliverables detailed in the design. It documents change management procedures. Installation guide This document provides detailed, step-by-step instructions on how to install and configure the products specified in the architecture design document. Validation test plan This document provides an overview of the procedures to be executed post installation to verify whether or not the infrastructure is installed correctly. 
It can also be used at any point subsequent to the installation to verify whether or not the infrastructure continues to function correctly. Operational procedures This document provides detailed, step-by-step instructions on how to perform common operational tasks after the design is implemented. How changes are made to the design, specifically changes made to the design factors, must be well documented. Even a simple change to a requirement or an assumption that cannot be verified can have a tremendous effect on the design and implementation. The process for submitting a change, researching the impact of the change, and approving the change should be documented in detail. The following is an example outline for an implementation plan: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Project contacts: It is the documentation of key project points of contact Implementation requirements: It provides the access, facilities, hardware, software, and licensing required to complete the implementation Implementation overview: It is the overview of the steps required to complete the implementation Project deliverables: It consists of the documents that will be provided as deliverables once implementation has been completed Developing an installation guide The installation guide provides step-by-step instructions for the implementation of the architecture design. This guide should include detailed information about how to implement and configure all the resources associated with the virtual datacenter project. In many projects, the person creating the design is not the person responsible for implementing the design. The installation guide outlines the steps necessary to implement the physical design outlined in the architecture design document. The installation guide should provide details about the installation of all components, including the storage and network configurations required to support the design. In a complex design, multiple installation guides may be created to document the installation of the various components required to support the design. For example, separate installation guides may be created for the storage, network, and vSphere installation and configuration. How to do it... The installation guide should include the following information: Purpose statement Assumption statement Step-by-step instructions to implement the design How it works... The purpose statement simply states the purpose of the document. The assumption statement describes any assumptions the document's author has made. Commonly, an assumption statement simply states that the document has been written, assuming that the reader is familiar with virtualization concepts and the architecture design. The following is an example of a basic purpose and assumption statement that can be used for an installation guide: Purpose: This document provides a guide for installing and configuring the virtual infrastructure design defined in the Architecture Design. 
Assumptions: This guide is written for an implementation engineer or administrator who is familiar with vSphere concepts and terminologies. The guide is not intended for administrators who have no prior knowledge of vSphere concepts and terminology. The installation guide should include details on implementing all areas of the design. It should include configuration of the storage array, physical servers, physical network components, and vSphere components. The following are just a few examples of installation tasks to include instructions for: Storage array configurations Physical network configurations Physical host configurations ESXi installation vCenter Server installation and configuration Virtual network configuration Datastore configuration High availability, distributed resource scheduler, storage DRS, and other vSphere components installation and configuration The installation guide should provide as much detail as possible. Along with the step-by-step procedures, screenshots can be used to provide installation guidance. The following screenshot is an example taken from an installation guide that details enabling and configuring the Software iSCSI adapter: The following is an example outline for an installation guide: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Assumption statement: It defines any assumptions made in creating the document Installation guide: It provides the step-by-step installation instructions to be followed when implementing the design Creating a validation test plan The validation test plan documents how the implementation will be verified. It documents the criteria that must be met to determine the success of the implementation and the test procedures that should be followed when validating the environment. The criteria and procedures defined in the validation test plan determine whether or not the design requirements have been successfully met. How to do it... The validation test plan should include the following information: Purpose statement Assumption statement Success criteria Test procedures How it works... The purpose statement defines the purpose of the validation test plan and the assumption statement documents any assumptions the author of the plan has made in developing the test plan. Typically, the assumptions are that the testing and validation will be performed by someone who is familiar with the concepts and the design. The following is an example of a purpose and assumption statement for a validation test plan: Purpose: This document contains testing procedures to verify that the implemented configurations specified in the Architecture Design document successfully addresses the customer requirements. Assumptions: This document assumes that the person performing these tests has a basic understanding of VMware vSphere and is familiar with the accompanying design documentation. This document is not intended for administrators or testers who have no prior knowledge of vSphere concepts and terminology. 
The success criteria determines whether or not the implemented design is operating as expected. More importantly, these criteria determine whether or not the design requirements have been met. Success is measured based on whether or not the criteria satisfies the design requirements. The following table shows some examples of success criteria defined in the validation test plan: Description Measurement Members of the active directory group vSphere administrators are able to access vCenter as administrators Yes/No Access is denied to users outside the vSphere administrators active directory group Yes/No Access to a host using the vSphere Client is permitted when lockdown mode is disabled Yes/No Access to a host using the vSphere Client is denied when lockdown mode is enabled Yes/No Cluster resource utilization is less than 75 percent. Yes/No If the success criteria are not met, the design does not satisfy the design factors. This can be due to a misconfiguration or error in the design. Troubleshooting will need to be done to identify the issue or modifications to the design may need to be made. Test procedures are performed to determine whether or not the success criteria have been met. Test procedures should include testing of usability, performance, and recoverability. Test procedures should include the test description, the tasks to perform the test, and the expected results of the test. The following table provides some examples of usability testing procedures: Test description Tasks to perform test Expected result vCenter administrator access Use the vSphere Web Client to access the vCenter Server. Log in as a user who is a member of the vSphere administrators AD group. Administrator access to the inventory of the vCenter Server vCenter access: No permissions Use the vSphere Web Client to access the vCenter Server. Log in as a user who is not a member of the vSphere administrators AD group. Access is denied Host access: lockdown mode disabled Disable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. Direct access to the host using the vSphere Client is successful Host access: lockdown mode enabled Re-enable lockdown mode through the DCUI. Use the vSphere Client to access the host and log in as root. Direct access to the host using the vSphere Client is denied The following table provides some examples of reliability testing procedures: Test description Tasks to perform test Expected result Host storage path failure Disconnect a vmnic providing IP storage connectivity from the host The disconnected path fails, but I/O continues to be processed on the surviving paths. A network connectivity alarm should be triggered and an e-mail should be sent to the configured e-mail address. Host storage path restore Reconnect the vmnic providing IP storage connectivity The failed path should become active and begin processing the I/O. Network connectivity alarms should clear. Array storage path failure Disconnect one network connection from the active SP The disconnected paths fail on all hosts, but I/O continues to be processed on the surviving paths. Management network redundancy Disconnect the active management network vmnic The stand-by adapter becomes active. Management access to the host is not interrupted. A loss-of-network redundancy alarm should be triggered and an e-mail should be sent to the configured e-mail address. These are just a few examples of test procedures. 
The actual test procedures will depend on the requirements defined in the conceptual design. The following is an example outline of a validation test plan: Cover page: It includes the customer and project names Document version log: It contains the log of authors and changes made to the document Document contacts: It includes the subject matter experts involved in the creation of the design Table of contents: It is the index of document sections for quick reference List of tables: It is the index of tables included in the document for quick reference List of figures: It is the index of figures included in the document for quick reference Purpose statement: It defines the purpose of the document Assumption statement: It defines any assumptions made in creating the document Success criteria: It is a list of criteria that must be met to validate the successful implementation of the design Test Procedures: It is a list of test procedures to follow, including the steps to follow and the expected results
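Success criteria phrased as yes/no measurements lend themselves to simple automation. The sketch below is only an illustration of that idea and is not part of any VMware tooling: the host names and the port probes are placeholder assumptions, and a real test plan would script each procedure (permissions, lockdown mode, path failover, and so on) individually.

import socket

def tcp_reachable(host, port, timeout=5):
    """Basic reachability probe used as a stand-in for a real test procedure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Each entry: (description, callable returning True/False)
success_criteria = [
    ("vCenter Server answers on HTTPS (placeholder for admin access tests)",
     lambda: tcp_reachable("vcenter01.example.local", 443)),
    ("ESXi host answers on HTTPS (placeholder for lockdown mode tests)",
     lambda: tcp_reachable("esxi01.example.local", 443)),
]

for description, check in success_criteria:
    result = "Yes" if check() else "No"
    print("{0}: {1}".format(description, result))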

Installing Virtual Desktop Agent – server OS and desktop OS

Packt
14 Jan 2014
3 min read
(For more resources related to this topic, see here.) You need to allow your Windows master image to communicate with your XenDesktop infrastructure. You can accomplish this task by installing Virtual Desktop Agent. In this latest release of the Citrix platform, VDA has been redeployed in three different versions: desktop operating systems, server operating systems, and Remote PC, a way to link an existing physical or virtual machine to your XenDesktop infrastructure. Getting ready You need to install and configure the described software with domain administrative credentials within both the desktop and server operating systems. How to do it... In the following section, we are going to explain the way to install and configure the three different types of Citrix Virtual Desktop Agents. Installing VDA for a server OS machine Connect to the server OS master image with domain administrative credentials. Mount the Citrix XenDesktop 7.0 ISO on the server OS machine by right-clicking on it and selecting the Mount option Browse the mounted Citrix XenDesktop 7.0 DVD-ROM, and double-click on the AutoSelect.exe executable file. On the Welcome screen, click on the Start button to continue. On the XenDesktop 7.0 menu, click on the Virtual Delivery Agent for Windows Server OS link, in the Prepare Machines and Images section. In the Environment section, select Create a master image if you want to create a master image for the VDI architecture (MCS/PVS). Or enable a direct connection to a physical or virtual server. After completing this step, click on Next. In the Core Components section, select a valid location to install the agent; then flag the Citrix Receiver component; and click on the Next button. In the Delivery Controller section, select Do it manually from the drop-down list in order to manually configure Delivery Controller; type a valid controller FQDN; and click on the Add button, as shown in the following screenshot. To continue with the installation, click on Next. To verify that you have entered a valid address, click on the Test connection...button. In the Features section flag, choose the optimization options that you want to enable, and then click on Next to continue, as shown in the following screenshot: In the Firewall section, select the correct radio button to open the required firewall ports automatically if you're using the Windows Firewall, or manually if you've got a firewall other than that on board. After completing this action, click on the Next button as shown in the following screenshot: If the options in the Summary screen are correct, click on the Install button to complete the installation procedure. In order to complete the procedure, you'll need to restart the server OS machine several times.
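The Test connection... button in the wizard simply verifies that the Delivery Controller can be reached on its configured address. If you want to pre-check this from the master image before running the installer, a quick probe such as the following Python sketch can help; the controller FQDN is a placeholder, and it assumes the XML Service is listening on the default port 80 (use 443 if your site has been switched to HTTPS).

import socket

CONTROLLER_FQDN = "ddc01.lab.local"  # placeholder -- use your Delivery Controller FQDN
PORT = 80                            # default XML Service port; use 443 if HTTPS is enforced

try:
    with socket.create_connection((CONTROLLER_FQDN, PORT), timeout=5):
        print("Delivery Controller {0} is reachable on port {1}".format(CONTROLLER_FQDN, PORT))
except OSError as err:
    print("Cannot reach {0}:{1} - {2}".format(CONTROLLER_FQDN, PORT, err))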

Windows Azure Mobile Services - implementing push notifications

Packt
13 Jan 2014
8 min read
Understanding Push Notification Service flow The following procedure illustrates Push Notification Service (PNS) flow from establishing a channel to receiving a notification: The mobile device establishes a channel with the PNS and retrieves its handle (URI). The device registers its handle with a backend service (in our case, a table in our Mobile Service). A notification request can be made by another service, an admin system, and so on, which calls the backend service (in our case, an API). The service makes a request to the correct PNS for every device handle. The PNS notifies the device. Setting up Windows Store apps Visual Studio 2013 has a new wizard, which associates the app with the store in order to obtain a push notifications URI. Code is added to the app to interact with the service that will be updated to have a Channels table. This table has an Insert script to insert the channel and ping back a toast notification upon insert. The following procedure takes us through using the wizard to add a push channel to our app: Right-click on the project, and then navigate to Add | Push Notification. Follow the wizard and sign in to your store account (if you haven't got one, you will need to create one). Reserve an app name and select it. Then, continue by clicking on Next. Click on Import Subscriptions... and the Import Windows Azure Subscriptions dialog box will appear. Click on Download subscription file. Your default browser will be launched and the subscriptions file will be automatically downloaded. If you are logged into the portal, this will happen automatically; otherwise, you'll be prompted to log in. Once the subscription file is downloaded, browse to the downloaded file in the Import Windows Azure Subscriptions dialog box and click on Import. Select the subscription you wish to use, click on Next, and then click on Finish in the final dialog box. In the Output window in Visual Studio, you should see something like the following: Attempting to install 'WindowsAzure.MobileServices' Successfully installed NuGet Package 'WindowsAzure.MobileServices' Successfully added 'push.register.cs' to the project Added field to the App class successfully Initialization code was added successfully Updated ToastCapable in the app manifest Client Secret and Package SID were updated successfully on the Windows Azure Mobile Services portal The 'channels' table and 'insert.js' script file were created successfully Successfully updated application redirect domain Done We will now see a few things have been done to our project and service: The Package.StoreAssociation.xml file is added to link the project with the app on the store. Package.appxmanifest is updated with the store application identity. Add a push.register.cs class in servicesmobile services[Your Service Name], which creates a push notifications channel and sends the details to our service. The server explorer launches and shows us our service with a newly created table named channels, with an Insert method that inserts or updates (if changed) our channel URI. Then, it sends us a toast notification to test that everything is working. Run the app and check that the URI is inserted into the table. You will get a toast notification. Once you've done this, remove the sendNotifications(item.channelUri); call and function from the Insert method. You can do this in Visual Studio via the Server Explorer console. 
I've modified the script further to make sure the item is always updated, so when we send push notifications, we can send them to URIs that have been recently updated so that we are targeting users who are actually using the application (channels actually expire after 30 days too, so it would be a waste of time trying to push to them). The following code details these modifications: function insert(item, user, request) { var ct = tables.getTable("channels"); ct.where({ installationId: item.installationId }).read({ success: function (results) { if (results.length > 0) { // always update so we get the updated date var existingItem = results[0]; existingItem.channelUri = item.channelUri; ct.update(existingItem, { success: function () { request.respond(200, existingItem); } }); } else { // no matching installation, insert the record request.execute(); } } }) } I've also modified the UploadChannel method in the app so that it uses a Channel model that has a Platform property. Therefore, we can now work out which PNS provider to use when we have multiple platforms using the service. The UploadChannel method also uses a new InsertChannel method in our DataService method (you can see the full code in the sample app). The following code details these modifications: public async static void UploadChannel() { var channel = await Windows.Networking.PushNotifications. PushNotificationChannelManager. CreatePushNotificationChannelForApplicationAsync(); var token = Windows.System.Profile.HardwareIdentification. GetPackageSpecificToken(null); string installationId = Windows.Security.Cryptography. CryptographicBuffer.EncodeToBase64String(token.Id); try { var service = new DataService(); await service.InsertChannel(new Channel() { ChannelUri = channel.Uri, InstallationId = installationId, Platform = "win8" }); } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(ex.ToString()); } }  Setting up tiles To implement wide or large square tiles, we need to create the necessary assets and define them in the Visual Assets tab of the Package.appxmanifest editor. This is shown in the following screenshot:  Setting up badges Windows Store apps support badge notifications as well as tile and toast. However, this requires a slightly different configuration. To implement badge notifications, we perform the following steps: Create a 24 x 24 pixel PNG badge that can have opacity, but must use only white color. Define the badge in the Badge Logo section of the Visual Assets tab of the Package.appxmanifest editor. Add a Background Tasks declaration in the Declarations tab of the Package.appxmanifest editor, select Push notification, and enter a Start page, as shown in the following screenshot: Finally, in the Notifications tab of the Package.appxmanifest editor, set Lock screen notifications to Badge. This is shown in the following screenshot: To see the badge notification working, you also need to add the app to the lock screen badge slots in Lock Screen Applications | Change PC Settings | Lock Screen.  Setting up Windows Phone 8 apps Visual Studio 2012 Express for Windows Phone doesn't have a fancy wizard like Visual Studio 2013 Express for Windows Store. So, we need to configure the channel and register it with the service manually. 
The following procedure sets up the notifications in the app by using the table that we created in the preceding Setting up Windows Store apps section: Edit the WMAppManifest.xml file to enable ID_CAP_IDENTITY_DEVICE, which allows us to get a unique device ID for registering in the Channels table, and ID_CAP_PUSH_NOTIFICATION, which allows push notifications in the app. These options are available in the Capabilities tab, as shown in the following screenshot: To enable wide tiles, we need to check Support for large Tiles (you can't see the tick unless you hover over it, as there is apparently a theming issue in VS!) and pick the path of the wide tile we want to use (by default, there is one named FlipCycleTileLarge.png under Tiles in the Assets folder). This is shown in the following screenshot: Next, we need to add some code to get the push channel URI and send it to the service: using Microsoft.Phone.Info; using Microsoft.Phone.Notification; using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Text; using System.Threading.Tasks; using TileTapper.DataServices; using TileTapper.Models; namespace TileTapper.Helpers { public class ChannelHelper { // Singleton instance public static readonly ChannelHelper Default = new ChannelHelper(); // Holds the push channel that is created or found private HttpNotificationChannel _pushChannel; // The name of our push channel private readonly string CHANNEL_NAME = "TileTapperPushChannel"; private ChannelHelper() { } public void SetupChannel() { try { // Try to find the push channel this._pushChannel = HttpNotificationChannel.Find(CHANNEL_NAME); // If the channel was not found, then create a new // connection to the push service if (this._pushChannel == null ) { this._pushChannel = new HttpNotificationChannel(CHANNEL_NAME); this.AttachEvents(); this._pushChannel.Open(); // Bind channel for Tile events this._pushChannel.BindToShellTile(); // Bind channel for Toast events this._pushChannel.BindToShellToast(); } else this.AttachEvents(); } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(ex.ToString()); } } private void AttachEvents() { // Register for all the events before attempting to // open the channel this._pushChannel.ChannelUriUpdated + = async (s, e) => { // Register URI with service await this.Register(); }; this._pushChannel.ErrorOccurred += (s, e) => { System.Diagnostics.Debug.WriteLine(e.ToString()); }; } private async Task Register() { try { var service = new DataService(); await service.InsertChannel(new Channel() { ChannelUri = this._pushChannel.ChannelUri.AbsoluteUri, InstallationId = this.GetDeviceUniqueName(), Platform = "wp8" }); } catch (Exception ex) { System.Diagnostics.Debug.WriteLine(ex.ToString()); } } // Note: to get a result requires // ID_CAP_IDENTITY_DEVICE // to be added to the capabilities of the WMAppManifest // this will then warn users in marketplace private byte[] GetDeviceUniqueID() { byte[] result = null; object uniqueId; if (DeviceExtendedProperties.TryGetValue("DeviceUniqueId", out uniqueId)) result = (byte[])uniqueId; return result; } private string GetDeviceUniqueName() { byte[] id = this.GetDeviceUniqueID(); string idEnc = Encoding.Unicode.GetString(id, 0, id.Length); string deviceID = HttpUtility.UrlEncode(idEnc); return deviceID; } } } This is a singleton class that holds an instance of the HttpNotificationChannel object, so that channel URI changes can be captured and sent up to our service. 
The two methods at the end of the code snippet, GetDeviceUniqueID and GetDeviceUniqueName, will give a unique device identifier for the channels table. Now that we have the code to manage the channel, we need to call the SetupChannel method in the App.xaml.cs launching method as shown in the following code snippet:
private void Application_Launching(object sender, LaunchingEventArgs e)
{
    TileTapper.Helpers.ChannelHelper.Default.SetupChannel();
}
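Because the channels table now records a Platform value ("win8" or "wp8"), the backend can pick the right Push Notification Service for each registered handle. The following Python sketch only illustrates that dispatch step; the send functions are stubs (real WNS calls need an OAuth access token and service-specific headers, and MPNS has its own XML payload format), and the channel records are hypothetical.

def send_wns(channel_uri, message):
    # Stub: a real implementation must authenticate against WNS and POST the
    # toast XML to channel_uri with the required headers.
    print("WNS  ->", channel_uri, message)

def send_mpns(channel_uri, message):
    # Stub: a real implementation POSTs the MPNS toast XML to channel_uri.
    print("MPNS ->", channel_uri, message)

SENDERS = {"win8": send_wns, "wp8": send_mpns}

channels = [  # hypothetical rows from the channels table
    {"channelUri": "https://example.notify.windows.com/abc", "platform": "win8"},
    {"channelUri": "https://example.notify.live.net/def", "platform": "wp8"},
]

def push_to_all(message):
    for channel in channels:
        sender = SENDERS.get(channel["platform"])
        if sender is None:
            continue  # unknown platform -- skip rather than fail the whole batch
        sender(channel["channelUri"], message)

push_to_all("Hello from the backend")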

Understanding View storage-related features

Packt
18 Dec 2013
6 min read
(For more resources related to this topic, see here.) View Storage Accelerator View Storage Accelerator enables View to use the vSphere CBRC feature first introduced with vSphere 5.0. CBRC uses up to 2 GB of RAM on the vSphere host as a read-only cache for View desktop data. CBRC can be enabled for both full clone and linked clone desktop pools, with linked clone desktops having the additional option of caching only the operating system (OS) disk or both the OS disk and the persistent data disk. When the View desktops are deployed and at configured intervals after that, CBRC analyzes the View desktop VMDK and generates a digest file that contains hash values for each block. When the View desktop performs a read operation, the CBRC filter on the vSphere host reviews the hash table and requests the smallest block required to complete the read request. This block and its associated hash key chunk are then placed in the CBRC cache on the vSphere host. Since the desktop VMDK contents are hashed at the block level, and View desktops are typically based on similar master images, CBRC can reuse cached blocks for subsequent read requests for data with the same hash value. Due to this, CBRC is actually a deduplicated cache. The following figure shows the vSphere CBRC Filter as it sits in between the host CBRC cache, and the View desktop digest and VMDK files: Since desktop VMDK contents are subject to change over time, View generates a new digest file of each desktop on a regular schedule. By default, this schedule is every 7 days, but that value can be changed as needed using the View Manager Admin console. Digest generation can be I/O intensive, so this operation should not be performed during periods of heavy desktop use. View Storage Accelerator provides the most benefit during storm scenarios, such as desktop boot storms, user logon storms, or any other read-heavy desktop I/O operation initiated by a large number of desktops. As such, it is unlikely that View Storage Accelerator will actually reduce primary storage needs, but instead will ensure that desktop performance is maintained during these I/O intensive events. Additional information about View Storage Accelerator is available in the VMware document View Storage Accelerator in VMware View 5.1 (http://www.vmware.com/files/pdf/techpaper/vmware-view-storage-accelerator-host-cachingcontent-based-read-cache.pdf). The information in the referenced document is still current, even if the version of View it references is not. Tiered storage for View linked clones To enable a more granular control over the storage architecture of linked clone desktops, View allows us to specify dedicated datastores for each of the following disks: User persistent data disk OS disk (which includes the disposable data disk, if configured) Replica disk It is not necessary to separate each of these disks, but in the following two sections we will outline why we might consider doing so. User persistent data disk The optional linked clone persistent data disk contains user personal data, and its contents are maintained even if the desktop is refreshed or recomposed. Additionally, the disk is associated with an individual user within View, and can be attached to a new View desktop if ever required. 
As such, an organization that does not back up their linked clone desktops may at the very least consider backing up the user persistent data disks Due to the potential importance of the persistent data disks, organizations may wish to apply more protections to them than they would to the rest of the View desktop. View storage tiering is one way we can accomplish this, as we could place these disks on storage that has additional protections on it, such as replication to a secondary location, or even regular storage snapshots. These are just a sampling of the reasons an organization may want to separate the user persistent data disks. Data replication or snapshots are typically not required for linked clone OS disks or replica disks as View does not support the manual recreation of linked clone desktops in the event of a disaster. Only the user persistent data disks can be reused if the desktop needs to be recreated from scratch. Replica disks One of the primary reasons an organization would want to separate linked clone replica disks onto dedicated datastores has to do with the architecture of View itself. When deploying a linked clone desktop pool, if we do not specify a dedicated datastore for the replica disk, View will create a replica disk on every linked clone datastore in the pool. The reason we may not want a replica disk on every linked clone datastore has to do with the storage architecture. Since replica disks are shared between each desktop, their contents are often among the first to be promoted into any cache tiers that exist, particularly those within the storage infrastructure. If we had specified a single datastore for the replica, meaning that only one replica would be created, the storage platform would only need to cache data from that disk. If our storage array cache was not capable of deduplication, and we had multiple replica disks, that same array would now be required to cache the content from several View replica disks. Given that the amount of cache on most storage arrays is limited, the requirement to cache more replica disk data than is necessary due to the View linked clone tiering feature may exhaust the cache and thus decrease the array's performance. Using View linked clone tiering we can reduce the amount of replica disks we need, which may reduce the overall utilization of the storage array cache, freeing it up to cache other critical View desktop data. As each storage array architecture is different, we should consult vendor resources to determine if this is the optimal configuration for the environment. As mentioned previously, if the array cache is capable of deduplication, this change may not be necessary. VMware currently supports up to 1,000 desktops per each replica disk, although View does not enforce this limitation when creating desktop pools. Summary In this article, we have discussed the native features of View that impact the storage design, and how they are typically used. Resources for Article: Further resources on this subject: Windows 8 with VMware View [Article] Cloning and Snapshots in VMware Workstation [Article] Use Of ISO Image for Installation of Windows8 Virtual Machine [Article]
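Before moving on, the deduplicated, block-hash read cache described in the View Storage Accelerator section above can be illustrated with a small sketch. This Python example is purely conceptual and has nothing to do with the actual CBRC implementation: identical blocks from different desktop disks hash to the same digest value, so only one copy is held in the cache.

import hashlib

BLOCK_SIZE = 4096          # illustrative block size, not the CBRC on-disk format
cache = {}                 # digest -> block data (deduplicated by content)

def read_block(vmdk_bytes, block_index):
    """Return one block, serving it from the shared cache when possible."""
    start = block_index * BLOCK_SIZE
    block = vmdk_bytes[start:start + BLOCK_SIZE]
    digest = hashlib.sha1(block).hexdigest()
    if digest not in cache:
        cache[digest] = block      # first reader pays the cost of caching
    return cache[digest]

# Two "desktops" cloned from the same master image share most blocks.
desktop_a = b"A" * BLOCK_SIZE * 4
desktop_b = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE
for i in range(4):
    read_block(desktop_a, i)
    read_block(desktop_b, i)
print("Unique cached blocks:", len(cache))   # 2, even though 8 reads were served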

Installation of Oracle VM VirtualBox on Linux

Packt
09 Dec 2013
4 min read
(For more resources related to this topic, see here.) Basic requirements The following are the basic requirements for VirtualBox: Processor: Any recent AMD/Intel processor is fine to run VirtualBox. Memory: This is dependent on the size of the RAM that is required for the host OS, plus the amount of RAM needed by the guest OS. In my test environment, I am using 4 GB RAM and going to run Oracle Enterprise Linux 6.2 as the guest OS. Hard disk space: VirtualBox doesn’t need much space, but you need to plan out how much the host OS needs and how much you need for the guest OS. Host OS: You need to make sure that the host OS is certified to run VirtualBox. Host OS At the time of writing this article, VirtualBox runs on the following host operating systems: Windows The following Microsoft Windows operating systems are compatible to run as host OS for VirtualBox: Windows XP, all service packs (32-bit) Windows Server 2003 (32-bit) Windows Vista (32-bit and 64-bit) Windows Server 2008 (32-bit and 64-bit) Windows 7 (32-bit and 64-bit) Windows 8 (32-bit and 64-bit) Windows Server 2012 (64-bit) Mac OS X The following Mac OS X operating systems are compatible to run as host OS for VirtualBox: 10.6 (Snow Leopard, 32-bit and 64-bit) 10.7 (Lion, 32-bit and 64-bit) 10.8 (Mountain Lion, 64-bit) Linux The following Linux operating systems are compatible to run as host OS for VirtualBox: Debian GNU/Linux 5.0 (lenny) and 6.0 (squeeze) Oracle Enterprise Linux 4 and 5, Oracle Linux 6 Red Hat Enterprise Linux 4, 5, and 6 Fedora Core 4 to 17 Gentoo Linux openSUSE 11.0, 11.1, 11.2, 11.3, 11.4, 12.1, and 12.2 Mandriva 2010 and 2011 Solaris Both 32-bit and 64-bit versions of Solaris are supported with some limitations. Please refer to www.virtualbox.org for more information. The following host OS are supported: Solaris 11 including Solaris 11 Express Solaris 10 (update 8 and higher) Guest OS You need to make sure that the guest OSis certified to run on VirtualBox. VirtualBox supports the following guest operating systems: Windows 3.x Windows NT 4.0 Windows 2000 Windows XP Windows Server 2003 Windows Vista Windows 7 DOS Linux (2.4, 2.6, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, and 3.7) Solaris OpenSolaris OpenBSD Installation I have downloaded and used the VirtualBox repository to install VirtualBox. Please ensure the desktop or laptop you are installing VirtualBox on is connected to the Internet. However, this is not mandatory. You can install VirtualBox using the RPM command, which generally adds the complexity of finding and installing the dependency packages, or you can have a central yum server where you can add VirtualBox repository. In my test lab, my laptop is connected to the Internet and I have used the wget command to download and add virtualbox.repository. Installing dependency packages Please ensure installing the packages seen in the following screenshot to make VirtualBox work: Installing VirtualBox 4.2.16 Once the preceding dependency is installed, we are ready to install VirtualBox 4.2.16. Use the command seen in the following screenshot to install VirtualBox: Rebuilding kernel modules The vboxdrv module is a special kernel module that helps to allocate the physical memory and to gain control of the processor for the guest system execution. Without this kernel module, you can still use the VirtualBox manager to configure the virtual machines, but they will not start. When you install VirtualBox by default, this module gets installed on the system. 
But to maintain future kernel updates, I suggest that you install Dynamic Kernel Module Support (DKMS). In most of the cases, this module installation is straightforward. You can use yum, apt-get, and so on depending on the Linux variant you are using, but ensure that the GNU compiler (GCC), GNU make (make), and packages containing header files for your kernel are installed prior to installing DKMS. Also, ensure that all system updates are installed. Once the kernel of your Linux host is updated and DKMS is not installed, the kernel module needs to be reinstalled by executing the following command as root: The preceding command not only rebuilds the kernel modules but also automatically creates the vboxusers group and the VirtualBox user. If you use Microsoft Windows on your desktop, then download the .exe file, install it, and start it from the desktop shortcut or from the program. Start VirtualBox Use the command seen in the following screenshot in Linux to run VirtualBox: If everything is fine, then the Oracle VM VirtualBox Manager screen appears as seen in the following screenshot: Update VirtualBox You can update VirtualBox with the help of the command seen in the following screenshot: Remove VirtualBox To remove VirtualBox, use the command seen in the following screenshot: Summary In this article we have covered the installation, update, and removal of Oracle VM VirtualBox in the Linux environment. Resources for Article: Further resources on this subject: Oracle VM Management [Article] Extending Oracle VM Management [Article] Troubleshooting and Gotchas in Oracle VM Manager 2.1.2 [Article]
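As a quick sanity check after installation (or after a kernel update), you can confirm that the vboxdrv module is actually loaded before trying to start any virtual machines. The following Python sketch is just an illustration for Linux hosts; it reads /proc/modules and looks for the VirtualBox binaries on the PATH.

import shutil

def vboxdrv_loaded():
    """Return True if the vboxdrv kernel module appears in /proc/modules."""
    try:
        with open("/proc/modules") as modules:
            return any(line.split()[0] == "vboxdrv" for line in modules)
    except OSError:
        return False

print("vboxdrv loaded:", vboxdrv_loaded())
print("VBoxManage on PATH:", shutil.which("VBoxManage") is not None)
print("VirtualBox on PATH:", shutil.which("VirtualBox") is not None)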

Use of ISO Image for Installation of Windows 8 Virtual Machine

Packt
29 Oct 2013
5 min read
(For more resources related to this topic, see here.) In the past, the only way that a Windows consumer could acquire the Windows OS was to purchase the installation media on a CD-ROM, floppy disk, or physical computer accessory, which had to be ordered online or bought from a local bricks and mortar store. Now with the recent release of Windows 8, Microsoft is continuing to extend its installation platform to digital media on a large scale. Windows 8 simplifies the process by using a web platform installer called Windows 8 Upgrade Assistant, which makes it easier to download, burn physical copies, and create a backup copy of the installation media. This form of digital distribution allows Microsoft to deploy products at a faster speed to the market, and increase its capacity to meet consumer demands. Getting ready To proceed with this recipe you will need to download Windows 8, so go to http://www.windows.com/buy. How to do it... In the first part of this section we will look into downloading the Windows 8 ISO file. Skip these steps if you have already downloaded the Windows 8 ISO file in advance. Visit the Microsoft website to purchase Windows 8 and then select the option to download the Windows 8 Upgrade Assistant file. Launch the Windows 8 Upgrade Assistant file and proceed with purchasing Windows 8. After completing the transaction, wait while the Windows 8 setup files are downloaded. The estimated download time varies, based on your Internet connection.speed. In addition, you have the option to pause the download and resume it later. Once the download is complete, the Windows 8 Upgrade Assistant will verify the integrity of the download by checking for file corruption and missing files. Wait while the Windows 8 setup gets the files ready to begin the installation. You will see a prompt that says Install Windows 8. Select the Install by creating media radio button. Select ISO file, then click on the Save button. When prompted to select a location to save the ISO file, choose a folder location and type Windows 8 as the filename. Then click on the Save button. When the product key is revealed, write it down and store it somewhere secure. Then click on the Finish button. The following set of instructions explains the details of installing Windows 8 on a newly created virtual machine using VMware Player. These steps are similar to the installation procedures encountered when installing Windows 8 on a physical computer: Open the VMware application by going to Start | All Programs| VMware. Then click on the VMware Player menu item. If you are opening VMware Player for the first time, the VMware Player License Agreement prompt will be displayed. Read the terms and conditions. Select Yes, I accept the terms in the license agreement to proceed to open the VMware Player application. If you select No, I do not accept the terms in the license agreement, you will not be permitted to continue. The Welcome to the New Virtual Machine Wizard screen will be visible. Click on Create a New Virtual Machine on the right side of the window. Select the Installer disc image file (iso): radio button. Then click on the Browse button. Browse to the directory of the Windows 8 ISO image and click on Open. You will see an information icon that says Windows 8 detected. This operating system will use Easy Install. Click on the Next button to continue. Under Easy Install Information, type in the Windows product information and click on Next to continue. 
The following set of instructions explains the details of installing Windows 8 on a newly created virtual machine using VMware Player. These steps are similar to the installation procedure you would follow when installing Windows 8 on a physical computer:
1. Open the VMware application by going to Start | All Programs | VMware. Then click on the VMware Player menu item.
2. If you are opening VMware Player for the first time, the VMware Player License Agreement prompt will be displayed. Read the terms and conditions. Select Yes, I accept the terms in the license agreement to proceed to open the VMware Player application. If you select No, I do not accept the terms in the license agreement, you will not be permitted to continue.
3. The Welcome to the New Virtual Machine Wizard screen will be visible. Click on Create a New Virtual Machine on the right side of the window.
4. Select the Installer disc image file (iso): radio button. Then click on the Browse button.
5. Browse to the directory of the Windows 8 ISO image and click on Open. You will see an information icon that says Windows 8 detected. This operating system will use Easy Install. Click on the Next button to continue.
6. Under Easy Install Information, type in the Windows product information and click on Next to continue. You now have the options to:
Enter the Windows product key
Select the version of Windows to install (Windows 8 or Windows 8 Pro)
Enter the full name of the computer
Enter a password, which is optional
If you do not enter a product key, you will receive a message saying that it can be entered manually later.
7. Enter a new virtual machine name and directory location for the virtual machine. For example, type Windows 8 VM and then click on the Next button to continue.
8. Enter 16 as the Maximum disk size (GB) and store the virtual disk as a single file. Then click on the Finish button. This is because Windows 8 requires a minimum of 16 GB of free hard drive space; remember that the minimum is 16 GB of free hard disk space for the 32-bit installation and 20 GB for the 64-bit installation.
9. At the Ready to Create Virtual Machine prompt, click on the Finish button. The Windows 8 virtual machine will then power on automatically for the first time.
10. VMware will prompt you to install VMware Tools for Windows 2000 and later; click on the Remind Me Later button.
11. The virtual machine will automatically boot into the Windows 8 setup wizard. Wait until the Windows installation is complete. The virtual machine will reboot several times during this process, and you will see various Windows 8 screens during the installation; please be patient.
12. Once the installation is complete, your virtual machine will be immediately directed to the Windows 8 home screen.
Summary
This article introduced you to downloading the Windows 8 operating system as an ISO, creating a new virtual machine, and installing Windows 8 as a virtual machine.
Resources for Article:
Further resources on this subject:
VMware View 5 Desktop Virtualization [Article]
Windows 8 with VMware View [Article]
Cloning and Snapshots in VMware Workstation [Article]

Availability Management

Packt
28 Oct 2013
29 min read
(For more resources related to this topic, see here.)
Reducing planned and unplanned downtime
Whether we are talking about a highly available and critically productive environment or not, any planned or unexpected downtime means financial losses. Historically, solutions that could provide high availability and redundancy were costly and complex. With the virtualization technologies available today, it becomes easier to provide higher levels of availability for environments where they are needed. With VMware products, and vSphere in particular, it's possible to do the following things:
Have higher availability that is independent of hardware, operating systems, or applications
Schedule planned downtime for many maintenance tasks, and shorten or eliminate it
Provide automatic recovery in case of failure
Planned downtime
Planned downtime usually happens during hardware maintenance, firmware or operating system updates, and server migrations. To reduce the impact of planned downtime, IT administrators are forced to schedule small maintenance windows outside working hours. vSphere makes it possible to dramatically reduce planned downtime. With vSphere, IT administrators can perform many maintenance tasks at any point in time, as it eliminates downtime for many common maintenance operations. This is possible mainly because workloads in a vSphere environment can be dynamically moved between different physical servers and storage resources without any service interruption. The main availability capabilities built into vSphere allow the use of HA and redundancy features, and are as follows:
Shared storage: Storage resources such as Fibre Channel, iSCSI, Storage Area Network (SAN), or Network Attached Storage (NAS) help eliminate single points of failure. SAN mirroring and replication features can be used to keep fresh copies of the virtual disks at disaster recovery sites.
Network interface teaming: This feature provides tolerance for individual network card failures.
Storage multipathing: This helps to tolerate storage path failures.
vSphere vMotion® and Storage vMotion functionalities allow the migration of VMs between ESXi hosts and their underlying storage without service interruption, as shown in the following figure. In other words, vMotion is the live migration of VMs between ESXi hosts, and Storage vMotion is the live migration of VMs between storage LUNs. In both cases, the VM retains its network and disk connections. With vSphere 5.1 and later versions, it's possible to combine vMotion with Storage vMotion into a single migration, which simplifies administration. The final switchover takes less than two seconds on a gigabit network. vMotion keeps track of ongoing memory transactions while the memory and system state are copied to the target host. Once copying is done, vMotion suspends the source VM, copies the transactions that happened during the process to the target host, and resumes the VM on the target host. This way, vMotion ensures transaction integrity.
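Although this article drives vMotion through the vSphere client, the same operation can be scripted. Below is a minimal, illustrative pyVmomi sketch of relocating a running VM to another host; the vCenter address, credentials, VM name, and host name are placeholders for your own environment, and uncommenting the datastore line would give the combined host-and-datastore migration mentioned above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details; substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the vCenter inventory and return the first object with a matching name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find_by_name(vim.VirtualMachine, "web01")             # VM to migrate (placeholder)
target = find_by_name(vim.HostSystem, "esxi02.lab.local")  # destination host (placeholder)

spec = vim.vm.RelocateSpec()
spec.host = target
spec.pool = target.parent.resourcePool  # root resource pool of the target's cluster
# spec.datastore = find_by_name(vim.Datastore, "ds02")  # add for a host + datastore move

WaitForTask(vm.RelocateVM_Task(spec))   # blocks until the vMotion task completes
Disconnect(si)
```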
vSphere requirements for vMotion
vSphere requirements for vMotion are as follows. All the hosts must meet the following requirements (a minimal scripted check of some of these host-side prerequisites is sketched at the end of this section):
They should be correctly licensed for vMotion
Have access to the shared storage
Use a GB Ethernet adapter for vMotion, preferably a dedicated one
The VMkernel port group is configured for vMotion with the same name (the name is case sensitive)
Have access to the same subnets
Must be members of all the vSphere distributed switches that VMs use for networking
Use jumbo frames for best vMotion performance
All the virtual machines that need to be vMotioned must meet the following requirements:
Shouldn't use raw disks if migration between storage LUNs is needed
Shouldn't use devices that are not available on the destination host (for example, a CD drive or USB devices not enabled for vMotion)
Should be located on a shared storage resource
Shouldn't use devices connected from the client computer
Migration with vMotion
Migration with vMotion happens in three stages:
vCenter server verifies that the existing VM is in a stable state and that the CPU on the target host is compatible with the CPU this VM is currently using
vCenter migrates VM state information such as memory, registers, and network connections to the target host
The virtual machine resumes its activities on the new host
VMs with snapshots can be vMotioned regardless of their power state as long as their files stay on the same storage. Obviously, this storage has to be accessible to both the source and destination hosts. If the migration involves moving configuration files or virtual disks, the following additional requirements apply:
Both the source and destination hosts must be of ESX or ESXi version 3.5 or later
All the VM files should be kept in a single directory on a shared storage resource
To vMotion a VM in vCenter, right-click on a VM and choose Migrate… as shown in the following screenshot. This opens a migration wizard where you can select whether to migrate between hosts, between datastores, or both. The Change host option is the standard vMotion, and Change datastore is the Storage vMotion. As you can see, the Change both host and datastore option is not available because this VM is currently running. As mentioned earlier, vSphere 5.1 and later support vMotion and Storage vMotion in one transaction. In the next steps, you are able to choose the destination as well as the priority for this migration. Multiple VMs can be migrated at the same time if you make multiple selections in the Virtual Machines tab for the host or the cluster. vMotion is widely used to perform host maintenance such as upgrading the ESX operating system or memory, or making other configuration changes. When maintenance is needed on a host, all the VMs can be migrated to other hosts and the host can be switched into maintenance mode. This can be accomplished by right-clicking on the host and selecting Enter Maintenance Mode.
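The host-side checklist above lends itself to a simple automated sanity check. The Python sketch below is purely illustrative: the host inventory is hand-written placeholder data, and in practice you would pull these details from vCenter rather than typing them in.

```python
# Toy prerequisite check for a few of the host-side vMotion requirements listed above.
hosts = {
    "esxi01": {"licensed": True, "datastores": {"ds01", "ds02"},
               "vmotion_portgroup": "vMotion", "subnets": {"10.0.10.0/24"}},
    "esxi02": {"licensed": True, "datastores": {"ds01", "ds02"},
               "vmotion_portgroup": "vmotion", "subnets": {"10.0.10.0/24"}},
}

def vmotion_ready(inventory: dict) -> list[str]:
    problems = []
    if not all(h["licensed"] for h in inventory.values()):
        problems.append("not every host is licensed for vMotion")
    if not set.intersection(*(h["datastores"] for h in inventory.values())):
        problems.append("no datastore is shared by all hosts")
    # The VMkernel port group name is case sensitive, so compare it exactly.
    if len({h["vmotion_portgroup"] for h in inventory.values()}) != 1:
        problems.append("vMotion port group names differ (check case)")
    if not set.intersection(*(h["subnets"] for h in inventory.values())):
        problems.append("hosts do not share a common subnet")
    return problems

print(vmotion_ready(hosts))   # flags the 'vMotion' vs 'vmotion' name mismatch
```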
Unplanned downtime
Environments, especially critical ones, need to be protected from any unplanned downtime caused by possible hardware or application failures. vSphere has important capabilities that can address this challenge and help to eliminate unplanned downtime. These vSphere capabilities are transparent to the guest operating system and any applications running inside the VMs; they are also a part of the virtual infrastructure. The following features can be configured for VMs in order to reduce the cost and complexity of HA. More detail on these features will be given in the following sections of this article.
High availability (HA)
vSphere HA is a feature that allows a group of hosts connected together to provide high levels of availability for the VMs running on these hosts. It protects VMs and their applications in the following ways:
In case of ESX server failure, it restarts VMs on the other hosts that are members of the cluster
In case of guest OS failure, it resets the VM
If application failure is detected, it can reset a VM
With vSphere HA, there is no need to install any additional software in a VM. After vSphere HA is configured, all new VMs will be protected automatically. The HA option can be combined with vSphere DRS to protect against failures and to provide load balancing across the hosts within a cluster. The advantages of HA over traditional failover solutions are listed in the VMware article at http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc%2FGUID-CB46CEC4-87CD-4704-A9AC-058281CFD8F8.html.
Creating a vSphere HA cluster
Before HA can be enabled, a cluster itself needs to be created. To create a new cluster, right-click on the datacenter object in the Hosts and Clusters view and select New Cluster... as shown in the following screenshot. The following prerequisites have to be considered before setting up a HA cluster:
All the hosts must be licensed for vSphere HA. ESX/ESXi 3.5 hosts are supported for vSphere HA with the following patches installed; these fix an issue involving file locks: for ESX 3.5, patch ESX350-201012401-SG and prerequisites; for ESXi 3.5, patch ESXe350-201012401-I-BG and prerequisites.
At least two hosts must exist in the cluster.
All the hosts' IP addresses need to be assigned statically or configured via DHCP with static reservations to ensure address consistency across host reboots.
At least one network should exist that is shared by all the hosts, that is, a management network. It is best practice to have at least two.
To ensure VMs can run on any host, all the hosts should also share the same datastores and virtual networks.
All the VMs must be stored on shared, and not local, storage.
VMware Tools must be installed for VM monitoring to work.
Host certificate checking should be enabled.
Once all of the requirements have been met, vSphere HA can be enabled in vCenter under the cluster settings dialog. In the following screenshot, it appears as PRD-CLUSTER Settings. Once HA is enabled, all the cluster hosts that are running and are not in maintenance mode become a part of HA.
HA settings
The following HA settings can also be changed at the same time:
Host monitoring status is enabled by default
Admission control is enabled by default
Virtual machine options (restart priority is Medium by default, and isolation response is set to Leave powered on by default)
VM monitoring is disabled by default
Datastore heartbeating is selected by vCenter by default
More details on each of these settings can be found in the following sections of this article.
Host monitoring status
When a HA cluster is created, an agent is uploaded to all the hosts and configured to communicate with the other agents within the cluster. One of the hosts becomes the master host, and the rest become slave hosts. There is an election process to choose the master host, and the host that mounts more datastores has an advantage in this election. In cases where there is a tie, the host with the lexically-highest Managed Object ID (MOID) is chosen. MOID, also called MoRef ID, is a value generated by vCenter for each object: host, datastore, VM, and so on. It is guaranteed to be unique across the infrastructure managed by this particular vCenter server. Because the comparison is lexical, when it comes to the election process a host with ID 99 will have higher priority than a host with ID 100.
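A one-line Python comparison makes the lexical ordering concrete; the MOIDs below are made-up examples, not values from a real inventory.

```python
# Hypothetical host MOIDs; the tie-breaker compares them as strings, not numbers.
moids = ["host-100", "host-99", "host-101"]
print(max(moids))  # -> host-99, because '9' sorts after '1' character by character
```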
If a master host fails or becomes unavailable, a new election process is initiated. Slave hosts monitor whether the VMs are running locally and report to the master host. In its turn, the master host communicates with vCenter and monitors the other hosts for failures. Its main responsibilities are as follows:
Monitoring the state of the slave hosts and, in case of failure, identifying which VMs must be restarted
Monitoring the state of all the protected VMs and restarting them in case of failure
Managing a list of hosts and protected VMs
Communicating with vCenter and reporting the cluster's health state
Host availability monitoring is done through a network heartbeat exchange, which happens every second by default. If network heartbeats from a host are lost, before declaring it as failed, the master host checks whether this host communicates with any of the existing datastores using datastore heartbeats and whether it responds to pings sent to its management IP address. The master host detects the following types of host failure:
Type of failure                         Network heartbeats   ICMP ping   Datastore heartbeats
Lost connectivity to the master host    -                    +           +
Network isolation                       -                    -           +
Failure                                 -                    -           -
If host failure is detected, the host's VMs will be restarted on other hosts.
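The detection logic in this table can be expressed as a small Python sketch of what the master concludes about an unresponsive slave; the network isolation case is discussed in more detail next.

```python
def classify_host(net_heartbeat: bool, ping_ok: bool, ds_heartbeat: bool) -> str:
    """Mirror of the detection table: what the master concludes about a slave host."""
    if net_heartbeat:
        return "healthy"
    if ds_heartbeat:
        return "lost connectivity to the master" if ping_ok else "network isolated"
    return "failed: restart its protected VMs on other hosts"

print(classify_host(net_heartbeat=False, ping_ok=False, ds_heartbeat=False))  # failed
```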
Host network isolation happens when a host is running but doesn't see any traffic from the vSphere HA agents, which means that it's disconnected from the management network. Isolation is handled as a special case of failure in VMware HA. If a host becomes network isolated, the master host continues to monitor this host and the VMs running on it. Depending on the isolation settings chosen for individual VMs, some of them may be restarted on another host. The master host has to communicate with vCenter; therefore, it can't be in isolation mode. Once that happens, a new master host will be elected. When network isolation happens, certain hosts are not able to communicate with vCenter, which may result in configuration changes not taking effect on certain parts of the infrastructure. If the network infrastructure is configured correctly and has redundant network paths, isolation should happen rarely.
Datastore heartbeating
Datastore heartbeating was introduced in vSphere 5. In previous versions of vSphere, once a host became unreachable through the management network, HA always initiated VM restart, even if the VMs were still running. This, of course, created unnecessary downtime and additional stress on the host. Datastore heartbeating allows HA to make a distinction between hosts that are isolated or partitioned and hosts that have failed, which adds more stability to the way HA works. The vCenter server selects a list of datastores for heartbeat verification to maximize the number of hosts that can be verified. It uses a selection algorithm designed to select datastores that are connected to the highest number of hosts. This algorithm attempts to choose datastores that are hosted on different storage arrays or NFS servers. It also prefers VMFS-formatted LUNs over NFS-hosted datastores. vCenter selects datastores for heartbeating in the following scenarios:
When HA is enabled
If a new datastore is added
If the accessibility of a datastore changes
By default, two datastores are selected. This is the minimum number of datastores needed. It can be changed using the das.heartbeatDsPerHost parameter under Advanced Settings, for up to five datastores. The PRD-CLUSTER Settings dialog box can be used to verify or change the datastores selected for heartbeating, as shown in the following screenshot. It is recommended, however, to let vCenter choose the datastores. Only the datastores that are mounted to more than one host are available in the list. Datastore heartbeating leverages the existing VMFS filesystem locking mechanism. There is a so-called heartbeat region that exists on each datastore and is updated as long as the lock on a file exists. A host updates the datastore's heartbeat region if it has at least one file opened on this volume. HA creates a file for datastore heartbeating purposes only, to make sure there is at least one file opened on the volume. Each host creates its own file, and to determine whether an unresponsive host still has a connection to a datastore, HA simply checks whether that host's heartbeat region has been updated or not.
By default, an isolation response is triggered after 5 seconds for the master host and after approximately 30 seconds if the host was a slave in vSphere 5. This time difference occurs because if the host was a slave, it needs to go through the election process to determine whether there are any other hosts that exist, or whether the master host is simply down. This election starts within 10 seconds after the slave host has lost its heartbeats. If there is no response for 15 seconds, the HA agent on this host elects itself as the master. The isolation response time can be increased using the das.config.fdm.isolationPolicyDelaySec parameter under Advanced Settings. This is, however, not recommended, as it increases the downtime when a problem occurs. If a host becomes a master in a cluster with more than one host and has no slaves, it continuously checks whether it's in isolation mode. It keeps doing so until it becomes a master with slaves or connects to a master as a slave. At this point, the host will ping its isolation address to determine whether the management network is available again. By default, the isolation address is the gateway configured for the management network. This option can be changed using the das.isolationaddress[X] parameter under Advanced Settings. [X] takes values from 1 to 10 and allows the configuration of multiple isolation addresses. Additionally, the das.usedefaultisolationaddress parameter can be used to indicate whether the default gateway address should be used as an isolation address or not. This parameter should be set to False if the default gateway is not configured to respond to ICMP ping packets. Generally, it's recommended to have one isolation address for each management network. If this network uses redundant paths, the isolation address should always be available under normal circumstances. In certain cases, a host may be isolated, that is, not accessible via the management network but still able to receive election traffic. This host is called partitioned. Have a look at the following figure to gain more insight about this: when multiple hosts are isolated but can still communicate with each other, it's called a network partition.
This can happen for various reasons; one of them is when a cluster spans multiple sites over a metropolitan area network. This is called the stretched cluster configuration. When a cluster partition occurs, one subset of hosts is able to communicate with the master while the other is not. Depending on the isolation response selected for VMs, they may be left running or restarted. When a network partition happens, the master election process is initiated within the subset of hosts that loses its connection to the master. This is done to make sure that the host failure or isolation results in appropriate action on the VMs. Therefore, a cluster will have multiple masters; each one in a different partition as long as the partition exists. Once the partition is resolved, the masters are able to communicate with each other and discover the multiplicity of master hosts. Each time this happens, one of them becomes a slave. The hosts' HA state is reported by vCenter through the Summary tab for each host as shown in the following screenshot: This is done under the Hosts tab for cluster objects as shown in the following screenshot: Running (Master) indicates that HA is enabled and the host is a master host. Connected (Slave) means that HA is enabled and the host is a slave host. Only the running VMs are protected by HA. Therefore, the master host monitors the VM's state and once it changes from powered off to powered on, the master adds this VM to the list of protected machines. Virtual machine options Each VM's HA behavior can be adjusted under vSphere HA settings or in the Virtual Machine Options option found in the PRD-CLUSTER Settings page as shown in the following screenshot: Restart priority The restart priority setting determines which VMs will be restarted first after the host failure. The default setting is Medium. Depending on the applications running on a VM, it may need to be restarted before other VMs, for example, if it's a database, a DNS, or a DHCP server. It may be restarted after others if it's not a critical VM. If you select Disabled, this VM will never be restarted if there is a host failure. In other words, HA will be disabled for this VM. Isolation response The isolation response setting defines HA actions against a VM if its host loses connection to the management network but is still running. The default setting is Leave powered on. To be able to understand why this setting is important, imagine the situation where a host loses connection to the management network and at the same time or shortly afterwards, to the storage network as well—a so-called split-brain situation. In vSphere, only one host can have access to a VM at a time. For this purpose, the .vmdk file is locked and there is an additional .lck file present in the same folder where .vmdk file is stored. As HA is enabled, VMs will fail over to another host, however, their original instances will keep running on the old host. Once this host comes out of isolation, we will end up with two copies of VMs. Therefore, the isolated host will not have access to the .vmdk file as it's locked. In vCenter, however, this VM will look as if it is flipping between two hosts. With the default settings, the original host is not able to reacquire the disk locks and will be querying the VM. HA will send a reply instead which allows the host to power off the second running copy. If the Power Off option is selected for a VM under the isolation response settings, this VM will be immediately stopped when isolation occurs. 
This can cause inconsistency in the filesystem on the virtual drive. However, the advantage of this is that the VM restart on another host will happen more quickly, thus reducing the downtime. The Shut down option attempts to gracefully shut down a VM. By default, HA waits for 5 minutes for this to happen. When this time is over, if the VM is not off yet, it will be powered off. This timeout is controlled by the das.isolationshutdowntimeout parameter under the Advanced Settings option. The VM must have VMware Tools installed to be able to shut down gracefully. Otherwise, the Shut down option is equivalent to Power Off.
VM monitoring
Under VM Monitoring, the monitoring settings of individual applications can be adjusted as shown in the following screenshot. The default setting is Disabled. However, VM and Application Monitoring can be enabled so that if the VM heartbeat (the VMware Tools heartbeat) or its application heartbeat is lost, the VM is restarted. To avoid false positives, the VM monitoring service also monitors the VM's I/O activity. If a heartbeat is lost and there was no I/O activity (by default, during the last 2 minutes), the VM is considered unresponsive. This feature allows you to power cycle nonresponsive VMs. The I/O interval can be changed under the advanced attribute settings (for more details, check the HA Advanced attributes table later in this section). Monitoring sensitivity can be adjusted as well. Sensitivity means the time interval between the loss of heartbeats and the restarting of the VM. The available options are listed in the table from the VMware documentation article available at http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc_50%2FGUID-62B80D7A-C764-40CB-AE59-752DA6AD78E7.html. To avoid repeated VM resets, by default a VM will be restarted only three times during the reset period. This can be changed in the Custom mode as shown in the following screenshot. In order to be able to monitor applications within a VM, they need to support VMware application monitoring. Alternatively, you can download the appropriate SDK and set up customized heartbeats for the application that needs to be monitored. Under Advanced Options, the following vSphere HA behaviors can be set. Some of them have already been mentioned in sections of this article. The following screenshot shows the Advanced Options (vSphere HA) window where advanced HA options can be added and set to specific values. vSphere HA advanced options are listed in the article from the VMware documentation available at http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.avail.doc_50%2FGUID-E0161CB5-BD3F-425F-A7E0-BF83B005FECA.html. This article also lists the options that are not supported in vCenter 5. You will get an error message if you try to add one of them. Also, the options will be deleted after being upgraded from the previous versions.
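To make the VM monitoring behavior described above concrete, here is a toy Python model of the decision. It is a sketch only: the failure interval value is a placeholder rather than the exact default for every sensitivity level, and the real evaluation happens inside the HA (fdm) agent, not in a script.

```python
def should_reset_vm(secs_since_heartbeat: float, secs_since_io: float,
                    resets_in_window: int, failure_interval: float = 30.0,
                    io_lookback: float = 120.0, max_resets: int = 3) -> bool:
    """Toy model: reset a VM only when both its VMware Tools heartbeat and its
    I/O activity have gone quiet, and the per-window reset limit is not hit.
    io_lookback and max_resets follow the defaults mentioned above
    (2 minutes of I/O silence, three resets); failure_interval is a placeholder."""
    heartbeat_lost = secs_since_heartbeat > failure_interval
    io_quiet = secs_since_io > io_lookback
    return heartbeat_lost and io_quiet and resets_in_window < max_resets

print(should_reset_vm(45, 300, resets_in_window=0))  # True: quiet VM, no resets yet
print(should_reset_vm(45, 10, resets_in_window=0))   # False: recent I/O, likely a false positive
```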
Admission control
Admission control ensures that sufficient resources are available to provide failover protection and that VM resource reservations can be respected. Admission control is available for the following:
Hosts
Resource pools
vSphere HA
Admission control can only be disabled for vSphere HA. The following screenshot shows PRD-CLUSTER Settings with the option to disable admission control. Examples of actions that may not be permitted because of insufficient resources are as follows:
Powering on a VM
Migrating a VM to another host, cluster, or resource pool
Increasing the CPU or memory reservation for a VM
Admission control policies
There are three possible types of admission control policies available for HA configuration, which are as follows:
Host failure cluster tolerates: When this option is chosen, HA ensures that a specified number of hosts can fail and sufficient resources will still be available to accommodate all the VMs from these hosts. The decision to either allow or deny an operation is based on the following calculations:
Slot size: A hypothetical VM that has the largest amount of memory and CPU assigned to any existing VM in the environment. For example, for the following VMs, the slot size will be 4 GHz and 6 GB:
VM     CPU     RAM
VM1    4 GHz   2 GB
VM2    2 GHz   4 GB
VM3    1 GHz   6 GB
Host capacity: This gives the number of slots each host can hold, based on the resources available for VMs, not the total host memory and CPU. For example, for the previous slot size, the host capacity will be as given in the following table:
Host    CPU      RAM      Slots
Host1   4 GHz    128 GB   1
Host2   24 GHz   6 GB     1
Host3   8 GHz    14 GB    2
Cluster failover capacity: This gives the number of hosts that can fail before there aren't enough slots left to accommodate all the VMs. For example, for the previous hosts with a one-host-failure policy, the failover capacity is 2 slots: in case of a Host3 failure (the host with the largest capacity), the cluster is left with only two slots. If the current failover capacity is less than the allowed limit, admission control disallows the operation. For example, if we are running two VMs and need to power on a third one, it will be denied, as the remaining cluster capacity is two slots and it may not be able to accommodate three VMs.
This option is probably not the best one for an environment that has VMs with significantly more resources assigned than the rest of the VMs. The Host failure cluster tolerates option can be used when all cluster hosts are sized pretty much equally. Otherwise, if you use this option, excessive capacity is reserved so that the cluster tolerates the largest host failure. When this option is used, VM reservations should be kept similar across the cluster as well. Because vCenter uses the slot size model to calculate capacity, and the slot size is based on the largest reservation, having VMs with a large reservation will again result in additional unnecessary capacity being reserved.
Percentage of cluster resources: With this policy enabled, HA ensures that a specified percentage of resources is reserved for failover across all the hosts. It also checks that there are at least two hosts available. The calculation happens as follows:
The total resource requirement for all the running VMs is calculated. For example, for the three VMs in the previous table, the total requirement will be 7 GHz and 12 GB.
The total available host resources are calculated. For the previous example, the total is 36 GHz and 148 GB.
The current CPU and memory failover capacity for the cluster is calculated as follows:
CPU: (1 - 7/36) * 100% ≈ 81%
RAM: (1 - 12/148) * 100% ≈ 92%
If the current CPU and memory failover capacity is less than what the policy allows, the operation is denied.
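If you want to sanity-check these numbers, the arithmetic is easy to reproduce. The short Python sketch below recomputes both the slot-based capacity and the percentage-based capacity from the example tables above; the figures are illustrative only and not a sizing recommendation.

```python
# Reproduces the admission-control arithmetic from the example above.
vms   = {"VM1": (4, 2), "VM2": (2, 4), "VM3": (1, 6)}        # (GHz, GB) reservations
hosts = {"Host1": (4, 128), "Host2": (24, 6), "Host3": (8, 14)}

# --- "Host failure cluster tolerates" policy: slot-size model ---
slot_cpu = max(cpu for cpu, _ in vms.values())   # 4 GHz
slot_ram = max(ram for _, ram in vms.values())   # 6 GB
slots_per_host = {name: min(cpu // slot_cpu, ram // slot_ram)
                  for name, (cpu, ram) in hosts.items()}
print(slots_per_host)                            # {'Host1': 1, 'Host2': 1, 'Host3': 2}

# With one tolerated host failure: drop the host contributing the most slots.
biggest = max(slots_per_host, key=slots_per_host.get)
print(sum(v for k, v in slots_per_host.items() if k != biggest))  # 2 slots remain

# --- "Percentage of cluster resources" policy ---
need_cpu  = sum(cpu for cpu, _ in vms.values())    # 7 GHz
need_ram  = sum(ram for _, ram in vms.values())    # 12 GB
total_cpu = sum(cpu for cpu, _ in hosts.values())  # 36 GHz
total_ram = sum(ram for _, ram in hosts.values())  # 148 GB
print(round((1 - need_cpu / total_cpu) * 100))     # ~81% CPU failover capacity
print(round((1 - need_ram / total_ram) * 100))     # ~92% RAM failover capacity
```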
With hosts as different as those in the example, the CPU and RAM capacity should be configured carefully to avoid a situation where, for example, the host with the largest amount of RAM fails and the other hosts are not able to accommodate all the VMs because of memory resources. Therefore, RAM should be configured at 87 percent based on the two smallest hosts (#2 and #3), and not at 33 percent based simply on the number of hosts in the environment:
[1 - (6 + 14)/148] * 100% ≈ 87%
In other words, if the host with 128 GB fails, we need to make sure that the total resources needed by the VMs are less than the sum of 6 GB and 14 GB, which is only 13 percent of the total cluster's 148 GB. Therefore, we need to make sure that, in all instances, the VMs use only 13 percent of the RAM, or equivalently that the cluster keeps 87 percent of its RAM free.
Specified failover hosts: With this policy enabled, HA keeps the chosen failover hosts reserved, doesn't allow the powering on or migration of any VMs to these hosts, and restarts VMs on them only when a failure occurs. If for some reason it's not possible to use a designated failover host to restart the VMs, HA will restart them on other available hosts.
It is recommended to use the Percentage of cluster resources reserved option in most cases. This option offers more flexibility in terms of host and VM sizing than the other options.
HA security and logging
vSphere HA configuration files for each host are stored on the host's local storage and are protected by filesystem permissions. These files are only available to the root user. For security reasons, ESXi 5 hosts log HA activity only to syslog. Therefore, logs are placed at the location where syslog is configured to keep them. Log entries related to HA are prepended with fdm, which stands for fault domain manager; this is what the vSphere HA ESX service is called. Older versions of ESXi write HA activity to fdm logfiles in /var/log/vmware/fdm stored on the local disk. There is also an option to enable syslog logging on these hosts. Older ESX hosts are able to save HA activity only in the fdm local logfile in /var/log/vmware/. HA agent logging configuration also depends on the ESX host version. For ESXi 5 hosts, the logging options that can be configured via the Advanced Options tab under HA are listed under the logging section of the article available at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2033250. The das.config.log.maxFileNum option causes ESXi 5 hosts to maintain two copies of the logfiles: one is a file created by the version 5 logging mechanism, and the other is maintained by the pre-5.0 logging mechanism. After any of these options are changed, HA needs to be reconfigured. The following table provides log capacity recommendations, according to VMware, for environments of different sizes, based on the requirement to keep one week of history:
Size                                       Minimum log capacity per host (MB)
40 VMs in total with 8 VMs per host        4
375 VMs in total with 25 VMs per host      35
1,280 VMs in total with 40 VMs per host    120
3,000 VMs in total with 512 VMs per host   300
These are just recommendations; additional capacity may be needed depending on the environment. Increasing the log capacity involves specifying the number of rotations together with the file size, as well as making sure there is enough space on the storage resource where the logfiles are kept. The vCenter server uses the vpxuser account to connect to the HA agents.
When HA is enabled for the first time, vCenter creates this account with a random password and makes sure the password is changed periodically. The time period for a password change is controlled by the VirtualCenter.VimPasswordExpirationInDays parameter that can be set under the Advanced Settings option in vCenter. All communication between vCenter and HA agents, as well as agent-to-agent traffic, is secured with SSL. Therefore, for vSphere HA, it's necessary that each host has verified SSL certificates. New certificates require HA to be reconfigured. It will also be reconfigured automatically if a host has been disconnected before the certificate is replaced. SSL certificates are also used to verify election messages so if there is a rogue agent running, it will only be able to affect the host it's running on. This issue, if it occurs, is reported to the administrator. HA uses TCP/8182 and UDP/8182 ports for communication between agents. These ports are opened and closed automatically by the host's firewall. This helps to ensure that these ports are open only when they are needed. Using HA with DRS When vSphere HA restarts VMs on a different host after a failure, the main priority is the immediate availability of VMs. Based on CPU and memory reservations, HA determines which host to use to power the VMs on. This decision is based, of course, on the available capacity of the host. It's quite possible that after all the VMs have been restarted, some hosts become highly loaded while others are relatively lightly loaded. DRS is the load balancing and failover solution that can be enabled in vCenter for better host resource management. vSphere HA, together with DRS, is able to deliver automatic failover and load balancing solutions, which may result in a more balanced cluster. However, there are a few things to consider when it comes to using both features together. In a cluster with DRS, HA, and the admission control enabled; VMs may not be automatically evacuated from a host entering the maintenance mode. This occurs because of resources reserved for VMs that need to be restarted. In this case, the administrator needs to migrate these VMs manually. Some VMs may not fail over because of resource constraints. This can happen in one of the following cases: HA admission control is disabled and DPM is enabled, which may result in insufficient capacity available to perform failover as some hosts may be in the standby mode and therefore, fewer hosts would be available. VM to host affinity rules limit hosts where certain VMs can be placed. Total resources are sufficient but fragmented across multiple hosts. In this case, these resources can't be used by the VMs for failover. DPM is in the manual mode that requires an administrator's confirmation before a host can be powered on from the standby mode. DRS is in the manual mode, and an administrator's confirmation may be needed so that the migration of VMs can be started. What to expect when HA is enabled HA only restarts a VM if there is a host failure. In other words, it will power on all the VMs that were running on a failed host placed on another member of the cluster. Therefore, even with HA enabled, there will still be a short downtime for VMs that are running on faulty hosts. In fast environments, however, VM reboot happens quickly. So if you are using some kind of monitoring system, it may not even trigger an alarm. 
Therefore, if a bunch of VMs have been rebooted unexpectedly, you know there was an issue with one of the hosts and can review the logs to find out what the issue was. Of course, if you have set up vCenter notifications, you should get an alert. If you need VMs to be up all the time even if the host goes down, there is another feature that can be enabled called Fault Tolerance.

Your first step towards Hyper-V Replica

Packt
11 Oct 2013
12 min read
(For more resources related to this topic, see here.)
The Server Message Block protocol
When an enterprise starts to build a modern datacenter, the first thing that should be done is to set up the storage. With Windows Server 2012, a new, improved version of the Server Message Block (SMB) protocol is introduced. SMB is a file sharing protocol. This new version is 3.0 and is designed for modern datacenters. It allows administrators to create file shares and deploy critical systems on them. This is really useful, because administrators now deal with file shares and security permissions instead of complex connections to storage arrays. The idea is to set up one central SMB file-sharing server and attach the underlying storage to it. This SMB server initiates the connection to the underlying storage. The logical disks created on the storage are attached to this SMB server. Then, different file shares are created on it with different access permissions. These file shares can be used by different systems, such as Hyper-V storage space for virtual machine files, MS SQL Server database files, Exchange Server database files, and so on. It is an advantage, because all of the data is stored in one location, which means easier administration of data files. It is important to say that this is a new concept and is only available with Windows Server 2012. It comes with no performance degradation on critical systems, because SMB v3.0 was designed for this type of data traffic.
Setting up security permissions on SMB file shares
Because SMB file shares contain sensitive data files, whether they are virtual machine files or SQL Server database files, proper security permissions need to be applied to them in order to ensure that only authorized users and machines have access to them. For this reason, the SMB file-sharing server has to be connected to the LAN part of the infrastructure as well. Security permissions are read from an Active Directory server. For example, if Hyper-V hosts have to read and write on a share, then only the computer accounts of those hosts need permissions on that share, and no one else. As another example, if the share holds MS SQL Server database files, then only the SQL Server computer accounts and the SQL Server service account need permissions on that share.
Migration of virtual machines
Virtual Machine High Availability is the reason why failover clusters are deployed. High availability means that there is no system downtime, or only a minimal, accepted amount of system downtime. This is different from system uptime: a system can be up and running but still not be available. Hyper-V hosts in modern datacenters run many virtual machines, depending on the underlying hardware resources, and each of these systems is very important to the consumer. Let's say that a Hyper-V host malfunctions at a bank, and let's say that this host runs several critical systems, one of which may be the ATM system. If this happens, the users won't be able to use the ATMs. This is where Virtual Machine High Availability comes into the picture. It is achieved through the implementation of a failover cluster. A failover cluster ensures that when a node of the cluster becomes unavailable, all of the virtual machines on that node will be safely migrated to another node of the same cluster. Users can even set rules to specify to which host the virtual machines should fail over. Migration is also useful when maintenance tasks need to be done on some of the nodes of the cluster.
The node can safely be shut down and all of the virtual machines, or at least the most critical, will be migrated to another host. Configuring Hyper-V Replica Enterprises tend to increase their system availability and deliver end user services. There are various ways how this can be done, such as making your virtual machines highly available, disaster recovery methods, and back up of critical systems. In case of system malfunction or disasters, the IT department needs to react fast, in order to minimize system downtime. Disaster recovery methods are valuable to the enterprise. This is why it is imperative that the IT department implements them. When these methods are built in the existing platform that the enterprise uses and it is easy to configure and maintain, then you have a winning combination. This is a suitable scenario for Hyper-V Replica to step up. It is easy to configure and maintain, and it is integrated with the Hyper-V 3.0, which comes with Windows Server 2012. This is why Hyper-V Replica is becoming more attractive to the IT departments when it comes to disaster recovery methods. In this article, we will learn what are the Hyper-V Replica prerequisites and configuration steps for Hyper-V Replica in different deployment scenarios. Because Hyper-V Replica can be used with failover clusters, we will learn how to configure a failover cluster with Windows Server 2012. And we will introduce a new concept for virtual machine file storage called SMB. Hyper-V Replica requirements Before we can start with the implementation of Hyper-V Replica, we have to be sure we have met all the prerequisites. In order to implement Hyper-V Replica, we have to install Windows Server 2012 on our physical machines. Windows Server 2012 is a must, because Hyper-V Replica is functionality available only with that version of Windows Server. Next, you have to install Hyper-V on each of the physical machines. Hyper-V Replica is a built-in feature of Hyper-V 3.0 that comes with Windows Server 2012. If you plan to deploy Hyper-V on non-domain servers, you don't require an Active Directory Domain. If you want to implement a failover cluster on your premise, then you must have Active Directory Domain. In addition, if you want your replication traffic to be encrypted, you can use self-signed certificates from local servers or import a certificate generated from a Certificate Authority (CA). This is a server running Active Directory Certificate Services, which is a Windows Server Role that should be installed on a separate server. Certificates from such CAs are imported to Hyper-V Replica-enabled hosts and associated with Hyper-V Replica to encrypt traffic generated from a primary site to a replica site. A primary site is the production site of your company, and a replica site is a site which is not a part of the production site and it is where all the replication data will be stored. If we have checked and cleared all of these prerequisites, then we are ready to start with the deployment of Hyper-V Replica. Virtual machine replication in Failover Cluster environment Hyper-V Replica can be used with Failover Clusters, whether they reside in the primary or in the replica site. You can have the following deployment scenarios: Hyper-V host to a Failover Cluster Failover Cluster to a Failover Cluster Failover Cluster to a Hyper-V node Hyper-V Replica configuration when Failover Clusters are used is done with the Failover Cluster Management console. 
For replication to take place, the Hyper-V Replica Broker role must be installed on the Failover Clusters, whether they are in primary or replica sites. The Hyper-V Replica Broker role is installed like any other Failover Cluster roles. Failover scenarios In Hyper-V Replica there are three failover scenarios: Test failover Planned failover Unplanned failover Test failover As the name says, this is only used for testing purposes, such as health validation and Hyper-V Replica functionality. When test failover is performed, there is no downtime on the systems in the production environment. Test failover is done at the replica site. When test failover is in progress, a new virtual machine is created which is a copy of the virtual machine for which you are performing the test failover. It is easily distinguished because the new virtual machine has Test added to the name. It is safe for the Test Virtual Machine to be started because there is no network adapter on it. So no one can access it. It serves only for testing purposes. You can log in on it and check the application consistency. When you have finished testing, right-click on the virtual machine and choose Stop Test Failover, and then the Test virtual machine is deleted. Planned failover Planned failover is the safest and the only type that should be performed. Planned failover is usually done when Hyper-V hosts have to be shut down for various reasons such as transport or maintenance. This is similar to Live Migration. You make a planned failover so that you don't lose virtual machine availability. The first thing you have to do is check whether the replication process for the virtual machine is healthy. To do this, you have to start the Hyper-V Management console in the primary site. Choose the virtual machine, and then at the bottom, click on the Replication tab. If the replication health status is Healthy, then it is fine to do the planned failover. If the health status doesn't show Healthy, then you need to do some maintenance until it says Healthy. Unplanned failovers Unplanned failover is used only as a last resort. It always results in data loss because any data that has not been replicated is lost during the failover. Although planned failover is done at the primary site, the unplanned failover is done at the replica site. When performing unplanned failover, the replica virtual machine is started. At that moment Hyper-V checks to see if the primary virtual machine is on. If it is on, then the failover process is stopped. If the primary virtual machine is off, then the failover process is continued and the replica virtual machine becomes the primary virtual machine. What is virtualization? Virtualization is a concept in IT that has its root back in 1960 when mainframes were used. In recent years, virtualization became more available because of different user-friendly tools, such as Microsoft Hyper-V, were introduced to customers. These tools allow the administrator to configure and administer a virtualized environment easily. Virtualization is a concept where a hypervisor, which is a type of middleware, is deployed on a physical device. This hypervisor allows the administrator to deploy many virtual servers that will execute its workload on that same physical machine. In other words, you get many virtual servers on one physical device. This concept gives better utilization of resources and thus it is cost effective. 
Hyper-V 3.0 features
With the introduction of Windows Server 2008 R2, two new concepts regarding virtual machine high availability were introduced. Virtual machine high availability is a concept that allows the virtual machine to execute its workload with minimum downtime. The idea is to have a mechanism that will transfer the execution of the virtual machine to another physical server in case a node malfunctions. In Windows Server 2008 R2, a virtual machine can be live migrated to another Hyper-V host. There is also quick migration, which allows multiple migrations from one host to another. In Windows Server 2012, there are new features regarding Virtual Machine Mobility. Not only can you live migrate a virtual machine, but you can also migrate all of its associated files, including the virtual machine disks, to another location. Both mechanisms improve high availability. Live migration is functionality that allows you to transfer the execution of a virtual machine to another server with no downtime. Previous versions of Windows Server lacked disaster recovery mechanisms. A disaster recovery mechanism is any tool that allows the user to configure a policy that will minimize the downtime of systems in case of disasters. That is why, with the introduction of Windows Server 2012, Hyper-V Replica is installed together with Hyper-V and can be used in clustered and non-clustered environments. Windows Failover Clustering is a Windows feature that is installed from the Add Roles and Features Wizard in Server Manager. It makes the server ready to be joined to a failover cluster. Hyper-V Replica gives enterprises great value, because it is an easy-to-implement and easy-to-configure Business Continuity and Disaster Recovery (BCDR) solution. It is suitable for Hyper-V virtualized environments because it is built into the Hyper-V role of Windows Server 2012. The outcome of this is that virtual machines running at one site, called the primary site, can easily be replicated to a backup site, called the replica site, in case of disasters. The replication between the sites is done over an IP network, so it can be done in LAN environments or across a WAN link. This BCDR solution provides efficient, periodic replication. In case of disaster, it allows the production servers to be failed over to a replica server. This is very important for critical systems because it reduces the downtime of those systems. It also allows the Hyper-V administrator to restore virtual machines to a specific point in time from the recovery history of a certain virtual machine.
Security considerations
Restricting access to Hyper-V is very important. You want only authorized users to have access to the Hyper-V management console. When Hyper-V is installed, a local security group named Hyper-V Administrators is created on the server. Every user that is a member of this group can access and configure Hyper-V settings. Another way to increase the security of Hyper-V is to change the default port numbers used for Hyper-V Replica authentication. By default, Kerberos authentication uses port number 80, and certificate-based authentication uses port number 443. Certificates also encrypt the traffic generated from the primary to the replica site. Finally, you can create a list of authorized servers from which replication traffic will be accepted.
Summary
There are new concepts and useful features that make the IT administrators' life easier. Windows Server 2012 is designed for enterprises that want to deploy modern datacenters with state-of-the-art capabilities.
The new user interface, the simplified configuration, and all of the built-in features are what make Windows Server 2012 appealing to IT administrators.
Resources for Article:
Further resources on this subject:
Dynamically enable a control (Become an expert) [Article]
Choosing the right flavor of Debian (Simple) [Article]
So, what is Microsoft © Hyper-V server 2008 R2? [Article]

So, what is Microsoft © Hyper-V server 2008 R2?

Packt
16 Sep 2013
11 min read
(For more resources related to this topic, see here.) Welcome to the world of virtualization. On the next pages we will explain in simple terms what virtualization is, where it comes from, and why this technology is amazing. So let's start.
The concept of virtualization is not really new; as a matter of fact, it is in some ways an inheritance from the mainframe world. For those of you who don't know what a mainframe is, here is a short explanation: a mainframe is a huge computer that can have from several dozen up to hundreds of processors, tons of RAM, and enormous storage space. Think of it as the kind of super computer that international banks, car manufacturers, or even aerospace entities use. These monster computers have a "core" operating system (OS), which helps in creating logical partitions of the resources and assigning them to smaller OSes. In other words, the full hardware power is divided into smaller chunks that each have a specific purpose. As you can imagine, there are not too many companies that can afford this kind of equipment, and this is one of the reasons why small servers became so popular. You can learn more about mainframes on the Wikipedia page at http://en.wikipedia.org/wiki/Mainframe_computer.
Starting in the 80s, small servers (mainly based on Intel© and/or AMD© processors) became quite popular, and almost anybody could buy a simple server. Mid-sized companies then began to increase the number of servers they ran. In later years, the power provided by new servers was enough to fulfill the most demanding applications and, guess what, even to support virtualization. But you may be wondering, what is virtualization? Well, the virtualization concept, even if it sounds a bit bizarre, is to have a program that behaves like a normal application to the host OS, asking for CPU, memory, disk, and network (to name the main four subsystems), while that application creates hardware, virtualized hardware of course, that can be used to install a brand new OS.
In the diagram that follows, you can see a physical server, including CPU, RAM, disk, and network. This server needs an OS on top, and from there you can install and execute programs such as Internet browsers, databases, spreadsheets, and of course virtualization software. This virtualization software behaves the same way as any other application: it sends requests to the OS for a file stored on the disk, access to a web page, or more CPU time; so for the host OS, it is a standard application that demands resources. But within the virtualization application (also known as the hypervisor), some virtual hardware is created; in other words, some fake hardware is presented at the top end of the program. At this point we can start the OS setup on this virtual hardware, and the OS can recognize the hardware and use it as if it were real. So, coming back to the original idea, virtualization is a technique, based on software, to execute several servers and their corresponding OSes on the same physical hardware. Virtualization can be implemented on many architectures, such as IBM© mainframes, many distributions of Unix© and Linux, Windows©, Apple©, and so on. We already mentioned that virtualization is based on software, but there are two main kinds of software you can use to virtualize your servers. The first type behaves like any other application installed on the server and is also known as workstation or software-based virtualization. The second one is part of the kernel of the host OS, and is enabled as a service.
This type of software is also called as hardware virtualization and it uses special CPU characteristics (as Data Execution Prevention or Virtualization Support), which we will discuss in the installation section. The main difference is the performance you can have when using either of the types. On the software/workstation virtualization, the request for hardware resources has to go from the application down to the OS into the kernel in order to get the resource. In the hardware solution, the virtualization software or hypervisor layer is built into the kernel and makes extensive usage of the CPU's virtualization capabilities, so the resource demand is faster and more reliable, as in Microsoft © Hyper-V Server 2008 R2. Reliability and fault tolerance By placing all the eggs in the same basket, we want to be sure that the basket is protected. Now think that instead of eggs, we have virtual machines, and instead of the basket, we have a Hyper-V server. We require that this server is up and running most of the time, rendering into reliable virtual machines that can run for a long time. For that reason we need a fault tolerant system, that is to say a whole system which is capable of running normally even if a fault or a failure arises. How can this be achieved? Well, just use more than one Hyper-V server. If a single Hyper-V server fails, all running VMs on it will fail, but if we have a couple of Hyper-V servers running hand in hand, then if the first one becomes unavailable, its twin brother will take care of the load. Simple, isn't it? It is, if it is correctly dimensioned and configured. This is called Live Migration. In a previous section we discussed how to migrate a VM from one Hyper-V server to another, but using this import/export technique causes some downtime in our VMs. You can imagine how much time it will take to move all our machines in case a host server fails, and even worse, if the host server is dead, you can't export your machines at all. Well, this is one of the reasons we should create a Cluster. As we already stated, a fault tolerant solution is basically to duplicate everything in the given solution. If a single hard disk may fail, then we configure additional disks (as it may be RAID 1 or RAID 5), if a NIC is prone to failure, then teaming two NICs may solve the problem. Of course, if a single server may fail (dragging with it all VMs on it), then the solution is to add another server; but here we face the problem of storage space; each disk can only be physically connected to one single data bus (consider this the cable, for simplicity), and the server must have its own disk in order to operate correctly. This can be done by using a single shared disk, as it may be a directly connected SCSI storage, a SAN (Storage Area Network connected by optical fiber), or the very popular NAS (Network Attached Storage) connected by NICs. As we can see in the preceding diagram, the red circle has two servers; each is a node within the cluster. When you connect to this infrastructure, you don't even see the number of servers, because in a cluster there are shared resources such as the server name, IP address, and so on. So you connect to the first available physical server, and in the event of a failure, your session is automatically transferred to the next available physical server. Exactly the same happens at the server's backend. We can define certain resources as shared to the cluster's resources, and then the cluster can administer which physical server will use the resources. 
For example, in the preceding diagram there are several iSCSI targets (Internet SCSI targets) defined on the NAS, and the cluster accesses them according to the active physical node, thus making your service (in this case, your configured virtual machines) highly available. You can see the iSCSI FAQ on the Microsoft web site (http://go.microsoft.com/fwlink/?LinkId=61375).

In order to use a failover cluster solution, the hardware must be marked as Certified for Windows Server 2008 R2 and the nodes should be identical (in some cases the solution may work with dissimilar hardware, but maintenance, operation, and capacity planning, to name a few, become harder, making the solution more expensive and more difficult to support). The complete solution also has to pass the validation wizard successfully when the cluster is created. The storage must be certified as well and has to be Windows Cluster compliant (mainly supporting the SCSI-3 Persistent Reservations specification), and it is strongly recommended that you implement an isolated LAN exclusively for storage traffic. Remember that in a fault tolerant solution, all infrastructure devices have to be duplicated, even networks. The configuration wizard will let us configure the cluster even if the network is not redundant, but it will display a warning about this point.

OK, let's get down to business. To configure a fault tolerant Hyper-V cluster, we need to use Cluster Shared Volumes, which, in simple terms, lets Hyper-V run as a clustered service. As we are using a NAS, we have to configure both ends: the iSCSI initiator (on the host server) and the iSCSI target (on the NAS). You can watch the Microsoft TechNet video at http://technet.microsoft.com/en-us/video/how-to-setup-iscsi-on-windows-server-2008-11-mins.aspx or read the Microsoft article on configuring iSCSI initiators at http://technet.microsoft.com/en-us/library/ee338480(v=ws.10).aspx. To configure the iSCSI target on the NAS, please refer to the NAS manufacturer's documentation.

Apart from the iSCSI disks that will hold our virtual machines, we need to provide a witness disk (known in the past as the quorum disk). This disk (1 GB will do the trick) is used to orchestrate and synchronize the cluster. Once the iSCSI disks are configured and visible on one of our servers (you can check this by opening the Computer Management console and selecting Disk Management), we can proceed to configure the cluster.

To install the Failover Clustering feature, open the Server Manager console, select the Features node on the left, select Add Features, and then select the Failover Clustering feature (the procedure is very similar to the one we used to install the Hyper-V role in the Requirements and Installation section). This step has to be repeated on every node participating in the cluster.

At this point we should have both the Failover Clustering feature and the Hyper-V role set up on the servers, so we can open the Failover Cluster Manager console from Administrative Tools and validate our configuration. Make sure Failover Cluster Manager is selected and, in the center pane, select Validate a Configuration (a right-click will do the trick as well). Follow the instructions and run all of the tests until no errors are shown. When this step is completed, we can proceed to create our cluster.
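If you prefer to script these steps rather than click through Server Manager, the same work can be done from PowerShell. The sketch below assumes two nodes with the placeholder names NODE1 and NODE2; the ServerManager and FailoverClusters modules ship with Windows Server 2008 R2.

```powershell
# Run on each node: add the Hyper-V role and the Failover Clustering feature
# (Hyper-V may already be installed from the earlier section; re-requesting it is harmless)
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart

# Run once, from either node: validate the intended cluster configuration
# (equivalent to the Validate a Configuration wizard)
Import-Module FailoverClusters
Test-Cluster -Node NODE1, NODE2
```

Test-Cluster produces a validation report; review it and resolve any errors before creating the cluster.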
In the same Failover Cluster Manager console, in the center pane, select Create a Cluster (a right-click will do the trick as well). The wizard will ask you for the following:

The servers that will participate in the cluster (a maximum of 16 nodes and a minimum of 1, which is of little use on its own, so go for at least two servers)
The name of the cluster (this is the name you will use to access the cluster, not the individual server names)
The IP configuration for the cluster (this, like the name, belongs to the cluster itself rather than to the individual nodes)

We still need to enable Cluster Shared Volumes. To do so, right-click the failover cluster and then click Enable Cluster Shared Volumes. The Enable Cluster Shared Volumes dialog opens; read and accept the terms and restrictions, and click OK. Then select Cluster Shared Volumes and, in the Actions pane, select Add Storage and pick the iSCSI disks we configured previously.

Now the only thing left is to make a VM highly available, such as the one we created in the Quick start – creating a virtual machine in 8 steps section (or any other VM you have created or want to create; be imaginative!). The OS inside the virtual machine can then fail over to another node with almost no interruption. Note that the virtual machine must not be running when you make it highly available through the wizard.

1. In the Failover Cluster Manager console, expand the tree of the cluster we just created and select Services and Applications.
2. In the Actions pane, select Configure a Service or Application.
3. On the Select Service or Application page, click Virtual Machine and then click Next.
4. On the Select Virtual Machine page, check the name of the virtual machine that you want to make highly available, and then click Next.
5. Confirm your selection and click Next again. The wizard will show a summary and give you the option to view the report.
6. Finally, under Services and Applications, right-click the virtual machine and then click Bring this service or application online. This brings the virtual machine online and starts it.
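For reference, the cluster creation and VM high availability steps can also be scripted with the FailoverClusters module. This is a minimal sketch under the same assumptions as before: the node names, cluster name, static IP, disk name, and VM name are all placeholders for your own values.

```powershell
Import-Module FailoverClusters

# Create the cluster from the validated nodes; the name and IP address are placeholders
New-Cluster -Name HVCLUSTER01 -Node NODE1, NODE2 -StaticAddress 192.168.1.200

# Add any remaining eligible shared disks, then promote one to a Cluster Shared Volume
# (Cluster Shared Volumes must already be enabled on the cluster, as described above)
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Make an existing, powered-off VM highly available and bring its cluster group online
$vmGroup = Add-ClusterVirtualMachineRole -VMName "MyFirstVM"
$vmGroup | Start-ClusterGroup
```

Once the VM is clustered, Move-ClusterVirtualMachineRole can be used to live migrate it between the nodes, which is the no-downtime move discussed earlier.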