
How-To Tutorials - Virtualization

115 Articles

Designing and Building a vRealize Automation 6.2 Infrastructure

Packt
29 Sep 2015
16 min read
In this article by J. Powell, the author of the book Mastering vRealize Automation 6.2, we put together a design and take vRealize Automation 6.2 from proof of concept (POC) to production. With the knowledge gained from this article, you should feel comfortable installing and configuring vRA. We will cover the following topics:

Proving the technology
Proof of Concept
Proof of Technology
Pilot implementation
Designing the vRealize Automation architecture

Proving the technology

In this section, we discuss how to approach a vRealize Automation 6.2 project. This step is necessary to assure a successful project, and it matters especially for vRA because of the sheer number of moving parts that make up the software. We are going to focus on the end users, whether they are individuals or business units, such as your company's software development department. These are the people who will use vRA to provide the speed and agility necessary to deliver results that drive the business and make money. If we take this approach and treat our co-workers as customers, we can give them what they need to perform their jobs, as opposed to what we perceive they need from an IT perspective.

Designing our vRA deployment around user and business requirements first gives us a better plan for implementing the backend infrastructure as well as the service offerings within the vRA web portal. This allows us to build a business case for vRealize Automation and will help determine which of the three editions makes sense to meet these needs. Once we have our business case created, validated, and approved, we can start testing vRealize Automation. There are three common phases to a testing cycle:

Proof of Concept
Proof of Technology
Pilot implementation

We will cover these phases in the following sections and explore whether you need them for your vRealize Automation 6.2 deployment.

Proof of Concept

A POC is typically an abbreviated version of what you hope to achieve in production. It is normally spun up in a lab, using old hardware, with a limited number of test users. Once your POC is set up, one of two things happens. The first is that nothing happens, or the POC gets decommissioned; after all, it's just the IT department getting their hands dirty with new technology. This also happens when there is no clear business driver that justifies having the technology in a production environment. The second is that the technology is proven and moves into a pilot phase. Of course, this is completely up to you. Perhaps a demonstration of the technology will be enough, or testing some limited outcomes in VMware's Hands-on Labs (HOL) for vRealize Automation 6.2 will do the trick. Due to the number of components and features within vRA, it is strongly recommended that you create a POC, documenting the process along the way. This will give you a strong base if you take the project from POC to production.

Proof of Technology

The object of a POT project is to determine whether the proposed solution or technology will integrate into your existing IT landscape and add value. This is the stage where it is important to document any technical issues you encounter in your individual environment. There is no need to involve pilot users in this process, as it is specifically meant to validate the technical merits of the software.
Pilot implementation

A pilot is a small-scale, targeted rollout of the technology in a production environment. Its scope is limited, typically by the number of users and systems, to allow testing and make sure the technology works as expected and designed. It also limits the business risk. A pilot deployment of vRA is also a way to gain feedback from the users who will ultimately use it on a regular basis. vRealize Automation 6.2 is a product that empowers end users to provision everything as a service. How users feel about the layout of the web portal, the user experience, and the automated feedback from the system directly impacts how well the product will work in a full-blown production scenario. This also gives you time to make any necessary modifications to the vRA environment before providing access to additional users.

When designing the pilot infrastructure, you should use the same hardware that will be used in production. This includes ESXi hosts, storage, Fibre Channel or Internet Small Computer System Interface (iSCSI) connectivity, and vCenter versions. This will take into account any variances between platforms and configurations that could affect performance. Even at this stage, design, attention to detail, and following VMware best practices are key. Often, pilot programs get rolled straight into production; adhering to these concepts will put you on the right path to a successful deployment. To get a better understanding, let's look at some of the design elements that should be considered:

Size of the deployment: A small deployment will support 10,000 managed machines, 500 catalog items, and 10 concurrent deployments.
Concurrent provisioning: Only two concurrent provisions per endpoint are allowed by default. You may want to increase this limit to suit your requirements.
Hardware sizing: This refers to the number of servers, the CPU, and the memory.
Scale: This refers to whether there will be multiple Identity and vRealize Automation vApps.
Storage: This refers to pools of storage from Storage Area Network (SAN) or Network Attached Storage (NAS) and tiers of storage for performance requirements.
Network: This refers to LANs, load balancing, internal versus external access to web portals, and IP pools for use with the infrastructure provisioned through vRA.
Firewall: This refers to knowing which ports need to be opened between the various components that make up vRA, as well as the other endpoints that may fall under vRA's purview. (A quick connectivity check is sketched after this list.)
Portal layout: This refers to the items you want to provide to the end user and the manner in which you categorize them for future growth.
IT Business Management Suite Standard Edition: If you are going to implement this product, it can scale up to 20,000 VMs across four vCenter Servers.
Certificates: Appliances can use self-signed certificates, but it is recommended to use an internal Certificate Authority for vRA components and an externally signed certificate on the vRA web portal if it is going to be exposed to the public Internet.

VMware has published a Technical White Paper that covers all the details and considerations when deploying vRA. You can download the paper from http://www.vmware.com/files/pdf/products/vCloud/VMware-vCloud-Automation-Center-61-Reference-Architecture.pdf.
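As a quick sanity check for the Firewall item above, you can verify from a Windows machine that the key vRA components answer on their expected ports before installation begins. This is a minimal, hedged sketch: the host names are placeholders, and the port list is illustrative rather than exhaustive (443 for the vRA appliance and IaaS web site, 5480 for the appliance management interface, 8281 for vRealize Orchestrator), so adjust it to the reference architecture for your version. Test-NetConnection requires Windows 8.1/Server 2012 R2 or later.

# Hedged sketch: check that core vRA ports are reachable (host names are placeholders)
$checks = @(
    @{ Host = 'vra-appliance.corp.local'; Port = 443  },   # vRA web portal
    @{ Host = 'vra-appliance.corp.local'; Port = 5480 },   # appliance management UI
    @{ Host = 'vra-iaas.corp.local';      Port = 443  },   # IaaS components
    @{ Host = 'vro-appliance.corp.local'; Port = 8281 }    # vRealize Orchestrator
)
foreach ($c in $checks) {
    $result = Test-NetConnection -ComputerName $c.Host -Port $c.Port -WarningAction SilentlyContinue
    "{0}:{1} reachable: {2}" -f $c.Host, $c.Port, $result.TcpTestSucceeded
}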
VMware provides the following general recommendations when deploying vRealize Automation: keep all vRA components in the same time zone with their clocks synced. If you plan on using VMware IT Business Management Suite Standard Edition, deploy it in the same LAN as vCenter. You can deploy DEM Workers and proxy agents over the WAN, but all other components should not go over the WAN, to prevent performance degradation. Here is a diagram of the pilot process:

Designing the vRealize Automation architecture

We have discussed the components that comprise vRealize Automation as well as some key design elements. Now, let's look at some deployment scenarios at a high level. Keep in mind that vRA is designed to manage tens of thousands of VMs in an infrastructure; depending on your environment, you may never exceed the limits of what VMware considers a small deployment. The following diagram displays the minimum footprint needed for a small deployment architecture.

A medium deployment can support up to 30,000 managed machines, 1,000 catalog items, and 50 concurrent deployments. The following diagram shows the minimum required footprint for a medium deployment.

Large deployments support 50,000 managed machines, 2,500 catalog items, and 100 concurrent deployments. The following diagram shows the minimum required footprint for a large deployment.

Design considerations

Now that we understand the design elements for a small, medium, and large infrastructure, let's explore the components of vRA and build an example design based on the small infrastructure requirements from VMware. Since there are so many options and components, we have broken them down into easily digestible pieces.

Naming conventions

It is important to give some thought to naming conventions for the different aspects of the vRA web portal. Your company has probably already set a naming convention for servers and environments, and items provisioned from vRA will have to adhere to those standards. It is also important to name the different components of vRealize Automation in a way that makes sense for what you ultimately want vRA to do. This matters because it is not easy (and in some cases not possible) to rename elements of the vRA web portal once you have implemented them.

Compute resources

In vRA terms, a compute resource is an object that represents a host, host cluster, virtual data center, or public cloud region, such as Amazon, where machines and applications can be provisioned. For example, compute resources can refer to vCenter, Hyper-V, or Amazon AWS. This list grows with each subsequent release of vRA.

Business and Fabric groups

A Business group in the vRA web portal is a set of services and resources assigned to a set of users. Quite simply, it is a way to align a business department or unit with the resources it needs. For example, you may have a Business group named Software Developers, and you would want them to be able to provision SQL Server 2012 and 2014 on Windows Server 2012 R2 servers. Fabric groups enable IT administrators to provide resources from your infrastructure. You can add users or groups to a Fabric group in order to manage the infrastructure resources you have assigned. For example, if you have a software development cluster in vCenter, you could create a Fabric group that contains the users responsible for managing this cluster to oversee its resources.
Endpoints and credentials

Endpoints can represent anything from vCenter to storage, physical servers, and public cloud offerings such as Amazon AWS. The endpoint defines the platform address (in terms of how it is accessed, for example through a web browser) along with the credentials needed to manage it.

Reservations

Reservations are how we carve out a portion of our total infrastructure for consumption by end users, and they are a key element of the vRealize Automation 6.2 infrastructure design. Each reservation will need to define disk, memory, networking, and a priority. The lower the number, the higher the priority; this is used to resolve conflicts when there are multiple matching reservations. If the priorities of multiple reservations are equal, vRA will choose a reservation in a round-robin order.

In the preceding diagram, on the far right-hand side, we can see that we have Shared Infrastructure composed of Private Physical and Private Virtual space, as well as a portion of a Public Cloud offering. By creating different reservations, we can ensure that there is enough infrastructure for the business, while providing a dedicated portion of the total infrastructure to our end users.

Reservation policies

A reservation policy is a set of reservations that you can select from a blueprint to restrict provisioning to only specific reservations. Reservation policies are attached to reservations. One example use of reservation policies is to create different storage tiers: you can create separate Bronze, Silver, and Gold policies to reflect the types of disk available on your SAN (such as SATA, SAS, and SSD).

Network profiles

By default, vRA will assign an IP address from a DHCP server to every machine it provisions. However, most production environments do not use DHCP for their servers, so a network profile will need to be created to allocate and assign static IPs to these servers. The network profile options are external, private, NAT (short for Network Address Translation), and routed. For the scope of our examples, we will focus on the external option.

Compute resources

Compute resources are tied in with Fabric groups, endpoints, storage reservation policies, and cost profiles. You must have these elements created before you can configure compute resources, although some of them, such as storage and cost profiles, are optional. An example of a compute resource is a vCenter cluster; it is created automatically when you add an endpoint to the vRA web portal.

Blueprints

Blueprints are instruction sets used to build virtual, physical, and cloud-based machines, as well as vApps. Blueprints define a machine or a set of application properties, the way it is provisioned, and its policy and management settings. For an end user, a blueprint is listed as an item in the Service Catalog tab. The user can request the item, and vRA uses the blueprint to provision the user's request. Blueprints also provide a way to prompt the requesting user for additional items, such as more compute resources, application or machine names, and network information. Of course, this can be automated as well and will probably be the preferred method in your environment. Blueprints also contain workflow logic. vRealize Automation contains built-in workflows for clone- and snapshot-based, Kickstart, ISO, SCCM, and WIM deployments. You can define a minimum and maximum for CPU, memory, and storage, which gives end users the option to customize their machines to match their individual needs.
It is a best practice to define the minimum for servers with very low resources, such as 1 vCPU and 512 MB of memory. It is easy to hot-add these resources if the end user needs more compute after the initial request. However, if you set the minimum resources too high in the blueprint, you cannot lower the value later; you will have to create a new blueprint. You can also define custom properties in blueprints. For example, if you want to provide a VM with a defined MAC address or without a virtual CD-ROM attached, you can do so. VMware has published a detailed guide to the Custom Properties and their values; you can find it at http://pubs.vmware.com/vra-62/topic/com.vmware.ICbase/PDF/vrealize-automation-62-custom-properties.pdf. Custom Properties are case sensitive. It is recommended to test Custom Properties individually until you are comfortable using them. For example, a blueprint referencing an ISO workflow would fail if it also carried a Custom Property that removes the CD-ROM.

Users and groups

Users and groups are defined in the Administration section of the vRA web portal. This is where we assign vRA-specific roles to groups. It is worth mentioning that when you log in to the vRA web portal and click on users, the list is blank. This is because of the sheer number of users that could potentially be allowed to access the portal, which would slow the load time. In our examples, we will focus on users and groups from our Identity Appliance that ties in to Active Directory.

Catalog management

Catalog management consists of services, catalog items, actions, and entitlements. We will discuss them in more detail in the following sections.

Services

Services are another key design element and are defined by the vRA administrators to group subsets of your environment. For example, you may have a service defined for applications, where you would list items such as SQL and Oracle databases. You could also create a service called OperatingSystems, where you would group catalog items such as Linux and Windows. You can make these services active or inactive, and also define maintenance windows during which catalog items under a category are unavailable for provisioning.

Catalog items

Catalog items are essentially links back to blueprints. These items are tied to a service that you previously defined and help shape the Service Catalog tab that the end user will use to provision machines and applications. You will also entitle users to use the catalog item.

Entitlements

As mentioned previously, entitlements are how we link business users and groups to services, catalog items, and actions.

Actions

Actions are a list of operations that give a user the ability to perform certain tasks with services and catalog items. There are over 30 out-of-the-box action items that come with vRA, including creating and destroying VMs, changing the lease time, and adding additional compute resources. You also have the option of creating custom actions.

Approval policies

Approval policies are sets of rules that govern the use of catalog items. They can be used in the pre- or post-configuration lifecycle of an item. Let's say, as an example, we have a Red Hat Linux VM that a user can provision. We have set the minimum vCPU count to 1 but have defined a maximum of 4. We would want to notify the user's manager and the IT team when a request to provision the VM exceeds the minimum vCPU count we have defined.
We could create an approval policy to perform a pre-check to see whether the user is requesting more than one vCPU. If the threshold is exceeded, an e-mail is sent out to approve the additional vCPU resources; until the request is approved, the VM will not be provisioned.

Advanced services

Advanced services is an area of the vRA web portal where we can tie in customized workflows from vRealize Orchestrator. For example, we may need to check for a file in the VM's operating system once it has been deployed, to make sure that an application has been deployed successfully or that a baseline compliance requirement is met. We can present vRealize Orchestrator workflows for end users to leverage in almost the same manner as we do IaaS.

Summary

In this article, we covered the design and build principles of vRealize Automation 6.2. We discussed how to prove the technology by performing due-diligence checks with the business users and creating a case to implement a POC. We detailed considerations for rolling out vRA in a pilot program and showed you how to gauge its success. Lastly, we detailed the components that comprise the design and build of vRealize Automation, while introducing additional elements.

Resources for Article:
Further resources on this subject:
vROps – Introduction and Architecture [article]
An Overview of Horizon View Architecture and its Components [article]
Master Virtual Desktop Image Creation [article]


Deploying the Orchestrator Appliance

Packt
16 Sep 2015
5 min read
This article by Daniel Langenhan, the author of VMware vRealize Orchestrator Essentials, discusses the deployment of the Orchestrator Appliance and then explains how to access it using the Orchestrator home page. In the following sections, we will discuss how to deploy Orchestrator with vCenter and with VMware Workstation.

Deploying the Appliance with vCenter

To make the best use of Orchestrator, it's best to deploy it into your vSphere infrastructure. For this, we deploy it with vCenter:

Open your vSphere Web Client and log in.
Select a host or cluster that should host the Orchestrator Appliance.
Right-click the host or cluster and select Deploy OVF Template. The deploy wizard will start and ask you the typical OVF questions: accept the EULA, choose the VM name and the VM folder where it will be stored, and select the storage and network it should connect to. Make sure that you select a static IP.
The Customize template step will now ask you for some more Orchestrator-specific details. You will be asked to provide a new password for the root user; the root user is used to connect to the vRO appliance operating system or the web console. The other password that is needed is for the vRO Configurator interface. The last piece of information needed is the network configuration for the new VM. The following screenshot shows an example of the Customize template step:

The last step summarizes all the settings and lets you power on the VM after creation. Click on Finish and wait until the VM is deployed and powered on.

Deploying the appliance into VMware Workstation

For learning how to use Orchestrator, or for testing purposes, you can deploy Orchestrator using VMware Workstation (Fusion for Mac users). The process is pretty simple:

Download the Orchestrator Appliance to your desktop.
Double-click the OVA file. The import wizard asks you for a name and a location in your local file structure for this VM. Choose a location and click on Import.
Accept the EULA and wait until the import has finished.
Click on Edit virtual machine settings, select Network Adapter, and choose the correct network (Bridged, NAT, or Host-only) for this VM. I typically use Host-only. Click on OK to exit the settings.
Power on the VM and watch the boot screen. At some stage, the boot will stop and you will be prompted for the root password. Enter a new password and confirm it. After a moment, you will be asked for the password for the Orchestrator Configurator. Enter a new password and confirm it.
After this, the boot process should finish, and you should see the Orchestrator Appliance's DHCP IP. If you would like to configure the VM with a fixed IP, access the appliance configuration, as shown on the console screen (see the next section).

After the deployment

If the deployment is successful, the console of the VM should show a screen that looks like the following screenshot. You can now access the Orchestrator Appliance, as shown in the next section.
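If you prefer to script the vCenter deployment rather than step through the Deploy OVF Template wizard, PowerCLI can import the appliance OVA as well. The following is a minimal, hedged sketch: the file path, vCenter, host, and datastore names are placeholders, and the OVF properties exposed by the appliance (root password, Configurator password, network settings) vary by appliance version, so inspect $ovfConfig and fill it in before applying it rather than trusting the mappings shown here.

# Hedged sketch: import the Orchestrator Appliance OVA with PowerCLI (names/paths are placeholders)
Connect-VIServer -Server vcenter.corp.local

$ova       = 'C:\Temp\VMware-vRO-Appliance.ova'
$vmHost    = Get-VMHost -Name 'esxi01.corp.local'
$datastore = Get-Datastore -Name 'datastore1'

# Read the OVA descriptor so the appliance-specific properties can be inspected and filled in
$ovfConfig = Get-OvfConfiguration -Ovf $ova
$ovfConfig | Format-List            # review the available properties/network mappings first

Import-VApp -Source $ova -OvfConfiguration $ovfConfig -Name 'vRO-Appliance' `
    -VMHost $vmHost -Datastore $datastore -DiskStorageFormat Thin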
Accessing Orchestrator

Orchestrator has its own small web server that can be accessed with any web browser.

Accessing the Orchestrator home page

We will now access the Orchestrator home page:

Open a web browser such as Mozilla Firefox, Internet Explorer, or Google Chrome.
Enter the IP or FQDN of the Orchestrator Appliance.
The Orchestrator home page will open. It looks like the following screenshot.

The home page contains some very useful links, numbered in the screenshot. Here is an explanation of each number:

1. Click here to start the Orchestrator Java Client. You can also access the Client directly by visiting https://[IP or FQDN]:8281/vco/client/client.jnlp.
2. Click here to download and install the Orchestrator Java Client locally.
3. Click here to access the Orchestrator Configurator, which is scheduled to disappear soon, whereupon we won't use it any more. The way forward will be the Orchestrator Control Center.
4. This is a selection of links that can be used to find helpful information and download plugins.
5. These are some additional links to VMware sites.

Starting the Orchestrator Client

Let's open the Orchestrator Client. We will use an internal user to log in until we have hooked up Orchestrator to SSO. For the Orchestrator Client, you need at least Java 7.

From the Orchestrator home page, click on Start Orchestrator Client. Your Java environment will start; you may be required to acknowledge that you really want to start this application. You will now be greeted with the Orchestrator login screen.
Enter vcoadmin as the username and vcoadmin as the password. This is a preconfigured user that allows you to log in and use Orchestrator directly. Click on Login.
The Orchestrator Client will now load. After a moment, you will see something that looks like the following screenshot. You are now logged in to the Orchestrator Client.

Summary

This article guided you through the process of deploying and accessing an Orchestrator Appliance with vCenter and VMware Workstation.

Resources for Article:
Further resources on this subject:
Working with VMware Infrastructure [article]
Upgrading VMware Virtual Infrastructure Setups [article]
VMware vRealize Operations Performance and Capacity Management [article]


Getting Started – Understanding Citrix XenDesktop and its Architecture

Packt
14 Sep 2015
6 min read
In this article, written by Gurpinder Singh, the author of Troubleshooting Citrix XenDesktop, we will learn about the following topics:

Hosted shared vs. hosted virtual desktops
Citrix FlexCast delivery technology
Modular framework architecture
What's new in XenDesktop 7.x

Hosted shared desktops (HSD) vs. hosted virtual desktops (HVD)

Before going through the XenDesktop architecture, we would first like to explain the difference between the two desktop delivery platforms, HSD and HVD. Which platform is best suited for an enterprise is a common question from every system administrator, and the answer depends on the requirements of the enterprise. Some choose Hosted Shared Desktops (HSD), or Server Based Computing (XenApp), over Hosted Virtual Desktops (XenDesktop); here, a single server desktop is shared among multiple users, and the environment is locked down using Active Directory GPOs. XenApp is the cost-effective platform of the two, and many small to mid-sized enterprises prefer it because of its cost benefits and lower complexity. However, this model does pose some risks to the environment, as the same server is shared by multiple users, and a proper design plan is required to configure an HSD or XenApp Published Desktop environment correctly.

Many enterprises have security and other user-level dependencies that lead them to prefer a hosted virtual desktop solution. A hosted virtual desktop, or XenDesktop, runs a Windows 7 or Windows 8 desktop as a virtual machine hosted in a data centre. In this model, a single user connects to a single desktop, so there is much less risk of a desktop configuration change impacting all users. XenDesktop 7.x and above now also enables you to deliver server-based desktops (HSD) along with HVD within one product suite. XenDesktop also provides HVD pooled desktops, which work on a shared OS image concept similar to HSD desktops, with the difference that they run a desktop operating system instead of a server operating system.

The following table provides a fair idea of the requirements and the recommended delivery platform for your enterprise:

Customer requirement: Users work on one or two applications and rarely need to perform updates or installations on their own. Delivery platform: Hosted Shared Desktop
Customer requirement: Users work on their own core set of applications, for which they need to change system-level settings, perform installations, and so on. Delivery platform: Hosted Virtual Desktop (Dedicated)
Customer requirement: Users work on MS Office and other content creation tools. Delivery platform: Hosted Shared Desktop
Customer requirement: Users work on CPU- and graphics-intensive applications that require video rendering. Delivery platform: Hosted Virtual Desktop (Blade PCs)
Customer requirement: Users need admin privileges to work on a specific set of applications. Delivery platform: Hosted Virtual Desktop (Pooled)

You can always have a mixed set of desktop delivery platforms in your environment, focused on customer needs and requirements.

Citrix FlexCast delivery technology

Citrix FlexCast is a delivery technology that allows a Citrix administrator to personalize virtual desktops to meet the performance, security, and flexibility requirements of end users. There are different types of user requirements: some need standard desktops with a standard set of apps, and others require high-performance personalized desktops.
Citrix has come up with a solution to meet these demands with FlexCast technology. You can deliver any kind of virtualized desktop with FlexCast; the models fall into five different categories:

Hosted Shared (HSD)
Hosted Virtual Desktop (HVD)
Streamed VHD
Local VMs
On-Demand Apps

A detailed discussion of these models is out of scope for this article. To read more about the FlexCast models, please visit http://support.citrix.com/article/CTX139331.

Modular framework architecture

To understand the XenDesktop architecture, it is better to break it down into discrete, independent modules rather than visualizing it as one single, integrated piece. Citrix provides this modularized approach to designing and architecting XenDesktop to solve customers' requirements and objectives, by providing a platform that is highly resilient, flexible, and scalable. This reference architecture is based on information gathered from multiple Citrix consultants working on a wide range of XenDesktop implementations. Have a look at the basic components of the XenDesktop architecture that everyone should be aware of before getting involved in troubleshooting.

We won't spend much time on each component of the reference architecture (http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xendesktop-deployment-blueprint.pdf), as that is out of scope for this book; we will go through each component quickly.

What's new in XenDesktop 7.x

With the release of Citrix XenDesktop 7, Citrix introduced a lot of improvements over previous releases. With every new product release, a lot of information is published, and it can be difficult to find the key information system administrators need in order to understand what has changed and what the key benefits of the new release are. The purpose of this section is to highlight the key features that XenDesktop 7.x brings for all Citrix administrators. It does not provide all the details of the new features and changes that XenDesktop 7.x introduces, but it highlights the key points that every Citrix administrator should be aware of while administering XenDesktop 7.

Key highlights:

XenApp and XenDesktop are now part of a single setup
Cloud integration to support desktop deployments in the cloud
The IMA database no longer exists; IMA is replaced by FMA (FlexCast Management Architecture)
There are no more zones or ZDCs (Zone Data Collectors)
Microsoft SQL Server is the only supported database
Sites are used instead of Farms
XenApp and XenDesktop can now share consoles; Citrix Studio and Desktop Director are used for both products
The shadowing feature is deprecated; Citrix recommends using Microsoft Remote Assistance
Locally installed applications can be integrated for use with server-based desktops
HDX and mobility features
Profile Management is included
MCS can now be leveraged for both server and desktop OS
MCS now works with KMS
StoreFront replaces Web Interface
Remote PC Access
No more Citrix Streaming Profile Manager; Citrix recommends Microsoft App-V
The core component is replaced by a VDA agent
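Because FMA replaces IMA and Sites replace Farms, day-to-day checks from PowerShell also change: you query the Broker SDK on a Delivery Controller rather than IMA-era tooling. The following is a hedged sketch only; it assumes the XenDesktop 7.x PowerShell snap-ins are installed (for example, on a Controller) and that your account has read access to the Site, and the selected property names may vary slightly between 7.x versions.

# Hedged sketch: inspect an FMA Site from a Delivery Controller
Asnp Citrix*                      # load the Citrix snap-ins

Get-BrokerSite                    # Site-level details (the FMA equivalent of the old farm view)
Get-BrokerController              # Delivery Controllers registered in the Site
Get-BrokerDesktopGroup | Select-Object Name, DeliveryType, DesktopKind, TotalDesktops   # property names may vary by version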
Summary

We should now have a basic understanding of desktop virtualization concepts, the XenDesktop architecture, the new features in XenDesktop 7.x, and the XenDesktop delivery models based on FlexCast technology.

Resources for Article:
Further resources on this subject:
High Availability, Protection, and Recovery using Microsoft Azure [article]
Designing a XenDesktop® Site [article]
XenMobile™ Solutions Bundle [article]


Working with PowerShell

Packt
07 Sep 2015
17 min read
In this article, you will cover:

Retrieving system information – Configuration Service cmdlets
Administering hosts and machines – Host and MachineCreation cmdlets
Managing additional components – StoreFront Admin and Logging cmdlets

Introduction

With hundreds or thousands of hosts to configure and machines to deploy, configuring all the components manually can be difficult. As with previous XenDesktop releases, XenDesktop 7.6 ships with an integrated set of PowerShell modules. With these, IT technicians can reduce the time required to perform management tasks by creating PowerShell scripts, which can be used to deploy, manage, and troubleshoot most of the XenDesktop components at scale. Working with PowerShell instead of the XenDesktop GUI gives you more flexibility in the kinds of operations you can execute, with a set of additional features to use during the infrastructure creation and configuration phases.

Retrieving system information – Configuration Service cmdlets

In this recipe, we will use and explain a general-purpose set of PowerShell cmdlets: the Configuration Service category. It is used to retrieve general configuration parameters and to obtain information about the implementation of the XenDesktop Configuration Service.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in the following way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will explain and execute the commands associated with the XenDesktop system and services configuration area:

Connect to one of the Desktop Broker servers, by using a Remote Desktop connection, for instance.
Right-click the PowerShell icon on the Windows taskbar and select the Run as Administrator option.
Load the Citrix PowerShell modules by typing the following command and then pressing the Enter key:

Asnp Citrix*

As an alternative to the Asnp command, you can use the Add-PSSnapin command.

Retrieve the active and configured Desktop Controller features by running the following command:

Get-ConfigEnabledFeature

To retrieve the current status of the Configuration Service, run the following command. The output will be OK in the absence of configuration issues:

Get-ConfigServiceStatus

To get the connection string used by the Configuration Service to connect to the XenDesktop database, run the following command:

Get-ConfigDBConnection

Starting from the previously received output, it's possible to configure the connection string that lets the Configuration Service use the system DB. For this command, you have to specify the Server, Initial Catalog, and Integrated Security parameters:

Set-ConfigDBConnection -DBConnection "Server=<ServerName\InstanceName>; Initial Catalog=<DatabaseName>; Integrated Security=<True | False>"

Starting from an existing Citrix database, you can generate a SQL procedure file to use as a backup to recreate the database.
Run the following command to complete this task, specifying the DatabaseName and ServiceGroupName parameters:

Get-ConfigDBSchema -DatabaseName <DatabaseName> -ServiceGroupName <ServiceGroupName> > <Path>\<FileName>.sql

You need to configure a destination database with the same name as that of the source DB, otherwise the script will fail!

To retrieve information about the active Configuration Service objects (Instance, Service, and Service Group), run the following three commands respectively:

Get-ConfigRegisteredServiceInstance
Get-ConfigService
Get-ConfigServiceGroup

To test a set of operations that check the status of the Configuration Service, run the following script:

#------------ Script - Configuration Service
#------------ Define Variables
$Server_Conn = "SqlDatabaseServer.xdseven.local\CITRIX,1434"
$Catalog_Conn = "CitrixXD7-Site-First"
#------------
Write-Host "XenDesktop - Configuration Service CmdLets"
#---------- Clear the existing Configuration Service DB connection
$Clear = Set-ConfigDBConnection -DBConnection $null
Write-Host "Clearing any previous DB connection - Status: " $Clear
#---------- Set the Configuration Service DB connection string
$New_Conn = Set-ConfigDBConnection -DBConnection "Server=$Server_Conn; Initial Catalog=$Catalog_Conn; Integrated Security=$true"
Write-Host "Configuring the DB string connection - Status: " $New_Conn
$Configured_String = Get-ConfigDBConnection
Write-Host "The new configured DB connection string is: " $Configured_String

You have to save this script with the .ps1 extension in order to invoke it with PowerShell. Be sure to change the parameters specific to your infrastructure, so that you are able to run the script. This is shown in the following screenshot:

How it works...

The Configuration Service cmdlets of XenDesktop PowerShell permit the management of the Configuration Service and its related information: the metadata for the entire XenDesktop infrastructure, the service instances registered within the VDI architecture, and the collections of these services, called Service Groups. This set of commands offers the ability to retrieve and check the DB connection string used to contact the configured XenDesktop SQL Server database. These operations are performed with the Get-ConfigDBConnection command (to retrieve the current configuration) and the Set-ConfigDBConnection command (to configure the DB connection string); both commands use the DB server name with the instance name, the DB name, and Integrated Security as information fields.

In the attached script, we regenerated a database connection string. To be sure we could recreate it, first of all we cleared any existing connection, setting it to null (see the command associated with the $Clear variable); then we defined the $New_Conn variable, using the Set-ConfigDBConnection command. All the parameters are defined at the top of the script, in the form of variables. Use the Write-Host command to echo results to the standard output.

There's more...

In some cases, you may need to retrieve the state of the registered services, in order to verify their availability. You can use the Test-ConfigServiceInstanceAvailability cmdlet, which tells you whether the service is responding and reports its response time. Run the following example to test the use of this command:

Get-ConfigRegisteredServiceInstance | Test-ConfigServiceInstanceAvailability | more

Use the -ForceWaitForOneOfEachType parameter to stop the check for a service category when one of its services responds.
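Building on the preceding example, you can narrow the output to services that are not responding. This is a hedged sketch: the name of the availability property on the returned objects is an assumption, so check the unfiltered output (or Get-Member) first and adjust the filter accordingly.

# Hedged sketch: list only the registered services that do not report as available
Asnp Citrix*
Get-ConfigRegisteredServiceInstance |
    Test-ConfigServiceInstanceAvailability |
    Where-Object { $_.Availability -ne 'Available' }   # property name assumed; verify with Get-Member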
Administering hosts and machines – Host and MachineCreation cmdlets

In this recipe, we will describe how to create the connection between the hypervisor and the XenDesktop servers, and the way to generate machines to assign to end users, all by using Citrix PowerShell.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be sure you are able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in this way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will discuss the PowerShell commands used to connect XenDesktop with the supported hypervisors, plus the creation of machines from the command line:

Connect to one of the Desktop Broker servers.
Click on the PowerShell icon on the Windows taskbar.
Load the Citrix PowerShell modules by typing the following command, and then press the Enter key:

Asnp Citrix*

To list the available hypervisor types, execute this task:

Get-HypHypervisorPlugin -AdminAddress <BrokerAddress>

To list the configured properties for the XenDesktop root-level location (XDHyp:), execute the following command:

Get-ChildItem XDHyp:\HostingUnits

Please refer to the PSPath, Storage, and PersonalvDiskStorage output fields to retrieve information on the storage configuration.

Execute the following cmdlet to add a storage resource to the XenDesktop Controller host:

Add-HypHostingUnitStorage -LiteralPath <HostPathLocation> -StoragePath <StoragePath> -StorageType <OSStorage | PersonalvDiskStorage> -AdminAddress <BrokerAddress>

To generate a snapshot for an existing VM, perform the following task:

New-HypVMSnapshot -LiteralPath <HostPathLocation> -SnapshotDescription <Description>

Use the Get-HypVMMacAddress -LiteralPath <HostPathLocation> command to list the MAC addresses of specified desktop VMs.

To provision machine instances starting from the desktop base image template, run the following command:

New-ProvScheme -ProvisioningSchemeName <SchemeName> -HostingUnitName <HypervisorServer> -IdentityPoolName <PoolName> -MasterImageVM <BaseImageTemplatePath> -VMMemoryMB <MemoryAssigned> -VMCpuCount <NumberofCPU>

To specify the creation of instances with the Personal vDisk technology, use the -UsePersonalVDiskStorage option.

After the creation process, retrieve the provisioning scheme information by running the following command:

Get-ProvScheme -ProvisioningSchemeName <SchemeName>

To modify the resources assigned to desktop instances in a provisioning scheme, use the Set-ProvScheme cmdlet. The permitted parameters are -ProvisioningSchemeName, -VMCpuCount, and -VMMemoryMB.

To update the desktop instances to the latest version of the desktop base image template, run the following cmdlet:

Publish-ProvMasterVmImage -ProvisioningSchemeName <SchemeName> -MasterImageVM <BaseImageTemplatePath>

If you do not want to keep the pre-update instance version to use as a restore checkpoint, use the -DoNotStoreOldImage option.

To create machine instances based on the previously configured provisioning scheme for an MCS architecture, run this command:

New-ProvVM -ProvisioningSchemeName <SchemeName> -ADAccountName "Domain\MachineAccount"

Use the -FastBuild option to make the machine creation process faster. On the other hand, you cannot start up the machines until the process has been completed.
Retrieve the configured desktop instances by using the next cmdlet:

Get-ProvVM -ProvisioningSchemeName <SchemeName> -VMName <MachineName>

To remove an existing virtual desktop, use the following command:

Remove-ProvVM -ProvisioningSchemeName <SchemeName> -VMName <MachineName> -AdminAddress <BrokerAddress>

The next script combines the use of some of the commands listed in this recipe:

#------------ Script - Hosting + MCS
#-----------------------------------
#------------ Define Variables
$LitPath = "XDHyp:\HostingUnits\VMware01"
$StorPath = "XDHyp:\HostingUnits\VMware01\datastore1.storage"
$Controller_Address = "192.168.110.30"
$HostUnitName = "Vmware01"
$IDPool = $(Get-AcctIdentityPool -IdentityPoolName VDI-DESKTOP)
$BaseVMPath = "XDHyp:\HostingUnits\VMware01\VMXD7-W8MCS-01.vm"
#------------ Creating a storage location
Add-HypHostingUnitStorage -LiteralPath $LitPath -StoragePath $StorPath -StorageType OSStorage -AdminAddress $Controller_Address
#---------- Creating a Provisioning Scheme
New-ProvScheme -ProvisioningSchemeName Deploy_01 -HostingUnitName $HostUnitName -IdentityPoolName $IDPool.IdentityPoolName -MasterImageVM $BaseVMPath\T0-Post.snapshot -VMMemoryMB 4096 -VMCpuCount 2 -CleanOnBoot
#---------- List the VMs configured on the hypervisor host
dir $LitPath\*.vm
exit

How it works...

The Host and MachineCreation cmdlet groups manage the interfacing with the hypervisor hosts, in terms of machines and storage resources. This allows you to create the desktop instances to assign to end users, starting from an existing and mapped desktop virtual machine. The Get-HypHypervisorPlugin command retrieves and lists the available hypervisors that can be used to deploy virtual desktops and to configure the storage types; you need to configure an operating system storage area or a Personal vDisk storage zone. The way to map an existing storage location from the hypervisor to the XenDesktop Controller is by running the Add-HypHostingUnitStorage cmdlet. In this case, you have to specify the destination path on which the storage object will be created (LiteralPath), the source storage path on the hypervisor machine(s) (StoragePath), and the StorageType previously discussed. The storage types are in the form XDHyp:\HostingUnits\<UnitName>. To list all the configured storage objects, execute the following command:

dir XDHyp:\HostingUnits\<UnitName>\*.storage

After configuring the storage area, we discussed the Machine Creation Services (MCS) architecture. This cmdlet collection provides commands to generate VM snapshots from which we can deploy desktop instances (New-HypVMSnapshot), specifying a name and a description for the generated disk snapshot. Starting from the available disk image, the New-ProvScheme command lets you create a resource provisioning scheme, on which you specify the desktop base image and the resources to assign to the desktop instances (in terms of CPU and RAM, with -VMCpuCount and -VMMemoryMB), whether these virtual desktops are generated in non-persistent mode (the -CleanOnBoot option), and whether they use the Personal vDisk technology (-UsePersonalVDiskStorage). It's possible to update the deployed instances to the latest base image update through the use of the Publish-ProvMasterVmImage command.
In the generated script, we located all the main storage locations (the $LitPath and $StorPath variables) needed to create a provisioning scheme; then we implemented a provisioning procedure for a desktop based on an existing base image snapshot, with two vCPUs and 4 GB of RAM for the delivered instances, which will be cleaned every time they stop and start (by using the -CleanOnBoot option). You can navigate the local and remote storage paths configured with the XenDesktop Broker machine; to list an object category (such as VM or Snapshot), you can execute this command:

dir XDHyp:\HostingUnits\<UnitName>\*.<category>

There's more...

The discussed cmdlets also offer a technique to protect a virtual desktop from accidental deletion or unauthorized use. The MachineCreation cmdlet group provides a command that allows you to lock critical desktops: Lock-ProvVM. This cmdlet requires as parameters the name of the scheme to which the desktops refer (-ProvisioningSchemeName) and the ID of the virtual desktop to lock (-VMID). You can retrieve the virtual machine ID by running the Get-ProvVM command discussed previously. To revert the machine lock, and free the desktop instance from accidental deletion or improper use, you have to execute the Unlock-ProvVM cmdlet, using the same parameters shown for the lock procedure.
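A short, hedged example of the lock/unlock flow just described, reusing the Deploy_01 scheme from the earlier script. The VM name is a placeholder and the name of the ID property on the Get-ProvVM output is an assumption, so verify it with Get-Member before relying on it.

# Hedged sketch: protect a critical desktop from deletion, then release the lock
Asnp Citrix*
$vm = Get-ProvVM -ProvisioningSchemeName Deploy_01 -VMName "XD7-W8MCS-01"   # VM name is illustrative
Lock-ProvVM   -ProvisioningSchemeName Deploy_01 -VMID $vm.VMId              # property name assumed
# ...later, when the protection is no longer needed:
Unlock-ProvVM -ProvisioningSchemeName Deploy_01 -VMID $vm.VMId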
Managing additional components – StoreFront Admin and Logging cmdlets

In this recipe, we will explain how to manage and configure the StoreFront component by using the available Citrix PowerShell cmdlets. Moreover, we will explain how to manage and check the configuration of the system logging activities.

Getting ready

No preliminary tasks are required. You have already installed the Citrix XenDesktop PowerShell SDK during the installation of the Desktop Controller role machine(s). To be able to run a PowerShell script (in the .ps1 format), you have to enable script execution from the PowerShell prompt in this way:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

How to do it…

In this section, we will explain and execute the commands associated with the Citrix StoreFront system:

Connect to one of the Desktop Broker servers.
Click on the PowerShell icon on the Windows taskbar.
Load the Citrix PowerShell modules by typing the following command, and then press the Enter key (to execute any command, press Enter after completing the correct command syntax):

Asnp Citrix*

Retrieve the currently existing StoreFront service instances by running the following command:

Get-SfService

To limit the number of rows in the output, you can add the -MaxRecordCount <value> parameter.

To list detailed information about the StoreFront service(s) currently configured, execute the following command:

Get-SfServiceInstance -AdminAddress <ControllerAddress>

The status of the currently active StoreFront instances can be retrieved by using the Get-SfServiceStatus command. An OK output confirms correct service execution.

To list the task history associated with the configured StoreFront instances, you have to run the following command:

Get-SfTask

You can filter the information by the ID of the task you are looking for (-taskid) and sort the results with the -sortby parameter.

To retrieve the installed database schema versions, you can execute the following command:

Get-SfInstalledDBVersion

By applying the -Upgrade and -Downgrade filters, you will receive, respectively, the schemas to which the database version can be upgraded or reverted.

To modify the StoreFront configuration so that it registers its state on a different database, you can use the following command:

Set-SfDBConnection -DBConnection <DBConnectionString> -AdminAddress <ControllerAddress>

Be careful when you specify the database connection string; if it is not specified, the existing database connections and configurations will be cleared!

To check that the database connection has been correctly configured, the following command is available:

Test-SfDBConnection -DBConnection <DBConnectionString> -AdminAddress <ControllerAddress>

The second cmdlet group discussed, Logging, lets you retrieve information about the current status of the logging service. Run the following command:

Get-LogServiceStatus

To verify the language used and whether the logging service has been enabled, run the following command:

Get-LogSite

The available configurable locales are en, ja, zh-CN, de, es, and fr. The available states are Enabled, Disabled, NotSupported, and Mandatory. The NotSupported state indicates an incorrect configuration for the listed parameters.

To retrieve detailed information about the running logging service, you have to use the following command:

Get-LogService

As discussed earlier for the StoreFront commands, you can filter the output by applying the -MaxRecordCount <value> parameter.

To get all the operations logged within a specified time range, run the following command; this returns the global operations count:

Get-LogSummary -StartDateRange <StartDate> -EndDateRange <EndDate>

The date format must be the following: YYYY-MM-DD HH:MM:SS.

To list the collected operations per day in the specified time period, run the previous command in the following way:

Get-LogSummary -StartDateRange <StartDate> -EndDateRange <EndDate> -intervalSeconds 86400

The value 86400 is the number of seconds in a day.

To retrieve the connection string information for the database on which logging data is stored, execute the following command:

Get-LogDataStore

To retrieve detailed information about the high-level operations performed on the XenDesktop infrastructure, you have to run the following command:

Get-LogHighLevelOperation -Text <TextIncludedInTheOperation> -StartTime <FormattedDateAndTime> -EndTime <FormattedDateAndTime> -IsSuccessful <true | false> -User <DomainUserName> -OperationType <AdminActivity | ConfigurationChange>

The indicated filters are not mandatory. If you do not apply any filters, all the logged operations will be returned; this could be a very long output.

The same information can be retrieved for the low-level system operations in the following way:

Get-LogLowLevelOperation -StartTime <FormattedDateAndTime> -EndTime <FormattedDateAndTime> -IsSuccessful <true | false> -User <DomainUserName> -OperationType <AdminActivity | ConfigurationChange>

In the How it works... section, we will explain the difference between the high-level and low-level operations.
To log when a high-level operation starts and stops, use the following two commands respectively:

Start-LogHighLevelOperation -Text <OperationDescriptionText> -Source <OperationSource> -StartTime <FormattedDateAndTime> -OperationType <AdminActivity | ConfigurationChange>
Stop-LogHighLevelOperation -HighLevelOperationId <OperationID> -IsSuccessful <true | false>

The Stop-LogHighLevelOperation call must refer to an existing started high-level operation, because they are related tasks.

How it works...

Here, we have discussed two PowerShell command collections that are new for the XenDesktop 7 versions: the cmdlets related to the StoreFront platform, and the activity Logging set of commands. The first collection is quite limited in terms of operations compared with the other cmdlets discussed; in fact, the only actions permitted with the StoreFront PowerShell set of commands are retrieving configurations and settings for the configured stores and the linked database. More can be done to modify existing StoreFront clusters, by using the Get-SfCluster, Add-SfServerToCluster, New-SfCluster, and Set-SfCluster set of operations.

More interesting is the PowerShell Logging collection. In this case, you can retrieve all the system-logged data, which falls into two principal categories:

High-level operations: These tasks group all the system configuration changes that are executed by using Desktop Studio, Desktop Director, or Citrix PowerShell.
Low-level operations: This category relates to all the system configuration changes that are executed by a service and not by using the system software's consoles.

With the low-level operations command, you can filter for a specific high-level operation to which the low-level operation refers, by specifying the -HighLevelOperationId parameter. This cmdlet category also gives you the ability to track the start and stop of a high-level operation, through Start-LogHighLevelOperation and Stop-LogHighLevelOperation. In the second case, you have to specify the previously started high-level operation.

There's more...

If there is too much information in the log store, you have the ability to clear all of it. To purge the log entries, we use the following command:

Remove-LogOperation -UserName <DBAdministrativeCredentials> -Password <DBUserPassword> -StartDateRange <StartDate> -EndDateRange <EndDate>

The unencrypted -Password parameter can be substituted with -SecurePassword, the password given in secure string form. The credentials must be database administrative credentials, with delete permissions on the destination database. This is a non-reversible operation, so ensure that you really want to delete the logs in the specified time range, or verify that you have some form of data backup.

Resources for Article:
Further resources on this subject:
Working with Virtual Machines [article]
Virtualization [article]
Upgrading VMware Virtual Infrastructure Setups [article]


Storage Policy-based Management

Packt
07 Sep 2015
10 min read
In this article by Jeffery Taylor, the author of the book VMware Virtual SAN Cookbook, we start from a functional VSAN cluster, so we can leverage the power of Storage Policy-Based Management (SPBM) to control how we deploy our virtual machines (VMs). We will discuss the following topics, with a recipe for each:

Creating storage policies
Applying storage policies to a new VM or a VM deployed from a template
Applying storage policies to an existing VM migrating to VSAN
Viewing a VM's storage policies and object distribution
Changing storage policies on a VM already residing in VSAN
Modifying existing storage policies

Introduction

SPBM is where the administrative power of converged infrastructure becomes apparent. You can define VM thick provisioning on a sliding scale, define how fault tolerant the VM's storage should be, make distribution and performance decisions, and more. RAID-type decisions for VMs resident on VSAN are also driven through SPBM. VSAN can provide RAID-1 (mirrored) and RAID-0 (striped) configurations, or a combination of the two in the form of RAID-10 (a mirrored set of stripes). All of this is done on a per-VM basis. As the storage and compute infrastructures are now converged, you can define how you want a VM to run in the most logical place: at the VM level or its disks. The need for datastore-centric configuration, storage tiering, and so on is obviated by the power of SPBM.

Technically, the configuration of storage policies is optional. If you choose not to define any storage policies, VSAN will create VMs and disks according to its default cluster-wide storage policy. While this provides generic levels of fault tolerance and performance, it is strongly recommended to create and apply storage policies according to your administrative needs. Much of the power given to you through a converged infrastructure and VSAN lies in the policy-driven and VM-centric nature of policy-based management. While some of these options will be discussed throughout the following recipes, it is strongly recommended that you review the storage-policy appendix to familiarize yourself with all the storage-policy options before continuing.

Creating VM storage policies

Before a storage policy can be applied, it must be created. Once created, a storage policy can be applied to any part of any VM resident on VSAN-connected storage. You will probably want to create a number of storage policies to suit your production needs. Once created, all storage policies are tracked by vCenter and enforced and maintained by VSAN itself. Because of this, your policy selections remain valid and production continues even in the event of a vCenter outage. In the example policy that we will create in this recipe, the VM policy will be defined as tolerating the failure of a single VSAN host. The VM will not be required to stripe across multiple disks, and it will be 30 percent thick-provisioned.

Getting ready

Your VSAN should be deployed and functional as per the previous article. You should be logged in to the vSphere Web Client as an administrator or as a user with rights to create, modify, apply, and delete storage policies.

How to do it…

From the vSphere 5.5 Web Client, navigate to Home | VM Storage Policies. From the vSphere 6.0 Web Client, navigate to Home | Policies and Profiles | VM Storage Policies.
Initially, there will be no storage policies defined unless you have already created storage policies for other solutions.
This is normal. In VSAN 6.0, you will have the VSAN default policy defined here prior to the creation of your own policies. Click the Create a new VM storage policy button: A wizard will launch to guide you through the process. If you have multiple vCenter Server systems in linked mode, ensure that you have selected the appropriate vCenter Server system from the drop-down menu. Give your storage policy a name that will be useful to you and a description of what the policy does. Then, click Next: The next page describes the concept of rule sets and requires no interaction. Click the Next button to proceed. When creating the rule set, ensure that you select VSAN from the Rules based on vendor-specific capabilities drop-down menu. This will expose the <Add capability> button. Select Number of failures to tolerate from the drop-down menu and specify a value of 1: Add other capabilities as desired. For this example, we will specify a single stripe with 30% space reservation. Once all required policy definitions have been applied, click Next: The next page will tell you which datastores are compatible with the storage policy you have created. As this storage policy is based on specific capabilities exposed by VSAN, only your VSAN datastore will appear as a matching resource. Verify that the VSAN datastore appears, and then click Next. Review the summary page and ensure that the policy is being created on the basis of your specifications. When finished, click Finish. The policy will be created. Depending on the speed of your system, this operation should be nearly instantaneous but may take several seconds to finish. How it works… The VSAN-specific policy definitions are presented through the VMware Profile-Driven Storage service, which runs alongside vCenter Server. The Profile-Driven Storage service determines which policy definitions are available by communicating with the ESXi hosts that are enabled for VSAN. Once VSAN is enabled, each host activates a VASA provider daemon, which is responsible for communicating policy options to and receiving policy instructions from the Profile-Driven Storage service. There's more… The nature of the storage policy definitions enabled by VSAN is additive. No policy option mutually excludes any other, and they can be combined in any way that your policy requirements demand. For example, specifying a number of failures to tolerate will not preclude the specification of a cache reservation. See also For a full explanation of all policy options and when you might want to use them, refer to the storage-policy appendix mentioned in the introduction. Applying storage policies to a new VM or a VM deployed from a template When creating a new VM on VSAN, you will want to apply a storage policy to that VM according to your administrative needs. As VSAN is fully integrated into vSphere and vCenter, this is a straightforward option during the normal VM deployment wizard. The workflow described in this recipe is for creating a new VM on VSAN. If deployed from a template, the wizard process is functionally identical from step 4 of the How to do it… section in this recipe. Getting ready You should be logged in to the vSphere Web Client as an administrator or a user authorized to create virtual machines. You should have at least one storage policy defined (see the previous recipe). How to do it… Navigate to Home | Hosts and Clusters | Datacenter | Cluster. Right-click the cluster, and then select New Virtual Machine…: In the subsequent screen, select Create a new virtual machine. Proceed through the wizard up to Step 2b.
For the compute resource, ensure that you select your VSAN cluster or one of its hosts:   On the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection:   Complete the rest of the VM-deployment wizard as you normally would to select the guest OS, resources, and so on. Once completed, the VM will deploy and it will populate in the inventory tree on the left side. The VM summary will reflect that the VM resides on the VSAN storage:   How it works… All VMs resident on the VSAN storage will have a storage policy applied. Selecting the appropriate policy during VM creation means that the VM will be how you want it to be from the beginning of the VM's life. While policies can be changed later, this could involve a reconfiguration of the object, which can take time to complete and can result in increased disk and network traffic once it is initiated. Careful decision making during deployment can help you save time later. Applying storage policies to an existing VM migrating to VSAN When introducing VSAN into an existing infrastructure, you may have existing VMs that reside on the external storage, such as NFS, iSCSI, or Fibre Channel (FC). When the time comes to move these VMs into your converged infrastructure and VSAN, we will have to make policy decisions about how these VMs should be handled. Getting ready You should be logged into vSphere Web Client as an administrator or a user authorized to create, migrate, and modify VMs. How to do it… Navigate to Home | Hosts and Clusters | Datacenter | Cluster. Identify the VM that you wish to migrate to VSAN. For the example used in this recipe, we will migrate the VM called linux-vm02 that resides on NFS Datastore. Right-click the VM and select Migrate… from the context menu:   In the resulting page, select Change datastore or Change both host and datastore as applicable, and then click Next. If the VM does not already reside on one of your VSAN-enabled hosts, you must choose the Change both host and datastore option for your migration. In the next step, select one of the VM storage policies that you created in the previous recipe. Once you select a VSAN storage policy, only the VSAN datastore will appear as compatible. Any other datastores that you have present will be ineligible for selection:   You can apply different storage policies to different VM disks. This can be done by performing the following steps: Click on the Advanced >> button to reveal various parts of the VM: Once clicked, the Advanced >> button will change to << Basic.   In the Storage column, click the existing datastore to reveal a drop-down menu. Click Browse. In the subsequent window, select the desired policy from the VM Storage Policy drop-down menu. You will find that the only compatible datastore is your VSAN datastore. Click OK:   Repeat the preceding step as needed for other disks and the VM configuration file. After performing the preceding steps, click on Next. Review your selection on the final page, and then click Finish. Migrations can potentially take a long time, depending on how large the VM is, the speed of the network, and other considerations. Please monitor the progress of your VM relocation tasks using the Recent Tasks pane:   Once the migration task finishes, the VM's Summary tab will reflect that the datastore is now the VSAN datastore. 
For the example of this VM, the VM moved from NFS Datastore to vsanDatastore:   How it works… Much like the new VM workflow, we select the storage policy that we want to use during the migration of the VM to VSAN. However, unlike the deploy-from-template or VM-creation workflows, this process requires none of the VM configuration steps. We only have to select the storage policy, and then SPBM instructs VSAN how to place and distribute the objects. All object-distribution activities are completely transparent and automatic. This process can be used to change the storage policy of a VM already resident in the VSAN cluster, but it is more cumbersome than modifying the policies by other means. Summary In this article, we learned that storage policies give you granular control over how the data for any given VM or VM disk is handled. Storage policies allow you to define how many mirrors (RAID-1) and how many stripes (RAID-0) are associated with any given VM or VM disk. Resources for Article: Further resources on this subject: Working with Virtual Machines [article] Virtualization [article] Upgrading VMware Virtual Infrastructure Setups [article]
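The same policy lifecycle can also be scripted with PowerCLI. The following is a minimal sketch, assuming the SPBM cmdlets (PowerCLI 5.5 R2 or later) and a connection to the vCenter Server that manages the VSAN cluster; the policy name FTT-1 is illustrative, linux-vm02 is the VM from the migration recipe above, and the capability name should be verified with Get-SpbmCapability in your environment:
# Build a policy that tolerates the failure of one VSAN host
$cap = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$rule = New-SpbmRule -Capability $cap -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "FTT-1" -AnyOfRuleSets $ruleSet
# Apply the policy to an existing VM; repeat with Get-HardDisk -VM $vm for its virtual disks
$vm = Get-VM -Name "linux-vm02"
$policy = Get-SpbmStoragePolicy -Name "FTT-1"
Get-SpbmEntityConfiguration $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
# Review the applied policy and its compliance status
Get-SpbmEntityConfiguration $vm
This mirrors the Web Client workflows described in the recipes above and is useful when many VMs need the same policy.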

Storage Configurations

Packt
07 Sep 2015
21 min read
In this article by Wasim Ahmed, author of the book Proxmox Cookbook, we will cover topics such as local storage, shared storage, Ceph storage, and a recipe which shows you how to configure the Ceph RBD storage. (For more resources related to this topic, see here.) A storage is where virtual disk images of virtual machines reside. There are many different types of storage systems with many different features, performances, and use case scenarios. Whether it is a local storage configured with direct attached disks or a shared storage with hundreds of disks, the main responsibility of a storage is to hold virtual disk images, templates, backups, and so on. Proxmox supports different types of storages, such as NFS, Ceph, GlusterFS, and ZFS. Different storage types can hold different types of data. For example, a local storage can hold any type of data, such as disk images, ISO/container templates, backup files and so on. A Ceph storage, on the other hand, can only hold a .raw format disk image. In order to provide the right type of storage for the right scenario, it is vital to have a proper understanding of different types of storages. The full details of each storage is beyond the scope of this article, but we will look at how to connect them to Proxmox and maintain a storage system for VMs. Storages can be configured into two main categories: Local storage Shared storage Local storage Any storage that resides in the node itself by using directly attached disks is known as a local storage. This type of storage has no redundancy other than a RAID controller that manages an array. If the node itself fails, the storage becomes completely inaccessible. The live migration of a VM is impossible when VMs are stored on a local storage because during migration, the virtual disk of the VM has to be copied entirely to another node. A VM can only be live-migrated when there are several Proxmox nodes in a cluster and the virtual disk is stored on a shared storage accessed by all the nodes in the cluster. Shared storage A shared storage is one that is available to all the nodes in a cluster through some form of network media. In a virtual environment with shared storage, the actual virtual disk of the VM may be stored on a shared storage, while the VM actually runs on another Proxmox host node. With shared storage, the live migration of a VM becomes possible without powering down the VM. Multiple Proxmox nodes can share one shared storage, and VMs can be moved around since the virtual disk is stored on different shared storages. Usually, a few dedicated nodes are used to configure a shared storage with their own resources apart from sharing the resources of a Proxmox node, which could be used to host VMs. In recent releases, Proxmox has added some new storage plugins that allow users to take advantage of some great storage systems and integrating them with the Proxmox environment. Most of the storage configurations can be performed through the Proxmox GUI. Ceph storage Ceph is a powerful distributed storage system, which provides RADOS Block Device (RBD) object storage, Ceph filesystem (CephFS), and Ceph Object Storage. Ceph is built with a very high-level of reliability, scalability, and performance in mind. A Ceph cluster can be expanded to several petabytes without compromising data integrity, and can be configured using commodity hardware. Any data written to the storage gets replicated across a Ceph cluster. Ceph was originally designed with big data in mind. 
Unlike other types of storage, the bigger a Ceph cluster becomes, the better it performs. However, it can also be used in small environments just as easily for data redundancy. Lower performance can be mitigated by using SSDs to store Ceph journals. Refer to the OSD Journal subsection in this section for information on journals. The built-in self-healing features of Ceph provide unprecedented resilience without a single point of failure. In a multinode Ceph cluster, the storage can tolerate not just hard drive failure, but also an entire node failure without losing data. Currently, only an RBD block device is supported in Proxmox. Ceph comprises a few components that are crucial for you to understand in order to configure and operate the storage. The following components are what Ceph is made of: Monitor daemon (MON), Object Storage Daemon (OSD), OSD Journal, Metadata Server (MDS), Controlled Replication Under Scalable Hashing map (CRUSH map), Placement Group (PG), and Pool. MON Monitor daemons form quorums for a Ceph distributed cluster. There must be a minimum of three monitor daemons configured on separate nodes for each cluster. Monitor daemons can also be configured as virtual machines instead of using physical nodes. Monitors require a very small amount of resources to function, so the allocated resources can be very small. A monitor can be set up through the Proxmox GUI after the initial cluster creation. OSD Object Storage Daemons (OSDs) are responsible for the storage and retrieval of actual cluster data. Usually, each physical storage device, such as an HDD or SSD, is configured as a single OSD. Although several OSDs can be configured on a single physical disk, it is not recommended for any production environment at all. Each OSD requires a journal device where data first gets written and later gets transferred to the actual OSD. By storing journals on fast-performing SSDs, we can increase the Ceph I/O performance significantly. Thanks to the Ceph architecture, as more and more OSDs are added into the cluster, the I/O performance also increases. An SSD journal works very well on small clusters with about eight OSDs per node. OSDs can be set up through the Proxmox GUI after the initial MON creation. OSD Journal Every single piece of data that is destined for a Ceph OSD first gets written to a journal. A journal allows OSD daemons to write smaller chunks, giving the actual drives more time to commit the writes. In simpler terms, all data gets written to journals first, and the journal filesystem then sends the data to an actual drive for permanent writes. So, if the journal is kept on a fast-performing drive, such as an SSD, incoming data will be written at a much higher speed, while behind the scenes, slower performing SATA drives can commit the writes at a slower speed. Journals on SSD can really improve the performance of a Ceph cluster, especially if the cluster is small, with only a few terabytes of data. It should also be noted that if a journal drive fails, it will take down all the OSDs whose journals are kept on that drive. In some environments, it may be necessary to put two SSDs in a mirrored RAID and use the mirror for journaling. In a large environment with more than 12 OSDs per node, performance can actually be gained by collocating journals on the same OSD drives instead of using an SSD for journaling. MDS The Metadata Server (MDS) daemon is responsible for providing the Ceph filesystem (CephFS) in a Ceph distributed storage system.
MDS can be configured on separate nodes or coexist with already configured monitor nodes or virtual machines. Although CephFS has come a long way, it is still not fully recommended for use in a production environment. It is worth mentioning here that there are many virtual environments actively running MDS and CephFS without any issues. Currently, it is not recommended to configure more than two MDSs in a Ceph cluster. CephFS is not currently supported by a Proxmox storage plugin. However, it can be configured as a local mount and then connected to a Proxmox cluster through the Directory storage. MDS cannot be set up through the Proxmox GUI as of version 3.4. CRUSH map A CRUSH map is the heart of the Ceph distributed storage. The algorithm for storing and retrieving user data in Ceph clusters is laid out in the CRUSH map. CRUSH allows a Ceph client to directly access an OSD. This eliminates a single point of failure and any physical limitations of scalability, since there are no centralized servers or controllers to manage data in and out. Throughout Ceph clusters, CRUSH maintains a map of all MONs and OSDs. CRUSH determines how data should be chunked and replicated among OSDs spread across several local nodes or even nodes located remotely. A default CRUSH map is created on a freshly installed Ceph cluster. This can be further customized based on user requirements. For smaller Ceph clusters, this map should work just fine. However, when Ceph is deployed with very big data in mind, this map should be customized. A customized map will allow better control of a massive Ceph cluster. To operate Ceph clusters of any size successfully, a clear understanding of the CRUSH map is mandatory. For more details on the Ceph CRUSH map, visit http://ceph.com/docs/master/rados/operations/crush-map/ and http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map. As of Proxmox VE 3.4, we cannot customize the CRUSH map through the Proxmox GUI. It can only be viewed through the GUI and edited through the CLI. PG In Ceph storage, data objects are aggregated into groups determined by CRUSH algorithms. This is known as a Placement Group (PG), since CRUSH places each group on various OSDs depending on the replication level set in the CRUSH map and the number of OSDs and nodes. By tracking a group of objects instead of each object itself, a massive amount of hardware resources can be saved. It would be impossible to track millions of individual objects in a cluster. The following diagram shows how objects are aggregated in groups and how PGs relate to OSDs: To balance available hardware resources, it is necessary to assign the right number of PGs. The number of PGs should vary depending on the number of OSDs in a cluster. The following is a table of PG suggestions made by Ceph developers:
Number of OSDs: Number of PGs
Less than 5 OSDs: 128
Between 5 and 10 OSDs: 512
Between 10 and 50 OSDs: 4096
Selecting the proper number of PGs is crucial, since each PG will consume node resources. Too many PGs for the wrong number of OSDs will actually penalize the resource usage of an OSD node, while very few assigned PGs in a large cluster will put data at risk. A rule of thumb is to start with the lowest number of PGs possible, then increase them as the number of OSDs increases. For details on Placement Groups, visit http://ceph.com/docs/master/rados/operations/placement-groups/.
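As a quick cross-check before using the calculator mentioned next, the Ceph documentation also gives a commonly cited rule of thumb for a single pool: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the nearest power of two. For example, the six-OSD, two-replica cluster built later in this article works out to (6 × 100) / 2 = 300, which rounds up to 512 PGs. Treat this as a starting point rather than a hard rule, and prefer the calculator for multi-pool clusters.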
There's a great PG calculator created by Ceph developers to calculate the recommended number of PGs for various sizes of Ceph clusters at http://ceph.com/pgcalc/. Pools Pools in Ceph are like partitions on a hard drive. We can create multiple pools on a Ceph cluster to separate stored data. For example, a pool named accounting can hold all the accounting department data, while another pool can store the human resources data of a company. When creating a pool, assigning the number of PGs is necessary. During the initial Ceph configuration, three default pools are created. They are data, metadata, and rbd. Deleting a pool will delete all stored objects permanently. For details on Ceph and its components, visit http://ceph.com/docs/master/. The following diagram shows a basic Proxmox+Ceph cluster: The preceding diagram shows four Proxmox nodes, three Monitor nodes, three OSD nodes, and two MDS nodes comprising a Proxmox+Ceph cluster. Note that Ceph is on a different network than the Proxmox public network. Depending on the set replication number, each incoming data object needs to be written more than once. This causes high bandwidth usage. By separating Ceph onto a dedicated network, we can ensure that the Ceph network can fully utilize the bandwidth. On advanced clusters, a third network is created only between Ceph nodes for cluster replication, thus improving network performance even further. As of Proxmox VE 3.4, the same node can be used for both Proxmox and Ceph. This provides a great way to manage all the nodes from the same Proxmox GUI. It is not advisable to put Proxmox VMs on a node that is also configured as a Ceph node. During day-to-day operations, Ceph nodes do not consume large amounts of resources, such as CPU or memory. However, when Ceph goes into rebalancing mode due to an OSD or node failure, a large amount of data replication occurs, which takes up lots of resources. Performance will degrade significantly if resources are shared by both VMs and Ceph. Ceph RBD storage can only store .raw virtual disk image files. Ceph itself does not come with a management GUI, so having the option to manage Ceph nodes through the Proxmox GUI makes administrative tasks much easier. Refer to the Monitoring the Ceph storage subsection under the How to do it... section of the Connecting the Ceph RBD storage recipe later in this article to learn how to install a great read-only GUI to monitor Ceph clusters. Connecting the Ceph RBD storage In this recipe, we are going to see how to configure a Ceph block storage with a Proxmox cluster. Getting ready The initial Ceph configuration on a Proxmox cluster must be accomplished through a CLI. After the Ceph installation, the initial configuration, and the creation of one monitor, all other tasks can be accomplished through the Proxmox GUI. How to do it... We will now see how to configure the Ceph block storage with Proxmox. Installing Ceph on Proxmox Ceph is not installed by default. Prior to configuring a Proxmox node for the Ceph role, Ceph needs to be installed and the initial configuration must be created through a CLI. The following steps need to be performed on all Proxmox nodes that will be part of the Ceph cluster: Log in to each node through SSH or a console. Configure a second network interface to create a separate Ceph network with a different subnet. Reboot the nodes to initialize the network configuration.
Using the following command, install the Ceph package on each node: # pveceph install -version giant Initializing the Ceph configuration Before Ceph is usable, we have to create the initial Ceph configuration file on one Proxmox+Ceph node. The following steps need to be performed only on one Proxmox node that will be part of the Ceph cluster: Log in to the node using SSH or a console. Run the following command to create the initial Ceph configuration: # pveceph init -network <ceph_subnet>/CIDR Run the following command to create the first monitor: # pveceph createmon Configuring Ceph through the Proxmox GUI After the initial Ceph configuration and the creation of the first monitor, we can continue with further Ceph configurations through the Proxmox GUI or simply run the Ceph Monitor creation command on other nodes. The following steps show how to create Ceph Monitors and OSDs from the Proxmox GUI: Log in to the Proxmox GUI as root or with any other administrative privilege. Select a node where the initial monitor was created in the previous steps, and then click on Ceph from the tabbed menu. The following screenshot shows a Ceph cluster as it appears after the initial Ceph configuration: Since no OSDs have been created yet, it is normal for a new Ceph cluster to show a PGs stuck and unclean error. Click on Disks on the bottom tabbed menu under Ceph to display the disks attached to the node, as shown in the following screenshot: Select an available attached disk, then click on the Create: OSD button to open the OSD dialog box, as shown in the following screenshot: Click on the Journal Disk drop-down menu to select a different device or collocate the journal on the same OSD by keeping it as the default. Click on Create to finish the OSD creation. Create additional OSDs on Ceph nodes as needed. The following screenshot shows a Proxmox node with three OSDs configured: By default, Proxmox creates OSDs with an ext3 partition. However, sometimes it may be necessary to create OSDs with different partition types due to a requirement or for performance improvement. Enter the following command format through the CLI to create an OSD with a different partition type: # pveceph createosd -fstype ext4 /dev/sdX The following steps show how to create Monitors through the Proxmox GUI: Click on Monitor from the tabbed menu under the Ceph feature. The following screenshot shows the Monitor status with the initial Ceph Monitor we created earlier in this recipe: Click on Create to open the Monitor dialog box. Select a Proxmox node from the drop-down menu. Click on the Create button to start the monitor creation process. Create a total of three Ceph monitors to establish a Ceph quorum. The following screenshot shows the Ceph status with three monitors and OSDs added: Note that even with three OSDs added, the PGs are still stuck with errors. This is because, by default, the Ceph CRUSH map is set up for two replicas. So far, we've only created OSDs on one node. For successful replication, we need to add some OSDs on the second node so that data objects can be replicated twice. Follow the steps described earlier to create three additional OSDs on the second node. After creating three more OSDs, the Ceph status should look like the following screenshot: Managing Ceph pools It is possible to perform basic tasks, such as creating and removing Ceph pools, through the Proxmox GUI. Besides these, we can check the list, status, number of PGs, and usage of the Ceph pools.
The following steps show how to check, create, and remove Ceph pools through the Proxmox GUI: Click on the Pools tabbed menu under Ceph in the Proxmox GUI. The following screenshot shows the status of the default rbd pool, which has a replica size of 1, 256 PGs, and 0% usage: Click on Create to open the pool creation dialog box. Fill in the required information, such as the name of the pool, the replica size, and the number of PGs. Unless the CRUSH map has been fully customized, the ruleset should be left at the default value 0. Click on OK to create the pool. To remove a pool, select the pool and click on Remove. Remember that once a Ceph pool is removed, all the data stored in this pool is deleted permanently. To increase the number of PGs, run the following commands through the CLI: # ceph osd pool set <pool_name> pg_num <value> # ceph osd pool set <pool_name> pgp_num <value> It is only possible to increase the PG value. Once increased, the PG value can never be decreased. Connecting RBD to Proxmox Once a Ceph cluster is fully configured, we can proceed to attach it to the Proxmox cluster. During the initial configuration file creation, Ceph also creates an authentication keyring at the /etc/ceph/ceph.client.admin.keyring path. This keyring needs to be copied and renamed to match the name of the storage ID to be created in Proxmox. Run the following commands to create a directory and copy the keyring: # mkdir /etc/pve/priv/ceph # cd /etc/ceph/ # cp ceph.client.admin.keyring /etc/pve/priv/ceph/<storage>.keyring For our storage, we are naming it rbd.keyring. After the keyring is copied, we can attach the Ceph RBD storage to Proxmox using the GUI: Click on Datacenter, then click on Storage from the tabbed menu. Click on the Add drop-down menu and select the RBD storage plugin. Enter the information as described in the following table:
ID (the name of the storage): rbd
Pool (the name of the Ceph pool): rbd
Monitor Host (the IP address and port number of the Ceph MONs; multiple MON hosts can be entered for redundancy): 172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789
User name (the default Ceph administrator): admin
Nodes (the Proxmox nodes that will be able to use the storage): All
Enable (the checkbox for enabling/disabling the storage): Enabled
Click on Add to attach the RBD storage. The following screenshot shows the RBD storage under Summary: Monitoring the Ceph storage Ceph itself does not come with any GUI to manage or monitor the cluster. We can view the cluster status and perform various Ceph-related tasks through the Proxmox GUI. There are also several third-party tools that provide a Ceph-only GUI to manage and monitor the cluster. Some provide management features, while others provide read-only features for Ceph monitoring. Ceph Dash is one such tool; it provides an appealing read-only GUI to monitor the entire Ceph cluster without logging on to the Proxmox GUI. Ceph Dash is freely available through GitHub. There are other heavyweight Ceph GUI dashboards, such as Kraken, Calamari, and others. In this section, we are only going to see how to set up the Ceph Dash cluster monitoring GUI. The following steps can be used to download and start Ceph Dash to monitor a Ceph cluster using any browser: Log in to any Proxmox node that is also a Ceph MON.
Run the following commands to download and start the dashboard: # mkdir /home/tools # apt-get install git # cd /home/tools # git clone https://github.com/Crapworks/ceph-dash # cd /home/tools/ceph-dash # ./ceph_dash.py Ceph Dash will now start listening on port 5000 of the node. If the node is behind a firewall, open port 5000 or set up port forwarding for another port in the firewall. Open any browser and enter <node_ip>:5000 to open the dashboard. The following screenshot shows the dashboard of the Ceph cluster we have created: We can also monitor the status of the Ceph cluster through the CLI using the following commands: To check the Ceph status: # ceph -s To view OSDs in different nodes: # ceph osd tree To display real-time Ceph logs: # ceph -w To display a list of Ceph pools: # rados lspools To change the number of replicas of a pool: # ceph osd pool set <pool_name> size <value> Besides the preceding commands, there are many more CLI commands to manage Ceph and perform advanced tasks. The official Ceph documentation has a wealth of information and how-to guides, along with the CLI commands to perform them. The documentation can be found at http://ceph.com/docs/master/. How it works… At this point, we have successfully integrated a Ceph cluster comprising six OSDs, three MONs, and three nodes with a Proxmox cluster. By viewing the Ceph Status page, we can get a lot of information about a Ceph cluster at a quick glance. From the previous figure, we can see that there are 256 PGs in the cluster and the total cluster storage space is 1.47 TB. A healthy cluster will have the PG status as active+clean. Based on the nature of the issue, the PGs can have various states, such as active+unclean, inactive+degraded, active+stale, and so on. To learn details about all the states, visit http://ceph.com/docs/master/rados/operations/pg-states/. By configuring a second network interface, we can separate the Ceph network from the main network. The pveceph init command creates a Ceph configuration file at /etc/pve/ceph.conf. A newly configured Ceph configuration file looks similar to the following screenshot: Since the ceph.conf configuration file is stored in pmxcfs, any changes made to it are immediately replicated to all the Proxmox nodes in the cluster. As of Proxmox VE 3.4, Ceph RBD can only store the .raw image format. No templates, containers, or backup files can be stored on the RBD block storage. Here is the content of the storage configuration file after adding the Ceph RBD storage:
rbd: rbd
        monhost 172.16.0.71:6789;172.16.0.72:6789;172.16.0.73:6789
        pool rbd
        content images
        username admin
If a situation dictates an IP address change for any node, we can simply edit this content in the configuration file to manually change the IP addresses of the Ceph MON nodes. See also To learn about Ceph in greater detail, visit http://ceph.com/docs/master/ for the official Ceph documentation Also, visit https://indico.cern.ch/event/214784/session/6/contribution/68/material/slides/0.pdf to find out why Ceph is being used at CERN to store the massive data generated by the Large Hadron Collider (LHC) Summary In this article, we came across different configurations for a variety of storage categories and got hands-on practice with the various stages in configuring the Ceph RBD storage. Resources for Article: Further resources on this subject: Deploying New Hosts with vCenter [article] Let's Get Started with Active Directory [article] Basic Concepts of Proxmox Virtual Environment [article]
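The attached storage can also be verified from the command line of any Proxmox node. The following is a minimal sketch using standard Proxmox and Ceph CLI commands, reusing the pool and storage names from the example above:
# pvesm status
# ceph osd pool get rbd size
# ceph osd pool get rbd pg_num
# ceph health detail
# ceph df
The first command confirms that Proxmox sees the rbd storage as active; the two pool get commands confirm that the replica size and PG count match what was configured; and ceph health detail together with ceph df give the overall health and capacity at a glance.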

Deploying App-V 5 in a Virtual Environment

Packt
12 Aug 2015
10 min read
In this article written by James Preston, author of the book Microsoft Application Virtualization Cookbook, we will cover the following topics: Enabling the App-V shared content store mode Publishing applications through Microsoft RemoteApp Pre-caching applications in the local store Publishing applications through Citrix StoreFront (For more resources related to this topic, see here.) App-V 5 is the perfect companion for your virtual session or desktop delivery environment, allowing you to abstract applications from the user and desktop, as shown in the following image, and, in turn, reducing infrastructure requirements through features such as shared content store mode.   In this article, we will cover how to deploy App-V5 in these environments. Enabling the App-V shared content store mode In this recipe, we will cover enabling the App-V shared content store mode, which prevents the caching of App-V files on a client so that the application is launched from the server hosting the application directly. This feature is ideal for environments where there is ample network bandwidth between remote desktop session hosts (or client virtual machines in a VDI deployment) and where administrators are looking to reduce the overall need for storage by the hosts. While some files are still cached on the local machine (for example, for shortcuts or Shell extensions), the following screenshot shows the amount of storage saved on an Office 2013 deployment, where the shared content store mode is turned on (the screenshot on the right):   With the shared content store mode enabled, you can check the amount of storage space used by a package by checking the size of the individual package's folders at the following path on a client where the package is deployed (where Package ID is the GUID assigned to that package): C:ProgramDataApp-V<Package ID>. Getting ready To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS). The server RDS must also have the App-V client and any prerequisites deployed on it. How to do it… The following list shows you the high-level tasks involved in this recipe and the tasks required to complete the recipe (all of the actions in this recipe will take place on the server DC): Link the App-V5 Settings Group Policy Object to the Remote Desktop Server's OU. Create a Group Policy object for the server RDS. Enable the shared content store mode within that policy. The implementation of the preceding tasks is as follows: On the server DC, load the Group Policy Management console. Expand the tree structure to display the Remote Desktop Server's Organizational Unit and click on Link an Existing GPO…. From the window that appears, select the App-V 5 Settings policy and click on OK. Next, right-click on the OU and select Create a GPO in this domain, and Link it here.... Set the name of the policy as App-V 5 Shared Content Store and click on OK. Let's take a look at the following screenshot: Right-click on the policy you have just created and click on Edit…. In the window that appears, right-click on App-V 5 Shared Content Store and click on Properties. Then, tick the Disable User Configuration settings box and click on OK. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Streaming and double-click on Shared Content (SCS) mode. Set the policy to Enabled and click on OK. 
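In a lab where editing Group Policy is not practical, the same client setting can be toggled directly with the App-V 5 client PowerShell module; this is only a per-host alternative and assumes an elevated PowerShell session on the RDS server with the App-V client installed:
Import-Module AppvClient
# Enable shared content store mode on this client (1 = enabled, 0 = disabled)
Set-AppvClientConfiguration -SharedContentStoreMode 1
Note that a value delivered by Group Policy overrides a locally set value at the next policy refresh, which is why this recipe drives the setting through the App-V 5 Shared Content Store GPO.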
There's more… To verify that the setting is applied on the server RDS, open a PowerShell session and run the following command: Get-AppvClientConfiguration If the SharedContentStoreMode value is 1 and the SetByGroupPolicy value is True, then the policy is correctly applied. Publishing applications through Microsoft RemoteApp In this recipe, we will publish the Audacity package to the RDS server to be accessed by users through the Remote Desktop Web Access. Getting ready To complete these steps, you will need to deploy a Remote Desktop Services environment (on the server RDS). How to do it… The following list shows you the high-level tasks involved in this recipe and the tasks required to complete the recipe (all of the actions in this recipe will take place on the server RDS): Create a Security Group for your remote desktop session hosts. Publish the Audacity package to that Security Group through the App-V Management console. Publish the Audacity package through Server Manager. The implementation of the preceding tasks is as follows: On the server DC, launch the Active Directory Users and Computers console and navigate to demo.org | Domain Groups, and create a new Security Group called RDS Session Hosts. Add the server RDS to the group that you just created. On your Windows 8 client PC, log in to the App-V Management console as Sam Adams, select the Audacity package and click on the Edit option next to the AD ACCESS option. Under FIND VALID ACTIVE DIRECTORY GROUP AND GRANT ACCESS, enter demo.orgRDS Session Hosts and click on Check. In the drop-down menu that appears, select RDS Session Hosts and click on Grant Access. On the server RDS, wait for the App-V Publishing Refresh to occur (or force the process manually) for the Audacity shortcut to appear on the desktop. Launch Server Manager, and from the left-hand side bar, select Remote Desktop. From the left-hand side, select QuickSessionCollection (the collection created by default). Under REMOTEAPP PROGRAMS, navigate to Tasks | Publish RemoteApp Programs. In the window that appears, tick the box next to Audacity and click on Next, as shown in the following screenshot: Note that the path to the Audacity application is pointing at the App-V Installation root in %SYSTEMDRIVE%ProgramDataMicrosoftAppV. Review the confirmation window and click on Publish. On your Windows 8 client, open Internet Explorer and browse to https://rds.demo.org/RDWeb, accepting any invalid SSL certificate prompts and allowing the Remote Desktop plugin to run. Log in as Sam Adams and launch the Audacity application.   There's more… It is possible to limit applications within a remote desktop collection to users in a specific Security Group. To do this, right-click on the application as it appears under REMOTEAPP PROGRAMS and click on Edit Properties:   In the window that appears, click on User Assignment and set the radio button to Only specified users and groups. You will now be able to access the Add… button, which brings up an Active Directory search dialog, from where you can add the Audacity Users security group to limit the application to only the users in that group.   Precaching applications in the local store As an alternative to using the Shared Content Store mode, applications can be forced to be cached within the local store on your RDS session hosts. This would be advantageous in scenarios where the bandwidth from a central high speed storage device is more expensive than providing dedicated storage to the RDS session hosts. 
Getting ready To complete these tasks, you will need to deploy a Remote Desktop Services environment (on the server RDS). How to do it… The following list shows you the high-level tasks involved in this recipe and the tasks required to complete the recipe (all of the actions in this recipe will take place on the server DC): Create a group policy object for the server RDS. Enable background application caching within that policy. The implementation of the preceding tasks is as follows: On the server DC, load the Group Policy Management console. Expand the tree structure to display Remote Desktop Servers Organizational Unit, right-click on the OU, and select Create a GPO in this domain, and Link it here.... Set the name of the policy to App-V 5 Cache Applications and click on OK. Right-click on the policy you have just created and click on Edit…. In the window that appears, right-click on App-V5 Cache Applications and click on Properties, tick the Disable User Configuration settings box and click on OK. Next, navigate to Computer Configuration | Policies | Administrative Templates | System | App-V | Specify what to load in background (aka AutoLoad). Set the policy to Enabled, with Autoload Options set to All, and click on OK. There's more… Individual applications can be targeted for caching using the Mount-AppvClientPackage PowerShell command. For example, to mount the package named Audacity 2.0.6 (which has been already published to the Remote Desktop session host), the administrator would run the following command: Mount-AppvClientPackage –Name "Audacity 2.0.6" This would generate the following result:   Note that the PercentLoaded value is shown as 100 to indicate that the package is completely loaded within the local store. Publishing applications through Citrix StoreFront Apart from being a great addition to the Microsoft Virtual environment, App-V is also supported by Citrix XenDesktop. In this recipe, we will look at publishing the Audacity package through Citrix StoreFront. Getting ready In addition to this, the server XenDesktop and XD-HOST will be used in this recipe. XenDesktop is configured with an installation of XenDesktop 7.6 with a Machine Catalogue containing the server XD-HOST (configured as a Server OS Machine) and a delivery group that has been set up to service both applications and desktops. The server XD-HOST should have the App-V RDS client installed. Finally, the App-V applications that you wish to deploy through Citrix StoreFront must also be published to the server XD-HOST through the App-V Management console; in this case, Audacity. How to do it… The following list shows you the high-level steps involved in this recipe and the tasks required to complete the recipe (all of the actions in this recipe will take place on the server XenDesktop): Set up App-V Publishing in Citrix Studio. Publish applications through the Delivery Group. The implementation of the preceding tasks is as follows: On the server XenDesktop, launch Citrix Studio. Navigate to Citrix Studio | Configuration, right-click on App-V Publishing, and click on Add App-V Publishing. In the window that appears, enter the details of your App-V Management and Publishing servers, click on Test connection… to confirm that the details are correct and then click on Save. Navigate to Delivery Groups, right-click on the delivery group you have created, and click on Add Applications. On the introduction page of the wizard that appears, click on Next. 
On the applications page of the wizard, select Audacity from the list provided (which will be discovered automatically from your server XD-HOST) and click on Next. Note that you can also select to publish multiple applications at the same time. Review the summary screen and click on Finish. There's more… Similar to publishing through the Microsoft remote desktop web app, it is possible to limit access to your applications to specific users or security groups. To limit access, right-click on your application in the Applications tab of the Delivery Groups page and click on Properties.   In the window that appears, select the Limit Visibility tab and select Limit visibility for this application to the users listed below. Click on the Add users… button to choose users and security groups from Active Directory to be included in the group. Summary In this article, we learned about enabling App-V shared content store mode which prevents the caching of App-V sored files on the client system. We also had a look into publishing applications through Microsoft RemoteApp which publishes the Audacity package to the RDS server which enables the user to access from the Remote Desktop Web Access. Then we learned about precaching application in the local store which forces the application to be cached in to the RDS session hosts, which has certain advantages. Finally, we learned about publishing the applications through Citrix StoreFront where we published the Audacity server through Citrix StoreFront. Resources for Article: Further resources on this subject: Virtualization [article] Customization in Microsoft Dynamics CRM [article] Installing Postgre SQL [article]
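Tying the publishing and precaching recipes together, the package lifecycle can also be driven end to end from PowerShell on a session host. The following is a minimal sketch; the UNC path is only a placeholder for wherever your .appv packages are stored, and the package name matches the Audacity 2.0.6 example used earlier:
Import-Module AppvClient
# Add the package, publish it for all users on this host, and force it into the local store
$pkg = Add-AppvClientPackage -Path "\\fileserver\appv\Audacity.appv"
$pkg | Publish-AppvClientPackage -Global
$pkg | Mount-AppvClientPackage
# Confirm how much of the package is cached locally
Get-AppvClientPackage -Name "Audacity 2.0.6" | Select-Object Name, PercentLoaded
Keep in mind that an explicitly mounted package is loaded fully into the local store even when shared content store mode is enabled, which is exactly the trade-off discussed in the precaching recipe.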

Storage Scalability

Packt
11 Aug 2015
17 min read
In this article by Victor Wu and Eagle Huang, authors of the book Mastering VMware vSphere Storage, we will learn that SAN storage is a key component of a VMware vSphere environment. We can choose different vendors and types of SAN storage to deploy in a VMware vSphere environment. The advanced settings of each storage type, for example, FC or iSCSI SAN storage, can affect the performance of the virtual machine, and each has a different configuration in a VMware vSphere environment. Fibre Channel storage is accessed by the host through a Host Bus Adapter (HBA), while iSCSI storage is accessed over the TCP/IP networking protocol. We first need to know the concepts of storage; then we can optimize the performance of storage in a VMware vSphere environment. In this article, you will learn these topics: what the vSphere storage APIs for Array Integration (VAAI) and Storage Awareness (VASA) are; the virtual machine storage profile; and VMware vSphere Storage DRS and VMware vSphere Storage I/O Control. (For more resources related to this topic, see here.) vSphere storage APIs for array integration and storage awareness VMware vMotion is a key feature of vSphere hosts. An ESXi host cannot provide the vMotion feature without shared SAN storage. SAN storage is a key component in a VMware vSphere environment. In large-scale virtualization environments, many virtual machines are stored on SAN storage. When a VMware administrator executes virtual machine cloning or migrates a virtual machine to another ESXi host by vMotion, this operation allocates resources on that ESXi host and the SAN storage. vSphere 4.1 and later versions support VAAI. The vSphere storage API is used by a storage vendor who provides hardware acceleration or offloads vSphere I/O between storage devices. These APIs can reduce the resource overhead on ESXi hosts and improve performance for ESXi host operations, for example, vMotion, virtual machine cloning, creating a virtual machine, and so on. VAAI has two APIs: the hardware acceleration API and the array thin provisioning API. The hardware acceleration API is used to integrate with VMware vSphere to offload storage operations to the array and reduce the CPU overhead on the ESXi host. The following table lists the features of the hardware acceleration API for block and NAS:
Block - Full copy: This offloads block clone or copy operations to the array.
Block - Block zeroing: This is also called write same. When you provision an eagerzeroedthick VMDK, the SCSI command is issued to write zeroes to disks.
Block - Atomic Test & Set (ATS): This is a lock mechanism that prevents other ESXi hosts from updating the same VMFS metadata.
NAS - Full file clone: This is similar to Extended Copy (XCOPY) hardware acceleration.
NAS - Extended statistics: This feature enables space usage reporting on the NAS data store.
NAS - Reserved space: This reserves the allocated space of a virtual disk in thick format.
The array thin provisioning API is used to monitor the ESXi data store space on the storage arrays. It helps prevent the disk from running out of space and reclaims disk space. For example, if the storage is assigned as 1 x 3 TB LUN to the ESXi host, but the storage can only provide 2 TB of data storage space, it is still seen as 3 TB by the ESXi host. Monitoring the space configuration of such LUNs helps avoid running out of physical space. When vSphere administrators delete or remove files from a data store on a thin provisioned LUN, the storage can reclaim the free space at the block level.
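Whether a particular device actually supports these offloads can be checked from the host itself. The following is a minimal sketch, assuming an SSH or ESXi Shell session on an ESXi 5.x host, with naa.xxxx standing in for a real device identifier:
# esxcli storage core device vaai status get
# esxcli storage core device vaai status get -d naa.xxxx
The per-device output lists the ATS, Clone, Zero, and Delete status values, which correspond to the hardware acceleration and thin provisioning primitives described above.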
In vSphere 4.1 or later, it can support VAAI features. In vSphere 5.5, you can reclaim the space on thin provisioned LUN using esxcli. VMware VASA is a piece of software that allows the storage vendor to provide information about their storage array to VMware vCenter Server. The information includes storage capability, the state of physical storage devices, and so on. vCenter Server collects this information from the storage array using a software component called VASA provider, which is provided by the storage array vendor. A VMware administrator can view the information in VMware vSphere Client / VMware vSphere Web Client. The following diagram shows the architecture of VASA with vCenter Server. For example, the VMware administrator requests to create a 1 x data store in VMware ESXi Server. It has three main components: the storage array, the storage provider and VMware vCenter Server. The following is the procedure to add the storage provider to vCenter Server: Log in to vCenter by vSphere Client. Go to Home | Storage Providers. Click on the Add button. Input information about the storage vendor name, URL, and credentials. Virtual machine storage profile The storage provider can help the vSphere administrator know the state of the physical storage devices and the capabilities on which their virtual machines are located. It also helps choose the correct storage in terms of performance and space by using virtual machine storage policies. A virtual machine storage policy helps you ensure that a virtual machine guarantees a specified level of performance or capacity of storage, for example, the SSD/SAS/NL-SAS data store, spindle I/O, and redundancy. Before you define a storage policy, you need to specify the storage requirement for your application that runs on the virtual machine. It has two types of storage requirement, which is storage-vendor-specific storage capability and user-defined storage capability. Storage-vendor-specific storage capability comes from the storage array. The storage vendor provider informs vCenter Server that it can guarantee the use of storage features by using storage-vendor-specific storage capability. vCenter Server assigns vendor-specific storage capability to each ESXi data store. User-defined storage capability is the one that you can define and assign storage profile to each ESXi datastore. In vSphere 5.1/5.5, the name of the storage policy is VM storage profile. Virtual machine storage policies can include one or more storage capabilities and assign to one or more VM. The virtual machine can be checked for storage compliance if it is placed on compliant storage. When you migrate, create, or clone a virtual machine, you can select the storage policy and apply it to that machine. The following procedure shows how to create a storage policy and apply it to a virtual machine in vSphere 5.1 using user-defined storage capability: The vSphere ESXi host requires the license edition of Enterprise Plus to enable the VM storage profile feature. The following procedure is adding the storage profile into vCenter Server: Log in to vCenter Server using vSphere Client. Click on the Home button in the top bar, and choose the VM Storage Profiles button under Management. Click on the Manage Storage Capabilities button to create user-defined storage capability. Click on the Add button to create the name of the storage capacity, for example, SSD Storage, SAS Storage, or NL-SAS Storage. Then click on the Close button. 
Click on the Create VM Storage Profile button to create the storage policy. Input the name of the VM storage profile, as shown in the following screenshot, and then click on the Next button to select the user-defined storage capability, which is defined in step 4. Click on the Finish button. Assign the user-defined storage capability to your specified ESXi data store. Right-click on the data store that you plan to assign the user-defined storage capability to. This capability is defined in step 4. After creating the VM storage profile, click on the Enable VM Storage Profiles button. Then click on the Enable button to enable the profiles. The following screenshot shows Enable VM Storage Profiles: After enabling the VM storage profile, you can see VM Storage Profile Status as Enabled and Licensing Status as Licensed, as shown in this screenshot: We have successfully created the VM storage profile. Now we have to associate the VM storage profile with a virtual machine. Right-click on a virtual machine that you plan to apply the VM storage profile to, choose VM Storage Profile, and then choose Manage Profiles. From the drop-down menu of VM Storage Profile, select your profile. Then you can click on the Propagate to disks button to associate all virtual disks, or decide which virtual disks you want to associate with that profile by setting them manually. Click on OK. Finally, you need to check the compliance of the VM Storage Profile on this virtual machine. Click on the Home button in the top bar. Then choose the VM Storage Profiles button under Management. Go to Virtual Machines and click on the Check Compliance Now button. The Compliance Status will display Compliant after compliance checking, as follows: Pluggable Storage Architecture (PSA) exists in the SCSI middle layer of the VMkernel storage stack. PSA allows third-party storage vendors to use their own failover and load balancing techniques for their specific storage arrays. A VMware ESXi host uses its multipathing plugin to control the ownership of the device path and LUN. The VMware default Multipathing Plugin (MPP) is called the VMware Native Multipathing Plugin (NMP), which includes two subplugins as components: the Storage Array Type Plugin (SATP) and the Path Selection Plugin (PSP). SATP is used to handle path failover for a storage array, and PSP is used to issue an I/O request to a storage array. The following diagram shows the architecture of PSA: This table lists the operation tasks of PSA and NMP in the ESXi host:
PSA operation tasks: discovers the physical paths; handles I/O requests to the physical HBA adapter and logical devices; uses predefined claim rules to control storage devices.
NMP operation tasks: manages the physical path; creates, registers, and deregisters logical devices; selects an optimal physical path for the request.
The following is an example of the operation of PSA in the VMkernel storage stack: The virtual machine sends out an I/O request to a logical device that is managed by the VMware NMP. The NMP calls the PSP assigned to this logical device. The PSP selects a suitable physical path to send the I/O request. When the I/O operation is completed successfully, the NMP reports that the I/O operation is complete. If the I/O operation reports an error, the NMP calls the SATP. The SATP fails over to a new active path. The PSP selects a new active path from all available paths and continues the I/O operation. The following diagram shows the operation of PSA: VMware vSphere provides three options for the path selection policy.
These are Most Recently Used (MRU), Fixed, and Round Robin (RR). The following table lists the advantages and disadvantages of each path selection policy:
MRU - Description: The ESXi host selects the first preferred path at system boot time. If this path becomes unavailable, the ESXi host changes to another active path. Advantage: You can select your preferred path manually in the ESXi host. Disadvantage: The ESXi host does not revert to the original path when that path becomes available again.
Fixed - Description: You can select the preferred path manually. Advantage: The ESXi host can revert to the original path when the preferred path becomes available again. Disadvantage: If the ESXi host cannot select the preferred path, it selects an available path at random.
RR - Description: The ESXi host uses automatic path selection. Advantage: Storage I/O goes across all available paths, enabling load balancing across all paths. Disadvantage: The storage is required to support ALUA mode, and you cannot know which path is preferred because storage I/O goes across all available paths.
The following is the procedure for changing the path selection policy in an ESXi host: Log in to vCenter Server using vSphere Client. Go to the configuration of your selected ESXi host, choose the data store that you want to configure, and click on the Properties… button. Click on the Manage Paths… button. Select the desired policy from the drop-down menu and click on the Change button. If you plan to deploy a third-party MPP on your ESXi host, you need to follow the storage vendor's instructions for the installation. For example, EMC PowerPath/VE for VMware is a piece of path management software for VMware vSphere and Microsoft Hyper-V servers; it also provides load balancing and path failover features. VMware vSphere Storage DRS VMware vSphere Storage DRS (SDRS) manages the placement of virtual machines in a data store cluster. Based on storage capacity and I/O latency, it uses VMware Storage vMotion to migrate virtual machines and keep the data stores in a balanced state; it aggregates storage resources and enables the placement of a virtual machine's virtual disks (VMDKs) as well as load balancing of existing workloads. What is a data store cluster? It is a collection of ESXi data stores grouped together. The data store cluster is enabled for vSphere SDRS. SDRS can work in two modes: manual mode and fully automated mode. If you enable SDRS in your environment, when the vSphere administrator creates or migrates a virtual machine, SDRS places all the files (VMDKs) of this virtual machine in the same data store or in different data stores in the cluster, according to the SDRS affinity rules or anti-affinity rules. The VMware ESXi host cluster has two key features: VMware vSphere High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS). SDRS is different from the host cluster DRS. The latter is used to balance virtual machines across the ESXi hosts based on memory and CPU usage. SDRS is used to balance virtual machines across the SAN storage (ESXi data stores) based on storage capacity and IOPS. The following table lists the difference between SDRS affinity rules and anti-affinity rules:
VMDK affinity rules: This is the default SDRS rule for all virtual machines. It keeps each virtual machine's VMDKs together on the same ESXi data store.
VMDK anti-affinity rules: Keep each virtual machine's VMDKs on different ESXi data stores.
VMware vSphere Storage DRS

VMware vSphere Storage DRS (SDRS) manages the placement of virtual machines in a data store cluster. Based on storage capacity and I/O latency, it uses VMware Storage vMotion to migrate virtual machines and keep the data stores in a balanced state. It aggregates storage resources, enables the placement of a virtual machine's virtual disks (VMDKs), and load balances existing workloads.

What is a data store cluster? It is a collection of ESXi data stores grouped together. The data store cluster is enabled for vSphere SDRS. SDRS can work in two modes: manual mode and fully automated mode. If you enable SDRS in your environment, when the vSphere administrator creates or migrates a virtual machine, SDRS places all the files (VMDKs) of this virtual machine in the same data store or a different data store in the cluster, according to the SDRS affinity or anti-affinity rules.

The VMware ESXi host cluster has two key features: VMware vSphere High Availability (HA) and VMware vSphere Distributed Resource Scheduler (DRS). SDRS is different from the host cluster DRS. The latter is used to balance virtual machines across the ESXi hosts based on memory and CPU usage, while SDRS is used to balance virtual machines across the SAN storage (ESXi data stores) based on storage capacity and IOPS. The following list describes the difference between the SDRS affinity rules and anti-affinity rules:

- VMDK affinity rules: This is the default SDRS rule for all virtual machines. It keeps each virtual machine's VMDKs together on the same ESXi data store.
- VMDK anti-affinity rules: Keep a virtual machine's VMDKs on different ESXi data stores. You can apply this rule to all of a virtual machine's VMDKs or only to dedicated VMDKs.
- VM anti-affinity rules: Keep virtual machines on different ESXi data stores. This rule is similar to the ESX DRS anti-affinity rules.

The following is the procedure to create a storage DRS in vSphere 5:

1. Log in to vCenter Server using vSphere Client.
2. Go to home and click on the Datastores and Datastore Clusters button.
3. Right-click on the data center and choose New Datastore Cluster.
4. Input the name of the SDRS cluster and then click on the Next button.
5. Choose the Storage DRS mode: Manual Mode or Fully Automated Mode. In Manual Mode, the placement and migration of the virtual machine are executed manually by the user, according to the placement and migration recommendations. In Fully Automated Mode, the placement of the virtual machine is executed automatically, based on the runtime rules.
6. Set up the SDRS Runtime Rules, and then click on the Next button. Enable I/O metric for SDRS recommendations is used to enable I/O load balancing. Utilized Space is the percentage of consumed space allowed before the storage DRS executes an action. I/O Latency is the percentage of consumed latency allowed before the storage DRS executes an action; this setting takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected. No recommendations until utilization difference between source and destination is is used to configure the space utilization difference threshold. I/O imbalance threshold is used to define the aggressiveness of IOPS load balancing; this setting also takes effect only if the Enable I/O metric for SDRS recommendations checkbox is selected.
7. Select the ESXi hosts that are required for SDRS, and then click on the Next button.
8. Select the data stores that are required to join the data store cluster, and click on the Next button to complete the wizard.

After creating SDRS, go to the vSphere Storage DRS panel on the Summary tab of the data store cluster. You can see that Storage DRS is Enabled. The Storage DRS tab of the data store cluster displays the recommendations, placements, and reasons. Click on the Apply Recommendations button if you want to apply the recommendations. Click on the Run Storage DRS button if you want to refresh the recommendations.
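For repeatable builds, the data store cluster and its automation level can also be configured from PowerCLI. The following is a minimal sketch, assuming a recent PowerCLI release; the data center, cluster, and data store names are hypothetical placeholders:

# Create a data store cluster inside an existing data center (placeholder names)
$dsc = New-DatastoreCluster -Name "Prod-DSC01" -Location (Get-Datacenter -Name "DC01")

# Turn on Storage DRS in fully automated mode
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated

# Move existing data stores into the cluster so that SDRS can balance them
Move-Datastore -Datastore (Get-Datastore -Name "DS01", "DS02") -Destination $dsc

The runtime thresholds described in step 6 can then be reviewed and tuned from the data store cluster's settings in the client.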
VMware vSphere Storage I/O Control

What is VMware vSphere Storage I/O Control? It is used to share and limit storage I/O resources, for example, IOPS. You can control the number of storage IOPS allocated to a virtual machine. If a certain virtual machine requires more storage I/O resources, vSphere Storage I/O Control can ensure that this virtual machine gets more storage I/O than the other virtual machines. The following example shows the difference between an environment with vSphere Storage I/O Control enabled and one without it:

In the diagram without vSphere Storage I/O Control, VM 2 and VM 5 need more IOPS, but they can allocate only a small amount of I/O resources. On the contrary, VM 1 and VM 3 can allocate a large amount of I/O resources, even though both of these VMs actually require only a small amount of IOPS. In this case, the storage resources are wasted and overprovisioned. In the diagram with vSphere Storage I/O Control enabled on the ESXi host cluster, VM 2 and VM 5 require more IOPS and can allocate a large amount of I/O resources once Storage I/O Control is enabled. VM 1, VM 3, and VM 4 require only a small amount of I/O resources, and these three VMs now allocate a small amount of IOPS. Enabling Storage I/O Control therefore helps reduce waste and overprovisioning of the storage resources.

When you enable VMware vSphere Storage DRS, vSphere Storage I/O Control is automatically enabled on the data stores in the data store cluster. The following is the procedure to enable vSphere Storage I/O Control on an ESXi data store and set up storage I/O shares and limits using vSphere Client 5:

1. Log in to vCenter Server using vSphere Client.
2. Go to the Configuration tab of the ESXi host, select the data store, and then click on the Properties… button.
3. Select Enabled under Storage I/O Control, and click on the Close button.
4. After Storage I/O Control is enabled, you can set up the storage I/O shares and limits on the virtual machine. Right-click on the virtual machine and select Edit Settings.
5. Click on the Resources tab in the virtual machine properties box, and select Disk. You can set each virtual disk's Shares and Limit fields individually. By default, all virtual machine shares are set to Normal with unlimited IOPS.

Summary

In this article, you learned what VAAI and VASA are. In a vSphere environment, the vSphere administrator learned how to configure a storage profile in vCenter Server and assign it to an ESXi data store. We covered the benefits of vSphere Storage I/O Control and vSphere Storage DRS, and we saw how to troubleshoot a storage performance problem on a vSphere host and find its root cause.

Resources for Article: Further resources on this subject: Essentials of VMware vSphere [Article] Introduction to vSphere Distributed switches [Article] Network Virtualization and vSphere [Article]

Working with Virtual Machines

Packt
11 Aug 2015
7 min read
In this article by Yohan Rohinton Wadia, the author of Learning VMware vCloud Air, we are going to walk through setting up and accessing virtual machines. (For more resources related to this topic, see here.)

What is a virtual machine?

Most of you reading this article must be aware of what a virtual machine is, but for the sake of simplicity, let's have a quick look at what it really is. A virtual machine is basically an emulation of a real or physical computer, which runs an operating system and can host your favorite applications as well. Each virtual machine consists of a set of files that govern the way the virtual machine is configured and run. The most important of these files would be a virtual drive, which acts just like a physical drive storing all your data, applications, and operating system, and a configuration file that basically tells the virtual machine how many resources are dedicated to it, which networks or storage adapters to use, and so on. The beauty of these files is that you can port them from one virtualization platform to another and manage them more effectively and securely as compared to a physical server. The following diagram shows an overview of how a virtual machine works over a host:

Virtual machine creation in vCloud Air is a very simple and straightforward process. vCloud Air provides you with three mechanisms using which you can create your own virtual machines, briefly summarized as follows:

- Wizard driven: vCloud Air provides a simple wizard using which you can deploy virtual machines from pre-configured templates. This option is provided via the vCloud Air web interface itself.
- Using vCloud Director: vCloud Air provides an advanced option as well for users who want to create their virtual machines from scratch. This is done via the vCloud Director interface and is a bit more complex as compared to the wizard-driven option.
- Bring your own media: Because vCloud Air natively runs on the VMware vSphere and vCloud Director platforms, it's relatively easy for you to migrate your own media, templates, and vApps into vCloud Air using a special tool called VMware vCloud Connector.

Create a virtual machine using a template

As we saw earlier, VMware vCloud Air provides us with a default template using which you can deploy virtual machines in your public cloud in a matter of seconds. The process is a wizard-driven activity where you can select and configure the virtual machine's resources, such as CPU, memory, and hard disk space, all with a few simple clicks. The following steps will help you create a virtual machine using a template:

1. Log in to vCloud Air (https://vchs.vmware.com/login) using the username and password that we set during the sign-up process.
2. From the Home page, select the VPC on Demand tab. Once there, from the drop-down menu above the tabs, select your region and the corresponding VDC where you would like to deploy your first virtual machine. In this case, I have selected UK-Slough-6 as the region and MyFirstVDC as the default VDC where I will deploy my virtual machines. If you have selected more than one VDC, you will be prompted to select a specific virtual data center before you start the wizard, as a virtual machine cannot span across regions or VDCs.
3. From the Virtual Machines tab, select the Create your first virtual machine option. This will bring up the VM launch wizard as shown here:

As you can see here, there are two tabs provided by default: a VMware Catalog and another section called My Catalog.
This is an empty catalog by default, but it is the place where all your custom templates and vApps will be shown if you have added them from the vCloud Director portal or purchased them from the Solutions Exchange site. Select a template to get started with. You can choose your virtual machine to be powered by either a 32-bit or a 64-bit operating system. In my case, I have selected a CentOS 6.4 64-bit template for this exercise. Click Continue once done.

Templates provided by vCloud Air are either free or paid. The paid ones generally have a $ sign marked next to the OS architecture, indicating that you will be charged once you start using the virtual machine. You can track all your purchases using the vCloud Air billing statement.

The next step is to define the basic configuration for your virtual machine. Provide a suitable name for your virtual machine; you can add an optional description to it as well. Next, select the CPU, memory, and storage for the virtual machine. The CPU and memory resources are linked with each other, so changing the CPU will automatically set the default vRAM for the virtual machine as well; however, you can always increase the vRAM as per your needs. In this case, the virtual machine has 2 CPUs and 4 GB vRAM allocated to it.

Select the amount of storage you want to provide to your virtual machine. vCloud Air can allocate a maximum of 2 TB of storage as a single drive to a virtual machine. However, as a best practice, it is always good to add more storage by adding multiple drives rather than storing it all on one single drive. You can optionally select your disks to be either standard or SSD-accelerated, both of which are features we will discuss shortly.

Virtual machine configuration

Click on Create Virtual Machine once you are satisfied with your changes. Your virtual machine will now be provisioned within a few minutes. By default, the virtual machine is not powered on after it is created. You can power it on by selecting the virtual machine and clicking on the Power On icon in the toolbar above the virtual machine:

Status of the virtual machine created

There you have it. Your very first virtual machine is now ready for use! Once powered on, you can select the virtual machine name to view its details along with a default password that is auto-generated by vCloud Air.

Accessing virtual machines using the VMRC

Once your virtual machines are created and powered on, you can access and view them easily using the virtual machine remote console (VMRC). There are two ways to invoke the VMRC. One is by selecting your virtual machine from the vCloud Air dashboard, selecting the Actions tab, and selecting the option Open in Console, as shown:

The other way to do so is by selecting the virtual machine name. This will display the Settings page for that particular virtual machine. To launch the console, select the Open Virtual Machine option as shown:

Make a note of the Guest OS Password from the Guest OS section. This is the default password that will be used to log in to your virtual machine. To log in to the virtual machine, use the following credentials:

Username: root
Password: <Guest_OS_Password>

This is shown in the following screenshot:

You will be prompted to change this password on your first login. Provide a strong new password that contains at least one special character and an alphanumeric pattern as well.

Summary

There you have it! Your very own Linux virtual machine on the cloud!
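If you prefer scripting over the web portal, the same environment can also be inspected and powered on with the vCloud Director cmdlets that ship with PowerCLI. The following is a minimal sketch; the endpoint address, organization, VDC, and vApp names are hypothetical placeholders that you would replace with the values shown in your own vCloud Air portal:

# Connect to the vCloud endpoint of your vCloud Air organization (placeholder values)
Connect-CIServer -Server "p10v17-vcd.vchs.vmware.com" -Org "M123456789-4567"

# List the vApps and virtual machines in your virtual data center
Get-CIVApp -OrgVdc (Get-OrgVdc -Name "MyFirstVDC")
Get-CIVM | Select-Object Name, Status

# Power on the vApp that wraps the newly created virtual machine
Get-CIVApp -Name "MyFirstVM" | Start-CIVApp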
Resources for Article: Further resources on this subject: vCloud Networks [Article] Creating your first VM using vCloud technology [Article] Securing vCloud Using the vCloud Networking and Security App Firewall [Article]

Storage Ergonomics

Packt
07 Aug 2015
19 min read
In this article by Saurabh Grover, author of the book Designing Hyper-V Solutions, we will be discussing the last of the basics to get you equipped to create and manage a simple Hyper-V structure. No server environment, physical or virtual, is complete without a clear consideration and consensus over the underlying storage. In this article, you will learn about the details of virtual storage, how to differentiate one from the other, and how to convert one to the other and vice versa. We will also see how Windows Server 2012 R2 removes dependencies on raw device mappings by way of pass-through or iSCSI LUN, which were required for guest clustering. VHDX can now be shared and delivers better results than pass-through disks. There are more merits to VHDX than the former, as it allows you to extend the size even if the virtual machine is alive. Previously, Windows Server 2012 added a very interesting facet for storage virtualization in Hyper-V when it introduced virtual SAN, which adds a virtual host bus adapter (HBA) capability to a virtual machine. This allows a VM to directly view the fibre channel SAN. This in turn allows FC LUN accessibility to VMs and provides you with one more alternative for shared storage for guest clustering. Windows Server 2012 also introduced the ability to utilize the SMI-S capability, which was initially tested on System Center VMM 2012. Windows 2012 R2 carries the torch forward, with the addition of new capabilities. We will discuss this feature briefly in this article. In this article, you will cover the following: Two types of virtual disks, namely VHD and VHDX Merits of using VHDX from Windows 2012 R2 onwards Virtual SAN storage Implementing guest clustering using shared VHDX Getting an insight into SMI-S (For more resources related to this topic, see here.) Virtual storage A virtual machine is a replica of a physical machine in all rights and with respect to the building components, regardless of the fact that it is emulated, resembles, and delivers the same performance as a physical machine. Every computer ought to have storage for the OS or application loading. This condition applies to virtual machines as well. If VMs are serving as independent servers for roles such as domain controller or file server, where the server needs to maintain additional storage apart from the OS, the extended storage can be extended for domain user access without any performance degradation. Virtual machines can benefit from multiple forms of storage, namely VHD/VHDX, which are file-based storage; iSCSI LUNs; pass-through LUNs, which are raw device mappings; and of late, virtual-fibre-channel-assigned LUNs. There have been enhancements to each of these, and all of these options have a straightforward implementation procedure. However, before you make a selection, you should identify the use case according to your design strategy and planned expenditure. In the following section, we will look at the storage choices more closely. VHD and VHDX VHD is the old flag bearer for Microsoft virtualization ever since the days of virtual PC and virtual server. The same was enhanced and employed in early Hyper-V releases. However, as a file-based storage that gets mounted as a normal storage for a virtual machine, VHD had its limitations. VHDX, a new feature addition to Windows Server 2012, was built further upon the limitations of its predecessor and provides greater storage capacity, support for large sector disks, and better protection against corruption. 
In the current release of Windows Server 2012 R2, VHDX has been bundled with more ammo. VHDX packed a volley of feature enhancements when it was initially launched, and with Windows Server 2012 R2, Microsoft only made it better. If we compare the older, friendlier VHD with VHDX, we can draw the following inferences:

- Size factor: VHD had an upper size limit of 2 TB, while VHDX gives you a humungous maximum capacity of 64 TB.
- Large disk support: With the storage industry progressing towards 4 KB sector disks from the 512 byte sector, there are two offerings from the disk alignment perspective for applications that may still depend on the older sector format: native 4 KB disks and 512e (512 byte emulation) disks. The operating system, depending on whether it supports native 4 KB disks or not, will either write 4 KB chunks of data or inject 512 bytes of data into a 4 KB sector. The process of injecting 512 bytes into a 4 KB sector is called RMW, or Read-Modify-Write. VHDs are generically supported on 512e disks. Windows Server 2012 and R2 both support native 4 KB disks. However, the VHD driver has a limitation: it cannot open VHD files on physical 4 KB disks. This limitation is addressed by keeping the VHD aligned to 4 KB and RMW ready, but if you are migrating from an older Hyper-V platform, you will need to convert it accordingly. VHDX, on the other hand, is the "superkid". It can be used on all disk forms, namely 512, 512e, and native 4 KB disks as well, without any RMW dependency.
- Data corruption safety: In the event of power outages or failures, the possibility of data corruption is reduced with VHDX. Metadata inside the VHDX is updated via a logging process that ensures that the allocations inside VHDX are committed successfully.
- Offloaded data transfers (ODX): With Windows Server 2012 Hyper-V supporting this feature, data transfer and the moving and sizing of virtual disks can be achieved at the drop of a hat, without host server intervention. The basic prerequisite for utilizing this feature is to host the virtual machines on ODX-capable hardware. Thereafter, Windows Server 2012 self-detects and enables the feature. Another important clause is that the virtual disks (VHDX) should be attached to the SCSI controller, not the IDE controller.
- TRIM/UNMAP: Termed by Microsoft in its documentation as efficiency in representing data, this feature works in tandem with thin provisioning. It adds the ability to allow the underlying storage to reclaim space and maintain it optimally small.
- Shared VHDX: This is the most interesting feature in the collection released with Windows Server 2012 R2. It made guest clustering (failover clustering in virtual machines) in Hyper-V a lot simpler. With Windows Server 2012, you could set up a guest cluster using a virtual fibre channel or iSCSI LUN. However, the downside was that the LUN was exposed to the user of the virtual machine. Shared VHDX proves to be the ideal shared storage. It gives you the benefit of storage abstraction, flexibility, and faster deployment of guest clusters, and it can be stored on an SMB share or a cluster-shared volume (CSV).

Now that we know the merits of using VHDX over VHD, it is important to realize that either of the formats can be converted into the other and can be used under various types of virtual disks, allowing users to decide a trade-off between performance and space utilization.
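As a quick illustration of that conversion, the following is a minimal PowerShell sketch using the Hyper-V module; the paths are hypothetical, and the virtual machine that owns the disk should be shut down before converting it:

# Convert a legacy VHD into the VHDX format (this creates a new file alongside the old one)
Convert-VHD -Path "E:\Hyper-V\Virtual hard disks\Legacy.vhd" -DestinationPath "E:\Hyper-V\Virtual hard disks\Legacy.vhdx"

# Confirm the new format, type, and size before reattaching the disk to the VM
Get-VHD -Path "E:\Hyper-V\Virtual hard disks\Legacy.vhdx" | Select-Object VhdFormat, VhdType, Size, FileSize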
Virtual disk types

Beyond the two formats of virtual hard disks, let's talk about the different types of virtual hard disks and their utility as per the virtualization design. There are three types of virtual hard disks, namely dynamically expanding, fixed-size, and differencing virtual hard disks:

- Dynamically expanding: Also called a dynamic virtual hard disk, this is the default type. It gets created when you create a new VM or a new VHD/VHDX. This is Hyper-V's take on thin provisioning. The VHD/VHDX file will start off from a small size and gradually grow up to the maximum defined size for the file as and when chunks of data get appended or created inside the OSE (short for operating system environment) hosted by the virtual disk. This disk type is quite beneficial, as it prevents storage overhead and utilizes as much as required, rather than committing the entire block. However, due to the nature of the virtual storage, as the file grows in size, it gets written in fragments across the Hyper-V CSV or LUN (physical storage). Hence, it affects the performance of the disk I/O operations of the VM.
- Fixed size: As the name indicates, this virtual disk type commits the same block size on the physical storage as its defined size. In other words, if you have specified a fixed size of 1 TB, it will create a 1 TB VHDX file in the storage. The creation of a fixed-size disk takes a considerable amount of time, commits space on the underlying storage, and does not allow SAN thin provisioning to reclaim it, somewhat like whitespaces in a database. The advantage of using this type is that it delivers amazing read performance, and heavy workloads from SQL and Exchange can benefit from it.
- Differencing: This is the last of the lot, but quite handy as an option when it comes to quick deployment of virtual machines. This is by far an unsuitable option, unless employed for VMs with a short lifespan, namely pooled VDI (short for virtual desktop infrastructure) or lab testing. The idea behind the design is to have a generic virtual operating system environment (VOSE) in a shut down state at a shared location. The VHDX of the VOSE is used as a parent or root, and thereafter, multiple VMs can be spawned with differencing or child virtual disks that use the generalized OS from the parent and append changes or modifications to the child disk. So, the parent stays unaltered and serves as a generic image. It does not grow in size; on the contrary, the child disk keeps on growing as and when data is added to the particular VM. Unless used for short-lived VMs, long-running VMs could enter an outage state or may be performance-stricken soon due to the unpredictable growth pattern of a differencing disk. Hence, these should be avoided for server virtual machines without even a second thought.

Virtual disk operations

Now we will apply all of the knowledge gained about virtual hard disks, and check out what actions and customizations we can perform on them.

Creating virtual hard disks

This goal can be achieved in different ways:

- You can create a new VHD when you are creating a new VM, using the New Virtual Machine Wizard. It picks up VHDX as the default option.
- You can also launch the New Virtual Hard Disk Wizard from a virtual machine's settings.
- This can be achieved with the New-VHD PowerShell cmdlet as well; a short sketch follows this list.
- You may employ the Disk Management snap-in to create a new VHD as well.
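Here is the New-VHD route mentioned in the list above, as a minimal sketch covering the three disk types; the paths and sizes are hypothetical:

# Dynamically expanding VHDX (the default type) with a 100 GB ceiling
New-VHD -Path "E:\Hyper-V\Virtual hard disks\App01.vhdx" -SizeBytes 100GB -Dynamic

# Fixed-size VHDX for read-heavy workloads such as SQL or Exchange
New-VHD -Path "E:\Hyper-V\Virtual hard disks\SQL-Data.vhdx" -SizeBytes 200GB -Fixed

# Differencing disk chained to a generalized parent image
New-VHD -Path "E:\Hyper-V\Virtual hard disks\Lab-VM1.vhdx" -ParentPath "E:\Hyper-V\Virtual hard disks\Win2012R2-Parent.vhdx" -Differencing

The GUI routes described next achieve the same result interactively.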
The steps to create a VHD here are pretty simple: In the Disk Management snap-in, select the Action menu and select Create VHD, like this: Figure 5-1: Disk Management – Create VHD This opens the Create and Attach Virtual Hard Disk applet. Specify the location to save the VHD at, and fill in Virtual hard disk format and Virtual hard disk type as depicted here in figure 5-2: Figure 5-2: Disk Management – Create and Attach Virtual Hard Disk The most obvious way to create a new VHD/VHDX for a VM is by launching New Virtual Hard Disk Wizard from the Actions pane in the Hyper-V Manager console. Click on New and then select the Hard Disk option. It will take you to the following set of screens: On the Before You Begin screen, click on Next, as shown in this screenshot: Figure 5-3: New Virtual Hard Disk Wizard – Create VHD The next screen is Choose Disk Format, as shown in figure 5-4. Select the relevant virtual hard disk format, namely VHD or VHDX, and click on Next. Figure 5-4: New Virtual Hard Disk Wizard – Virtual Hard Disk Format In the screen for Choose Disk Type, select the relevant virtual hard disk type and click on Next, as shown in the following screenshot: Figure 5-5: New Virtual Hard Disk Wizard– Virtual Hard Disk Type The next screen, as shown in figure 5-6, is Specify Name and Location. Update the Name and Location fields to store the virtual hard disk and click on Next. Figure 5-6: New Virtual Hard Disk Wizard – File Location The Configure Disk screen, shown in figure 5-7, is an interesting one. If needs be, you can convert or copy the content of a physical storage (local, LUN, or something else) to the new virtual hard disk. Similarly, you can copy the content from an older VHD file to the Windows Server 2012 or R2 VHDX format. Then click on Next. Figure 5-7: New Virtual Hard Disk Wizard – Configure Disk On the Summary screen, as shown in the following screenshot, click on Finish to create the virtual hard disk: Figure 5-8: New Virtual Hard Disk Wizard – Summary Editing virtual hard disks There may be one or more reasons for you to feel the need to modify a previously created virtual hard disk to suit a purpose. There are many available options that you may put to use, given a particular virtual disk type. Before you edit a VHDX, it's a good practice to inspect the VHDX or VHD. The Inspect Disk option can be invoked from two locations: from the VM settings under the IDE or SCSI controller, or from the Actions pane of the Hyper-V Manager console. Also, don't forget how to do this via PowerShell: Get-VHD -Path "E:Hyper-VVirtual hard disks1.vhdx" You may now proceed with editing a virtual disk. Again, the Edit Disk option can be invoked in exactly the same fashion as Inspect Disk. When you edit a VHDX, you are presented with four options, as shown in figure 5-9. It may sound obvious, but not all the options are for all the disk types: Compact: This operation is used to reduce or compact the size of a virtual hard disk, though the preset capacity remains the same. A dynamic disk, or differencing disk, grows as data elements are added, though deletion of the content does not automatically reclaim the storage capacity. Hence, a manual compact operation becomes imperative reduce the file size. PowerShell cmdlet can also do this trick, as follows: Optimize-VHD Convert: This is an interesting one, and it almost makes you change your faith. As the name indicates, this operation allows you to convert one virtual disk type to another and vice versa. 
You can also create a new virtual disk of the desired format and type at your preferred location. The PowerShell construct used to help you achieve the same goal is as follows: Convert-VHD

Expand: This operation comes in handy, similar to extending a LUN. You end up increasing the size of a virtual hard disk, which happens visibly fast for a dynamic disk and a bit slower for its fixed-size cousins. After this action, you have to perform the follow-up action inside the virtual machine to increase the volume size from Disk Management. Now, for the PowerShell code: Resize-VHD

Merge: This operation is specific to one disk type, namely differencing virtual disks. It allows two different actions: you can either merge the differencing disk with the original parent, or create a new merged VHD out of all the contributing VHDs, namely the parent and the child or differencing disks. The latter is the preferred way of doing it, as in all probability there would be more than one differencing disk attached to a parent. In PowerShell, the cmdlet for this is: Merge-VHD

Figure 5-9: Edit Virtual Hard Disk Wizard – Choose Action

Pass-through disks

As the name indicates, these are physical LUNs or hard drives passed on from the Hyper-V host, and they can be assigned to a virtual machine as a standard disk. A once popular method on older Hyper-V platforms, this allowed the VM to harness the full potential of the raw device, bypassing the Hyper-V host filesystem, while also not getting restricted by the 2 TB limit of VHDs. A lot has changed over the years, as Hyper-V has matured into a superior virtualization platform and introduced VHDX, which went past the size limitation and, with Windows Server 2012 R2, can be used as shared storage for Hyper-V guest clusters.

There are, however, demerits to this virtual storage. When you employ a pass-through disk, the virtual machine configuration file is stored separately, and snapshots (checkpoints) are not available for this setup. You would not be able to utilize the dynamic or differencing disk abilities here either. Another challenge of using this form of virtual storage is that when using a VSS-based backup, the VSS writer ignores pass-through and iSCSI LUNs. Hence, a complex backup plan has to be implemented, involving a backup running within the VM and on the virtualization host separately.

The following are the steps, along with a few snapshots, that show you how to set up a pass-through disk:

1. Present a LUN to the Hyper-V host.
2. Confirm the LUN in Disk Management and ensure that it stays in the Offline state and as Not Initialized.

Figure 5-10: Hyper-V Host Disk Management

3. In Hyper-V Manager, right-click on the VM you wish to assign the pass-through to and select Settings.

Figure 5-11: VM Settings – Pass-through disk placement

4. Select SCSI Controller (or IDE in the case of a Gen-1 VM) and then select the Physical hard disk option, as shown in the preceding screenshot. In the drop-down menu, you will see the raw device or LUN you wish to assign. Select the appropriate option and click on OK.
5. Check Disk Management within the virtual machine to confirm that the disk is visible.

Figure 5-12: VM Disk Management – Pass-through Assignment

6. Bring it online and initialize it.
Figure 5-13: VM Disk Management – Pass-through Initialization

As always, the preceding path can be chalked out with the help of a PowerShell cmdlet:

Add-VMHardDiskDrive -VMName VM5 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -DiskNumber 3

Virtual fibre channel

Let's move on to the next big offering in Windows Server 2012 and R2 Hyper-V Server. There was pretty much a clamor for direct FC connectivity to virtual machines, as pass-through disks were supported only via iSCSI LUNs (with some major drawbacks), not via FC. Also, needless to say, FC is faster. Enterprises with high-performance workloads relying on the FC SAN refrained from virtualizing or migrating to the cloud. Windows Server 2012 introduced the virtual fibre channel SAN ability in Hyper-V, which extended HBA (short for host bus adapter) abilities to a virtual machine, granting it a WWN (short for world wide name) and allowing access to a fibre channel SAN over a virtual SAN.

The fundamental principle behind the virtual SAN is the same as that of the Hyper-V virtual switch, wherein you create a virtual SAN that hooks up to the SAN fabric over the physical HBA of the Hyper-V host. The virtual machine gets new synthetic hardware for the last piece. It is called a virtual host bus adapter, or vHBA, which gets its own set of WWNs, namely a WWNN (node name) and WWPNs (port names). The WWN is to the FC protocol as the MAC address is to Ethernet. Once the WWNs are identified at the fabric and the virtual SAN, the storage admins can set up zoning and present the LUN to the specific virtual machine. The concept is straightforward, but there are prerequisites that you will need to ensure are in place before you can get down to the nitty-gritty of the setup:

- One or more Windows Server 2012 or R2 Hyper-V hosts.
- Hosts should have one or more FC HBAs with the latest drivers, and should support the virtual fibre channel and NPIV. NPIV may be disabled at the HBA level (refer to the vendor documentation prior to deployment). It can be enabled using command-line utilities or GUI-based tools, such as OneCommand Manager, SANSurfer, and so on.
- NPIV should be enabled on the SAN fabric or the actual ports.
- Storage arrays are transparent to NPIV, but they should support devices that present LUNs.
- Supported guest operating systems for the virtual SAN are Windows 2008, Windows 2008 R2, Windows 2012, and Windows 2012 R2.
- The virtual fibre channel does not allow boot from SAN, unlike pass-through disks.

We are now done with the prerequisites! Now, let's look at two important aspects of SAN infrastructure, namely NPIV and MPIO.

N_Port ID virtualization (NPIV)

An ANSI T11 standard extension, this feature allows virtualization of the N_Port (WWPN) of an HBA, allowing multiple FC initiators to share a single HBA port. The concept is popular and is widely accepted and promoted by different vendors. Windows Server 2012 and R2 Hyper-V utilize this feature to the fullest, wherein each virtual machine partaking in the virtual SAN gets assigned a unique WWPN and accesses the SAN over a physical HBA spawning its own N_Port. Zoning follows next, wherein the fabric can have the zone directed to the VM WWPN. This attribute leads to a very small footprint, and thereby easier management and lower operational and capital expenditure.
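Once the prerequisites are met, the virtual SAN and the guest vHBA can also be created with the Hyper-V PowerShell module. The following is a minimal sketch; the SAN name, VM name, and WWNs are hypothetical placeholders that must match your own host HBA:

# Create a virtual SAN mapped to the host HBA ports (placeholder WWNs)
New-VMSan -Name "ProdSAN" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"

# Add a virtual HBA to the guest and connect it to the virtual SAN
Add-VMFibreChannelHba -VMName "VM5" -SanName "ProdSAN"

# List the WWPNs generated for the guest so that the storage team can zone them
Get-VMFibreChannelHba -VMName "VM5"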
If we review the contents, we will notice this: we started off in this article by understanding and defining the purpose of virtual storage, and what the available options are for storage to be used with a virtual machine. We reviewed the various virtual hard disk types and formats, and the associated operations that may be required to customize a particular type or modify it accordingly. We recounted how the VHDX format is superior to its predecessor VHD, and which features were added with the latest Windows Server releases, namely 2012 and 2012 R2. We discussed shared VHDX and how it can be used as an alternative to the old-school iSCSI or FC LUN as shared storage for Windows guest clustering. Pass-through disks are on their way out, and we all know the reason why. The advent of the virtual fibre channel with Windows Server 2012 has opened the doors for the virtualization of high-performance workloads relying heavily on FC connectivity, which until now was reason enough to decline consolidation of these workloads. Resources for Article: Further resources on this subject: Hyper-V Basics [article] Getting Started with Hyper-V Architecture and Components [article] Hyper-V building blocks for creating your Microsoft virtualization platform [article]

Essentials of VMware vSphere

Packt
09 Jul 2015
7 min read
In this article by Puthiyavan Udayakumar, author of the book VMware vSphere Design Essentials, we will cover the following topics:

- Essentials of designing VMware vSphere
- The PPP framework
- The challenges and encounters faced on virtual infrastructure

(For more resources related to this topic, see here.)

Let's get started with understanding the essentials of designing VMware vSphere. Designing is nothing but assembling and integrating the VMware vSphere infrastructure components together to form the baseline for a virtualized datacenter. It has the following benefits:

- Reduces power consumption
- Decreases the datacenter footprint and helps towards server consolidation
- Faster server provisioning
- On-demand QA lab environments
- Decreases hardware vendor dependency
- Aids the move to the cloud
- Greater savings and affordability
- Superior security and High Availability

Designing VMware vSphere

Architecture design principles are usually developed by the VMware architect in concurrence with the enterprise CIO, the Infrastructure Architecture Board, and other key business stakeholders. From my experience, I would always urge you to have frequent meetings to capture functional requirements as much as possible. This will create a win-win situation for you and the requestor and show you how to get things done. Please follow your own approach, if it works. Architecture design principles should be developed from the overall IT principles specific to the customer's demands, if they exist. If not, they should be selected to ensure the positioning of IT strategies in line with business approaches. In a nutshell, the architect should aim to form effective architecture principles that fulfill the infrastructure demands. The following are high-level principles that should be followed across any design:

- Design mission and plans
- Design strategic initiatives
- External influencing factors

When you release a design to the customer, keep in mind that the design must have the following properties:

- Understandable and robust
- Complete and consistent
- Stable and capable of accepting continuous requirement-based changes
- Rational and controlled technical diversity

Without the preceding principles, I wouldn't recommend you release your design to anyone, even for peer review. For every design, irrespective of the product that you are about to design, try the following approach; it should work well, but if required, I would recommend you make changes to the approach. The following approach is called PPP, and it focuses on people's requirements, the product's capacity, and the process that helps to bridge the gap between the product's capacity and people's requirements:

The preceding diagram illustrates the three entities that should be considered while designing the VMware vSphere infrastructure. Please keep in mind that your design is just a product designed by a process that is based on people's needs. In the end, using this unified framework will aid you in getting rid of any known risks and their implications. Functional requirements should be meaningful; while designing, please make sure there is a meaning to your design. Selecting VMware vSphere over other competitors should not be a random pick; you should always list the benefits of VMware vSphere.
Some of them are as follows:

- Server consolidation and easy hardware changes
- Dynamic provisioning of resources to your compute nodes
- Templates, snapshots, vMotion, DRS, DPM, High Availability, fault tolerance, auto monitoring, and solutions for warnings and alerts
- Virtual Desktop Infrastructure (VDI), building a disaster recovery site, fast deployments, and decommissions

The PPP framework

Let's explore the components that integrate to form the PPP framework. Always keep in mind that the design should consist of people, processes, and products that meet the unified functional requirements and the performance benchmark. Always expect the unexpected. Without these metrics, your design is incomplete; PPP always retains its own decision metrics. What does it do, who does it, and how is it done? We will see the answers in the following diagrams:

The PPP framework helps you to get started with requirements gathering, design vision, business architecture, infrastructure architecture, opportunities and solutions, migration planning, setting the tone for implementation, and design governance. The following list illustrates the essentials of this three-dimensional approach and the basic questions that need to be answered before you start designing or documenting the design, which will in turn help you understand the real requirements for a specific design:

- Product (results of what?): In what hardware will the VM reside? What kind of CPU is required? What is the quantity of CPU, RAM, and storage per host/VM? What kind of storage is required? What kind of network is required? What are the standard applications that need to be rolled out? What kind of power and cooling are required? How much rack and floor space is demanded?
- People (results of who?): Who is responsible for infrastructure provisioning? Who manages the data center and supplies the power? Who is responsible for the implementation of the hardware and software patches? Who is responsible for storage and backup? Who is responsible for security and hardware support?
- Process (results of how?): How should we manage the virtual infrastructure? How should we manage hosted VMs? How should we provision VMs on demand? How should a DR site become active during a primary site failure? How should we provision storage and backup? How should we take snapshots of VMs? How should we monitor and perform periodic health checks?

Before we start to apply the PPP framework to VMware vSphere, we will discuss the list of challenges and encounters faced on the virtual infrastructure.

List of challenges and encounters faced on the virtual infrastructure

In this section, we will see a list of challenges and encounters faced with virtual infrastructure, for the simple reason that we fail to capture the functional and non-functional demands of business users, or do not understand the fit-for-purpose concept:

- Resource Estimate Misfire: If you underestimate the amount of memory required up-front, you may have to change the number of VMs you attempt to run on the VMware ESXi host hardware.
- Resource unavailability: Without capacity management and configuration management, you cannot create dozens or hundreds of VMs on a single host. Some of the VMs could consume all the resources, leaving nothing for the other VMs.
- High utilization: An army of VMs can also throw workflows off-balance due to the complexities they can bring to provisioning and operational tasks.
- Business continuity: Unlike a PC environment, VMs cannot be backed up to an actual hard drive.
  This is why 80 percent of IT professionals believe that virtualization backup is a great technological challenge.
- Security: More than six out of ten IT professionals believe that data protection is a top technological challenge.
- Backward compatibility: This is especially challenging for certain apps and systems that are dependent on legacy systems.
- Monitoring performance: Unlike physical servers, you cannot monitor the performance of VMs simply by watching common hardware resources such as CPU, memory, and storage.
- Restriction of licensing: Before you install software on virtual machines, read the license agreements; they might not permit this, and hence, by hosting the software on VMs, you might violate the agreement.
- Sizing the database and mailbox: Proper sizing of databases and mailboxes is really critical to the organization's communication systems and applications.
- Poor design of storage and network: A poor storage or networking design, resulting from a failure to properly involve the required teams within an organization, is a sure-fire way to ensure that the design isn't successful.

Summary

In this article, we covered a brief introduction to the essentials of designing VMware vSphere, focusing on the PPP framework. We also had a look at the challenges and encounters faced on the virtual infrastructure. Resources for Article: Further resources on this subject: Creating and Managing VMFS Datastores [article] Networking Performance Design [article] The Design Documentation [article]

Process of designing XenDesktop® deployments

Packt
09 Jul 2015
29 min read
In this article by Govardhan Gunnala and Daniele Tosatto, authors of the book Mastering Citrix® XenDesktop®, we will discuss the process of designing XenDesktop® deployments. The uniqueness of the XenDesktop architecture is its modular five-layer model. It covers all the key decisions in designing the XenDesktop deployment:

- User layer: Defines the users and their requirements
- Access layer: Defines how the users will access the resources
- Desktop/resource layer: Defines what resources will be delivered
- Control layer: Defines managing and maintaining the solution
- Hardware layer: Defines what resources are needed for implementing the chosen solution

(For more resources related to this topic, see here.)

While FMA is simple at a high level, its implementation can become complex depending on the technologies/options that are chosen for each component across the layers of FMA. Along with great flexibility comes the responsibility of diligently choosing the technologies/options for fulfilling your business requirements. Importantly, the decisions made in the first three layers impact the last two layers of the deployment. This means that there is little or no scope for fixing a wrong decision anywhere in the first three layers during or after the implementation stage, and it may even lead to implementing the solution from scratch again. Your design decisions determine how effectively your solution meets the given business requirements. The layered architecture of the XenDesktop FMA, featuring the components at each layer, is given in the following diagram. Each component of XenDesktop will fall under one of the layers shown in the succeeding diagram. We'll see what decisions are to be made for each of these components at each layer in the next subsection.

Decisions to be made at each layer

I would have to write a separate book to discuss all the possible technologies/options that are available at each layer. The following is a highly summarized list of the decisions to be made at each layer. This will help you realize the breadth of designing XenDesktop. This high-level coverage of the various options will help you locate and consider all the possible options that are available for making the right decisions and avoiding any slippages or missed considerations.

User layer

The user layer refers to the specification of the users who will utilize the XenDesktop deployment. A business requirement statement may mention that the service users can either be the internal business users or the external customers accessing the service from the Internet. Furthermore, both of these user groups may also need mobile access to the XenDesktop services. The Citrix Receiver is the only component that belongs to the user layer, and XenDesktop is dependent on it for successfully delivering a XenDesktop session. By correlating this technical aspect with the preceding business requirement statement, one needs to consider all the possible aspects of the Receiver software on the client devices. This involves making the following decisions:

- Endpoint/user devices related: What are the devices that the users are supposed to access the services from? Who owns and administers those devices throughout their lifecycle?
- Endpoints supported: Corporate computers, laptops, or mobiles running on Windows, or thin clients; user smart devices, such as Android tablets, Apple iPads, and so on. In the case of service providers, the endpoints can usually be any device, and they need to be supported.
Endpoint ownership: Device management includes security, availability, and compliance. Maintaining the responsibility of the devices on network. Endpoint lifecycle: Devices become either outdated or limited very quickly. Define minimum device hardware requirements to run your business workloads. Endpoint form factor: Choose the devices that may either be fully featured or have limited thin clients, or be a mix of both to support features, such as HDX graphics, multi-monitors, and so on. Thin client selection: Choose if the thin clients, such as Dell Wyse Zero clients, running on the limited functionality operating systems would suffice your user requirements. Understand its licensing cost. Receiver selection: Once you determine your endpoint device and its capabilities, you need to decide on the receiver selection that can be run on the devices. The greatest thing is that receiver is available for almost any device. Receiver type: Choose the receiver that is required for your device. Since the Receiver software for each platform (OS) differs, it is important to use the appropriate Receiver software for your devices while considering the platform that it runs on. You can download the appropriate Receiver software for your device from http://www.Citrix.com/go/receiver.html page. Initial deployment: Receiver is like any other software that will fit into your overall application portfolio. Determine how you would deploy this application on your devices. For corporate desktops and mobiles, you may use the enterprise application deployment and the mobile device management software. Otherwise, the users will be prompted to install it when they access the StoreFront URL, or they can even download it from Citrix for facilitating the installation process. For user-managed mobile devices, you can get it from the respective Windows or Google Apple stores/marketplaces. Initial configuration: Similar to other applications, Receiver requires certain initial configuration. It can be configured either manually or by using a provisioning file, group policy, and e-mail based discovery. Keeping the Receiver software up-to-date: Once you have installed Receiver on user devices, you will also require a mechanism for deploying the updates to Receiver. This can also be the way of initial deployments. Access layer An access layer refers to the specification of how the users gain access to the resources. A business requirement statement may usually state that the users should be validated for gaining access, and the access should be secured when the user is connected over the Internet. The technical components that fall under this layer include firewall(s), NetScaler, and StoreFront. These components play a broader role in the overall networking infrastructure of the company, which also includes the XenDesktop, as well as complete Citrix solutions in the environment. Their key activities include firewalling, external to internal IP address NATing, NetScaler Gateway to secure the connection between the virtual desktop and the user device, global load balancing, user validation/authentication, and GUI presentation of the enumerated resources to the end users. It involves making the following decisions: Authentication: Authentication point: A user can be authenticated at the NetScaler Gateway or StoreFront. Authentication policy: Various business use cases and compliance makes certain modes of authentication mandatory. 
You can choose from the different authentication methods supported at:

- StoreFront: Basic authentication using a username and password, Domain pass-through, NetScaler Gateway pass-through, smart card, and even unauthenticated access.
- NetScaler Gateway: LDAP, RADIUS (token), and client certificates.

StoreFront: The decisions to be made around the scope of StoreFront are as follows:

- Unauthenticated access: Provides access to users without requiring a username and a password, while they are still only able to access the administrator-allowed resources. Usually, this fits well with public or kiosk systems.
- High availability: Making the StoreFront servers available at all times through hardware load balancing, DNS round robin, Windows network load balancing, and so on.
- Delivery controller high availability and StoreFront: Building high availability for the delivery controllers is recommended, since they are needed for forming successful connections. Defining more than one delivery controller for the stores makes StoreFront fail over automatically to the next server in the list.
- Security – inbound traffic: Consider securing the user connection to virtual desktops from the internal StoreFront and the external NetScaler Gateway.
- Security – backend traffic: Consider securing the communication between StoreFront and the XML services running on the controller servers. As these will be within the internal network, they can be secured by using an internal private certificate.
- Routing Receiver with beacons: Receiver uses websites called beacons to identify whether the user connection is internal or external. StoreFront provides Receiver with the http(s) addresses of the beacon points during the initial connection.
- Resource presentation: StoreFront presents a webpage, which provides self-service of the resources by the user.
- Scalability: The StoreFront server load and capacity for the user workload.
- Multi-site app synchronization: StoreFront can connect to the controllers of multiple site deployments, and it can replicate the user-subscribed applications across the servers.

NetScaler Gateway: The decision regarding secured external user access from the public Internet involves the following:

- Topology: NetScaler supports two topologies: 1-arm (normal security) and 2-arm (high security).
- High availability: The NetScaler Gateways can be configured in pairs to provide high availability.
- Platform: NetScaler is available on different platforms, such as VPX, MPX, and SDX. They have different SSL throughput and SSL Transactions Per Second (TPS) metrics.
- Pre-authentication policy: Specifies the Endpoint Analysis (EPA) scans for evaluating whether the endpoints meet the pre-set security criteria. This is available when NetScaler is chosen as the authentication point.
- Session management: The session policies define the overall user experience by classifying the endpoints into mobile and non-mobile devices. The session profile defines the details needed for gaining access to the environment. These come in two forms: SSL VPN and HDX proxy.
- Preferred data center: In multi-active data center deployments, StoreFront can determine the primary data center for a user's resources, and NetScaler can direct the user connections to it.

Desktop/resource layer

The desktop or resource layer refers to the specification of which resources (applications and desktops) users will receive.
This layer comes with various options, which are tailored to business user roles and their requirements. This layer makes XenDesktop a better fit for achieving the varying user needs across departments. It includes the specification of the FlexCast model (type of desktop), user personalization, and delivering applications to the users in the desktop session. An example business requirement statement may specify that all permanent employees require a desktop with all the basic applications pre-installed based on their team and role, with their user settings and data retained, while all contract employees get a basic desktop with controlled access to on-demand applications and their user data is not retained. This layer includes various components, such as profile management solutions (including Windows profiles, Citrix Profile Management, and AppSense), Citrix print server, Windows operating systems, application delivery, and so on. It involves making decisions such as the following:

- Images: This involves choosing the FlexCast model that is tailored to the user requirements, thereby delivering the expected desktop behavior to the end users, as follows:
  - Operating system related: This requires choosing the desktop or server operating system for your master image, which depends on the FlexCast model that you are choosing from. The available FlexCast models are: Hosted shared; Hosted VDI (pooled-static, pooled-random, pooled with PvD, dedicated, existing, physical/remote PC, streamed, and streamed with PvD); Streamed VHD; Local VM; On-demand apps; and Local apps. In the case of a desktop OS, it's also important to choose the right OS architecture according to the 32-bit or 64-bit processor architecture of the desktop.
  - Computer policies: Define the controls over the user connection, security and bandwidth settings, devices or connection types, and so on. Specify all the policy features similar to those of the user policies.
  - Machine catalogs: Define your catalog settings, including the FlexCast model, AD computer accounts, provisioning method, OS of the base image, and so on.
  - Delivery groups: Assign desktops or applications to the user groups.
  - Application folders: This is a tidy interface feature in Studio for organizing the applications into folders for easy management.
  - StoreFront integration: This is an option for specifying the StoreFront URL for the Receiver in the master image so that users will be auto-connected to the StoreFront in the session.
- Resource allocation: This defines the hardware resources for the desktop VMs. It primarily involves hosts and storage. Depending on your estimated workloads, you can define resources such as the number of virtual processors (vCPU), the amount of virtual memory (vRAM), and the storage requirements for the needed disk space, along with the following:
  - Graphics (GPU): For advanced use cases, you may choose to allocate a pass-through GPU, hardware vGPUs, or software vGPUs.
  - IOPS: Depending on the operating system, the FlexCast model, and estimated workloads, you can analyze the overall IOPS load from the system and plan the corresponding hardware to support that load.
  - Optimizations: Depending on the operating system, you can apply various optimizations to the Windows OS that runs on the master image. This greatly reduces the overall load later.
- Bandwidth requirements: Bandwidth can be a limiting factor in the case of WAN and remote user connections over slow networks.
Bandwidth consumption and user experience depend on various factors, such as the operating system being used, the application design, and screen resolution. To retain high user experience, it's important to consider the bandwidth requirements and optimization technologies, as follows: Bandwidth minimizing technologies: These include Quality of Service (QoS), HDX RealTime, and WAN Optimization, with Citrix's own CloudBridge solution. HDX Encoding Method: HDX encoding method also affects the bandwidth usage. For XenDesktop 7.x, there are three encoding methods that are available. These will appropriately be employed by the HDX protocol. These are Desktop Composition Redirection, H.264 Enhanced SuperCodec, and Legacy Mode (XenDesktop 5.X Adaptive Display). Session Bandwidth: Bandwidth needed in a session depends on the user interaction with desktop and applications. Latency: HDX can typically perform well up to 300 ms latency and the experience begins to degrade as latency increases. Personalization: This is an essential element of the desktop environment. It involves the decisions that are critical for the end user experience/acceptance and for the overall success of the solution during implementation. Following are the decisions that are involved in personalization. User profiles: This involves the decisions that are related to the user login, roaming of their settings, and seamless profile experience across overall Windows network: Profile type: Choose which profile type works for your user requirements. Possible options include local, roaming, mandatory, and hybrid profile with Citrix Profile Management. Citrix Profile Management provides various additional features, such as profile streaming, active write back, and configuring profiles using an .ini file, and so on. Folder redirection: This option saves the user's application settings in the profile. Represents special folders, such as AppData, Desktop, and so on. Folder exclusion: This option is for setting the exclusion of folders that are to be saved in the user profile. Usually, it refers to the local and IE Temp folders of a user profile. Profile caching: Caching profiles on a local system improves the user login experience and it occurs by default. You need to consider this depending on the type of virtual desktop FlexCast mode. Profile permissions: Specify whether the administrator needs access to the user profiles based on information sensitivity. Profile path: The decision to place the user profiles on a network location for high availability. It affects the logon performance depending on how close the profile is to the virtual desktop from which the user is logging on. It can be managed either from Active Directory or through Citrix Profile Management. User profile replication between data centers: This involves making the user profiles highly available and supporting the profile roaming among multiple data centers. User policies: Involves the decision regarding deploying the user settings and controlling those using management policies providing consistent settings for users, such as: Preferred policy engine: This requires choosing the policy processing for the Windows systems. The Citrix policies can be defined and managed from either Citrix Studio or the Active Directory group policy. Policy filtering: The Citrix policies can be applied to the users and their desktop with the various filter options that are available in the Citrix policy engine. 
If the group policies have been used, then you'll use the group policy filtering options. Policy precedence: The Citrix policies are processed in the order of LCSDOU (Local, Citrix, Site, Domain, OU policies). Baseline policy: This defines the policy with default and common settings for all the desktop images. Citrix provides the policy templates that suit specific business use cases. A baseline should cover security requirements, common network conditions, and managing the user device or the user profile requirements. Such a baseline can be configured using the security policies, connection-based policies, device-based policies, and profile-based policies. Printing: This is one of the most common desktop user requirements. XenDesktop supports printing, which can work for various scenarios. The printing technology involves deploying and using appropriate drivers. Provisioning printers: These can either be a static or dynamic set of printers. The options for dynamic printers do and do not auto-create all the client printers and auto-create the non-network client printers only. You can also set the options for session printers through the Citrix policy, which can include either static or dynamic printers. Furthermore, you can also set proximity printers. Managing print drivers: This option can be configured so that printer drivers are auto-installed during the session creation. It can be installed by using either the generic Citrix universal printer driver, or the manual option. You can also have all the known drivers preinstalled on the master image. Citrix even provides the Citrix universal print server, which extends XenDesktop universal printing support to network printing. Print job routing: It can be routed among client device or through the network server. The ICA protocol is used for compressing and sending data. Personal vDisk: Desktops with personal vDisks retain the user changes. Choosing the personal vDisk depends on the user requirements and the FlexCast Model that was opted for. Personal vDisk can be set to thin provisioned for estimated growth, but it can't be shrunk later. Applications: The application separation into another layer improves the scalability of the overall desktop solution. Applications are critical elements, which the users require from a desktop environment: Application delivery method: Applications can be installed on the base image, on the Personal vDisks, streamed into the session, or through the on-demand XenApp hosted mode. It also depends on application compatibility, and it requires technical expertise and tools, such as AppDNA, for effectively resolving them. Application streaming: XenDesktop supports App-V to build isolated application packages, which can be streamed to desktops. 16-bit legacy application delivery: If there are any legacy 16-bit applications to be supported, then you can choose from the 32 bit OS, VM hosted App, or a parallel XenApp5 deployment. Control layer Control layer speaks about all the backend systems that are required for managing and maintaining the overall solution through its life cycle. The control layer includes most of the XenDesktop components that are further classified into categories, such as resource/access controllers, image/desktop controllers, and infrastructure controllers. 
These respectively correspond to the first three layers of FMA, as shown here: Resource/access controllers: Supports the access layer Image/desktop controllers: Supports the desktop/resource layer Infrastructure controllers: Provides the underlying hardware for the overall FMA components/environment This layer involves the specification of capacity, configuration, and the topology of the environment. Building required/planned redundancy for each of these components enables achieving the enterprise business capabilities, such as HA, scalability, disaster recovery, load balancing, and so on. Components and technologies that operate under this layer include Active Directory, group policies, site database, Citrix licensing, XenDesktop delivery controllers, XenClient hypervisor, the Windows server and the Desktop operating systems, provisioning services, which can be either MCS or PVS and their controllers, and so on. An example business requirement statement may be as follows: Build a highly available desktop environment for a fast growing business users group. We currently have a head count of 30 users, which is expected to double in a year. It involves making the following decisions: Infrastructure controllers: It includes common infrastructure, which is required for XenDesktop to function in the Windows domain network. Active Directory: This is used for the authentication and authorization of users in a Citrix environment. It's also responsible for providing and synchronizing time on the systems, which is critical for Kerberos. For the most part, your AD structure will be in-place, and it may require certain changes for accommodating your XenDesktop requirements, such as: Forest design: It involves choosing the AD forest and domain decisions, such as multi-domain, multi-forest, domain and forest trusts, and so on, which will define the users of the XenDesktop resources. Site design: It involves choosing the number of sites that represent your geographical locations, the number of domain controllers, the subnets that accommodate the IP addresses, site links for replication, and so on. Organizational unit structure: Planning the OU structure for easier management of XenDesktop Workers and VDAs. In the case of multi-forest deployment scenarios (as supported in App Orchestration), having the same OU structure is critical. Naming standards: Planning proper conventions for XenDesktop AD objects, which includes users, security groups, XenDesktop servers, OUs, and so on. User groups: This helps in choosing the individual user names or groups. The user security groups are recommended as they reduce validation to just one object despite the number of users in it. Policy control: This helps in planning GPOs ordering and sizing, inheritance, filtering, enforcement, blocking, and loopback processing for reducing the overall processing time on the VDAs and servers. Database: Citrix uses the Microsoft SQL server database for most of its products, as follows: Edition: Microsoft ships the SQL server database in different editions, which provide varying features and capabilities. Using the standard edition for typical XenDesktop production deployments is recommended. For larger/enterprise deployments, depending on the requirement, a higher edition may be required. Database and Transaction Log Sizing: This involves estimating the storage requirements for the Site Configuration database, Monitoring database, and configuration logging databases. 
Database Location: By default, the Configuration Logging and the Monitoring databases are located within the Site Configuration database. Separating these into separate databases and relocating the Monitoring database to a different SQL server is recommended. High availability: Choose from VM-level HA, Mirroring, AlwaysOn Failover Cluster, and AlwaysOn Availability Groups. Database Creation: Usually, the database is automatically recreated during the XenDesktop installation. Alternatively, they can be created by using the scripts. Citrix licensing: Citrix licensing for XenDesktop requires the existence of a Citrix license server on the network. You can install and manage the multiple Citrix licenses. License type: Choose from user, device, and concurrent licenses. Version: Citrix's new license servers are backward compatible. Sizing: A license server can be scaled out to support a higher number of license requests per second. High availability: License server comes with a 30 day grace period to usually help in recovering from failures. High Availability for license server can be implemented through Window clustering technology or duplication of the virtual server. Optimization: Optimize the number of the receiving and processing threads depending on your hardware. This is generally required in large and heavily-loaded enterprise environments. Resource controllers: The resource controllers include the XenDesktop, the XenApp controllers, and the XenClient synchronizer, as shown here: XenDesktop and XenApp delivery controller. Number of sites: It is considered to have been based on network, risk tolerance, security requirements. Delivery controller sizing: Delivery controller scalability is based on CPU utilization. The more processor cores are available, the more virtual desktops a controller can support. High availability: Always plan for the N+1 deployment of the controllers for achieving the HA. Then, update the controllers' details on VDA through policy. Host connection configuration: Host connections define the hosts, storage repositories, and guest network to be used by the virtual machines on hypervisors. XML service encryption: The XML service protocol running on delivery controllers uses clear text for exchanging all data except passwords. Consider using an SSL encryption for sending the StoreFront data over a secure HTTP connection. Server OS load management: The default maximum number of sessions per server has been set to 250. Using real time usage monitoring and loading analysis, you can define appropriate load management policies. Session PreLaunch and Session Linger: Designed for helping the users in quickly accessing the applications by starting the sessions before they are requested (session prelaunch) and by keeping the user sessions active after a user closes all the applications in a session (session linger). XenClient synchronizer: It includes considerations for its architecture, processor specification, memory specification, network specification, high availability, the SQL database, remote synchronizer servers, storage repository size and location, and external access, and Active Directory integration. Image controllers: This includes all the image provisioning controllers. MCS is built-into the delivery controller. We'll have PVS considerations, such as the following: Farms: A farm represents the top level of the PVS infrastructure. Depending on your networking and administration boundaries, you can define the number of farms to be deployed in your environment. 
Sites: Each Farm consists of one or more sites, which contain all the PVS objects. While multiple sites share the same database, the target devices can only failover to the other Provisioning Servers that are within the same site. Your networking and organization structure determines the number of sites in your deployment. High availability: If implemented, PVS will be a critical component of the virtual desktop infrastructure. HA should be considered for its database, PVS servers, vDisks and storage, networking and TFTP, and so on. Bootstrap delivery: There are three methods in which the target device can receive the bootstrap program. This can be done by using the DHCP options, the PXE broadcasts, and the boot device manager. Write cache placement: Write cache uniquely identifies the target device by including the target device's MAC address and disk identifier. Write cache can be placed on the following: Cache on the Device Hard Drive, Cache on the Device Hard Drive Persisted, Cache in the Device RAM, Cache in the Device RAM with overflow on the hard disk, and Cache on the Provisioning Server Disk, and Cache on the Provisioning Server Disk Persisted. vDisk format and replication: PVS supports the use of fixed-size or dynamic vDisks. vDisks hosted on a SAN, local, or Direct Attached Storage must be replicated between the vDisk stores whenever a vDisk is created or changed. It can be replicated either manually or automatically. Virtual or physical servers, processor and memory: The virtual Provisioning Servers are preferred when sufficient processor, memory, disk and networking resources are guaranteed. Scale up or scale out: Determining whether to scale up or scale out the servers requires considering factors like redundancy, failover times, datacenter capacity, hardware costs, hosting costs, and so on. Bandwidth requirements and network configuration: PVS can boot 500 devices simultaneously. A 10Gbps network is recommended for provisioning services. Network configuration should consider the PVS Uplink, the Hypervisor Uplink, and the VM Uplink. Recommended switch settings include either Disable Spanning Tree or Enable Portfast, Storm Control, and Broadcast Helper. Network interfaces: Teaming the multiple network interfaces with link aggregation can provide a greater throughput. Consider the NIC features TCP Offloading and Receive Side Scaling (RSS) while selecting NICs. Subnet affinity: It is a load balancing algorithm, which helps in ensuring that the target devices are connected to the most appropriate Provisioning Server. It can be configured to Best Effort and Fixed. Auditing: By default, the auditing feature is disabled. When enabled, the audit trail information is written in the provisioning services database along with the general configuration data. Antivirus: The antivirus software can cause file-locking issues on the PVS server by contending with the files being accessed by PVS Services. The vDisk store and the write cache should be excluded from any antivirus scans in order to prevent file contention issues. Hardware layer The hardware layer involves choosing the right capacity, make, and hardware features of the backend systems that are required for the overall solution as defined in the control layer. In-line with the control layer, the hardware layer decisions will change if any of the first three layer decisions are changed. 
Components and technologies that operate under this layer include server hardware, storage technologies, hard disks and the RAID configurations, hypervisors and their management software, backup solutions, monitoring, network devices and connectivity, and so on. It involves making the decisions shown here:

Hardware Sizing: The hardware sizing is usually done in two ways. The first, and the preferred, way is to plan ahead and purchase the hardware based on the workload requirements. The second way is to size the hosts to use the existing hardware in the best configuration to support the different workload requirements, as follows: Workload separation: Workloads can either be separated into dedicated resource clusters or be mixed in the same physical hosts. Control host sizing: The VM resource allocation for each control component should be determined in the control layer and it should be allocated accordingly. Desktop host sizing: This involves choosing the physical resources required for the virtual desktops as well as the hosted server deployments. It includes estimating the pCPU, pRAM, GPU, and the number of hosts.

Hypervisors: This involves choosing from the supported hypervisors, which include major players such as Hyper-V, XenServer, and ESX. Choosing from these requires considering a vast range of parameters, such as host hardware (processor and memory), storage requirements, network requirements, scale up/out, and host scalability. Further considerations include the following: Networking: Networks, physical NICs, NIC teaming, virtual NICs (hosts), virtual NICs (guests), and IP addressing. VM provisioning: Templates. High availability: Microsoft Hyper-V: failover clustering, cluster shared volumes, CSV cache; VMware ESXi: VMware vSphere high availability cluster; Citrix XenServer: XenServer high availability by using the server pool. Monitoring: Use the hypervisor-specific, vendor-provided management and monitoring tools for hypervisor monitoring; use hardware-specific, vendor-provided monitoring tools for hardware-level monitoring. Backup and recovery: Backup method and components to be backed up. Storage: Storage architecture, RAID level, number of disks, disk type, storage bandwidth, tiered storage, thin provisioning, and data de-duplication. Disaster recovery.

Data center utilization: The XenDesktop deployments can leverage multiple data centers for improving the user performance and the availability of resources. Multiple data centers can be deployed in an active/active or an active/passive configuration. An active/active configuration allows for both data centers to be utilized, although the individual users are tied to a specific location. Data center connectivity: An active/active data center configuration utilizing GSLB (Global Server Load Balancing) ensures that the users will be able to establish a connection even if one data center is unavailable. In the active/active configuration, the considerations that should be made are as follows: data center failover time, application servers, and StoreFront optimal routing. Capacity in the secondary data center: Planning the secondary data center capacity is determined by cost and by how much of the full capacity management is willing to support in each data center. A percentage of the overall users, or a percentage of the users per application, may be considered for the secondary data center facility. It also needs consideration of the type and amount of resources that will be made available in a failover scenario.
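As a rough, hypothetical illustration of the desktop host sizing arithmetic mentioned above, the following PowerShell sketch estimates how many hosts a pooled-desktop workload might need. Every number in it (desktop count, vCPU-to-pCPU ratio, RAM per desktop, host specification) is an assumption for the example only and must be replaced with figures from your own workload assessment:

# Assumed workload and host figures - replace with data from your own assessment
$desktops        = 500      # number of pooled virtual desktops
$vCpuPerDesktop  = 2
$ramPerDesktopGB = 3
$vCpuToPCpuRatio = 6        # assumed consolidation ratio per physical core
$coresPerHost    = 24       # physical cores per host
$ramPerHostGB    = 256      # usable physical RAM per host

# Hosts needed from the CPU perspective and from the memory perspective
$hostsByCpu = [math]::Ceiling(($desktops * $vCpuPerDesktop) / ($coresPerHost * $vCpuToPCpuRatio))
$hostsByRam = [math]::Ceiling(($desktops * $ramPerDesktopGB) / $ramPerHostGB)

# Take the larger of the two estimates and add one host for N+1 redundancy
$hostsNeeded = [math]::Max($hostsByCpu, $hostsByRam) + 1
"Estimated hosts required (including N+1): $hostsNeeded"

Treat the result only as a starting point; the IOPS, GPU, and failover capacity considerations described above can easily change it.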
Tools for designing XenDesktop®

In the previous section, we saw the broad list of components, technologies, and configuration options involved in the process of designing a XenDesktop deployment. Obviously, designing a XenDesktop deployment for large, advanced, and complex business scenarios is a mammoth task, which requires operational knowledge of a broad range of technologies. Understanding the maze of this complexity, Citrix constantly helps customers with learning resources such as handbooks, reviewer guides, blueprints, online eDocs, and training sessions. To ease the life of technical architects and XenDesktop design and deployment consultants, Citrix has developed an online design portal called Project Accelerator, which automates, streamlines, and covers all the broad aspects that are involved in a XenDesktop deployment.

Project Accelerator: Citrix designed Project Accelerator as a web-based design tool, and it is available to customers after they log in. Its design is based on the Citrix consulting best practices for XenDesktop deployment and implementation. It follows the layered FMA and allows you to create a close-to-deployment architecture. It covers all the key decisions and facilitates modifying them and evaluating their impact on the overall architecture. Upon completion of the design, it generates an architectural diagram and a deployment sizing plan. One can define more than one project and customize them in parallel to achieve multiple deployment plans. I highly recommend starting your production XenDesktop deployment with the Project Accelerator architecture and sizing design.

Virtual Desktop Handbook: Citrix provides the handbook along with new XenDesktop releases. The handbook covers the latest features of that XenDesktop version and provides detailed information on the design decisions. It provides all the possible options for each of the decisions involved, and these options are evaluated and validated in an in-depth manner by the Citrix Solutions lab. They include the leading Citrix Consulting best practices as well. This helps architects and engineers to consider the recommended technologies, and then evaluate them further for fulfilling the business requirements. The Virtual Desktop Handbook for the latest version of XenDesktop, that is, 7.x, can be found at http://support.Citrix.com/article/CTX139331.

XenDesktop® Reviewer's Guide: The Reviewer's Guide is also released along with new versions of XenDesktop. It is designed to help businesses quickly install and configure XenDesktop for evaluation. It provides a step-by-step walkthrough of the installation and configuration wizards of XenDesktop, giving practical guidance to IT administrators for successfully installing XenDesktop and delivering sessions. The XenDesktop Reviewer's Guide for the latest version of XenDesktop, that is, 7.6, can be found at https://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xendesktop-reviewers-guide.pdf.

Summary: We learnt the decision making that is involved in designing XenDesktop in general, and we also saw the deployment designs of complex environments involving cloud capabilities. We also saw different tools for designing XenDesktop.

Getting Started with Hyper-V Architecture and Components

Packt
04 Jun 2015
19 min read
In this article by Vinícius R. Apolinário, author of the book Learning Hyper-V, we will cover the following topics: Hypervisor architecture Type 1 and 2 Hypervisors Microkernel and Monolithic Type 1 Hypervisors Hyper-V requirements and processor features Memory configuration Non-Uniform Memory Access (NUMA) architecture (For more resources related to this topic, see here.) Hypervisor architecture If you've used Microsoft Virtual Server or Virtual PC, and then moved to Hyper-V, I'm almost sure that your first impression was: "Wow, this is much faster than Virtual Server". You are right. And there is a reason why Hyper-V performance is much better than Virtual Server or Virtual PC. It's all about the architecture. There are two types of Hypervisor architectures. Hypervisor Type 1, like Hyper-V and ESXi from VMware, and Hypervisor Type 2, like Virtual Server, Virtual PC, VMware Workstation, and others. The objective of the Hypervisor is to execute, manage and control the operation of the VM on a given hardware. For that reason, the Hypervisor is also called Virtual Machine Monitor (VMM). The main difference between these Hypervisor types is the way they operate on the host machine and its operating systems. As Hyper-V is a Type 1 Hypervisor, we will cover Type 2 first, so we can detail Type 1 and its benefits later. Type 1 and Type 2 Hypervisors Hypervisor Type 2, also known as hosted, is an implementation of the Hypervisor over and above the OS installed on the host machine. With that, the OS will impose some limitations to the Hypervisor to operate, and these limitations are going to reflect on the performance of the VM. To understand that, let me explain how a process is placed on the processor: the processor has what we call Rings on which the processes are placed, based on prioritization. The main Rings are 0 and 3. Kernel processes are placed on Ring 0 as they are vital to the OS. Application processes are placed on Ring 3, and, as a result, they will have less priority when compared to Ring 0. The issue on Hypervisors Type 2 is that it will be considered an application, and will run on Ring 3. Let's have a look at it: As you can see from the preceding diagram, the hypervisor has an additional layer to access the hardware. Now, let's compare it with Hypervisor Type 1: The impact is immediate. As you can see, Hypervisor Type 1 has total control of the underlying hardware. In fact, when you enable Virtualization Assistance (hardware-assisted virtualization) at the server BIOS, you are enabling what we call Ring -1, or Ring decompression, on the processor and the Hypervisor will run on this Ring. The question you might have is "And what about the host OS?" If you install the Hyper-V role on a Windows Server for the first time, you may note that after installation, the server will restart. But, if you're really paying attention, you will note that the server will actually reboot twice. This behavior is expected, and the reason it will happen is because the OS is not only installing and enabling Hyper-V bits, but also changing its architecture to the Type 1 Hypervisor. In this mode, the host OS will operate in the same way a VM does, on top of the Hypervisor, but on what we call parent partition. The parent partition will play a key role as the boot partition and in supporting the child partitions, or guest OS, where the VMs are running. The main reason for this partition model is the key attribute of a Hypervisor: isolation. 
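If you want to confirm from the parent partition that a machine has actually switched into this mode, a small PowerShell check such as the following can help. This is only a sketch and assumes Windows Server 2012 or later with the Hyper-V role available:

# Returns True when Windows itself is running on top of the hypervisor (parent partition)
(Get-CimInstance -ClassName Win32_ComputerSystem).HypervisorPresent

# Confirms that the Hyper-V role is installed on this server
Get-WindowsFeature -Name Hyper-V | Select-Object Name, InstallState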
For Microsoft Hyper-V Server you don't have to install the Hyper-V role, as it will be installed when you install the OS, so you won't be able to see the server booting twice. With isolation, you can ensure that a given VM will never have access to another VM. That means that if you have a compromised VM, with isolation, the VM will never infect another VM or the host OS. The only way a VM can access another VM is through the network, like all other devices in your network. Actually, the same is true for the host OS. This is one of the reasons why you need an antivirus for the host and the VMs, but this will be discussed later. The major difference between Type 1 and Type 2 now is that kernel processes from both host OS and VM OS will run on Ring 0. Application processes from both host OS and VM OS will run on Ring 3. However, there is one piece left. The question now is "What about device drivers?" Microkernel and Monolithic Type 1 Hypervisors Have you tried to install Hyper-V on a laptop? What about an all-in-one device? A PC? A server? An x64 based tablet? They all worked, right? And they're supposed to work. As Hyper-V is a Microkernel Type 1 Hypervisor, all the device drivers are hosted on the parent partition. A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor itself. VMware ESXi works this way. That's why you should never use a standard ESXi media to install an ESXi host. The hardware manufacturer will provide you with an appropriate media with the correct drivers for the specific hardware. The main advantage of the Monolithic Type 1 Hypervisor is that, as it always has the correct driver installed, you will never have a performance issue due to an incorrect driver. On the other hand, you won't be able to install this on any device. The Microkernel Type 1 Hypervisor, on the other hand, hosts its drivers on the parent partition. That means that if you installed the host OS on a device, and the drivers are working, the Hypervisor, and in this case Hyper-V, will work just fine. There are other hardware requirements. These will be discussed later in this article. The other side of this is that if you use a generic driver, or a wrong version of it, you may have performance issues, or even driver malfunction. What you have to keep in mind here is that Microsoft does not certify drivers for Hyper-V. Device drivers are always certified for Windows Server. If the driver is certified for Windows Server, it is also certified for Hyper-V. But you always have to ensure the use of correct driver for a given hardware. Let's take a better look at how Hyper-V works as a Microkernel Type 1 Hypervisor: As you can see from the preceding diagram, there are multiple components to ensure that the VM will run perfectly. However, the major component is the Integration Components (IC), also called Integration Services. The IC is a set of tools that you should install or upgrade on the VM, so that the VM OS will be able to detect the virtualization stack and run as a regular OS on a given hardware. To understand this more clearly, let's see how an application accesses the hardware and understand all the processes behind it. When the application tries to send a request to the hardware, the kernel is responsible for interpreting this call. As this OS is running on an Enlightened Child Partition (Means that IC is installed), the Kernel will send this call to the Virtual Service Client (VSC) that operates as a synthetic device driver. 
The VSC is responsible for communicating with the Virtual Service Provider (VSP) on the parent partition, through VMBus, so the VSC can use the hardware resource. The VMBus will then be able to communicate with the hardware for the VM. The VMBus, a channel-based communication, is actually responsible for communicating with the parent partition and hardware. For the VMBus to access the hardware, it will communicate directly with a component on the Hypervisor called hypercalls. These hypercalls are then redirected to the hardware. However, only the parent partition can actually access the physical processor and memory. The child partitions access a virtual view of these components that are translated on the guest and the host partitions. New processors have a feature called Second Level Address Translation (SLAT) or Nested Paging. This feature is extremely important on high performance VMs and hosts, as it helps reduce the overhead of the virtual to physical memory and processor translation. On Windows 8, SLAT is a requirement for Hyper-V. It is important to note that Enlightened Child Partitions, or partitions with IC, can be Windows or Linux OS. If the child partitions have a Linux OS, the name of the component is Linux Integration Services (LIS), but the operation is actually the same. Another important fact regarding ICs is that they are already present on Windows Server 2008 or later. But, if you are running a newer version of Hyper-V, you have to upgrade the IC version on the VM OS. For example, if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012 R2, you probably don't have to worry about it. But if you are running Hyper-V 2012 R2 on the host OS and the guest OS is running Windows Server 2012, then you have to upgrade the IC on the VM to match the parent partition version. Running guest OS Windows Server 2012 R2 on a VM on top of Hyper-V 2012 is not recommended. For Linux guest OS, the process is the same. Linux kernel version 3 or later already have LIS installed. If you are running an old version of Linux, you should verify the correct LIS version of your OS. To confirm the Linux and LIS versions, you can refer to an article at http://technet.microsoft.com/library/dn531030.aspx. Another situation is when the guest OS does not support IC or LIS, or an Unenlightened Child Partition. In this case, the guest OS and its kernel will not be able to run as an Enlightened Child Partition. As the VMBus is not present in this case, the utilization of hardware will be made by emulation and performance will be degraded. This only happens with old versions of Windows and Linux, like Windows 2000 Server, Windows NT, and CentOS 5.8 or earlier, or in case that the guest OS does not have or support IC. Now that you understand how the Hyper-V architecture works, you may be thinking "Okay, so for all of this to work, what are the requirements?" Hyper-V requirements and processor features At this point, you can see that there is a lot of effort for putting all of this to work. In fact, this architecture is only possible because hardware and software companies worked together in the past. The main goal of both type of companies was to enable virtualization of operating systems without changing them. Intel and AMD created, each one with its own implementation, a processor feature called virtualization assistance so that the Hypervisor could run on Ring 0, as explained before. But this is just the first requirement. 
There are other requirements as well, which are as follows:

Virtualization assistance (also known as Hardware-assisted virtualization): This feature was created to remove the necessity of changing the OS for virtualizing it. On Intel processors, it is known as Intel VT-x. All recent processor families support this feature, including Core i3, Core i5, and Core i7. The complete list of processors and features can be found at http://ark.intel.com/Products/VirtualizationTechnology. You can also use the tool available at https://downloadcenter.intel.com/Detail_Desc.aspx?ProductID=1881&DwnldID=7838 to check whether your processor meets this requirement. On AMD processors, this technology is known as AMD-V. Like Intel, all recent processor families support this feature. AMD provides a tool to check processor compatibility that can be downloaded at http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization.

Data Execution Prevention (DEP): This is a security feature that marks memory pages as either executable or nonexecutable. For Hyper-V to run, this option must be enabled in the System BIOS. On Intel-based processors, this feature is called the Execute Disable bit (Intel XD bit); on AMD-based processors, it is called the No Execute bit (AMD NX bit). This configuration will vary from one System BIOS to another. Check with your hardware vendor how to enable it in the System BIOS.

x64 (64-bit) based processor: This processor feature uses 64-bit memory addresses. Although you may find that all new processors are x64, you might want to check if this is true before starting your implementation. The compatibility checkers above, from Intel and AMD, will show you if your processor is x64.

Second Level Address Translation (SLAT): As discussed before, SLAT is not a requirement for Hyper-V to work. This feature provides much more performance on the VMs as it removes the need for translating physical and virtual pages of memory. It is highly recommended to have the SLAT feature on the processor, as it provides more performance on high-performance systems. As also discussed before, SLAT is a requirement if you want to use Hyper-V on Windows 8 or 8.1. To check if your processor has the SLAT feature, use the Sysinternals tool Coreinfo, which can be downloaded at http://technet.microsoft.com/en-us/sysinternals/cc835722.aspx.

There are some specific processor features that are not used exclusively for virtualization. But when the VM is initiated, it will use these specific features from the processor. If the VM is initiated and these features are allocated on the guest OS, you can't simply remove them. This is a problem if you are going to Live Migrate this VM from one host to another host; if these specific features are not available, you won't be able to perform the operation. At this moment, you have to understand that Live Migration moves a powered-on VM from one host to another. If you try to Live Migrate a VM between hosts with different processor types, you may be presented with an error. Live Migration is only permitted between the same processor vendor: Intel-Intel or AMD-AMD. Intel-AMD Live Migration is not allowed under any circumstance. If the processor is the same on both hosts, Live Migration and Share Nothing Live Migration will work without problems. But even within the same vendor, there can be different processor families. In this case, you can remove these specific features from the Virtual Processor presented to the VM. To do that, open Hyper-V Manager | Settings...
| Processor | Processor Compatibility. Mark the Migrate to a physical computer with a different processor version option. This option is only available if the VM is powered off. Keep in mind that enabling this option will remove processor-specific features for the VM. If you are going to run an application that requires these features, they will not be available and the application may not run. Now that you have checked all the requirements, you can start planning your server for virtualization with Hyper-V. This is true from the perspective that you understand how Hyper-V works and what are the requirements for it to work. But there is another important subject that you should pay attention to when planning your server: memory. Memory configuration I believe you have heard this one before "The application server is running under performance". In the virtualization world, there is an obvious answer to it: give more virtual hardware to the VM. Although it seems to be the logical solution, the real effect can be totally opposite. During the early days, when servers had just a few sockets, processors, and cores, a single channel made the communication between logical processors and memory. But server hardware has evolved, and today, we have servers with 256 logical processors and 4 TB of RAM. To provide better communication between these components, a new concept emerged. Modern servers with multiple logical processors and high amount of memory use a new design called Non-Uniform Memory Access (NUMA) architecture. Non-Uniform Memory Access (NUMA) architecture NUMA is a memory design that consists of allocating memory to a given node, or a cluster of memory and logical processors. Accessing memory from a processor inside the node is notably faster than accessing memory from another node. If a processor has to access memory from another node, the performance of the process performing the operation will be affected. Basically, to solve this equation you have to ensure that the process inside the guest VM is aware of the NUMA node and is able to use the best available option: When you create a virtual machine, you decide how many virtual processors and how much virtual RAM this VM will have. Usually, you assign the amount of RAM that the application will need to run and meet the expected performance. For example, you may ask a software vendor on the application requirements and this software vendor will say that the application would be using at least 8 GB of RAM. Suppose you have a server with 16 GB of RAM. What you don't know is that this server has four NUMA nodes. To be able to know how much memory each NUMA node has, you must divide the total amount of RAM installed on the server by the number of NUMA nodes on the system. The result will be the amount of RAM of each NUMA node. In this case, each NUMA node has a total of 4 GB of RAM. Following the instructions of the software vendor, you create a VM with 8 GB of RAM. The Hyper-V standard configuration is to allow NUMA spanning, so you will be able to create the VM and start it. Hyper-V will accommodate 4 GB of RAM on two NUMA nodes. This NUMA spanning configuration means that a processor can access the memory on another NUMA node. As mentioned earlier, this will have an impact on the performance if the application is not aware of it. On Hyper-V, prior to the 2012 version, the guest OS was not informed about the NUMA configuration. 
Basically, in this case, the guest OS would see one NUMA node with 8 GB of RAM, and the allocation of memory would be made without NUMA restrictions, impacting the final performance of the application. Hyper-V 2012 and 2012 R2 have the same feature—the guest OS will see the virtual NUMA (vNUMA) presented to the child partition. With this feature, the guest OS and/or the application can make a better choice on where to allocate memory for each process running on this VM. NUMA is not a virtualization technology. In fact, it has been used for a long time, and even applications like SQL Server 2005 already used NUMA to better allocate the memory that its processes are using. Prior to Hyper-V 2012, if you wanted to avoid this behavior, you had two choices: Create the VM and allocate the maximum vRAM of a single NUMA node for it, as Hyper-V will always try to allocate the memory inside of a single NUMA node. In the above case, the VM should not have more than 4 GB of vRAM. But for this configuration to really work, you should also follow the next choice. Disable NUMA Spanning on Hyper-V. With this configuration disabled, you will not be able to run a VM if the memory configuration exceeds a single NUMA node. To do this, you should clear the Allow virtual machines to span physical NUMA nodes checkbox on Hyper-V Manager | Hyper-V Settings... | NUMA Spanning. Keep in mind that disabling this option will prevent you from running a VM if no nodes are available. You should also remember that even with Hyper-V 2012, if you create a VM with 8 GB of RAM using two NUMA nodes, the application on top of the guest OS (and the guest OS) must understand the NUMA topology. If the application and/or guest OS are not NUMA aware, vNUMA will not have effect and the application can still have performance issues. At this point you are probably asking yourself "How do I know how many NUMA nodes I have on my server?" This was harder to find in the previous versions of Windows Server and Hyper-V Server. In versions prior to 2012, you should open the Performance Monitor and check the available counters in Hyper-V VM Vid NUMA Node. The number of instances represents the number of NUMA Nodes. In Hyper-V 2012, you can check the settings for any VM. Under the Processor tab, there is a new feature available for NUMA. Let's have a look at this screen to understand what it represents: In Configuration, you can easily confirm how many NUMA nodes the host running this VM has. In the case above, the server has only 1 NUMA node. This means that all memory will be allocated close to the processor. Multiple NUMA nodes are usually present on servers with high amount of logical processors and memory. In the NUMA topology section, you can ensure that this VM will always run with the informed configuration. This is presented to you because of a new Hyper-V 2012 feature called Share Nothing Live Migration, which will be explained in detail later. This feature allows you to move a VM from one host to another without turning the VM off, with no cluster and no shared storage. As you can move the VM turned on, you might want to force the processor and memory configuration, based on the hardware of your worst server, ensuring that your VM will always meet your performance expectations. The Use Hardware Topology button will apply the hardware topology in case you moved the VM to another host or in case you changed the configuration and you want to apply the default configuration again. 
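Besides Performance Monitor and the VM settings dialog, the Hyper-V PowerShell module that ships with Hyper-V 2012 and later can answer the same question from the command line. The following sketch counts the NUMA nodes, estimates the memory per node, and, if required, disables NUMA spanning; the property names reflect the 2012/2012 R2 module and should be verified on your build:

# Count the NUMA nodes exposed by the host
$numaNodes = @(Get-VMHostNumaNode)
"NUMA nodes on this host: $($numaNodes.Count)"

# Rough estimate of memory per node: total host memory divided by node count
# (Get-VMHost here is the Hyper-V module cmdlet, not the PowerCLI cmdlet of the same name)
$vmHost = Get-VMHost
$gbPerNode = [math]::Round(($vmHost.MemoryCapacity / 1GB) / $numaNodes.Count, 1)
"Approximate memory per NUMA node: $gbPerNode GB"

# Optionally disable NUMA spanning (equivalent to clearing the checkbox in Hyper-V Settings)
# Set-VMHost -NumaSpanningEnabled $false

On hosts running versions prior to 2012, fall back to the Performance Monitor counters mentioned above.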
To summarize, if you want to make sure that your VM will not have performance problems, you should check how many NUMA nodes your server has and divide the total amount of memory by it; the result is the total memory on each node. Creating a VM with more memory than a single node will make Hyper-V present a vNUMA to the guest OS. Ensuring that the guest OS and applications are NUMA aware is also important, so that the guest OS and application can use this information to allocate memory for a process on the correct node. NUMA is important to ensure that you will not have problems because of host configuration and misconfiguration on the VM. But, in some cases, even when planning the VM size, you will come to a moment when the VM memory is stressed. In these cases, Hyper-V can help with another feature called Dynamic Memory. Summary In this we learned about the Hypervisor architecture and different Hypervisor types. We explored in brief about Microkernel and Monolithic Type 1 Hypervisors. In addition to this, this article also explains the Hyper-V requirements and processor features, Memory configuration and the NUMA architecture. Resources for Article: Further resources on this subject: Planning a Compliance Program in Microsoft System Center 2012 [Article] So, what is Microsoft © Hyper-V server 2008 R2? [Article] Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]

Deploying New Hosts with vCenter

Packt
04 Jun 2015
8 min read
In this article by Konstantin Kuminsky author of the book, VMware vCenter Cookbook, we will review some options and features available in vCenter to improve an administrator's efficiency. (For more resources related to this topic, see here.) Deploying new hosts faster with scripted installation Scripted installation is an alternative way to deploy ESXi hosts. It can be used when several hosts need to be deployed or upgraded. The installation script contains ESXi settings and can be accessed by a host during the ESXi boot from the following locations: FTP HTTP or HTTPS NFS USB flash drive or CD-ROM How to do it... The following sections describe the process of creating an installation script and using it to boot the ESXi host. Creating an installation script An installation script contains installation options for ESXi. It's a text file with the .cfg extension. The best way to create an installation script is to use the default script supplied with the ESXi installer and modify it. The default script is located in the /etc/vmware/weasel/ folder location and is called ks.cfg. Commands that can be modified include, but are not limited to: The install, installorupgrade, or upgrade commands define the ESXi disk—location, where the installation or upgrade will be installed. The available options are: --disk: This option is the disk name which can be specified as path (/vmfs/devices/disks/vmhbaX:X:X), VML name (vml.xxxxxxxx) or as LUN UID (vmkLUM_UID) –overwritevmfs: This option wipes the existing datastore. --preservevmfs: This option keeps the existing datastore. --novmfsondisk: This option prevents a new partition from being created. The Network command, which specifies the network settings. Most of the available options are self-explanatory: --bootproto=[dhcp|static] --device: MAC address of NIC to use --ip --gateway --nameserver --netmask --hostname --vlanid A full list of installation and upgrade commands can be found in the vSphere5 documentation on the VMware website at https://www.vmware.com/support/pubs/. Use the installation script to configure ESXi In order to use the installation script, you will need to use additional ESXi boot options. Boot a host from the ESXi installation disk. When the ESXi installer screen appears, press Shift + O to provide additional boot options. In the command prompt, type the following: ks=<location of the script> <additional boot options> The valid locations are as follows: ks=cdrom:/path ks=file://path ks=protocol://path ks=usb:/path The additional options available are as follows: gateway: This option is the default gateway ip: This option is the IP address nameserver: This option is the DNS server netmask: This option is the subnet mask vlanid: This option is the VLAN ID netdevice: This option is the MAC address of NIC to use bootif: This option is the MAC address of NIC to use in PXELINUX format For example, for the HTTP location, the command will look like this: ks=http://XX.XX.XX.XX/scripts/ks-v1.cfg nameserver=XX.XX.XX.XX ip=XX.XX.XX.XX netmask=255.255.255.0 gateway=XX.XX.XX.XX Deploying new hosts faster with auto deploy vSphere Auto Deploy is VMware's solution to simplify the deployment of large numbers of ESXi hosts. It is one of the available options for ESXi deployment along with an interactive and scripted installation. The main difference of Auto Deploy compared to other deployment options is that the ESXi configuration is not stored on the host's disk. Instead, it's managed with image and host profiles by the Auto Deploy server. 
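Before looking at the Auto Deploy prerequisites, it may help to see the scripted-installation commands from the previous section assembled into a complete file. The following is a minimal, hypothetical ks.cfg; the root password, disk selection, and all network values are placeholders that must be adapted to your environment:

# Accept the VMware EULA
vmaccepteula
# Set the root password (placeholder value)
rootpw MySecretPass1!
# Install on the first local disk and wipe any existing VMFS datastore
install --firstdisk --overwritevmfs
# Static network configuration (placeholder addresses)
network --bootproto=static --ip=192.168.1.50 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname=esxi3.lab.local
# Reboot once the installation completes
reboot

The --firstdisk option used here simply targets the first detected local disk; you could equally use the --disk option with an explicit device path, as described earlier.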
Getting ready

Before using Auto Deploy, confirm the following: The Auto Deploy server is installed and registered with vCenter; it can be installed as a standalone server or as part of the vCenter installation. A DHCP server exists in the environment and is configured to point to the TFTP server for PXE boot (option 66) with the boot filename undionly.kpxe.vmw-hardwired. The TFTP server that will be used for PXE boot exists and is configured properly. The machine where the Auto Deploy cmdlets will run has the following installed: Microsoft .NET 2.0 or later, PowerShell 2.0 or later, and PowerCLI including the Auto Deploy cmdlets. New hosts that will be provisioned with Auto Deploy must meet the hardware requirements for ESXi 5, have network connectivity to vCenter (preferably 1 Gbps or higher), and have PXE boot enabled.

How to do it...

Once the prerequisites are met, the following steps are required to start deploying hosts.

Configuring the TFTP server: In order to configure the TFTP server with the correct boot image for ESXi, execute the following steps: In vCenter, go to Home | Auto Deploy. Switch to the Administration tab. From the Auto Deploy page, click on Download TFTP Boot ZIP. Download the file and unzip it to the appropriate folder on the TFTP server.

Creating an image profile: Image profiles are created using Image Builder PowerCLI cmdlets. Image Builder requires PowerCLI and can be installed on a machine that's used to run administrative tasks. It doesn't have to be a vCenter server or an Auto Deploy server; the only requirement for this machine is that it must have access to the software depot—a file server that stores image profiles. Image profiles can be created from scratch or by cloning an existing profile. The following steps outline the process of creating an image profile by cloning. The steps assume that the Image Builder has been installed and that the appropriate software depot has been downloaded from the VMware website by going to http://www.vmware.com/downloads and searching for the software depot. Cloning an existing profile included in the depot is the easiest way to create a new profile. The steps to do so are as follows: Add a depot with the image profile to be cloned: Add-EsxSoftwareDepot -DepotUrl <Path to software depot> Find the name of the profile to be cloned using Get-EsxImageProfile. Clone the profile: New-EsxImageProfile -CloneProfile <Existing profile name> -Name <New profile name> Add a software package to the new image profile: Add-EsxSoftwarePackage -ImageProfile <New profile name> -SoftwarePackage <Package> At this point, the software package will be validated and in case of errors, or if there are any dependencies that need to be resolved, an appropriate message will be displayed.

Assigning an image profile to hosts: To create a rule that assigns an image profile to a host, execute the following steps: Connect to vCenter with PowerCLI: Connect-VIServer <vCenter IP or FQDN> Add the software depot with the correct image profile to the PowerCLI session: Add-EsxSoftwareDepot <depot URL> Locate the image profile using the Get-EsxImageProfile cmdlet. Define a rule that assigns hosts with certain attributes to an image profile. For example, for hosts with IP addresses in a range, run the following commands: New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20" Add-DeployRule <Rule name>

Assigning a host profile to hosts: Optionally, an existing host profile can be assigned to hosts.
To accomplish this, execute the following steps: Connect to vCenter with PowerCLI: Connect-VIServer <vCenter IP or FQDN> Locate the host profile name using the Get-VMhostProfile command. Define a rule that assigns hosts with certain attributes to a host profile. For example, for hosts with IP addresses for a range, run the following command: New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20" Add-DeployRule <Rule name> Assigning a host to a folder or cluster in vCenter To make sure a host is placed in a certain folder or cluster once it boots, do the following: Connect to vCenter with PowerCLI: Connect-VIServer <vCenter IP or FQDN> Define a rule that assigns hosts with certain attributes to a folder or cluster. For example, for hosts with IP addresses for a range, run the following command: New-DeployRule -Name <Rule name> -Item <Folder name> -Pattern "ipv4=192.168.1.10-192.168.1.20" Add-DeployRule <Rule name> If a host is assigned to a cluster it inherits that cluster's host profile. How it works... Auto Deploy utilizes the PXE boot to connect to the Auto Deploy server and get an image profile, vCenter location, and optionally, host profiles. The detailed process is as follows: The host gets gPXE executable and gPXE configuration files from the PXE TFTP server. As gPXE executes, it uses instructions from the configuration file to query the Auto Deploy server for specific information. The Auto Deploy server returns the requested information specified in the image and host profiles. The host boots using this information. Auto Deploy adds a host to the specified vCenter server. The host is placed in maintenance mode when additional information such as IP address is required from the administrator. To exit maintenance mode, the administrator will need to provide this information and reapply the host profile. When a new host boots for the first time, vCenter creates a new object and stores it together with the host and image profiles in the database. For any subsequent reboots, the existing object is used to get the correct host profile and any changes that have been made. More details can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/. Summary In this article we learnt how new hosts can be deployed with scripted installation and auto deploy techniques. Resources for Article: Further resources on this subject: VMware vRealize Operations Performance and Capacity Management [Article] Backups in the VMware View Infrastructure [Article] Application Packaging in VMware ThinApp 4.7 Essentials [Article]

Upgrading VMware Virtual Infrastructure Setups

Packt
04 Jun 2015
13 min read
In this article by Kunal Kumar and Christian Stankowic, authors of the book VMware vSphere Essentials, you will learn how to correctly upgrade VMware virtual infrastructure setups.

(For more resources related to this topic, see here.)

This article will cover the following topics:

- Prerequisites and preparations
- Upgrading vCenter Server
- Upgrading ESXi hosts
- Additional steps after upgrading

An example scenario

Let's start with a realistic scenario that is often found in data centers these days. I assume that your virtual infrastructure consists of components such as:

- Multiple VMware ESXi hosts
- Shared storage (NFS or Fibre Channel)
- VMware vCenter Server and vSphere Update Manager

In this example, a cluster consisting of two ESXi hosts (esxi1 and esxi2) is running VMware ESXi 5.5. On a virtual machine (vc1), a Microsoft Windows Server system is running vCenter Server and vSphere Update Manager (vUM) 5.5. This article is written as a step-by-step guide to upgrade these particular vSphere components to the most recent version, which is 6.0.

Example scenario consisting of two ESXi hosts with shared storage and vCenter Server

Prerequisites and preparations

Before we start the upgrade, we need to fulfill the following prerequisites:

- Ensure that the new ESXi version is supported by the hardware vendor
- Ensure that the hardware in use is supported by VMware for the new ESXi version
- Create a backup of the ESXi images and vCenter Server

First of all, we need to refer to our hardware vendor's support matrix to ensure that our physical hosts running VMware ESXi are supported in the new release. Hardware vendors evaluate their systems before approving upgrades to customers. As an example, Dell offers a comprehensive list for their PowerEdge servers at http://topics-cdn.dell.com/pdf/vmware-esxi-6.x_Reference%20Guide2_en-us.pdf.

Here are some additional links for alternative hardware vendors:

- Hewlett-Packard: http://h17007.www1.hp.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
- IBM: http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/vmware.html
- Cisco UCS: http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html

When using Fibre Channel-based storage systems, you might also need to check that vendor's support matrix. Please check your storage vendor's website or contact their support for this information.

VMware also offers a comprehensive list of tested hardware setups at http://www.vmware.com/resources/compatibility/pdf/vi_systems_guide.pdf. In their Compatibility Guide portal, VMware enables customers to browse for particular server systems; this information might be more recent than the aforementioned PDF file.
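Before checking these matrices, it helps to have an inventory of the hosts in scope. The following PowerCLI sketch pulls the hardware vendor, model, and currently installed ESXi build for every host managed by vCenter. The FQDN is an assumed example, and the Manufacturer and Model property names are as exposed by recent PowerCLI releases and may differ slightly in older ones.

    # Connect to the vCenter Server that manages the hosts to be upgraded (example FQDN)
    Connect-VIServer vc1.lab.local

    # Report hardware vendor/model and the currently installed ESXi version/build
    Get-VMHost | Select-Object Name, Manufacturer, Model, Version, Build |
        Sort-Object Name | Format-Table -AutoSize

The vendor and model values can then be matched against the hardware vendor's support matrix and the VMware Compatibility Guide.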
Creating a backup of ESXi

Before upgrading our ESXi hosts, we also need to make sure that we have a valid backup. In case things go wrong, we might need this backup to restore the previous ESXi version. For creating a backup of the hard disk ESXi is installed on, there are plenty of tools on the market that implement image-based backups. One possible solution, which is free, is Clonezilla. Clonezilla is a Linux-based live medium that can easily create backup images of hard disks.

To create a backup using Clonezilla, proceed with the following steps:

1. Download the Clonezilla ISO image from their website. Make sure you select the AMD64 architecture and the ISO file format.
2. Enable maintenance mode for the particular ESXi host. Make sure you migrate virtual machines to alternative nodes or power them off.
3. Connect the ISO file to the ESXi host and boot from CD.
4. Also, connect a USB drive to the host. This drive will be used to store the backup.
5. Boot from CD and select Clonezilla live. Wait until the boot process completes.
6. When prompted, select your keyboard layout (for example, en_US.utf8) and select Don't touch keymap.
7. In the Start Clonezilla menu, select Start_Clonezilla and device-image. This mode creates an image of the medium ESXi is running on and stores it on the USB storage.
8. Select local_dev and choose the USB storage connected to the host from the list in the next step.
9. Select a folder for storing the backup (optional).
10. Select Beginner and savedisk to store the entire disk ESXi resides on as an image.
11. Enter a name for the backup.
12. Select the hard disk containing the ESXi installation and proceed. You can also specify whether Clonezilla should check the image after creating it (highly recommended).
13. Afterwards, confirm the backup process. The backup job will start immediately.
14. Once the backup completes, select reboot from the menu to reboot the host.

A running backup job in Clonezilla

To restore a backup using Clonezilla, perform the following steps after booting the Clonezilla media:

1. Complete steps 1 to 8 from the previous guide.
2. Select Beginner and restoredisk to restore the entire disk.
3. Select the image from the USB storage and the hard drive the image should be restored on.
4. Acknowledge the restore process.
5. Once the restoration completes, select reboot from the menu to reboot the host.

For the system running vCenter Server, we can easily create a VM snapshot, or also use Clonezilla if a physical machine is used instead.
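As a quick illustration of the snapshot approach, the following hedged PowerCLI sketch creates a snapshot of the vCenter Server VM (vc1 in our scenario) before the upgrade. The connection FQDN, snapshot name, and description are example values, not requirements.

    # Take a pre-upgrade snapshot of the vCenter Server VM (vc1 from the example scenario)
    Connect-VIServer vc1.lab.local
    New-Snapshot -VM vc1 -Name "pre-6.0-upgrade" -Description "Rollback point before upgrading vCenter Server to 6.0"

    # Verify that the snapshot exists
    Get-Snapshot -VM vc1 | Select-Object VM, Name, Created

Remove the snapshot once the upgrade has been validated, as a long-lived snapshot on a busy vCenter Server VM can grow quickly.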
The upgrade path

It is very important to execute the particular upgrade tasks in the following order:

1. Upgrade VMware vCenter Server.
2. Upgrade the particular ESXi hosts.
3. Reformat or upgrade the VMFS data stores (if applicable).
4. Upgrade additional components, such as distributed virtual switches or additional appliances.

The first step is to upgrade vCenter Server. This is necessary to ensure that we are able to manage our ESXi hosts after upgrading them. Newer vCenter Server versions are downward compatible with numerous ESXi versions. To double-check this, we can look up the particular version support by browsing VMware's Product Interoperability Matrix on their website. Click on Solution Interoperability, choose VMware vCenter Server from the drop-down menu, and select the version you want to upgrade to. In our example, we will choose the most recent release, 6.0, and select VMware ESX/ESXi from the Add Platform/Solution drop-down menu.

VMware Product Interoperability Matrix for vCenter Server and ESXi

vCenter Server 6.0 supports management of VMware ESXi 5.0 and higher. We need to ensure the same support agreement for any other products in use, such as these:

- VMware vSphere Update Manager
- VMware vCenter Operations (if applicable)
- VMware vSphere Data Protection

In other words, we need to upgrade all additional vSphere and vCenter Server components to ensure full functionality.

Upgrading vCenter Server

Upgrading vCenter Server is the most crucial step, as this is our central management platform. The upgrade process varies according to the chosen architecture. Upgrading Windows-based vCenter Server installations is quite easy, as the installer supports in-place upgrades. When using the vCenter Server Appliance (vCSA), there is no in-place upgrade; it is necessary to deploy a new vCSA and import the settings from the old installation. This process varies between the particular vCSA versions. For upgrading from vCSA 5.0 or 5.1 to 5.5, VMware offers a comprehensive article at http://kb.vmware.com/kb/2058441.

To upgrade vCenter Server 5.x on Windows to 6.0 using the Easy Install method, proceed with the following steps:

1. Mount the vCenter Server 6.x installation media (VMware-VIMSetup-all-6.0.0-xxx.iso) on the server running vCenter Server.
2. Wait until the installation wizard starts; if it doesn't start, double-click on the CD/DVD icon in Windows Explorer.
3. Select vCenter Server for Windows and click on Install to start the installation utility.
4. Accept the End-User License Agreement (EULA).
5. Enter the current vCenter Single Sign-On password and proceed with the next step.
6. The installation utility begins to execute pre-upgrade checks; this might take some time. If you're running vCenter Server along with Microsoft SQL Server Express Edition, the database will be migrated to VMware vPostgres.
7. Review and, if necessary, change the network ports of your vCenter Server installation.
8. If needed, change the directories for vCenter Server and the embedded Platform Services Controller (PSC).
9. Carefully review the upgrade information displayed in the wizard. Also verify that you have created a backup of your system and the database. Then click on Upgrade to start the upgrade.

After the upgrade, vSphere Web Client can be used to connect to the upgraded vCenter Server system. Also note that the Microsoft SQL Server Express Edition database is no longer used.

Upgrading ESXi hosts

Upgrading ESXi hosts can be done using two methods:

- Using the installation media from the VMware website
- vSphere Update Manager

If you need to upgrade a large number of ESXi hosts, I recommend that you use vSphere Update Manager to save time, as it can automate the particular steps. For smaller landscapes, using the installation media is easier. For using vUM to upgrade ESXi hosts, VMware offers a guide in their knowledge base at http://kb.vmware.com/kb/1019545.

In order to upgrade an ESXi host using the installation media, perform the following steps:

1. First of all, enable maintenance mode for the particular ESXi host. Make sure you migrate the virtual machines to alternative nodes or power them off.
2. Connect the installation media to the ESXi host and boot from CD.
3. Once the setup utility becomes available, press Enter to start the installation wizard.
4. Accept the End-User License Agreement (EULA) by pressing F11.
5. Select the disk containing the current ESXi installation.
6. In the ESXi found dialog, select Upgrade.
7. Review the installation information and press F11 to start the upgrade.
8. After the installation completes, press Enter to reboot the system.

After the system has rebooted, it will automatically reconnect to vCenter Server. Select the particular ESXi host to see whether the version has changed. In this example, the ESXi host has been successfully upgraded to version 6.0:

Version information of an updated ESXi host running release 6.0

Repeat all of these steps for all the remaining ESXi hosts. Note that running an ESXi cluster with mixed versions should only be a temporary solution. It is not recommended to mix ESXi releases in production usage, as some ESXi features might not perform as expected in mixed clusters.
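In larger environments, the maintenance-mode handling around step 1 and the post-reboot version check can be scripted with PowerCLI. The following is only a sketch under the assumption of a DRS cluster that can migrate running VMs automatically; the host name comes from the example scenario, and the exact evacuation behavior depends on the cluster's DRS automation level.

    # Put the first host into maintenance mode before the upgrade (esxi1 from the scenario);
    # in a fully automated DRS cluster, running VMs are vMotioned to other hosts,
    # and -Evacuate asks vCenter to take care of powered-off and suspended VMs as well
    Set-VMHost -VMHost esxi1 -State Maintenance -Evacuate

    # ...perform the media-based upgrade and reboot the host...

    # After the host reconnects, confirm the new version and exit maintenance mode
    Get-VMHost esxi1 | Select-Object Name, Version, Build
    Set-VMHost -VMHost esxi1 -State Connected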
Additional steps

After upgrading vCenter Server and our ESXi hosts, there are additional steps that can be done:

- Reformatting or upgrading VMFS data stores
- Upgrading distributed virtual switches
- Upgrading virtual machine hardware versions

Upgrading VMFS data stores

VMware's VMFS (Virtual Machine File System) is the most widely used filesystem for shared storage. It can be used with local storage, iSCSI, or Fibre Channel storage. Particular ESX(i) releases support different versions of VMFS. Let's take a look at the major differences:

- VMFS 2 (supported by ESX 2.x; read-only on ESX(i) 3.x/4.x):
  - Block sizes: 1, 8, 64, or 256 MB
  - Maximum file size: 456 GB (1 MB block), 2.5 TB (8 MB block), 28.5 TB (64 MB block), 64 TB (256 MB block)
  - Files per volume: ca. 256 (no directories supported)
- VMFS 3 (supported by ESX(i) 3.x and higher):
  - Block sizes: 1, 2, 4, or 8 MB
  - Maximum file size: 256 GB (1 MB block), 512 GB (2 MB block), 1 TB (4 MB block), 2 TB (8 MB block)
  - Files per volume: ca. 37,720
- VMFS 5 (supported by ESXi 5.x and higher):
  - Block size: 1 MB (fixed)
  - Maximum file size: 62 TB
  - Files per volume: ca. 130,690

When migrating from an ESXi version such as 4.x or older, it is possible to upgrade VMFS data stores to version 5. VMFS 2 cannot be upgraded to VMFS 5 directly; it first needs to be upgraded to VMFS 3. To enable this upgrade, a VMFS 2 volume must not have a block size larger than 8 MB, as VMFS 3 only supports block sizes up to 8 MB.

In comparison with older VMFS versions, VMFS 5 supports larger file sizes and more files per volume. I highly recommend that you reformat VMFS data stores instead of upgrading them, as the upgrade does not change the filesystem's block size. Because of this limitation, you won't benefit from all the new VMFS 5 features after an upgrade. To upgrade a VMFS 3 volume to VMFS 5, perform these steps:

1. Log in to vSphere Web Client.
2. Go to the Storage pane.
3. Click on the data store to upgrade and go to Settings under the Manage tab.
4. Click on Upgrade to VMFS5.
5. Then click on OK to start the upgrade.
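To see which datastores are still on an older VMFS version before deciding whether to reformat or upgrade, a quick PowerCLI query can help. This is a hedged sketch; the FileSystemVersion and Type property names are as exposed by recent PowerCLI releases and may differ in older ones.

    # List all VMFS datastores with their filesystem version and capacity
    Get-Datastore |
        Where-Object { $_.Type -eq "VMFS" } |
        Select-Object Name, Type, FileSystemVersion, CapacityGB |
        Sort-Object FileSystemVersion, Name

Datastores reporting a filesystem version below 5 are candidates for reformatting (preferred) or an in-place upgrade as described above.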
VMware vNetwork Distributed Switch

When using vNetwork Distributed Switches (often called dvSwitches), it is recommended to upgrade them to the latest version. In comparison with vNetwork Standard Switches (also called vSwitches), dvSwitches are created at the vCenter Server level and replicated to all subscribed ESXi hosts. When creating a dvSwitch, the administrator can choose between various dvSwitch versions. After upgrading vCenter Server and the ESXi hosts, additional features can be unlocked by upgrading the dvSwitch. Let's take a look at some commonly used dvSwitch versions:

- VDS 5.0 (compatible with ESXi 5.0 and higher): common features (Network I/O Control, load-based teaming, traffic shaping, VM port blocking, private VLANs, network vMotion, and port policies) plus network resource pools, NetFlow, and port mirroring
- VDS 5.1 (compatible with ESXi 5.1 and higher): VDS 5.0 features plus management network rollback, network health checks, enhanced port mirroring, and LACP (Link Aggregation Control Protocol)
- VDS 5.5 (compatible with ESXi 5.5 and higher): VDS 5.1 features plus traffic filtering and enhanced LACP functionality
- VDS 6.0 (compatible with ESXi 6.0): VDS 5.5 features plus multicast snooping and Network I/O Control version 3 (bandwidth guarantees)

It is also possible to keep using the old version, as vCenter Server is downward compatible with numerous dvSwitch versions. Upgrading a dvSwitch is a task that cannot be undone. During the upgrade, it is possible that virtual machines will lose their network connectivity for a few seconds. After the upgrade, older ESXi hosts will no longer be able to participate in the distributed switch setup.

To upgrade a dvSwitch, perform the following steps:

1. Log in to vSphere Web Client.
2. Go to the Networking pane and select the dvSwitch to upgrade.
3. From the dvSwitch's context or Actions menu, select Upgrade | Upgrade Distributed Switch, then follow the wizard to choose the target version and complete the upgrade.

After upgrading the dvSwitch, you will notice that the version has changed:

Version information of a dvSwitch running VDS 6.0

Virtual machine hardware version

Every virtual machine is created with a specific virtual machine hardware version (also called VMHW or vHW). A vHW version defines a set of limitations and features, such as available controller types or network cards. To benefit from new virtual machine features, it is sufficient to upgrade the vHW version. ESXi hosts support a range of vHW versions, but it is always advisable to use the most recent one. Once a vHW version is upgraded, the particular virtual machine can no longer be started on older ESXi versions that don't support that vHW version. Let's take a deeper look at some popular vHW versions:

- vSphere 4.1 (maximum vHW 7): 8 virtual CPUs, 255 GB virtual RAM, 2 TB vDisk size, 4 SCSI adapters with 60 targets, no SATA support, 3 parallel and 4 serial ports, 1 USB controller with 20 devices per VM (USB 1.x and 2.x)
- vSphere 5.1 (maximum vHW 9): 64 virtual CPUs, 1 TB virtual RAM, 2 TB vDisk size, 4 SCSI adapters with 60 targets, no SATA support, 3 parallel and 4 serial ports, 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)
- vSphere 5.5 (maximum vHW 10): 64 virtual CPUs, 1 TB virtual RAM, 62 TB vDisk size, 4 SCSI adapters with 60 targets, 4 SATA adapters with 30 targets, 3 parallel and 4 serial ports, 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)
- vSphere 6.0 (maximum vHW 11): 128 virtual CPUs, 4 TB virtual RAM, 62 TB vDisk size, 4 SCSI adapters with 60 targets, 4 SATA adapters with 30 targets, 3 parallel and 32 serial ports, 1 USB controller with 20 devices per VM (USB 1.x, 2.x, and 3.x)

The vHW upgrade cannot be undone. Also, it might be necessary to update VMware Tools and the drivers of the operating system running in the virtual machine.

Summary

In this article, we learned how to correctly upgrade VMware virtual infrastructure setups. If you want to know more about VMware vSphere and virtual infrastructure setups, go ahead and get your copy of Packt Publishing's book VMware vSphere Essentials.

Resources for Article:

Further resources on this subject:

- Networking [article]
- The Design Documentation [article]
- VMware View 5 Desktop Virtualization [article]