How-To Tutorials - Virtualization

Importance of Windows RDS in Horizon View

Packt
30 Oct 2014
15 min read
In this article, Jason Ventresco, the author of VMware Horizon View 6 Desktop Virtualization Cookbook, explains Windows Remote Desktop Services (RDS) and how it is implemented in Horizon View. He discusses configuring the Windows RDS server and creating an RDS farm in Horizon View.

Configuring the Windows RDS server for use with Horizon View

This recipe provides an introduction to the minimum steps required to configure Windows RDS and integrate it with our Horizon View pod. For a more in-depth discussion on Windows RDS optimization and management, consult the Microsoft TechNet page for Windows Server 2012 R2 (http://technet.microsoft.com/en-us/library/hh801901.aspx).

Getting ready

VMware Horizon View supports the following versions of Windows Server for use with RDS:

- Windows Server 2008 R2: Standard, Enterprise, or Datacenter, with SP1 or later installed
- Windows Server 2012: Standard or Datacenter
- Windows Server 2012 R2: Standard or Datacenter

The examples shown in this article were performed on Windows Server 2012 R2. Additionally, all of the applications required have already been installed on the server, which in this case included Microsoft Office 2010. Microsoft Office has specific licensing requirements when used with a Windows RDS server; consult Microsoft's Licensing of Microsoft Desktop Application Software for Use with Windows Server Remote Desktop Services document (http://www.microsoft.com/licensing/about-licensing/briefs/remote-desktop-services.aspx) for additional information.

The Windows RDS feature requires a licensing server component called the Remote Desktop Licensing role service. For reasons of availability, it is not recommended that you install it on the RDS host itself, but rather on an existing server that serves some other function, or even on a dedicated server if possible. Ideally, the RDS licensing role should be installed on multiple servers for redundancy. The Remote Desktop Licensing role service is different from the Microsoft Windows Key Management System (KMS), as it is used solely for Windows RDS hosts. Consult the Microsoft TechNet article, RD Licensing Configuration on Windows Server 2012 (http://blogs.technet.com/b/askperf/archive/2013/09/20/rd-licensing-configuration-on-windows-server-2012.aspx), for the steps required to install the Remote Desktop Licensing role service. Additionally, consult the Microsoft document Licensing Windows Server 2012 R2 Remote Desktop Services (http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf) for information about the licensing options for Windows RDS, which include both per-user and per-device options.

Windows RDS host – hardware recommendations

The following resources represent a starting point for assigning CPU and RAM resources to Windows RDS hosts. The actual resources required will vary based on the applications being used and the number of concurrent users, so it is important to monitor server utilization and adjust the CPU and RAM specifications if required.
The following are the requirements:

- One vCPU for every 15 concurrent RDS sessions
- A base amount of RAM equal to 2 GB per vCPU, plus 64 MB of additional RAM for each concurrent RDS session
- Additional RAM equal to the application requirements, multiplied by the estimated number of concurrent users of the application
- Sufficient hard drive space to store RDS user profiles, which will vary based on the configuration of the Windows RDS host. Windows RDS supports multiple options to control user profile configuration and growth, including an RD user home directory, RD roaming user profiles, and mandatory profiles. For information about these and other options, consult the Microsoft TechNet article, Manage User Profiles for Remote Desktop Services, at http://technet.microsoft.com/en-us/library/cc742820.aspx. This space is only required if you intend to store user profiles locally on the RDS hosts.

Horizon View Persona Management is not supported and will not work with Windows RDS hosts. Consider native Microsoft features such as those described previously in this recipe, or third-party tools such as AppSense Environment Manager (http://www.appsense.com/products/desktop/desktopnow/environment-manager).

Based on these values, a Windows Server 2012 R2 RDS host running Microsoft Office 2010 that will support 100 concurrent users will require the following resources:

- Seven vCPUs to support up to 105 concurrent RDS sessions
- 45.25 GB of RAM, based on the following calculations:
  - 20.25 GB of base RAM (2 GB for each vCPU, plus 64 MB for each of the 100 users)
  - A further 25 GB of RAM to support Microsoft Office 2010 (Office 2010 recommends 256 MB of RAM for each user)

While the vCPU and RAM requirements might seem excessive at first, remember that to deploy a virtual desktop for each of these 100 users, we would need at least 100 vCPUs and 100 GB of RAM, which is much more than what our Windows RDS host requires. By default, Horizon View allows only 150 unique RDS user sessions for each available Windows RDS host, so we need to deploy multiple RDS hosts if users need to stream two applications at once or if we anticipate having more than 150 connections. It is possible to change the number of supported sessions, but it is not recommended due to potential performance issues. The short script below illustrates the same sizing arithmetic.
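To make the arithmetic above easy to rerun for other user counts or applications, here is a minimal shell sketch of the same calculation. It is an illustration only: the 15-sessions-per-vCPU ratio, the 2 GB and 64 MB figures, and the 256 MB per-user Office 2010 figure are taken from the guidance above, while the script itself and its parameter names are our own.

#!/bin/bash
# Rough RDS host sizing based on the guidance above:
# 1 vCPU per 15 concurrent sessions, 2 GB of base RAM per vCPU,
# 64 MB per session, plus per-user application RAM.
users=${1:-100}            # concurrent RDS sessions to size for
app_mb_per_user=${2:-256}  # for example, the Office 2010 recommendation

vcpus=$(( (users + 14) / 15 ))             # round up to whole vCPUs
base_mb=$(( vcpus * 2048 + users * 64 ))   # 2 GB per vCPU plus 64 MB per session
app_mb=$(( users * app_mb_per_user ))
total_mb=$(( base_mb + app_mb ))

echo "Sessions: $users"
echo "vCPUs:    $vcpus (supports up to $(( vcpus * 15 )) sessions)"
echo "RAM:      $total_mb MB ($base_mb MB base + $app_mb MB application)"

Run with no arguments, it reproduces the 100-user example above: 7 vCPUs and 46,336 MB of RAM, that is, 45.25 GB.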
Importing the Horizon View RDS AD group policy templates

Some of the settings configured throughout this article are applied using AD group policy templates. Prior to using the RDS feature, these templates should be distributed either to the RDS hosts, to be used with the Windows local group policy editor, or to an AD domain controller, where they can be applied using the domain.

When referring to VMware Horizon View installation packages, y.y.y refers to the version number and xxxxxx refers to the build number. When you download packages, the actual version and build numbers will be in a numeric format. For example, the filename of the current Horizon View 6 GPO bundle is VMware-Horizon-View-Extras-Bundle-3.1.0-2085634.zip.

Complete the following steps to install the View RDS group policy templates: obtain the VMware-Horizon-View-GPO-Bundle-y.y.y-xxxxxx.zip file, unzip it, and copy the en-US folder, the vmware_rdsh.admx file, and the vmware_rdsh_server.admx file to the C:\Windows\PolicyDefinitions folder on either an AD domain controller or your target RDS host, based on how you wish to manage the policies. Make note of the following points while doing so:

- If you want to set the policies locally on each RDS host, you will need to copy the files to each server.
- If you wish to set the policies using domain-based AD group policies, you will need to copy the files to the domain controllers, to the group policy Central Store (http://support.microsoft.com/kb/929841), or to the workstation from which you manage these domain-based group policies.

How to do it…

The following steps outline the procedure to enable RDS on a Windows Server 2012 R2 host. The host used in this recipe has already been joined to the domain, and we have logged in with an AD account that has administrative permissions on the server. Perform the following steps:

1. Open the Windows Server Manager utility and go to Manage | Add Roles and Features to open the Add Roles and Features Wizard.
2. On the Before you Begin page, click on Next.
3. On the Installation Type page, shown in the following screenshot, select Remote Desktop Services installation and click on Next.
4. On the Deployment Type page, select Quick Start and click on Next. You can also implement the required roles using the standard deployment method outlined in the Deploy the Session Virtualization Standard deployment section of the Microsoft TechNet article, Test Lab Guide: Remote Desktop Services Session Virtualization Standard Deployment (http://technet.microsoft.com/en-us/library/hh831610.aspx). If you use this method, you will complete the component installation and then proceed to step 9 in this recipe.
5. On the Deployment Scenario page, select Session-based desktop deployment and click on Next.
6. On the Server Selection page, shown in the following screenshot, select a server from the list under Server Pool, click the red, highlighted button to add the server to the list of selected servers, and click on Next.
7. On the Confirmation page, check the box marked Restart the destination server automatically if required and click on Deploy.
8. On the Completion page, monitor the installation process and click on Close when finished in order to complete the installation. If a reboot is required, the server will reboot without the need to click on Close. Once the reboot completes, proceed with the remaining steps.
9. Set the RDS licensing server using the Set-RDLicenseConfiguration Windows PowerShell command. In this example, we are configuring the local RDS host to point to redundant license servers (RDS-LIC1 and RDS-LIC2) and setting the license mode to PerUser. This command must be executed on the target RDS host. After entering the command, confirm the values for the license mode and license server name by answering Y when prompted. Refer to the following code:

Set-RDLicenseConfiguration -LicenseServer @("RDS-LIC1.vjason.local","RDS-LIC2.vjason.local") -Mode PerUser

This setting might also be set using group policies applied either to the local computer or using Active Directory (AD). The policies are shown in the following screenshot, and you can locate them by going to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Licensing when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path.
10. Use local computer or AD group policies to limit users to one session per RDS host using the Restrict Remote Desktop Services users to a single Remote Desktop Services session policy.
The policy is shown in the following screenshot, and you can locate it by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Remote Desktop Services | Remote Desktop Session Host | Connections.
11. Use local computer or AD group policies to enable time zone redirection. You can locate the policy by navigating to Computer Configuration | Policies | Administrative Templates | Windows Components | Horizon View RDSH Services | Remote Desktop Session Host | Device and Resource Redirection when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To enable the setting, set Allow time zone redirection to Enabled.
12. Use local computer or AD group policies to enable the Windows Basic aero-styled theme. You can locate the policy by going to User Configuration | Policies | Administrative Templates | Control Panel | Personalization when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the theme, set Force a specific visual style file or force Windows Classic to Enabled and set Path to Visual Style to %windir%\resources\Themes\Aero\aero.msstyles.
13. Use local computer or AD group policies to start Runonce.exe when the RDS session starts. You can locate the policy by going to User Configuration | Policies | Windows Settings | Scripts (Logon/Logoff) when using AD-based policies. If you are using local group policies, there will be no Policies folder in the path. To configure the logon settings, double-click on Logon, click on Add, enter runonce.exe in the Script Name box, and enter /AlternateShellStartup in the Script Parameters box.
14. On the Windows RDS host, double-click on the 64-bit Horizon View Agent installer to begin the installation process. The installer should have a name similar to VMware-viewagent-x86_64-y.y.y-xxxxxx.exe.
15. On the Welcome to the Installation Wizard for VMware Horizon View Agent page, click on Next.
16. On the License Agreement page, select the I accept the terms in the license agreement radio button and click on Next.
17. On the Custom Setup page, either leave all the options set to default or, if you are not using vCenter Operations Manager, deselect this optional component of the agent, and click on Next.
18. On the Register with Horizon View Connection Server page, shown in the following screenshot, enter the hostname or IP address of one of the Connection Servers in the pod where the RDS host will be used. If the user performing the installation of the agent software is an administrator in the Horizon View environment, leave the Authentication setting set to the default; otherwise, select the Specify administrator credentials radio button and provide the username and password of an account that has administrative rights in Horizon View. Click on Next to continue.
19. On the Ready to Install the Program page, click on Install to begin the installation. When the installation completes, reboot the server if prompted.

The Windows RDS service is now enabled, configured with the optimal settings for use with VMware Horizon View, and has the necessary agent software installed. This process should be repeated on additional RDS hosts, as needed, to support the target number of concurrent RDS sessions.
How it works…

The following resources provide detailed information about the configuration options used in this recipe:

- Microsoft TechNet's Set-RDLicenseConfiguration article (http://technet.microsoft.com/en-us/library/jj215465.aspx) provides the complete syntax of the PowerShell command used to configure the RDS licensing settings.
- Microsoft TechNet's Remote Desktop Services Client Access Licenses (RDS CALs) article (http://technet.microsoft.com/en-us/library/cc753650.aspx) explains the different RDS license types, which reveals that an RDS per-user Client Access License (CAL) allows our Horizon View clients to access the RDS servers from an unlimited number of endpoints while still consuming only one RDS license.
- The Microsoft TechNet article, Remote Desktop Session Host, Licensing (http://technet.microsoft.com/en-us/library/ee791926(v=ws.10).aspx), provides additional information on the group policies used to configure the RDS licensing options.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/index.jsp?topic=%2Fcom.vmware.horizon-view.desktops.doc%2FGUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html) explains that the Windows Basic aero-styled theme is the only theme supported by Horizon View, and demonstrates how to implement it.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-443F9F6D-C9CB-4CD9-A783-7CC5243FBD51.html) explains why time zone redirection is required, as it ensures that the Horizon View RDS client session will use the same time zone as the client device.
- The VMware document Setting up Desktop and Application Pools in View (https://pubs.vmware.com/horizon-view-60/topic/com.vmware.horizon-view.desktops.doc/GUID-85E4EE7A-9371-483E-A0C8-515CF11EE51D.html) explains why we need to add the runonce.exe /AlternateShellStartup command to the RDS logon script. This ensures that applications which require Windows Explorer will work properly when streamed using Horizon View.

Creating an RDS farm in Horizon View

This recipe discusses the steps required to create an RDS farm in our Horizon View pod. An RDS farm is a collection of Windows RDS hosts and serves as the point of integration between the View Connection Server and the individual applications installed on each RDS server. Additionally, key settings concerning client session handling and client connection protocols are set at the RDS farm level within Horizon View.

Getting ready

To create an RDS farm in Horizon View, we need to have at least one RDS host registered with our View pod. Assuming that the Horizon View Agent installation completed successfully in the previous recipe, we should see the RDS hosts registered in the Registered Machines menu under View Configuration of our View Manager Admin console. The tasks required to create the RDS farm are performed using the Horizon View Manager Admin console.

How to do it…

The following steps outline the procedure used to create an RDS farm. In this example, we have already created and registered two Windows RDS hosts named WINRDS01 and WINRDS02. Perform the following steps:

1. Navigate to Resources | Farms and click on Add, as shown in the following screenshot.
2. On the Identification and Settings page, shown in the following screenshot, provide a farm ID, add a description if desired, make any desired changes to the default settings, and then click on Next.
The settings can be changed to On if needed.
3. On the Select RDS Hosts page, shown in the following screenshot, click on the RDS hosts to be added to the farm and then click on Next.
4. On the Ready to Complete page, review the configuration and click on Finish.

The RDS farm has been created, which allows us to create application pools.

How it works…

The following RDS farm settings can be changed at any time and are described in the following points:

- Default display protocol: PCoIP (default) and RDP are available.
- Allow users to choose protocol: By default, Horizon View Clients can select their preferred protocol; we can change this setting to No in order to enforce the farm default.
- Empty session timeout (applications only): This denotes the amount of time that must pass after a client closes all RDS applications before the RDS farm takes the action specified in the When timeout occurs setting. The default setting is 1 minute.
- When timeout occurs: This determines which action is taken by the RDS farm when the session's timeout deadline passes; the options are Log off or Disconnect (default).
- Log off disconnected sessions: This determines what happens when a View RDS session is disconnected; the options are Never (default), Immediate, or After. If After is selected, a time in minutes must be provided.

Summary

We have learned about configuring the Windows RDS server for use with Horizon View and about creating an RDS farm in Horizon View.

Designing and Building a Horizon View 6.0 Infrastructure

Packt
22 Oct 2014
18 min read
This article is written by Peter von Oven, the author of VMware Horizon View Essentials. In this article, we will start by taking a closer look at the design process. We will then look at the reference architecture and how we start to put together a design, building out the infrastructure for a production deployment.

Proving the technology – from PoC to production

In this section, we are going to discuss how to approach a VDI project. This is a key and very important piece of work that needs to be completed in the very early stages, and it is somewhat different from how you would typically approach an IT project. Our starting point is to focus on the end users rather than the IT department. After all, these are the people that will be using the solution on a daily basis and know what tools they need to get their jobs done. Rather than giving them what you think they need, let's ask them what they actually need and then, within reason, deliver this. It's that old saying of not trying to fit a square peg into a round hole. No matter how hard you try, it's just never going to fit. First and foremost, we need to design the technology around the user requirements rather than building a backend infrastructure only to find that it doesn't deliver what the users require.

Assessment

Once you have built your business case and validated it against your EUC strategy, and there is a requirement for delivering a VDI solution, the next stage is to run an assessment. It's quite fitting that this book is entitled "Essentials", as this stage of the project is exactly that: essential for a successful outcome. We need to build up a picture of what the current environment looks like, ranging from what applications are being used to the types of access devices. This goes back to the earlier point about giving the users what they need, and the only way to find that out is to conduct an assessment. By doing this, we are creating a baseline. Then, as we move into defining the success criteria and proving the technology, we have the baseline as a reference point to demonstrate how we have improved on current ways of working and delivered on the business case and strategy. There are a number of tools that can be used in the assessment phase to gather the information required, for example, Liquidware Labs Stratusphere FIT or SysTrack from Lakeside Software. Don't forget to actually talk to the users as well, so you are armed with the hard-and-fast facts from an assessment as well as the users' perspective.

Defining the success criteria

The key objective in defining the success criteria is to document what a "good" solution should look like for the project to succeed and become production-ready. We need to clearly define the elements that need to function correctly in order to move from proof of concept to proof of technology, and then into a pilot phase before deploying into production. You need to fully document what these elements are and get the end users or other project stakeholders to sign up to them. It's almost like creating a statement of work with a clearly defined list of tasks. Another important factor is to ensure that, during this phase of the project, the criteria don't start to grow beyond the original scope. By that, we mean other additional things should not get added to the success criteria, or at least not without discussion first. It may well transpire that something key was missed; however, if you have conducted your assessment thoroughly, this shouldn't happen.
Another thing that works well at this stage is to involve the end users. Set up a steering committee or advisory panel by selecting people from different departments to act as sponsors within their area of the business. Actively involve them in the testing phases, but get them on board early as well to get their input in shaping the solution. Too many projects fail when an end user tries something that didn't work, even though the thing they tried is not actually a relevant use case or a critical line-of-business application and therefore shouldn't derail the project. If we have a set of success criteria defined up front that the end users have signed up to, anything outside those criteria is not in scope. If it's not defined in the document, it should be disregarded as not being part of what success should look like.

Proving the technology

Once the previous steps have been discussed and documented, we should be able to build a picture of what's driving the project. We will understand what you are trying to achieve and deliver and, based upon hard-and-fast facts from the assessment phase, be able to work out what success should look like. From there, we can then move into testing some form of the technology, should that be a requirement. There are three key stages within the testing cycle to consider, and it might be the case that you don't need all of them. The three stages we are talking about are as follows:

- Proof of concept (PoC)
- Proof of technology (PoT)
- Pilot

In the next sections, we are briefly going to cover what each of these stages means and why you might or might not need them.

Proof of concept

A proof of concept typically refers to a partial solution, often built on any old hardware kicking about, that involves a relatively small number of users, usually within the confines of the IT department acting in business roles, to establish whether the system satisfies some aspect of the purpose it was designed for. Once proven, one of two things happens. The first is that nothing happens, as it's just the IT department playing with technology and there wasn't a real business driver in the first place. This is usually down to the previous steps not having been defined. In a similar way, by not having any success criteria, it will also fail, as you don't know exactly what you are setting out to prove. The second outcome is that the project moves into a pilot phase, which we will discuss in a later section. You could consider moving directly into this phase and bypassing the PoC altogether. Maybe a demonstration of the technology would suffice, and using a demo environment over a longer period would show you how the technology works.

Proof of technology

In contrast to the PoC, the objective of a proof of technology is to determine whether or not the proposed solution or technology will integrate into your existing environment and therefore demonstrate compatibility. The objective is to highlight any technical problems specific to your environment, such as how your bespoke systems might integrate. As with the PoC, a PoT is typically run by the IT department, and no business users would be involved. A PoT is purely a technical validation exercise.

Pilot

A pilot refers to what is almost a small-scale roll out of the solution in a production-style environment that targets a limited scope of the intended final solution.
The scope may be limited by the number of users who can access the pilot system, the business processes affected, or the business partners involved. The purpose of a pilot is to test, often in a production-like environment, whether the system works as it was designed to, while limiting business exposure and risk. It will also touch real users, so as to gauge feedback on what would ultimately become a live, production solution. This is a critical step in achieving success, as the users are the ones that have to interact with the system on a daily basis, and the reason why you should set up some form of working group to gather their feedback. That also mitigates the risk of the project failing: the solution may deliver everything the IT department could ever wish for, but if the first user who logs on when it goes live reports a bad experience or poor performance, you may as well not have bothered. The pilot should be carefully scoped, sized, and implemented. We will discuss this in the next section.

The pilot phase

In this section, we are going to discuss the pilot phase in a bit more detail and break it down into four distinct phases. These are important, as the output from the pilot will ultimately shape the design of your production environment. The following diagram shows the workflow we will follow in defining our project:

Phase 1 – pilot design

The pilot infrastructure should be designed on the same hardware platforms on which the production solution is going to be deployed, for example, the same servers and storage. This takes into account any anomalies between platforms and configuration differences that could affect things such as scalability or, more importantly, performance. Even at the pilot stage, the design is absolutely key, and you should take the production design into account. Why? Basically because many pilot solutions end up going straight into production, and more and more users get added above and beyond those scoped for the pilot. It's great going live with the solution and not having to go back and rebuild it, but when you start to scale by adding more users and applications, you might have some issues due to the pilot sizing. It may sound obvious, but often with a successful pilot, the users just keep on using it and additional users get added. If it's only ever going to be a pilot, that's fine, but keep this in mind and ask the question: if you are planning on taking the pilot straight into production, design it for production. It is always useful to work from a prerequisite document to understand the different elements that need consideration in the design. Key design elements include:

- Hardware sizing (servers – CPU, memory, and consolidation ratios)
- Pool design (based on user segmentation)
- Storage design (local SSD, SAN, and acceleration technologies)
- Image creation (rebuild from scratch and optimize for VDI)
- Network design (load balancing and external access)
- Antivirus considerations
- Application delivery (delivering virtually versus installing in the core image)
- User profile management
- Floating or dedicated desktop assignments
- Persistent or non-persistent desktop builds (linked clone or full clone)

Once you have all this information, you can start to deploy the pilot.

Phase 2 – pilot deployment

In the deployment phase of the pilot, we are going to start building out the infrastructure, deploying the test users, building the OS images, and then start testing.
Phase 3 – pilot test

The key thing during the testing phase is to work closely with the end users and your sponsors, showing them the solution and how it works, closely monitoring the users, and assessing the solution as it's being used. This allows you to keep in contact with the users and gives them the opportunity to continually provide real-time feedback. It also allows you to answer questions and make adjustments and enhancements on the fly, rather than waiting until the end of the project only to be told it didn't work or that they simply didn't understand something. This leads us on to the last stage, the review.

Phase 4 – pilot review

This final stage sometimes tends to get forgotten. We have deployed the solution, the users have been testing it, and then it ends there for whatever reason. However, there is one very important last thing to do to enable the customer to move to production. We need to measure the user experience, or the IT department's experience, against the success criteria we set out at the start of this process. We need to get customer sign-off and agreement that we have successfully met all the objectives and requirements. If this is not the case, we need to understand the reasons why. Have we missed something in the use case, have the user requirements changed, or is it simply a perception issue? Whatever the case, we need to cycle round the process again: go back to the use case, understand and reevaluate the user requirements (whatever it is that is seemingly failing or not behaving as expected), and then tweak the design or make the required changes and get the users to test the solution again. We need to continue this process until we get acceptance and sign-off; otherwise, we will not get to the final solution deployment phase. When the project has been signed off after a successful pilot test, there is no reason why you cannot deploy the technology in production. Now that we have talked about how to prove the technology and successfully demonstrated that it delivers against both our business case and user requirements, in the next sections, we are going to start looking at the design for our production environment.

Designing a Horizon 6.0 architecture

We are going to start this section by looking at the VMware reference architecture for Horizon View 6.0 before we go into more detail around the design considerations and best practices, and then the sizing guidelines.

The pod and block reference architecture

VMware has produced a reference architecture model for deploying Horizon View, with the approach being to make it easy to scale the environment by adding set component pieces of infrastructure, known as View blocks. To scale the number of users, you add View blocks up to the maximum configuration of five blocks. This maximum configuration of five View blocks is called a View pod. The important numbers to remember are that each View block supports up to a maximum of 2,000 users, and a View pod is made up of up to five View blocks, therefore supporting a maximum of 10,000 users. The View block contains all the infrastructure required to host the virtual desktop machines: appropriately sized ESXi hosts, a vCenter Server, and the associated networking and storage. We will cover the sizing aspects later on in this article. The following diagram shows an individual View block. Apart from the View blocks that host the virtual desktop machines, there is also a management block for the supporting infrastructure components.
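Before turning to the management block, the block and pod maximums above lend themselves to a quick back-of-the-envelope check. The short sketch below is purely illustrative, applying the 2,000-users-per-block and five-blocks-per-pod limits to an arbitrary user count; the example figure of 6,500 users is our own and not from the original article.

#!/bin/bash
# Back-of-the-envelope View block and pod count using the published maximums:
# up to 2,000 users per View block, up to 5 blocks (10,000 users) per View pod.
users=${1:-6500}

blocks=$(( (users + 1999) / 2000 ))   # round up to whole View blocks
pods=$(( (blocks + 4) / 5 ))          # round up to whole View pods

echo "Users:  $users"
echo "Blocks: $blocks (each with its own vCenter Server and ESXi hosts)"
echo "Pods:   $pods (a single View pod tops out at 10,000 users)"

For 6,500 users this works out to four blocks in a single pod; anything beyond 10,000 users means a second pod or, in Horizon View 6.0, the Cloud Pod Architecture covered later in this article.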
The management block contains the management elements of Horizon View, such as the connection servers and security servers. These will also be virtual machines hosted on the vSphere platform, but using separate ESXi hosts and vCenter Servers from those used to host the desktops. The following diagram shows a typical View management block.

The management block contains the key Horizon View components to support the maximum configuration of 10,000 users, or a View pod. In terms of connection servers, the management block consists of a maximum of seven connection servers. This is often written as 5 + 2, which can be misleading; what it means is that you can have five connection servers and two that serve as backups to replace a failed server. Each connection server supports one of the five blocks, with the two spares in reserve in the event of a failure. As we discussed previously, the View Security Servers are paired with one of the connection servers in order to provide external access to the users. In our example diagram, we have drawn three security servers, meaning that these servers are configured for external access, while the others serve the internal users only. In this scenario, the View Connection Servers and View Security Servers are deployed as virtual machines, and are therefore controlled and managed by vCenter. The vCenter Server can run on a virtual machine, or you can use the vCenter Virtual Appliance. It can also run on a physical Windows server, as it's just a Windows application. The entire management infrastructure is hosted on a vSphere cluster that's separate from the one being used to host the virtual desktop machines. There are a couple of other components that are not shown in the diagram: the databases required for View, such as the events database and the View Composer database.

If we now look at the entire Horizon View pod and block architecture for up to 10,000 users, the architecture design would look something like the following diagram.

One thing to note is that although a pod is limited to 10,000 users, you can deploy more than one pod should you need an environment that exceeds 10,000 users. Bear in mind, though, that the pods do not communicate with each other and will effectively be completely separate deployments. As this is potentially a limitation in scalability, but more so for disaster recovery purposes, where you need to have two pods across two sites, there is a feature in Horizon View 6.0 that allows you to deploy pods across sites. This is called the Cloud Pod Architecture (CPA), and we will cover it in the next section.

The Cloud Pod Architecture

The Cloud Pod Architecture, also referred to as linked-mode View (LMV) or multidatacenter View (MDCV), allows you to link up to four View pods together across two sites, with a maximum of up to 20,000 supported users.
There are four key features available when deploying Horizon View using this architecture:

- Scalability: This hosts more than 10,000 users on a single site
- Multidatacenter support: This supports View across more than one data center
- Geo roaming: This supports roaming desktops for users moving across sites
- DR: This delivers resilience in the event of a data center failure

Let's take a look at the Cloud Pod Architecture in the following diagram to explain the features and how it builds on the pod and block architecture we discussed previously.

With the Cloud Pod Architecture, user information is replicated globally and the pods are linked using the View interpod API (VIPA), the setup for which is command-line based. For scalability, with the Cloud Pod Architecture model, you have the ability to entitle users across pools on both different pods and different sites. This means that, if you have already scaled beyond a single pod, you can link the pods together to allow you to go beyond the 10,000-user limit and also administer your users from a single location. The pods can, apart from being located on the same site, also be on two different sites to deliver a multidatacenter configuration running as active/active. This also introduces DR capabilities: in the event of one of the data centers failing or losing connectivity, users will still be able to connect to a virtual desktop machine. Users don't need to worry about which View Connection Server they need to use to connect to their virtual desktop machine, as the Cloud Pod Architecture supports a single namespace with access via a global URL. As users can now connect from anywhere, there are some configuration options that you need to consider as to how they access their virtual desktop machine and from where it gets delivered. There are three options that form part of the global user entitlement feature:

- Any: The desktop is delivered from any pod as part of the global entitlement
- Site: The desktop is delivered from any pod on the same site the user is connecting from
- Local: The desktop is delivered only from the local pod that the user is connected to

It's not just the users that get the global experience; the administrators can also be segregated in this way so that you can deliver delegated management. Administration of pods could be delegated to the local IT teams on a per region/geo basis, with some operations, such as provisioning and patching, performed locally on the local pods, or perhaps so that local language support can be delivered. Only global policy is managed globally, typically from an organization's global HQ. Now that we have covered some of the high-level architecture options, you should be able to start to look at your overall design, factoring in locations and the number of users. In the next section, we will start to look at how to size some of these components.

Sizing the infrastructure

In this section, we are going to discuss the sizing of the components previously described in the architecture section. We will start by looking at the management blocks containing the connection servers and security servers, then the servers that host the desktops, before finishing off with the desktops themselves. The management block and the block hosting the virtual desktop machines should be run on separate infrastructure (ESXi hosts and vCenter Servers), the reason being the different workload patterns between servers and desktops and the need to avoid performance issues.
It's also easier to manage, as you can determine what desktops are and what servers are, but more importantly it's also the way in which the products are licensed. With vSphere for desktop that comes with Horizon View, it only entitles you to run workloads that are hosting and managing the virtual desktop infrastructure. Summary In this article, you learned how to design a Horizon 6.0 architecture. Resources for Article: Further resources on this subject: Backups in the VMware View Infrastructure [Article] Setting up of Software Infrastructure on the Cloud [Article] Introduction to Veeam® Backup & Replication for VMware [Article]

Planning Desktop Virtualization

Packt
16 Oct 2014
3 min read
This article by Andy Paul, author of the book Citrix XenApp® 7.5 Virtualization Solutions, explains VDI and its building blocks in detail.

The building blocks of VDI

The first step in understanding Virtual Desktop Infrastructure (VDI) is to identify what VDI means to your environment. VDI is an all-encompassing term for most virtual infrastructure projects. For this book, we will use the definitions cited in the following sections for clarity.

Hosted Virtual Desktop (HVD)

A Hosted Virtual Desktop is a machine running a single-user operating system such as Windows 7 or Windows 8, sometimes called a desktop OS, which is hosted on a virtual platform within the data center. Users remotely access a desktop that may or may not be dedicated, but runs with isolated resources. This is typically a Citrix XenDesktop virtual desktop, as shown in the following figure:

Hosted Virtual Desktop model; each user has dedicated resources

Hosted Shared Desktop (HSD)

A Hosted Shared Desktop is a machine running a multiuser operating system such as Windows Server 2008 or Windows Server 2012, sometimes called a server OS, possibly hosted on a virtual platform within the data center. Users remotely access a desktop that may be using resources shared among multiple users. This has historically been a Citrix XenApp published desktop, as demonstrated in the following figure:

Hosted Shared Desktop model; each user shares the desktop server resources

Session-based Computing (SBC)

With Session-based Computing, users remotely access applications or other resources on a server running in the data center. These are typically client/server applications. This server may or may not be virtualized. This is a multiuser environment, but the users do not access the underlying operating system directly. This is typically a Citrix XenApp hosted application, as shown in the following figure:

Session-based Computing model; each user accesses applications remotely, but shares resources

Application virtualization

In application virtualization, applications are centrally managed and distributed, but they are locally executed. This may be in conjunction with, or separate from, the other options mentioned previously. Application virtualization typically involves application isolation, allowing the applications to operate independently of any other software. Examples include Citrix XenApp offline applications, Citrix profiled applications, Microsoft App-V application packages, and VMware ThinApp solutions. Have a look at the following figure:

Application virtualization model; the application packages execute locally

The preceding list is not a definitive list of options, but it serves to highlight the most commonly used elements of VDI. Other options include client-side hypervisors for local execution of a virtual desktop, hosted physical desktops, and cloud-based applications. Depending on the environment, all of these components can be relevant.

Summary

In this article, we learned about VDI and its building blocks in detail.

Getting started with Vagrant

Timothy Messier
25 Sep 2014
6 min read
For developers, some of the most frustrating bugs are those that only happen in production. Continuous integration servers have gone a long way in helping prevent these types of configuration bugs, but wouldn't it be nice to avoid these bugs altogether? Developers' machines tend to be a mess of software. Multiple versions of languages, random services with manually tweaked configs, and debugging tools all contribute to a cluttered and unpredictable environment.

Sandboxes

Many developers are familiar with sandboxing development. Tools like virtualenv and rvm were created to install packages in isolated environments, allowing multiple versions of packages to be installed on the same machine. Newer languages like nodejs install packages in a sandbox by default. These are very useful tools for managing package versions in both development and production, but they also lead to some of the aforementioned clutter. Additionally, these tools are specific to particular languages. They do not allow for easily installing multiple versions of services like databases. Luckily, there is Vagrant to address all these issues (and a few more).

Getting started

Vagrant at its core is simply a wrapper around virtual machines. One of Vagrant's strengths is its ease of use. With just a few commands, virtual machines can be created, provisioned, and destroyed. First grab the latest download for your OS from Vagrant's download page (https://www.vagrantup.com/downloads.html).

NOTE: For Linux users, although your distro may have a version of Vagrant via its package manager, it is most likely outdated. You probably want to use the version from the link above to get the latest features.

Also install VirtualBox (https://www.virtualbox.org/wiki/Downloads) using your preferred method. Vagrant supports other virtualization providers as well as Docker, but VirtualBox is the easiest to get started with. When most Vagrant commands are run, they will look for a file named Vagrantfile (or vagrantfile) in the current directory. All configuration for Vagrant is done in this file, and it is isolated to a particular Vagrant instance. Create a new Vagrantfile:

$ vagrant init hashicorp/precise64

This creates a Vagrantfile using the base virtual image hashicorp/precise64, which is a stock Ubuntu 12.04 image provided by HashiCorp. This file contains many useful comments about the various configurations that can be done. If this command is run with the --minimal flag, it will create a file without comments, like the following:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
end

Now create the virtual machine:

$ vagrant up

This isn't too difficult. You now have a nice clean machine in just two commands. What did vagrant up do? First, if you did not already have the base box, hashicorp/precise64, it was downloaded from Vagrant Cloud. Then, Vagrant made a new machine based on the current directory name and a unique number. Once the machine was booted, it created a shared folder between the host's current directory and the guest's /vagrant. To access the new machine, run:

$ vagrant ssh

Provisioning

At this point, additional software needs to be installed. While a clean Ubuntu install is nice, it does not have the software to develop or run many applications.
While it may be tempting to just start installing services and libraries with apt-get, that would just start to cause the same old clutter. Instead, use Vagrant's provisioning infrastructure. Vagrant has support for all of the major provisioning tools like Salt (http://www.saltstack.com/), Chef (http://www.getchef.com/chef/), or Ansible (http://www.ansible.com/home). It also supports calling shell commands. For the sake of keeping this post focused on Vagrant, an example using the shell provisioner will be used. However, to unleash the full power of Vagrant, use the same provisioner used for production systems. This will enable the virtual machine to be configured using the same provisioning steps as production, thus ensuring that the virtual machine mirrors the production environment. To continue this example, add a provision section to the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provision "shell", path: "provision.sh"
end

This tells Vagrant to run the script provision.sh. Create this file with some install commands:

#!/bin/bash
# Install nginx
apt-get -y install nginx

Now tell Vagrant to provision the machine:

$ vagrant provision

NOTE: When running Vagrant for the first time, vagrant provision is automatically called. To force a provision, call vagrant up --provision.

The virtual machine should now have nginx installed, with the default page being served. To access this page from the host, port forwarding can be set in the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", path: "provision.sh"
end

For the forwarding to take effect, the virtual machine must be restarted:

$ vagrant halt && vagrant up

Now go to http://localhost:8080 in a browser to see the nginx welcome page.

Next steps

This simple example shows how easy Vagrant is to use, and how quickly machines can be created and configured. However, to truly battle issues like "it works on my machine", Vagrant needs to leverage the same provisioning tools that are used for production servers. If these provisioning scripts are cleanly implemented, Vagrant can easily leverage them and make creating a pristine production-like environment easy. Since all configuration is contained in a single file, it becomes simple to include a Vagrant configuration along with code in a repository. This allows developers to easily create identical environments for testing code.

Final notes

When evaluating any open source tool, at some point you need to examine the health of the community supporting the tool. In the case of Vagrant, the community seems very healthy. The primary developer continues to be responsive about bugs and improvements, despite having launched a new company in the wake of Vagrant's success. New features continue to roll in, all the while keeping a very stable product. All of this is good news, since there seem to be no other tools that make creating clean sandbox environments as effortless as Vagrant.
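As a closing quick reference, the full lifecycle of the sandbox built above comes down to a handful of commands; vagrant destroy, which was not shown earlier, is what makes throwing away a cluttered environment painless.

$ vagrant status            # check the state of the machine defined by the Vagrantfile
$ vagrant provision         # re-run the provisioning scripts on a running machine
$ vagrant halt              # shut the machine down, keeping its disk
$ vagrant destroy -f        # delete the machine entirely, without prompting
$ vagrant up --provision    # recreate a pristine machine from the base box and provision it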
About the author

Timothy Messier is involved in many open source devops projects, including Vagrant and Salt, and can be contacted at tim.messier@gmail.com.

Docker Container Management at Scale with SaltStack

Nicole Thomas
25 Sep 2014
8 min read
Every once in a while, a technology comes along that changes the way work is done in a data center. It happened with virtualization, and we are seeing it with various cloud computing technologies and concepts. But most recently, the rise in popularity of container technology has given us reason to rethink interdependencies within software stacks and how we run applications in our infrastructures. Despite all of the enthusiasm around the potential of containers, they are still just another thing in the data center that needs to be centrally controlled and managed, often at massive scale. This article will provide an introduction to how SaltStack can be used to manage all of this, including container technology, at web scale.

SaltStack Systems Management Software

The SaltStack systems management software is built for remote execution, orchestration, and configuration management, and is known for being fast, scalable, and flexible. Salt is easy to set up and can easily communicate asynchronously with tens of thousands of servers in a matter of seconds. Salt was originally built as a remote execution engine relying on a secure, bidirectional communication system utilizing a Salt Master daemon used to control Salt Minion daemons, where the minions receive commands from the remote master. Salt's configuration management capabilities are called the state system, or Salt States, and are built on SLS formulas. SLS files are data structures based on dictionaries, lists, strings, and numbers that are used to enforce the state that a system should be in, also known as configuration management. SLS files are easy to write, simple to implement, and are typically written in YAML. State file execution occurs on the Salt Minion, so once the state files that the infrastructure requires have been written, it is possible to enforce state on tens of thousands of hosts simultaneously. Additionally, each minion returns its status to the Salt Master.

Docker containers

Docker is an agile runtime and packaging platform specializing in enabling applications to be quickly built, shipped, and run in any environment. Docker containers allow developers to easily compartmentalize their applications so that all of the program's needs are installed and configured in a single container. This container can then be dropped into clouds, data centers, laptops, virtual machines (VMs), or infrastructures where the app can execute, unaltered, without any cross-environment issues. Building Docker containers is fast, simple, and inexpensive. Developers can create a container, change it, break it, fix it, throw it away, or save it much like a VM. However, Docker containers are much more nimble, as the containers only possess the application and its dependencies. VMs, along with the application and its dependencies, also install a guest operating system, which requires much more overhead than may be necessary. While being able to slice deployments into confined components is a good idea for application portability and resource allocation, it also means there are many more pieces that need to be managed. As the number of containers scales, without proper management, inefficiencies abound. This scenario, known as container sprawl, is where SaltStack can help, and the combination of SaltStack and Docker quickly proves its value.

SaltStack + Docker

When we combine SaltStack's powerful configuration management armory with Docker's portable and compact containerization tools, we get the best of both worlds.
SaltStack has added support for users to manage Docker containers on any infrastructure. The following demonstration illustrates how SaltStack can be used to manage Docker containers using Salt States. We are going to use a Salt Master to control a minimum of two Salt Minions. On one minion, we will install HAProxy. On the other minion, we will install Docker and then spin up two containers. Each container hosts a PHP app with Apache that displays information about the container's IP address and exposed port. The HAProxy minion will automatically update to pull the IP address and port number from each of the Docker containers housed on the Docker minion. You can certainly set up more minions for this experiment, where the additional minions would be additional Docker container minions, to see an implementation of containers at scale, but it is not necessary to see what is possible.

First, we have a few installation steps to take care of. SaltStack's installation documentation provides a thorough overview of how to get Salt up and running, depending on your operating system. Follow the instructions in Salt's Installation Guide to install and configure your Salt Master and two Salt Minions. Go through the process of accepting the minion keys on the Salt Master and, to make sure the minions are connected, run:

salt '*' test.ping

Note: Much of the Docker functionality used in this demonstration is brand new. You must install the latest supported version of Salt, which is 2014.1.6. For reference, I have my master and minions all running on Ubuntu 14.04. My Docker minion is named "docker1" and my HAProxy minion is named "haproxy".

Now that you have a Salt Master and two Salt Minions configured and communicating with each other, it's time to get to work. Dave Boucha, a Senior Engineer at SaltStack, has already set up the necessary configuration files for both our haproxy and docker1 minions, and we will be using those to get started. First, clone the Docker files in Dave's GitHub repository, dock-apache, and copy the dock_apache and docker directories into the /srv/salt directory on your Salt Master (you may need to create the /srv/salt directory):

cp -r path/to/download/dock_apache /srv/salt
cp -r path/to/download/docker /srv/salt

The init.sls file inside the docker directory is a Salt State that installs Docker and all of its dependencies on your minion. The docker.pgp file is a public key that is used during the Docker installation. To fire off all of these events, run the following command:

salt 'docker1' state.sls docker

Once Docker has successfully installed on your minion, we need to set up our two Docker containers. The init.sls file in the dock_apache directory is a Salt State that specifies the image to pull from Docker's public container repository, installs the two containers, and assigns the ports that each container should expose: 8000 for the first container and 8080 for the second container. Now, let's run the state:

salt 'docker1' state.sls dock_apache

Let's check whether it worked. Get the IP address of the minion by running:

salt 'docker1' network.ip_addrs

Copy and paste the IP address into a browser and add one of the ports to the end. Try them both (IP:8000 and IP:8080) to see each container up and running! At this point, you should have two Docker containers installed on your "docker1" minion (or on any other Docker minions you may have created).
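If you did create additional Docker minions, the same states and a couple of quick checks extend to all of them at once simply by widening the target. The commands below are an illustration only: the 'docker*' glob assumes the extra minions follow the docker1 naming convention used above, and cmd.run just shells out to the docker CLI on each host.

salt 'docker*' state.sls docker        # install Docker on every Docker minion
salt 'docker*' state.sls dock_apache   # spin up the two containers on each of them
salt 'docker*' test.ping               # confirm every minion is still responding
salt 'docker*' cmd.run 'docker ps'     # list the running containers on each minion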
Download the files from Dave's GitHub repository, haproxy-docker, and place them all in your Salt Master's /srv/salt/haproxy directory:

cp -r path/to/download/haproxy /srv/salt

First, we need to retrieve the data about our Docker containers from the Docker minion(s), such as the container IP address and port numbers, by running the haproxy.config state:

salt 'haproxy' state.sls haproxy.config

By running haproxy.config, we set up a new HAProxy configuration populated with the Docker minion data. The configuration state also runs the haproxy state (init.sls). The init.sls file contains a haproxy state that ensures HAProxy is installed on the minion, sets up the configuration for HAProxy, and then runs HAProxy.

You should now be able to look at the HAProxy statistics by entering the haproxy minion's IP address in your browser with /haproxy?stats appended to the URL. When the authentication pop-up appears, the username and password are both saltstack.

While this is a simple example of running HAProxy to balance the load of only two Docker containers, it demonstrates how Salt can be used to manage tens of thousands of containers as your infrastructure scales. To see this same demonstration with a haproxy minion in action alongside 99 Docker minions, each with two Docker containers installed, check out SaltStack's Salt Air 20 tutorial by Dave Boucha on YouTube.

About the author

Nicole Thomas is a QA Engineer at SaltStack, Inc. Before coming to SaltStack, she wore many hats, from web and Android developer to contributing editor to working in environmental education. Nicole recently graduated summa cum laude from Westminster College with a degree in Computer Science. She also has a degree in Environmental Studies from the University of Utah.

Backups in the VMware View Infrastructure

Packt
17 Sep 2014
18 min read
In this article, by Chuck Mills and Ryan Cartwright, authors of the book VMware Horizon 6 Desktop Virtualization Solutions, we will study the backup options available in VMware Horizon View. It also provides guidance on scheduling appropriate backups of a Horizon View environment.

(For more resources related to this topic, see here.)

While a single point of failure should not exist in the VMware View environment, it is still important to ensure regular backups are taken for a quick recovery when failures occur. Also, if a setting becomes corrupted or is changed, a backup can be used to restore to a previous point in time. The backup of the VMware View environment should be performed on a regular basis, in line with an organization's existing backup methodology. A VMware View environment contains both files and databases. The main backup points of a VMware View environment are as follows:

- VMware View Connection Server—ADAM database
- VMware View Security Server
- VMware View Composer database
- Remote Desktop Service host servers
- Remote Desktop Service host templates and virtual machines
- Virtual desktop templates and parent VMs
- Virtual desktops
  - Linked clones (stateless)
  - Full clones (stateful)
- ThinApp repository
- Persona Management
- VMware vCenter
- Restoring the VMware View environment
- Business Continuity and Disaster Recovery

With a backup of all of the preceding components, the VMware View Server infrastructure can be recovered during a time of failure. To maximize the chances of success in a recovery, it is advised to take backups of the View ADAM database, View Composer database, and vCenter database at the same time to avoid discrepancies. Backups can be scheduled and automated or can be manually executed; ideally, scheduled backups will be used to ensure that they are performed and completed regularly.

Proper design dictates that there should always be two or more View Connection Servers. As all View Connection Servers in the same replica pool contain the same configuration data, it is only necessary to back up one View Connection Server. This backup is typically configured for the first View Connection Server installed in standard mode in an environment.

VMware View Connection Server – ADAM Database backup

View Connection Server stores its configuration data in the View LDAP repository. View Composer stores the configuration data for linked clone desktops in the View Composer database. When you use View Administrator to perform backups, the Connection Server backs up the View LDAP configuration data and the View Composer database, and both sets of backup files are stored in the same location. The LDAP data is exported in LDAP Data Interchange Format (LDIF).

If you have multiple View Connection Servers in a replicated group, you only need to export data from one of the instances, as all replicated instances contain the same configuration data. It is not a good practice to rely on replicated instances of View Connection Server as your backup mechanism: when the Connection Server synchronizes data across the instances, any data lost on one instance might be lost in all the members of the group.

If the View Connection Server uses multiple vCenter Server instances and multiple View Composer services, then the View Connection Server will back up all the View Composer databases associated with the vCenter Server instances. View Connection Server backups are configured from the VMware View Admin console.
The backups dump the configuration files and the database information to a location on the View Connection Server. Then, the data must be backed up through normal mechanisms, such as a backup agent and a scheduled job. The procedure for a View Connection Server backup is as follows:

1. Schedule VMware View backup runs and exports to C:\View_Backup.
2. Use your third-party backup solution on the View Connection Server and have it back up the System State, Program Files, and C:\View_Backup folders that were created in step 1.

From within the View Admin console, there are three primary options that must be configured to back up the View Connection Server settings:

- Automatic backup frequency: This is the frequency at which backups are automatically taken. Recommendation (every day): as most server backups are performed daily, if the automatic View Connection Server backup is taken before the full backup of the Windows server, it will be included in the nightly backup. Adjust as necessary.
- Backup time: This displays the time based on the automatic backup frequency. (Every day produces the 12 midnight time.)
- Maximum number of backups: This is the maximum number of backups that can be stored on the View Connection Server; once the maximum number has been reached, backups will be rotated out based on age, with the oldest backup being replaced by the newest backup. Recommendation (30 days): this will ensure that approximately one month of backups is retained on the server. Adjust as necessary.
- Folder location: This is the location on the View Connection Server where the backups will be stored. Ensure that the third-party backup solution is backing up this location.

The following screenshot shows the Backup tab:

Performing a manual backup of the View database

Use the following steps to perform a manual backup of your View database:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (on the left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning, as shown in the following screenshot:
5. Continue to disable provisioning for each of the pools. This will ensure that no new information is added to the ADAM database.

After you disable provisioning for all the pools, there are two ways to perform the backup:

- The View Administrator console
- Running a command using the command prompt

The View Administrator console

Follow these steps to perform a backup:

1. Log in to the View Administrator console.
2. Expand View Configuration found under Inventory.
3. Select Servers, which displays all the servers found in your environment.
4. Select the Connection Servers tab.
5. Right-click on one of the Connection Servers and choose Backup Now, as shown in the following screenshot.
6. After the backup process is complete, enable provisioning for the pools.

Using the command prompt

You can export the ADAM database by executing a built-in export tool from the command prompt. Perform the following steps:

1. Connect directly to the View Connection Server with a remote desktop utility such as RDP.
2. Open a command prompt and use the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin.
3. Execute the vdmexport.exe command and use the -f option to specify a location and filename, as shown in the following screenshot (for this example, C:\View_Backup is the location and vdmBackup.ldf is the filename):

Once a backup has been either automatically run or manually executed, there will be two types of files saved in the backup location:

- LDF files: These are the LDIF exports from the VMware View Connection Server ADAM database and store the configuration settings of the VMware View environment.
- SVI files: These are the backups of the VMware View Composer database.

The backup process of the View Connection Server is fairly straightforward. While the process is easy, it should not be overlooked.

Security Server considerations

Surprisingly, there is no option to back up the VMware View Security Server via the VMware View Admin console. For View Connection Servers, backup is configured by selecting the server, selecting Edit, and then clicking on Backup. Highlighting the View Security Server provides no such functionality. Instead, the Security Server should be backed up via normal third-party mechanisms. The installation directory is of primary concern, which is C:\Program Files\VMware\VMware View\Server by default. The .config file is in the …\sslgateway\conf directory, and it includes the following settings:

- pcoipClientIPAddress: This is the public address used by the Security Server.
- pcoipClientUDPPort: This is the port used for UDP traffic (the default is 4172).

In addition, the settings file is located in this directory, and it includes settings such as the following:

- maxConnections: This is the maximum number of concurrent connections the View Security Server can have at one time (the default is 2000).
- serverID: This is the hostname used by the Security Server.

In addition, custom certificates and logfiles are stored within the installation directory of the VMware View Security Server. Therefore, it is important to back up the data regularly if the logfile data is to be maintained (and is not being ingested into a larger enterprise logfile solution).

The View Composer database

The View Composer database used for linked clones is backed up using the following steps:

1. Log in to the View Administrator console.
2. Expand the Catalog option under Inventory (left-hand side of the console).
3. Select the first pool and right-click on it.
4. Select Disable Provisioning.
5. Connect directly to the server where View Composer was installed, using a remote desktop utility such as RDP.
6. Stop the View Composer service, as shown in the following screenshot. This will prevent provisioning requests that would change the Composer database.
7. After the service is stopped, use the standard practice for backing up databases in the current environment.
8. Restart the Composer service after the backup completes.

Remote Desktop Service host servers

VMware View 6 uses virtual machines to deliver hosted applications and desktops. In some cases, tuning and optimization, or other customer-specific configurations to the environment or applications, may be built on the Remote Desktop Service (RDS) host. Use the Windows Server Backup tool or the current backup software deployed in your environment.

RDS Server host templates and virtual machines

The virtual machine templates and virtual machines are an important part of the Horizon View infrastructure and need protection in the event that the system needs to be recovered. Back up the RDS host templates when changes are made and the testing/validation is completed.
The production RDS host machines should be backed up if they contains user data or any other elements that require protection at frequent intervals. Third-party backup solutions are used in this case. Virtual desktop templates and parent VMs Horizon View uses virtual machine templates to create the desktops in pools for full virtual machines and uses parent VMs to create the desktops in a linked clone desktop pool. These virtual machine templates and the parent VMs are another important part of the View infrastructure that needs protection. These backups are a crucial part of being able to quickly restore the desktop pools and the RDS hosts in the event of data loss. While frequent changes occur for standard virtual machines, the virtual machine templates and parent VMs only need backing up after new changes have been made to the template and parent VM images. These backups should be readily available for rapid redeployment when required. For environments that use full cloning as the provisioning technique for the vDesktops, the gold template should be backed up regularly. The gold template is the master vDesktop that all other vDesktops are cloned from. The VMware KB article, Backing up and restoring virtual machine templates using VMware APIs, covers the steps to both back up and restore a template. In short, most backup solutions will require that the gold template is converted from a template to a regular virtual machine and it can then be backed up. You can find more information at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009395. Backing up the parent VM can be tricky as it is a virtual machine, often with many different point-in-time snapshots. The most common technique is to collapse the virtual machine snapshot tree at a given point-in-time snapshot, and then back up or copy the newly created virtual machine to a second datastore. By storing the parent VM on a redundant storage solution, it is quite unlikely that the parent VM will be lost. What's more likely is that a point-in-time snapshot of the parent VM may be created while it's in a nonfunctional or less-than-ideal state. Virtual desktops There are three types of virtual desktops in a Horizon View environment, which are as follows: Linked clone desktops Stateful desktops Stateless desktops Linked clone desktops Virtual desktops that are created by View Composer using the linked clone technology present special challenges with backup and restoration. In many cases, a linked clone desktop will also be considered as a stateless desktop. The dynamic nature of a linked clone desktop and the underlying structure of the virtual machine itself means the linked clone desktops are not a good candidate for backup and restoration. However, the same qualities that impede the use of a standard backup solution provide an advantage for rapid reprovisioning of virtual desktops. When the underlying infrastructure for things such as the delivery of applications and user data, along with the parent VMs, are restored, then linked clone desktop pools can be recreated and made available within a short amount of time, and therefore lessening the impact of an outage or data loss. Stateful desktops In the stateful desktop pool scenario, all of the virtual desktops retain user data when the user logs back in to the virtual desktop. 
So, in this case, backing up the virtual machines with third-party tools like any other virtual machine in vSphere is considered the optimal method for protection and recovery. Stateless desktops With the stateless desktop architecture, the virtual desktops do not retain the desktop state when the user logs back in to the virtual desktop. The nature of the stateless desktops does not require and nor do they directly contain any data that requires a backup. All the user data in a stateless desktop is stored on a file share. The user data includes any files the user creates, changes, or copies within the virtual infrastructure, along with the user persona data. Therefore, because no user data is stored within the virtual desktop, there will be no need to back up the desktop. File shares should be included in the standard backup strategy and all user data and persona information will be included in the existing daily backups. The ThinApp repository The ThinApp repository is similar in nature to the user data on the stateless desktops in that it should reside on a redundant file share that is backed up regularly. If the ThinApp packages are configured to preserve each user's sandbox, the ThinApp repository should likely be backed up nightly. Persona Management With the View Persona Management feature, the user's remote profile is dynamically downloaded after the user logs in to a virtual desktop. The secure, centralized repository can be configured in which Horizon View will store user profiles. The standard practice is to back up network shares on which View Persona Management stores the profile repository. View Persona Management will ensure that user profiles are backed up to the remote profile share, eliminating the need for additional tools to back up user data on the desktops. Therefore, backup software to protect the user profile on the View desktop is unnecessary. VMware vCenter Most established IT departments are using backup tools from the storage or backup vendor to protect the datastores where the VM's are stored. This will make the recovery of the base vSphere environment faster and easier. The central piece of vCenter is the vCenter database. If there is a total loss of database you will lose all your configuration information of vSphere, including the configuration specific to View (for examples, users, folders, and many more). Another important item to understand is that even if you rebuild your vCenter using the same folder and resource pool names, your View environment will not reconnect and use the new vCenter. The reason is that each object in vSphere has what is called a Managed object Reference (MoRef) and they are stored in the vSphere database. View uses the MoRef information to talk to vCenter. As View and vSphere rely on each other, making a backup of your View environment without backing up your vSphere environment doesn't make sense. Restoring the VMware View environment If your environment has multiple Connection Servers, the best thing to do would be delete all the servers but one, and then use the following steps to restore the ADAM database: Connect directly to the server where the View Connection Server is located using a remote desktop utility such as RDP. Stop the View Connection service, as shown in the following screenshot: Locate the backup (or exported) ADAM database file that has the .ldf extension. 
The first step of the import is to decrypt the file. Open a command prompt and use the cd command to navigate to C:\Program Files\VMware\VMware View\Server\tools\bin, then use the following command:

vdmimport -f View_Backup\vdmBackup.ldf -d > View_Backup\vmdDecrypt.ldf

You will be prompted to enter the password for the account you used to create the backup file. Now use the vdmimport -f [decrypted filename] command (from the preceding example, the filename will be vmdDecrypt.ldf). After the ADAM database is updated, you can restart the View Connection Server service. Replace the deleted Connection Servers by running the Connection Server installation and using the Replica option.

To restore the View Composer database, connect to the server where Composer is installed. Stop the View Composer service and use your standard procedure for restoring a database. After the restore, start the View Composer service.

While this provides the steps to restore the main components of the Connection Server, the steps to perform a complete View Connection Server restore can be found in the VMware KB article, Performing an end-to-end backup and restore for VMware View Manager, at http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1008046.

Reconciliation after recovery

One of the main factors to consider when performing a restore in a Horizon View infrastructure is the possibility that the Connection Server environment could be out of sync with the current View state and a reconciliation is required. After restoring the Connection Server ADAM database, there may be missing desktops shown in the Connection Server Admin user interface if the following actions were executed after the backup but before the restore:

- The administrator deleted pools or desktops
- The desktop pool was recomposed, which resulted in the removal of the unassigned desktops

Missing desktops or pools can be manually removed from the Connection Server Admin UI. Some of the automated desktops may become disassociated with their pools due to the creation of a pool between the time of the backup and the restore time. View administrators may be able to make them usable by cloning the linked clone desktops to full clone desktops using vCenter Server. These would be created as individual desktops in the Connection Server and can then be assigned to specific users.

Business Continuity and Disaster Recovery

It's important to ensure that the virtual desktops, along with the application delivery infrastructure, are included and prioritized in a Business Continuity and Disaster Recovery plan. Also, it's important to ensure that the recovery procedures are tested and validated on a regular cycle, and that procedures and mechanisms are in place to ensure critical data (images, software media, data backups, and so on) is always stored and ready in an alternate location. This will ensure an efficient and timely recovery.

It would be ideal to have a disaster recovery plan and business continuity plan that recovers the essential services to an alternate "standby" data center. This allows the data to be backed up and available offsite to the alternate facility for an additional measure of protection. The alternate data center could have "hot" standby capacity for the virtual desktops and application delivery infrastructure.
This site would then address 50 percent capacity in the event of a disaster and also 50 percent additional capacity in the event of a business continuity event that prevents users from accessing the main facility. The additional capacity will also provide a rollback option if there were failed updates to the main data center. Operational procedures should ensure the desktop and server images are available to the alternate facility when changes are made to the main VMware View system. Desktop and application pools should also be updated in the alternate data center whenever maintenance procedures are executed and validated in the main data center. Summary As expected, it is important to back up the fundamental components of a VMware View solution. While a resilient design should mitigate most types of failure, there are still occasions when a backup may be needed to bring an environment back up to an operational level. This article covered the major components of View and provided some of the basic options for creating backups of those components. The Connection Server and Composer database along with vCenter were explained. There was a good overview of the options used to protect the different types of virtual desktops. The ThinApp repository and Persona Management was also explained. The article also covered the basic recovery options and where to find information on the complete View recovery procedures. Resources for Article: Further resources on this subject: Introduction to Veeam® Backup & Replication for VMware [article] Design, Install, and Configure [article] VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [article]

Configuring organization network services

Packt
22 Aug 2014
9 min read
This article by Lipika Pal, the author of the book VMware vCloud Director Essentials, teaches you to configure organization network services. Edge devices can be used as DNS relay hosts owing to the release of vCloud Networking and Security suite 5.1. However, before we jump onto how to do it and why you should do it, let us discuss the DNS relay host technology itself. (For more resources related to this topic, see here.) If your client machines want to send their DNS queries, they contact DNS relay, which is nothing but a host. The queries are sent by the relay host to the provider's DNS server or any other entity specified using the Edge device settings. The answer received by the Edge device is then sent back to the machines. The Edge device also stores the answer for a short period of time, so any other machine in your network searching for the same address receives the answer directly from the Edge device without having to ask internet servers again. In other words, the Edge device has this tiny memory called DNS cache that remembers the queries. The following diagram illustrates one of the setups and its workings: In this example, you see an external interface configured on Edge to act as a DNS relay interface. On the client side, we configured Client1 VM such that it uses the internal IP of the Edge device (192.168.1.1) as a DNS server entry. In this setup, Client1 requests DNS resolution (step 1) for the external host, google.com, from Edge's gateway internal IP. To resolve google.com, the Edge device will query its configured DNS servers (step 2) and return that resolution to Client1 (step 3). Typical uses of this feature are as follows: DMZ environment Multi-tenant environment Accelerated resolution time Configuring DNS relay To configure DNS relay in a vShield Edge device, perform the following steps. Configure DNS relay when creating an Edge device or when there is an Edge device available. This is an option for an organization gateway and not for a vApp or Org network. Now, let's develop an Edge gateway in an organization vDC while enabling DNS relay by executing the following steps: Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud. Log in to the cloud as the administrator. You will be presented with the Home screen. Click on the Organization VDCs link and on the right-hand side, you will see some organization vDCs created. Click on any organization vDC. Doing this will take you to the vDC page. Click on the Administration page and double-click on Virtual Datacenter. Then click on the Edge Gateways tab. Click on the green-colored + sign as shown in the following screenshot: On the Configure Edge Gateway screen, click on the Configure IP Settings section. Use the other default settings and click on Next. On the Configure External Networks screen, select the external network and click on Add. You will see a checkbox on this same screen. Use the default gateway for DNS relay. Once you do, select it and click on Next, as shown in the following screenshot: Select the default value on the Configure IP Settings page and click on Next. Specify a name for this Edge gateway and click on Next. Review the information and click on Finish. Let's look an alternative way to configure this, assuming you already have an Edge gateway and are trying to configure DNS Relay. Execute the following steps to configure it: Open the vCloud Director URL in a supported browser, for example, https://serverFQDN/cloud. Log in to the cloud as the administrator. 
You will be presented with the Home screen. On the Home screen, click on Edge Gateways. Select an appropriate Edge gateway, right-click, and select Properties, as shown in the following screenshot: Click on the Configure External Networks tab. Scroll down and select the Use default gateway for DNS Relay. checkbox, as shown in the following screenshot: Click on OK. In this section, we learned to configure DNS relay. In the next section, we discuss the configuration of a DHCP service in vCloud Director. DHCP services in vCloud Director vShield Edge devices support IP address pooling using the DHCP service. vShield Edge DHCP service listens on the vShield Edge internal interface for DHCP discovery. It uses the internal interface's IP address on vShield Edge as the default gateway address for all clients. The broadcast and subnet mask values of the internal interface are used for the container network. However, when you translate this with vCloud, not all types of networks support DHCP. That said, the Direct Connect network does not support DHCP. So, only routed and isolated networks support the vCNS DHCP service. The following diagram illustrates a routed organization vCD network: In the preceding diagram, the DHCP service provides an IP address from the Edge gateway to the Org networks connected to it. The following diagram shows how vApp is connected to a routed external network and gets a DHCP service: The following diagram shows a vApp network and a vApp connected to it, and DHCP IP address being obtained from the vShield Edge device: Configuring DHCP pools in vCloud Director The following actions are required to set up Edge DHCP: Add DHCP IP pools Enable Edge DHCP services As a prerequisite, you should know which Edge device is connected to which Org vDC network. Execute the following steps to configure DHCP pool: Open up a supported browser. Go to the URL of the vCD server; for example, https://serverFQDN/cloud. Log in to vCD by typing an administrator user ID and password. Click on the Edge Gateways link. Select the appropriate gateway, right-click on it, and select Edge Gateway Services, as shown in the following screenshot: The first service is DHCP, as shown in the following screenshot: Click on Add. From the drop-down combobox, select the network that you want the DHCP to applied be on. Specify the IP range. Select Enable Pool and click on OK, as shown in the following screenshot: Click on the Enable DHCP checkbox and then on OK. In this section, we learned about the DHCP pool, its functionality, and how to configure it. Understanding VPN tunnels in vCloud Director It's imperative that we first understand the basics of CloudVPN tunnels and then move on to a use case. We can then learn to configure a VPN tunnel. A VPN tunnel is an encrypted or more precisely, encapsulated network path on a public network. This is often used to connect two different corporate sites via the Internet. In vCloud Director, you can connect two organizations through an external network, which can also be used by other organizations. The VPN tunnel prevents users in other organizations from being able to monitor or intercept communications. VPNs must be anchored at both ends by some kind of firewall or VPN device. In vCD, the VPNs are facilitated by vShield Edge devices. When two systems are connected by a VPN tunnel, they communicate like they are on the same network. 
Let's have a look at the different types of VPN tunnels you can create in vCloud Director: VPN tunnels between two organization networks in the same organization VPN tunnels between two organization networks in two different organizations VPN tunnels between an organization network and a remote network outside of VMware vCloud While only a system administrator can create an organization network, organization administrators have the ability to connect organization networks using VPN tunnels. If the VPN tunnel connects two different organizations, then the organization administrator from each organization must enable the connection. A VPN cannot be established between two different organizations without the authorization of either both organization administrators or the system administrator. It is possible to connect VPN tunnels between two different organizations in two different instances of vCloud Director. The following is a diagram of a VPN connection between two different organization networks in a single organization: The following diagram shows a VPN tunnel between two organizations. The basic principles are exactly the same. vCloud Director can also connect VPN tunnels to remote devices outside of vCloud. These devices must be IPSec-enabled and can be network switches, routers, firewalls, or individual computer systems. This ability to establish a VPN tunnel to a device outside of vCD can significantly increase the flexibility of vCloud communications. The following diagram illustrates a VPN tunnel to a remote network: Configuring a virtual private network To configure an organization-to-organization VPN tunnel in vCloud Director, execute the following steps: Start a browser. Insert the URL of the vCD server into it, for example, https://serverFQDN/cloud. Log in to vCD using the administrator user ID and password. Click on the Manage & Monitor tab. Click on the Edge Gateways link in the panel on the left-hand side. Select an appropriate gateway, right-click, and select Edge Gateway Services. Click on the VPN tab. Click on Configure Public IPs. Specify a public IP and click on OK, as shown in the following screenshot: Click on Add to add the VPN endpoint. Click on Establish VPN to and specify an appropriate VPN type (in this example, it is the first option), as shown in the following screenshot: If this VPN is within the same organization, then select the Peer Edge Gateway option from the dropdown. Then, select the local and peer networks. Select the local and peer endpoints. Now click on OK. Click on Enable VPN and then on OK. This section assumes that either the firewall service is disabled or the default rule is set to accept all on both sides. In this section, we learned what VPN is and how to configure it within a vCloud Director environment. In the next section, we discuss static routing and various use cases and implementation.

A Virtual Machine for a Virtual World

Packt
11 Jul 2014
15 min read
(For more resources related to this topic, see here.) Creating a VM from a template Let us start by creating our second virtual machine from the Ubuntu template. Right-click on the template and select Clone, as shown in the following screenshot: Use the settings shown in the following screenshot for the new virtual machine. You can also use any virtual machine name you like. A VM name can only be alphanumeric without any special characters. You can also use any other VM you have already created in your own virtual environment. Access the virtual machine through the Proxmox console after cloning and setting up network connectivity such as IP address, hostname, and so on. For our Ubuntu virtual machine, we are going to edit interfaces in /etc/network/, hostname in /etc/, and hosts in /etc/. Advanced configuration options for a VM We will now look at some of the advanced configuration options we can use to extend the capability of a KVM virtual machine. The hotplugging option for a VM Although it is not a very common occurrence, a virtual machine can run out of storage unexpectedly whether due to over provisioning or improper storage requirement planning. For a physical server with hot swap bays, we can simply add a new hard drive and then partition it, and you are up and running. Imagine another situation when you have to add some virtual network interface to the VM right away, but you cannot afford shutting down the VM to add the vNICs. The hotplug option also allows hotplugging virtual network interfaces without shutting down a VM. Proxmox virtual machines by default do not support hotplugging. There are some extra steps needed to be followed in order to enable hotplugging for devices such as virtual disks and virtual network interfaces. Without the hotplugging option, the virtual machine needs to be completely powered off and then powered on after adding a new virtual disk or virtual interface. Simply rebooting the virtual machine will not activate the newly added virtual device. In Proxmox 3.2 and later, the hotplug option is not shown on the Proxmox GUI. It has to be done through CLI by adding options to the <vmid>.conf file. Enabling the hotplug option for a virtual machine is a three-step process: Shut down VM and add the hotplug option into the <vmid>.conf file. Power up VM and then load modules that will initiate the actual hotplugging. Add a virtual disk or virtual interface to be hotplugged into the virtual machine. The hotplugging option for <vmid>.conf Shut down the cloned virtual machine we created earlier and then open the configuration file from the following location. Securely log in to the Proxmox node or use the console in the Proxmox GUI using the following command: # nano /etc/pve/nodes/<node_name>/qemu-server/102.conf With default options added during the virtual machine creation process, the following code is what the VM configuration file looks like: ballon: 512 bootdisk: virtio0 cores: 1 ide2: none, media=cdrom kvm: 0 memory: 1024 name: pmxUB01 net0: e1000=56:63:C0:AC:5F:9D,bridge=vmbr0 ostype: l26 sockets: 1 virtio0: vm-nfs-01:102/vm-102-disk-1.qcow2,format=qcow2,size=32G Now, at the bottom of the 102.conf configuration file located under /etc/pve/nodes/<node_name>/qemu-server/, we will add the following option to enable hotplugging in the virtual machine: hotplug: Save the configuration file and power up the virtual machine. 
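With the VM powered back up, the change can also be confirmed from the Proxmox node's shell; qm is Proxmox's standard CLI for managing KVM guests, and 102 is simply the example VM ID used above:

# print the configuration of VM 102 and check that the hotplug entry is present
qm config 102 | grep hotplug

If nothing is returned, re-check that the option was added to the correct <vmid>.conf file before continuing.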
Loading modules After the hotplug option is added and the virtual machine is powered up, it is now time to load two modules into the virtual machine, which will allow hotplugging a virtual disk anytime without rebooting the VM. Securely log in to VM or use the Proxmox GUI console to get into the command prompt of the VM. Then, run the following commands to load the acpiphp and pci_hotplug modules. Do not load these modules to the Proxmox node itself: # sudo modprobe acpiphp # sudo modprobe pci_hotplug   The acpiphp and pci_hotplug modules are two hot plug drivers for the Linux operating system. These drivers allow addition of a virtual disk image or virtual network interface card without shutting down the Linux-based virtual machine. The modules can also be loaded automatically during the virtual machine boot by inserting them in /etc/modules. Simply add acpiphp and pci_hotplug on two separate lines in /etc/modules. Adding virtual disk/vNIC After loading both the acpiphp and pci_hotplug modules, all that remains is adding a new virtual disk or virtual network interface in the virtual machine through a web GUI. On adding a new disk image, check that the virtual machine operating system recognizes the new disk through the following command: #sudo fdisk -l For a virtual network interface, simply add a new virtual interface from a web GUI and the operating system will automatically recognize a new vNIC. After adding the interface, check that the vNIC is recognized through the following command: #sudo ifconfig –a Please note that while the hotplugging option works great with Linux-based virtual machines, it is somewhat problematic on Windows XP/7-based VMs. Hotplug seems to work great with both 32- and 64-bit versions of the Windows Server 2003/2008/2012 VMs. The best practice for a Windows XP/7-based virtual machine is to just power cycle the virtual machine to activate newly added virtual disk images. Forcing the Windows VM to go through hotplugging will cause an unstable operating environment. This is a limitation of the KVM itself. Nested virtual environment In simple terms, a virtual environment inside another virtual environment is known as a nested virtual environment. If the hardware resource permits, a nested virtual environment can open up whole new possibilities for a company. The most common scenario of a nested virtual environment is to set up a fully isolated test environment to test software such as hypervisor, or operating system updates/patches before applying them in a live environment. A nested environment can also be used as a training platform to teach computer and network virtualization, where students can set up their own virtual environment from the ground without breaking the main system. This eliminates the high cost of hardware for each student or for the test environment. When an isolated test platform is needed, it is just a matter of cloning some real virtual machines and giving access to authorized users. A nested virtual environment has the potential to give the network administrator an edge in the real world by allowing cost cutting and just getting things done with limited resources. One very important thing to keep in mind is that a nested virtual environment will have a significantly lower performance than a real virtual environment. If the nested virtual environment also has virtualized storage, performance will degrade significantly. The loss of performance can be offset by creating a nested environment with an SSD storage backend. 
When a nested virtual environment is created, it usually also contains virtualized storage to provide virtual storage for nested virtual machines. This allows for a fully isolated nested environment with its own subnet and virtual firewall. There are many debates about the viability of a nested virtual environment. Both pros and cons can be argued equally. But it will come down to the administrator's grasp on his or her existing virtual environment and good understanding of the nature of requirement. This allowed us to build a fully functional Proxmox cluster from the ground up without using additional hardware. The following screenshot is a side-by-side representation of a nested virtual environment scenario: In the previous comparison, on the right-hand side we have our basic cluster we have been building so far. On the left-hand side we have the actual physical nodes and virtual machines used to create the nested virtual environment. Our nested cluster is completely isolated from the rest of the physical cluster with a separate subnet. Internet connectivity is provided to the nested environment by using a virtualized firewall 1001-scce-fw-01. Like the hotplugging option, nesting is also not enabled in the Proxmox cluster by default. Enabling nesting will allow nested virtual machines to have KVM hardware virtualization, which increases the performance of nested virtual machines. To enable KVM hardware virtualization, we have to edit the modules in /etc/ of the physical Proxmox node and <vmid>.conf of the virtual machine. We can see that the option is disabled for our cloned nested virtual machine in the following screenshot: Enabling KVM hardware virtualization KVM hardware virtualization can be added just by performing the following few additional steps: In each Proxmox node, add the following line in the /etc/modules file: kvm-amd nested=1 Migrate or shut down all virtual machines of Proxmox nodes and then reboot. After the Proxmox nodes reboot, add the following argument in the <vmid>.conf file of the virtual machines used to create a nested virtual environment: args: -enable-nesting Enable KVM hardware virtualization from the virtual machine option menu through GUI. Restart the nested virtual machine. Network virtualization Network virtualization is a software approach to set up and maintain network without physical hardware. Proxmox has great features to virtualize the network for both real and nested virtual environments. By using virtualized networking, management becomes simpler and centralized. Since there is no physical hardware to deal with, the network ability can be extended within a minute's notice. Especially in a nested virtual environment, the use of virtualized network is very prominent. In order to set up a successful nested virtual environment, a better grasp of the Proxmox network feature is required. With the introduction of Open vSwitch (www.openvswitch.org) in Proxmox 3.2 and later, network virtualization is now much more efficient. Backing up a virtual machine A good backup strategy is the last line of defense against disasters, such as hardware failure, environmental damages, accidental deletions, and misconfigurations. In a virtual environment, a backup strategy turns into a daunting task because of the number of machines needed to be backed up. In a busy production environment, virtual machines may be created and discarded whenever needed or not needed. Without a proper backup plan, the entire backup task can go out of control. 
Gone are those days when we only had few server hardware to deal with and backing them up was an easy task. Today's backup solutions have to deal with several dozens or possibly several hundred virtual machines. Depending on the requirement, an administrator may have to backup all the virtual machines regularly instead of just the files inside them. Backing up an entire virtual machine takes up a very large amount of space after a while depending on how many previous backups we have. A granular file backup helps to quickly restore just the file needed but sure is a bad choice if the virtual server is badly damaged to a point that it becomes inaccessible. Here, we will see different backup options available in Proxmox, their advantages, and disadvantages. Proxmox backup and snapshot options Proxmox has the following two backup options: Full backup: This backs up the entire virtual machine. Snapshot: This only creates a snapshot image of the virtual machine. Proxmox 3.2 and above can only do a full backup and cannot do any granular file backup from inside a virtual machine. Proxmox also does not use any backup agent. Backing up a VM with a full backup All full backups are in the .tar format containing both the configuration file and virtual disk image file. The TAR file is all you need to restore the virtual machine on any nodes and on any storage. Full backups can also be scheduled on a daily and weekly basis. Full virtual backup files are named based on the following format: vzdump-qemu-<vm_id>-YYYY_MM_DD-HH_MM_SS.vma.lzo The following screenshot shows what a typical list of virtual machine backups looks like: Proxmox 3.2 and above cannot do full backups on LVM and Ceph RBD storage. Full backups can only occur on local, Ceph FS, and NFS-based storages, which are defined as backup during storage creation. Please note that Ceph FS and RBD are not the same type of storage even though they both coexist on the same Ceph cluster. The following screenshot shows the storage feature through the Proxmox GUI with backup-enabled attached storages: The backup menu in Proxmox is a true example of simplicity. With only three choices to select, it is as easy as it can get. The following screenshot is an example of a Proxmox backup menu. Just select the backup storage, backup mode, and compression type and that's it: Creating a schedule for Backup Schedules can be created from the virtual machine backup option. We will see each option box in detail in the following sections. The options are shown in the following screenshot: Node By default, a backup job applies to all nodes. If you want to apply the backup job to a particular node, then select it here. With a node selected, backup job will be restricted to that node only. If a virtual machine on node 1 was selected for backup and later on the virtual machine was moved to node 2, it will not be backed up since only node 1 was selected for this backup task. Storage Select a backup storage destination where all full backups will be stored. Typically an NFS server is used for backup storage. They are easy to set up and do not require a lot of upfront investment due to their low performance requirements. Backup servers are much leaner than computing nodes since they do not have to run any virtual machines. Backups are supported on local, NFS, and Ceph FS storage systems. Ceph FS storages are mounted locally on Proxmox nodes and selected as a local directory. Both Ceph FS and RBD coexist on the same Ceph cluster. 
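For reference, the storages offered in this drop-down are the ones whose definition includes backup among their allowed content types. On any node this can be checked in /etc/pve/storage.cfg; a sketch of what an NFS entry might look like is shown below, where the storage name, server address, and export path are placeholders rather than values from this demonstration:

nfs: backup-nfs
    server 192.168.1.10
    export /export/proxmox-backup
    path /mnt/pve/backup-nfs
    content backup
    maxfiles 3

The maxfiles line caps how many backup files are kept per VM on that storage.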
Day of Week Select which day or days the backup task applies to. Days' selection is clickable in a drop-down menu. If the backup task should run daily, then select all the days from the list. Start Time Unlike Day of Week, only one time slot can be selected. Multiple selections of time to backup different times of the day are not possible. If the backup must run multiple times a day, create a separate task for each time slot. Selection mode The All selection mode will select all the virtual machines within the whole Proxmox cluster. The Exclude selected VMs mode will back up all VMs except the ones selected. Include selected VMs will back up only the ones selected. Send email to Enter a valid e-mail address here so that the Proxmox backup task can send an e-mail upon backup task completion or if there was any issue during backup. The e-mail includes the entire log of the backup tasks. It is highly recommended to enter the e-mail address here so that an administrator or backup operator can receive backup task feedback e-mails. This will allow us to find out if there was an issue during backup or how much time it actually takes to see if any performance issue occurred during backup. The following screenshot is a sample of a typical e-mail received after a backup task: Compression By default, the LZO compression method is selected. LZO (http://en.wikipedia.org/wiki/Lempel–Ziv–Oberhumer) is based on a lossless data compression algorithm, designed with the decompression ratio in mind. LZO is capable to do fast compression and even faster decompressions. GZIP will create smaller backup files at the cost of high CPU usage to achieve a higher compression ratio. Since higher compression ratio is the main focal point, it is a slow backup process. Do not select the None compression option, since it will create large backups without compression. With the None method, a 200 GB RAW disk image with 50 GB used will have a 200 GB backup image. With compression turned on, the backup image size will be around 70-80 GB. Mode Typically, all running virtual machine backups occur with the Snapshot option. Do not confuse this Snapshot option with Live Snapshots of VM. The Snapshot mode allows live backup while the virtual machine is turned on, while Live Snapshots captures the state of the virtual machine for a certain point in time. With the Suspend or Stop mode, the backup task will try to suspend the running virtual machine or forcefully stop it prior to commencing full backup. After backup is done, Proxmox will resume or power up the VM. Since Suspend only freezes the VM during backup, it has less downtime than the Stop mode because VM does not need to go through the entire reboot cycle. Both the Suspend and Stop modes backup can be used for VM, which can have partial or full downtime without disrupting regular infrastructure operation, while the Snapshot mode is used for VMs that can have a significant impact due to their downtime.
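All of these options map onto the vzdump command line, which is what the scheduler calls behind the scenes, so an equivalent one-off backup can be run by hand from the node's shell. This is only a sketch; the VM ID, storage name, and e-mail address are placeholders:

vzdump 102 --storage backup-nfs --mode snapshot --compress lzo --mailto admin@example.com

Swapping in --mode suspend or --mode stop reproduces the other two modes, and --compress gzip trades backup speed for a smaller archive, exactly as described above.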

An Introduction to Microsoft Remote Desktop Services and VDI

Packt
09 Jul 2014
10 min read
(For more resources related to this topic, see here.) Remote Desktop Services and VDI What Terminal Services did was to provide simultaneous access to the desktop running on a server/group of servers. The name was changed to Remote Desktop Services (RDS) with the release of Windows Server 2008 R2 but actually this encompasses both VDI and what was Terminal Services and now referred to as Session Virtualization. This is still available alongside VDI in Windows Server 2012 R2; each user who connects to the server is granted a Remote Desktop session (RD Session) on an RD Session Host and shares this same server-based desktop. Microsoft refers to any kind of remote desktop as a Virtual Desktop and these are grouped into Collections made available to specific group users and managed as one, and a Session Collection is a group of virtual desktop based on session virtualization. It's important to note here that what the users see with Session Virtualization is the desktop interface delivered with Windows Server which is similar to but not the same as the Windows client interface, for example, it does have a modern UI by default. We can add/remove the user interface features in Windows Server to change the way it looked, one of these the Desktop Experience option is there specifically to make Windows Server look more like Windows client and in Windows Server 2012 R2 if you add this feature option in you'll get the Windows store just as you do in Windows 8.1. VDI also provides remote desktops to our users over RDP but does this in a completely different way. In VDI each user accesses a VM running Windows Client and so the user experience looks exactly the same as it would on a laptop or physical desktop. In Windows VDI, these Collections of VMs run on Hyper-V and our users connect to them with RDP just as they can connect to RD Sessions described above. Other parts of RDS are common to both as we'll see shortly but what is important for now is that RDS manages the VDI VMs for us and organises which users are connected to which desktops and so on. So we don't directly create VMs in a Collection or directly set up security on them. Instead a collection is created from a template which is a special VM that is never turned on as the Guest OS is sysprepped and turning it on would instantly negate that. The VMs in a Collection inherit this master copy of the Guest OS and any installed applications as well. The settings for the template VM are also inherited by the virtual desktops in a collection; CPU memory, graphics, networking, and so on, and as we'll see there are a lot of VM settings that specifically apply to VDI rather than for VMs running a server OS as part of our infrastructure. To reiterate VDI in Windows Server 2012 R2, VDI is one option in an RDS deployment and it's even possible to use VDI alongside RD Sessions for our users. For example, we might decide to give RD Sessions for our call center staff and use VDI for our remote workforce. Traditionally, the RD Session Hosts have been set up on physical servers as older versions of Hyper-V and VMware weren't capable of supporting heavy workloads like this. However, we can put up to 4 TB RAM and 64 logical processors into one VM (physical hardware permitting) and run large deployment RD Sessions virtually. Our users connect to our Virtual Desktop Collections of whatever kind with a Remote Desktop Client which connects to the RDS servers using the Remote Desktop Protocol (RDP). 
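For example, the same built-in Windows client can be launched against any of these endpoints from a command prompt; the hostname below is a placeholder for your own RD broker or Session Host:

mstsc /v:rds-broker01.example.local /f

The /v switch names the remote host and /f starts the session in full-screen mode.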
When we connect to any server with the Microsoft Terminal Services Client (MSTSC.exe), we are using RDP, but without setting up RDS there are only two administrative sessions available per server. Many of the advantages and disadvantages of running any kind of remote desktop apply to both solutions.

Advantages of Remote Desktops

Given that desktop computing for our users is now going to be done in the data center, we only need to deploy powerful desktops and laptops to those users who would have difficulty connecting to our RDS infrastructure. Everyone else could either be equipped with thin client devices, or given access from devices they already have if working remotely, such as tablets or their home PCs and laptops. Thin client computing has evolved in line with advances in remote desktop computing, and the latest devices from 10Zig, Dell, and HP, among others, support multiple hi-res monitors, smart cards, and web cams for unified communications, all of which are also enabled in the latest version of RDP (8.1).

Using remote desktops can also reduce overall power consumption for IT, as an efficiently cooled data center plus a fleet of thin clients will consume less power than the desktop PCs they replace in an RDS deployment; in one case I saw, this resulted in a green charity saving 90 percent of its IT power bill.

Broadly speaking, managing remote desktops ought to be easier than managing their physical equivalents; for a start, they'll be running on standard hardware, so installing and updating drivers won't be so much of an issue. RDS has specific tooling to create a largely automatic process for keeping our collections of remote desktops in a desired state. VDI doesn't exist in a vacuum, and there are other Microsoft technologies that make desktop management easier with other types of virtualization:

User profiles: These have long been a problem for desktop specialists. There are dedicated technologies built into Session Virtualization and VDI to allow users' settings and data to be stored away from the VHD containing the guest OS. Techniques such as folder redirection can also help here, and new for Windows Server 2012 is User Environment Virtualization (UE-V), which provides a much richer set of tools that work across VDI, RD Sessions, and physical desktops to ensure the user has the same experience no matter what they are using for a Windows client desktop.

Application Virtualization (App-V): This allows us to deploy applications to the users who need them, when and where they need them, so we don't need to create different desktops for different types of users who need special applications; we just need a few desktop images carrying only generic applications plus those that can't make use of App-V. Even if App-V is not deployed, VDI administrators have total control over remote desktops, as we have any number of security techniques at our disposal, and if the worst comes to the worst, we can remove any installed applications every time a user logs off!

The simple fact that the desktops and applications we provide to our users now run on servers under our direct control also increases our IT security. Patching is now easy, particularly for our remote workforce, because whether they are on site or working remotely, their desktop is still in the data center. RDS in all its forms is then an ideal way of allowing a Bring Your Own Device (BYOD) policy. Users can bring in whatever device they wish into work, or work at home on their own device (WHOYD is my own acronym for this!), by using an RD Client on that device and securely connecting with it. There are then no concerns about having to partially or completely wipe users' own devices, or about not being able to do so because a device isn't in a connected state when it is lost or stolen.

VDI versus Session Virtualization

So why are there these two ways of providing remote desktops in Windows, and what are the advantages and disadvantages of each? First and foremost, Session Virtualization is always going to be much more efficient than VDI, as it provides more desktops on less hardware. This makes sense if we look at what's going on in each scenario when we have a hundred remote desktop users to provide for:

In Session Virtualization, we are running and sharing one operating system, and this will comfortably sit on 25 GB of disk. For memory, we need roughly 2 GB per CPU, and we can then allocate 100 MB of memory to each RD Session. So, on a quad-CPU server, a hundred RD Sessions will need 8 GB + (100 MB x 100), or roughly 18 GB of RAM.

If we want to support that same hundred users on VDI, we need to provision a hundred VMs, each with its own OS. To do this we need 2.5 TB of disk, and if we give each VM 1 GB of RAM, then 100 GB of RAM is needed. This is a little unfair on VDI in Windows Server 2012 R2, as we can cut down on the disk space needed and be much more efficient with memory than this, but even with these new technologies we would need 70 GB of RAM and say 400 GB of disk.

Remember that with RD Sessions our users are going to be running the desktop that comes with Windows Server 2012 R2, not Windows 8.1. Our RD Session users are sharing the one desktop, so they cannot be given administrative rights to that desktop. This may not be that important for them, but it can also affect what applications can be put onto the server desktop, and some applications just won't work on a server OS anyway. Users' sessions can't be moved from one session host to another while the user is logged in, so planned maintenance has to be carefully thought through and may mean working out of hours to patch servers and update applications if we want to avoid interrupting our users' work.

VDI, on the other hand, means that our users are using Windows 8.1, which they will be happier with, and this may well be the deciding factor regardless of cost, as our users are paying for this and their needs should take precedence. VDI can also be easier to manage without interrupting our users, as we can move running VMs around our physical hosts without stopping them.

Remote applications in RDS

Another part of the RDS story is the ability to provide remote access to individual applications rather than serving up a whole desktop. This is called RD RemoteApp, and it simply provides a shortcut to an individual application running either on a virtual desktop or on a remote session desktop. I have seen this used for legacy applications that won't run on the latest versions of Windows, and to provide access to secure applications, as it's a simple matter to prevent cut and paste or any sharing of data between a remote application and the local device it's executing on. RD RemoteApps work by publishing only the specified applications installed on our RD Session Hosts or VDI VMs.

Summary

This article discussed the advantages of remote desktops and how they can be provided in Windows, and compared VDI to Session Virtualization.
Resources for Article:

Further resources on this subject:
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
Installing Virtual Desktop Agent – server OS and desktop OS [article]
VMware View 5 Desktop Virtualization [article]

Upgrading from Previous Versions

Packt
23 Jun 2014
8 min read
This article guides you through the requirements and steps necessary to upgrade your VMM 2008 R2 SP1 to VMM 2012 R2. There is no direct upgrade path from VMM 2008 R2 SP1 to VMM 2012 R2; you must first upgrade to VMM 2012 and then to VMM 2012 R2. The correct upgrade path is VMM 2008 R2 SP1 -> VMM 2012 -> VMM 2012 SP1 -> VMM 2012 R2.

Upgrade notes:
VMM 2012 cannot be upgraded directly to VMM 2012 R2; upgrading it to VMM 2012 SP1 first is required.
VMM 2012 can be installed on Windows Server 2008 R2.
VMM 2012 SP1 requires Windows Server 2012.
VMM 2012 R2 requires at least Windows Server 2012 (Windows Server 2012 R2 is recommended).
Windows Server 2012 hosts can be managed by VMM 2012 SP1.
Windows Server 2012 R2 hosts require VMM 2012 R2.
System Center App Controller versions must match the VMM version.

To debug a VMM installation, the logs are located in %ProgramData%\VMMLogs, and you can use the CMTrace.exe tool to monitor the content of the files in real time, including SetupWizard.log and vmmServer.log.

VMM 2012 is a huge product upgrade with a new architecture, and there have been many improvements. This article only covers the VMM upgrade. If you have previous versions of other System Center family components installed in your environment, make sure you follow the correct upgrade and installation order. System Center 2012 R2 has some new components, and the installation order is critical. It is critical that you take the steps documented by Microsoft in Upgrade Sequencing for System Center 2012 R2 at http://go.microsoft.com/fwlink/?LinkId=328675 and use the following upgrade order:

1. Service Management Automation
2. Orchestrator
3. Service Manager
4. Data Protection Manager (DPM)
5. Operations Manager
6. Configuration Manager
7. Virtual Machine Manager (VMM)
8. App Controller
9. Service Provider Foundation
10. Windows Azure Pack for Windows Server
11. Service Bus Clouds
12. Windows Azure Pack
13. Service Reporting

Reviewing the upgrade options

This recipe will guide you through the upgrade options for VMM 2012 R2. Keep in mind that there is no direct upgrade path from VMM 2008 R2 to VMM 2012 R2.

How to do it...

Read through the following recommendations in order to upgrade your current VMM installation.

In-place upgrade from VMM 2008 R2 SP1 to VMM 2012

Use this method if your system meets the requirements for a VMM 2012 upgrade and you want to deploy it on the same server. The supported VMM version to upgrade from is VMM 2008 R2 SP1. If you need to upgrade VMM 2008 R2 to VMM 2008 R2 SP1, refer to http://go.microsoft.com/fwlink/?LinkID=197099. In addition, keep in mind that if you are running the SQL Server Express version, you will need to upgrade SQL Server to a fully supported version beforehand, as the Express version is not supported in VMM 2012. Once the system requirements are met and all of the prerequisites are installed, the upgrade process is straightforward. To follow the detailed recipe, refer to the Upgrading to VMM 2012 R2 recipe.

Upgrading from 2008 R2 SP1 to VMM 2012 on a different computer

Sometimes, you may not be able to do an in-place upgrade to VMM 2012 or even to VMM 2012 SP1. In this case, it is recommended that you use the following instructions: uninstall the current VMM, retaining the database, and then restore the database on a supported version of SQL Server. Next, install the VMM 2012 prerequisites on a new server (or on the same server, as long as it meets the hardware and OS requirements).
Finally, install VMM 2012, providing the retained database information in the Database configuration dialog, and the VMM setup will upgrade the database. When the install process is finished, upgrade the Hyper-V hosts with the latest VMM agents. The following figure illustrates the upgrade process from VMM 2008 R2 SP1 to VMM 2012:

When performing an upgrade from VMM 2008 R2 SP1 with a local VMM database to a different server, the encrypted data will not be preserved, as the encryption keys are stored locally. The same rule applies when upgrading from VMM 2012 to VMM 2012 SP1, and from VMM 2012 SP1 to VMM 2012 R2, if you are not using Distributed Key Management (DKM) in VMM 2012.

Upgrading from VMM 2012 to VMM 2012 SP1

To upgrade to VMM 2012 SP1, you should already have VMM 2012 up and running. VMM 2012 SP1 requires Windows Server 2012 and Windows ADK 8.0. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 SP1 and App Controller.

Upgrading from VMM 2012 SP1 to VMM 2012 R2

To upgrade to VMM 2012 R2, you should already have VMM 2012 SP1 up and running. VMM 2012 R2 requires at least Windows Server 2012 as the OS (Windows Server 2012 R2 is recommended) and Windows ADK 8.1. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 SP1 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 R2 and App Controller.

Some more planning considerations are as follows:

Virtual Server 2005 R2: VMM 2012 no longer supports Microsoft Virtual Server 2005 R2. If you have Virtual Server 2005 R2 or an unsupported ESXi version running and have not removed these hosts before the upgrade, they will be removed automatically during the upgrade process.

VMware ESX and vCenter: For VMM 2012, the supported versions of VMware are ESXi 3.5 to ESXi 4.1 and vCenter 4.1. For VMM 2012 SP1/R2, the supported VMware versions are ESXi 4.1 to ESXi 5.1, and vCenter 4.1 to 5.0.

SQL Server Express: This is not supported since VMM 2012. A full version is required.

Performance and Resource Optimization (PRO): The PRO configurations are not retained during an upgrade to VMM 2012. If you have an Operations Manager (SCOM) integration configured, it will be removed during the upgrade process. Once the upgrade process is finished, you can integrate SCOM with VMM again.

Library server: Since VMM 2012, VMM does not support a library server on Windows Server 2003. If you have one running and continue with the upgrade, you will not be able to use it. To use the same library server in VMM 2012, move it to a server running a supported OS before starting the upgrade.

Choosing a service account and DKM settings during an upgrade: During an upgrade to VMM 2012, on the Configure service account and distributed key management page of the setup, you are required to create a VMM service account (preferably a domain account) and choose whether you want to use DKM to store the encryption keys in Active Directory (AD).

Make sure to log on with the same account that was used during the VMM 2008 R2 installation: this needs to be done because, in some situations after the upgrade, the encrypted data (for example, the passwords in the templates) may not be available depending on the selected VMM service account, and you will be required to re-enter it manually.
For the service account, you can use either the Local System account or a domain account: a domain account is the recommended option, and when deploying a highly available VMM management server, it is the only option available. Note that DKM is not available in versions prior to VMM 2012.

Upgrading to a highly available VMM 2012: If you're thinking of upgrading to a highly available (HA) VMM, consider the following:

Failover Cluster: You must deploy the failover cluster before starting the upgrade.

VMM database: You cannot deploy the SQL Server for the VMM database on highly available VMM management servers. If you plan on upgrading the current VMM server to an HA VMM, you need to first move the database to another server. As a best practice, it is recommended that you keep the SQL Server cluster separate from the VMM cluster.

Library server: In a production or highly available environment, you need to consider making all of the VMM components highly available, not only the VMM management server. After upgrading to an HA VMM management server, it is recommended, as a best practice, that you relocate the VMM library to a clustered file server. In order to keep the custom fields and properties of the saved VMs, deploy those VMs to a host and save them to a new VMM 2012 library.

VMM Self-Service Portal: This is not supported since VMM 2012 SP1. It is recommended that you install System Center App Controller instead.

How it works...

There are two methods to upgrade to VMM 2012 from VMM 2008 R2 SP1: an in-place upgrade and upgrading to another server. Before starting, review the initial steps and the VMM 2012 prerequisites, and perform a full backup of the VMM database. Uninstall VMM 2008 R2 SP1 (retaining the data) and restore the VMM database to another SQL Server running a supported version. During the installation, point to that database in order to have it upgraded. After the upgrade is finished, upgrade the host agents. VMM will be rolled back automatically in the event of a failure during the upgrade process and reverted to its original installation/configuration.

There's more...

The names of the VMM services have changed in VMM 2012. If you have any applications or scripts that refer to these service names, update them accordingly:

2008 R2 SP1: Virtual Machine Manager (service name: vmmservice); Virtual Machine Manager Agent (service name: vmmagent)
2012 / 2012 SP1 / 2012 R2: System Center Virtual Machine Manager (service name: scvmmservice); System Center Virtual Machine Manager Agent (service name: scvmmagent)

See also

To move the file-based resources (for example, ISO images, scripts, and VHD/VHDX), refer to http://technet.microsoft.com/en-us/library/hh406929
To move the virtual machine templates, refer to Exporting and Importing Service Templates in VMM at http://go.microsoft.com/fwlink/p/?LinkID=212431
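Before starting any of the upgrade paths above, a full backup of the VMM database is repeatedly recommended. The following T-SQL is a minimal sketch of such a backup, run from SQL Server Management Studio or sqlcmd; the database name VirtualManagerDB is the usual VMM default and the backup path is a placeholder, so substitute the values from your own environment.

-- One-off, copy-only backup of the VMM database before the upgrade
BACKUP DATABASE [VirtualManagerDB]
TO DISK = N'D:\Backups\VirtualManagerDB_PreUpgrade.bak'
WITH COPY_ONLY, INIT, NAME = N'VMM database backup before upgrade';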

Virtual Machine Design

Packt
21 May 2014
8 min read
Causes of virtual machine performance problems

In a perfect virtual infrastructure, you will never experience any performance problems and everything will work well within the budget that you allocated. But should problems arise in this perfect utopian data center you've designed, hopefully this section will help you to identify and resolve them more easily.

CPU performance issues

The following is a summary of some of the common CPU performance issues you may experience in your virtual infrastructure. While it is not an exhaustive list of every possible problem you can experience with CPUs, it can help guide you in the right direction to solve CPU-related performance issues:

High ready time: When your ready time is above 10 percent, this could indicate CPU contention and could be impacting the performance of any CPU-intensive applications. This is not a guarantee of a problem; applications that are not nearly as sensitive can still report high values and perform well within guidelines. CPU ready time is reported in milliseconds; for the conversion to a percentage, see KB 2002181.

High costop time: The costop time will often correlate to contention in multi-vCPU virtual machines. Costop time exceeding 10 percent could cause challenges when vSphere tries to schedule all vCPUs of your multi-vCPU servers.

CPU limits: As discussed earlier, you will often experience performance problems if your virtual machine tries to use more resources than have been configured in your limits.

Host CPU saturation: When the vSphere host utilization runs at above 80 percent, you may experience host saturation issues. This can introduce performance problems across the host as the CPU scheduler tries to assign resources to virtual machines.

Guest CPU saturation: This is experienced on high utilization of vCPU resources within the operating system of your virtual machines. This can be mitigated, if required, by adding additional vCPUs to improve the performance of the application.

Misconfigured affinity: Affinity is enabled by default in vSphere; however, if it is manually configured to tie a VM to a specific physical CPU, problems can be encountered. This is often experienced when creating a VM with affinity settings and then cloning that VM. VMware advises against manually configuring affinity.

Oversizing vCPUs: When assigning multiple vCPUs to a virtual machine, you want to ensure that the operating system is able to take advantage of the CPUs and threads, and that your applications can support them. The overhead associated with unused vCPUs can impact other applications and resource scheduling within the vSphere host.

Low guest usage: Sometimes poor performance with low CPU utilization helps identify the problem as existing in I/O or memory. This is often a good indicator that an underused CPU is being held back by other resources or even by configuration.

Memory performance issues

Additionally, the following list is a summary of some common memory performance issues you may experience in your virtual infrastructure. Because of the way VMware vSphere handles memory management, there is a unique set of challenges in troubleshooting and resolving performance problems as they arise:

Host memory: Host memory is both a finite and very limited resource. While VMware vSphere incorporates some creative mechanisms to leverage and maximize the amount of available memory through features such as page sharing, memory management, and resource-allocation controls, several memory features only take effect when the host is under stress.

Transparent page sharing: This is the method by which redundant copies of pages are eliminated. TPS, enabled by default, works on regular 4 KB pages. When virtual machines use large physical pages (2 MB instead of 4 KB), vSphere will not attempt to share them, as the likelihood of multiple 2 MB chunks being identical is lower than for 4 KB chunks. This can cause a system to experience memory overcommit and performance problems; if memory stress is then experienced, vSphere may break these 2 MB chunks into 4 KB chunks to allow TPS to consolidate the pages.

Host memory consumed: When measuring utilization for capacity planning, the host memory consumed value can often be deceiving, as it does not always reflect the actual memory utilization. Instead, active memory or memory demand should be used as a better guide of the memory actually utilized, since features such as TPS can skew the consumed figure.

Memory over-allocation: Memory over-allocation will usually be fine for most applications in most environments. It is typically safe to have over 20 percent memory over-allocation, especially with similar applications and operating systems; the more similarity you have between your applications and environment, the higher you can take that number.

Swap to disk: If you over-allocate your memory too much, you may start to experience memory swapping to disk, which can result in performance problems if not caught early enough. It is best, in those circumstances, to evaluate which guests are swapping to disk to help correct either the application or the infrastructure as appropriate.

For additional details on vSphere memory management and monitoring, see KB 2017642.

Storage performance issues

When it comes to storage performance issues within your virtual machine infrastructure, there are a few areas you will want to pay particular attention to. Although most storage-related problems you are likely to experience depend on your backend infrastructure, the following are a few things you can look at when identifying whether the cause is the VM's storage or the SAN itself:

Storage latency: Latency experienced at the storage level is usually expressed as a combination of the latency of the storage stack, guest operating system, VMkernel virtualization layer, and the physical hardware. Typically, if you experience slowness and are noticing high latencies, one or more aspects of your storage could be the cause.

Three layers of latency: ESXi and vCenter typically report on three primary latencies: Guest Average Latency (GAVG), Device Average Latency (DAVG), and Kernel Average Latency (KAVG).

Guest Average Latency (GAVG): This value is the total amount of latency that ESXi is able to detect. This is not to say that it is the total amount of latency being experienced, but only the figure that ESXi is reporting against. So, if you're experiencing a 5 ms latency with GAVG and a performance application such as Perfmon is identifying a storage latency of 50 ms, something within the guest operating system is incurring a penalty of 45 ms of latency. In circumstances such as these, you should investigate the VM and its operating system to troubleshoot.

Device Average Latency (DAVG): Device Average Latency tends to focus on the more physical side of things, for instance, whether the storage adapters, HBA, or interface is experiencing any latency or communication problems with the backend storage array. Problems experienced here tend to fall more on the storage itself and are less easily troubleshooted within ESXi; some exceptions are firmware or adapter drivers, which may be introducing problems, or the queue depth in your HBA. More details on queue depth can be found in KB 1267.

Kernel Average Latency (KAVG): Kernel Average Latency is actually not a specific counter, as it is a calculation: Total Latency - DAVG = KAVG. Thus, when using this metric, you should be wary of a few values. The typical value of KAVG should be zero; anything greater may be I/O moving through the kernel queue and can generally be dismissed. When these latencies are 2 ms or consistently greater, it may indicate a storage performance issue, and your VMs, adapters, and queues should be reviewed for bottlenecks or problems.

The following are some KB articles that can help you further troubleshoot virtual machine storage:
Using esxtop to identify storage performance issues (KB 1008205)
Troubleshooting ESX/ESXi virtual machine performance issues (KB 2001003)
Testing virtual machine storage I/O performance for VMware ESX and ESXi (KB 1006821)

Network performance issues

Lastly, when it comes to addressing network performance issues, there are a few areas you will want to consider. Similar to storage performance issues, a lot of these are often addressed by the backend networking infrastructure; however, there are a few items you'll want to investigate within the virtual machines to ensure network reliability:

Networking error, IP already assigned to another adapter: This is a common problem experienced after V2V or P2V migrations, which results in ghosted network adapters. VMware KB 1179 guides you through the steps to remove these ghosted network adapters.

Speed or duplex mismatch within the OS: Left at the defaults, the virtual machine will use auto-negotiation to get maximum network performance; if configured down from that speed, this can introduce virtual machine limitations.

Choose the correct network adapter for your VM: Newer operating systems should support the VMXNET3 adapter, while some virtual machines, either legacy or upgraded from previous versions, may run older network adapter types. See KB 1001805 to help decide which adapters are correct for your usage.

The following are some KB articles that can help you further troubleshoot virtual machine networking:
Troubleshooting virtual machine network connection issues (KB 1003893)
Troubleshooting network performance issues in a vSphere environment (KB 1004097)

Summary

With this article, you should be able to inspect existing VMs while following design principles that will lead to correctly sized and deployed virtual machines. You should also have a better understanding of when your configuration is meeting your needs, and how to go about identifying performance problems associated with your VMs.

Resources for Article:

Further resources on this subject:
Introduction to vSphere Distributed switches [Article]
Network Virtualization and vSphere [Article]
Networking Performance Design [Article]
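Tying the CPU ready guidance from this article to something you can run, the following PowerCLI sketch pulls the realtime cpu.ready.summation counter for a VM and converts it to the percentage figure described in KB 2002181. It assumes an existing Connect-VIServer session; the VM name is a placeholder, and 20 seconds is the default realtime sample interval.

# Requires VMware PowerCLI and an existing Connect-VIServer session
$vm = Get-VM -Name "MyAppServer"   # placeholder VM name
$samples = Get-Stat -Entity $vm -Stat cpu.ready.summation -Realtime -MaxSamples 30
# Ready % = ready time in ms / sample interval in ms * 100 (20,000 ms per realtime sample)
$samples | ForEach-Object {
    [pscustomobject]@{
        Time     = $_.Timestamp
        Instance = $_.Instance
        ReadyPct = [math]::Round(($_.Value / 20000) * 100, 2)
    }
}

Values consistently above the 10 percent threshold discussed above for the aggregate instance would be the cue to look at contention on the host.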

Configuring placeholder datastores

Packt
20 May 2014
1 min read
Assuming that each of these paired sites is geographically separated, each site will have its own placeholder datastore. The following figure shows the site and placeholder datastore relationship:

This is how you configure placeholder datastores:

1. Navigate to vCenter Server's inventory home page and click on Site Recovery.
2. Click on Sites in the left pane and select a site.
3. Navigate to the Placeholder Datastores tab and click on Configure Placeholder Datastore, as shown in the following screenshot:
4. In the Configure Placeholder Datastore window, select an appropriate datastore and click on OK. To confirm the selection, exit the window.
5. Now, the Placeholder Datastores tab should show the configured placeholder. Refer to the following screenshot:
6. If you plan to configure a failback, repeat the procedure at the recovery site.

Summary

In this article, we covered the steps to be followed in order to configure placeholder datastores.

Resources for Article:

Further resources on this subject:
Disaster Recovery for Hyper-V [Article]
VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
Disaster Recovery Techniques for End Users [Article]

Installing Vertica

Packt
13 May 2014
9 min read
Massively Parallel Processing (MPP) databases are those that partition (and optionally replicate) data across multiple nodes. All meta-information regarding data distribution is stored in master nodes. When a query is issued, it is parsed and a suitable query plan is developed as per the meta-information and executed on the relevant nodes (the nodes that store the related user data). HP offers one such MPP database, called Vertica, to solve pertinent issues of Big Data analytics. Vertica differentiates itself from other MPP databases in many ways. The following are some of the key points:

Column-oriented architecture: Unlike traditional databases that store data in a row-oriented format, Vertica stores its data in columnar fashion. This allows a great level of compression on data, thus freeing up a lot of disk space.

Design tools: Vertica offers automated design tools that help in arranging your data more effectively and efficiently. The changes recommended by the tools not only ease pressure on the designer, but also help in achieving seamless performance.

Low hardware costs: Vertica allows you to easily scale up your cluster using just commodity servers, thus reducing hardware-related costs to a certain extent.

This article will guide you through the installation and creation of a Vertica cluster. It will also cover the installation of Vertica Management Console, which is shipped with the Vertica Enterprise edition only. It should be noted that it is possible to upgrade Vertica to a higher version, but the reverse is not possible.

Before installing Vertica, you should bear in mind the following points:

Only one database instance can be run per Vertica cluster. So, if you have a three-node cluster, then all three nodes will be dedicated to one single database.
Only one instance of Vertica is allowed to run per node/host.
Each node requires at least 1 GB of RAM.
Vertica can be deployed on Linux only and has the following requirements:
Only the root user, or a user with all privileges (sudo), can run the install_vertica script. This script is crucial for installation and will be used in many places.
Only ext3/ext4 filesystems are supported by Vertica.
Verify whether rsync is installed.
The time should be synchronized on all nodes/servers of a Vertica cluster; hence, it is good to check whether the NTP daemon is running.

Understanding the preinstallation steps

Vertica has various preinstallation steps that need to be performed for the smooth running of Vertica. Some of the important ones are covered here.

Swap space

Swap space is the space on the physical disk that is used when primary memory (RAM) is full. Although swap space is used in sync with RAM, it is not a replacement for RAM. It is suggested to have 2 GB of swap space available for Vertica. Additionally, Vertica performs well when swap-space-related files and Vertica data files are configured to be stored on different physical disks.

Dynamic CPU frequency scaling

Dynamic CPU frequency scaling, or CPU throttling, is where the system automatically adjusts the frequency of the microprocessor dynamically. The clear advantage of this technique is that it conserves energy and reduces the heat generated. However, it is believed that CPU frequency scaling reduces the number of instructions a processor can issue, and that when frequency scaling is enabled, the CPU doesn't come to full throttle promptly. Hence, it is best that dynamic CPU frequency scaling is disabled.
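A quick way to check both of these items before installing is from the shell on each node. The following is a minimal sketch for a typical RHEL/CentOS system; the cpufreq path under /sys only exists when the cpufreq driver is loaded, so a missing file simply means scaling is not active.

# Check available swap space (Vertica suggests at least 2 GB)
free -m | grep -i swap

# Show the current CPU frequency scaling governor for each core, if cpufreq is active
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null

# While you are here, confirm rsync is present and the NTP daemon is running
which rsync
service ntpd status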
CPU frequency scaling can be disabled from the Basic Input/Output System (BIOS). Please note that different hardware might have different settings to disable CPU frequency scaling.

Understanding disk space requirements

It is suggested to keep a buffer of 20-30 percent of disk space per node. Vertica uses the buffer space to store temporary data, which is data coming from merge out operations, hash joins, and sorts, as well as data arising from managing the nodes in the cluster.

Steps to install Vertica

Installing Vertica is fairly simple. With the following steps, we will try to understand a two-node cluster:

1. Download the Vertica installation package from http://my.vertica.com/ according to the Linux OS that you are going to use.
2. Now log in as root or use the sudo command.
3. After downloading the installation package, install the package using the standard command.
For .rpm (CentOS/RedHat) packages, the command will be:
rpm -Uvh vertica-x.x.x-x.x.rpm
For .deb (Ubuntu) packages, the command will be:
dpkg -i vertica-x.x.x-x.x.deb
Refer to the following screenshot for more details:

Running the Vertica package

In the previous step, we installed the package on only one machine. Note that Vertica is installed under /opt/vertica. Now, we will set up Vertica on the other nodes as well. For that, run the following on the same node:

/opt/vertica/sbin/install_vertica -s host_list -r rpm_package -u dba_username

Here, -s is the list of hostnames/IPs of all the nodes of the cluster, including the one on which Vertica is already installed, -r is the path of the Vertica package, and -u is the username that we wish to create for working on Vertica. This user has sudo privileges. If prompted, provide a password for the new user. If we do not specify any username, then Vertica creates dbadmin as the user, as shown in the following example:

[impetus@centos64a setups]$ sudo /opt/vertica/sbin/install_vertica -s 192.168.56.101,192.168.56.101,192.168.56.102 -r "/ilabs/setups/vertica-6.1.3-0.x86_64.RHEL5.rpm" -u dbadmin
Vertica Analytic Database 6.1.3-0 Installation Tool
Upgrading admintools meta data format..
scanning /opt/vertica/config/users
Starting installation tasks...
Getting system information for cluster (this may take a while)....
Enter password for impetus@192.168.56.102 (2 attempts left):
backing up admintools.conf on 192.168.56.101
Default shell on nodes:
192.168.56.101 /bin/bash
192.168.56.102 /bin/bash
Installing rpm on 1 hosts....
installing node.... 192.168.56.102
NTP service not synchronized on the hosts: ['192.168.56.101', '192.168.56.102']
Check your NTP configuration for valid NTP servers.
Vertica recommends that you keep the system clock synchronized using NTP or some other time synchronization mechanism to keep all hosts synchronized. Time variances can cause (inconsistent) query results when using Date/Time Functions. For instructions, see:
* http://kbase.redhat.com/faq/FAQ_43_755.shtm
* http://kbase.redhat.com/faq/FAQ_43_2790.shtm
Info: the package 'pstack' is useful during troubleshooting. Vertica recommends this package is installed.
Checking/fixing OS parameters.....
Setting vm.min_free_kbytes to 37872 ...
Info! The maximum number of open file descriptors is less than 65536
Setting open filehandle limit to 65536 ...
Info! The session setting of pam_limits.so is not set in /etc/pam.d/su
Setting session of pam_limits.so in /etc/pam.d/su ...
Detected cpufreq module loaded on 192.168.56.101
Detected cpufreq module loaded on 192.168.56.102
CPU frequency scaling is enabled. This may adversely affect the performance of your database.
Vertica recommends that cpu frequency scaling be turned off or set to 'performance'
Creating/Checking Vertica DBA group
Creating/Checking Vertica DBA user
Password for dbadmin:
Installing/Repairing SSH keys for dbadmin
Creating Vertica Data Directory...
Testing N-way network test. (this may take a while)
All hosts are available ...
Verifying system requirements on cluster.
IP configuration ...
IP configuration ...
Testing hosts (1 of 2)....
Running Consistency Tests
LANG and TZ environment variables ...
Running Network Connectivity and Throughput Tests...
Waiting for 1 of 2 sites... ...
Test of host 192.168.56.101 (ok)
====================================
Enough RAM per CPUs (ok)
--------------------------------
Test of host 192.168.56.102 (ok)
====================================
Enough RAM per CPUs (FAILED)
--------------------------------
Vertica requires at least 1 GB per CPU (you have 0.71 GB/CPU)
See the Vertica Installation Guide for more information.
Consistency Test (ok)
=========================
Info: The $TZ environment variable is not set on 192.168.56.101
Info: The $TZ environment variable is not set on 192.168.56.102
Updating spread configuration...
Verifying spread configuration on whole cluster.
Creating node node0001 definition for host 192.168.56.101 ... Done
Creating node node0002 definition for host 192.168.56.102 ... Done
Error Monitor 0 errors 4 warnings
Installation completed with warnings.
Installation complete.
To create a database:
1. Logout and login as dbadmin.**
2. Run /opt/vertica/bin/adminTools as dbadmin
3. Select Create Database from the Configuration Menu
** The installation modified the group privileges for dbadmin. If you used sudo to install vertica as dbadmin, you will need to logout and login again before the privileges are applied.

After we have installed Vertica on all the desired nodes, it is time to create a database. Log in as the new user (dbadmin in the default scenario) and connect to the admin panel. For that, we have to run the following command:

/opt/vertica/bin/adminTools

If you are connecting to admin tools for the first time, you will be prompted for a license key. If you have the license file, then enter its path; if you want to use the Community Edition, then just click on OK.

License key prompt

After the previous step, you will be asked to review and accept the End-user License Agreement (EULA).

Prompt for EULA

After reviewing and accepting the EULA, you will be presented with the main menu of the admin tools of Vertica.

Admin Tools Main Menu

Now, to create a database, navigate to Administration Tools | Configuration Menu | Create Database.

Create database option in the Configuration menu

Now, you will be asked to enter a database name and a comment that you would like to associate with the database.

Name and Comment of the Database

After entering the name and comment, you will be prompted to enter a password for this database.

Password for the New database

After entering and re-entering (for confirmation) the password, you need to provide the pathnames where the files related to user data and catalog data will be stored.

Catalog and Data Pathname

After providing all the necessary information related to the database, you will be asked to select the hosts on which the database needs to be deployed. Once all the desired hosts are selected, Vertica will ask for one final check.

Final confirmation for a database creation

Now, Vertica will create and deploy the database.
Database Creation

Once the database is created, we can connect to it using the vsql tool or perform admin tasks.

Summary

As you can see, this article briefly explains the Vertica installation. You can explore further by creating sample tables and performing basic CRUD operations. For a clean installation, it is recommended to meet all of Vertica's minimum requirements. It should be noted that the installation of the client API(s) and the Vertica Management Console needs to be done separately and is not included in the basic package.

Resources for Article:

Further resources on this subject:
Visualization of Big Data [Article]
Limits of Game Data Analysis [Article]
Learning Data Analytics with R and Hadoop [Article]
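Once the database is up, a quick way to confirm connectivity with the vsql tool mentioned above is shown below. The host, database name, and password are placeholders based on the walkthrough (the dbadmin user and the values you chose during Create Database), so substitute your own.

/opt/vertica/bin/vsql -h 192.168.56.101 -d mydb -U dbadmin -w mypassword -c "SELECT version();"

If the connection succeeds, the command prints the Vertica version string and exits, which is usually enough to prove that the node, the database, and the credentials are all in working order.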

Load balancing MSSQL

Packt
18 Apr 2014
5 min read
NetScaler is the only certified load balancer that can load balance MySQL and MSSQL services. It can be quite complex, and there are many requirements that need to be in place in order to set up a properly load-balanced SQL server. Let us go through how to set up load balancing for a Microsoft SQL Server running on 2008 R2.

Now, it is important to remember that using load balancing between the end clients and SQL Server requires that the databases on the SQL servers are synchronized. This is to ensure that the content the user is requesting is available on all the backend servers. Microsoft SQL Server supports different types of availability solutions, such as replication. You can read more about it at http://technet.microsoft.com/en-us/library/ms152565(v=sql.105).aspx. Using transactional replication is recommended, as this replicates changes to the different SQL servers as they occur.

As of now, the load balancing solution for MSSQL, also called DataStream, supports only certain versions of SQL Server. They can be viewed at http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-dbproxy-reference-protocol-con.html. Also, only certain authentication methods are supported; as of now, only SQL authentication is supported for MSSQL.

The steps to set up load balancing are as follows:

1. We need to add the backend SQL servers to the list of servers.
2. Next, we need to create a custom monitor that we are going to use against the backend servers. Before we create the monitor, we can create a custom database within SQL Server that NetScaler can query. Open Object Explorer in SQL Server Management Studio, and right-click on the Database folder. Then, select New Database, as shown in the following screenshot:
3. We can name it ns and leave the rest at their default values, and then click on OK.
4. After that is done, go to the Database folder in Object Explorer. Then, right-click on Tables, and click on Create New Table. Here, we need to enter a column name (for example, test), and choose nchar(10) as the data type. Then, click on Save Table, and we are presented with a dialog box that gives us the option to change the table name. Here, we can type test again. We have now created a database called ns with a table called test, which contains a column also called test. This is an empty database that NetScaler will query to verify connectivity to the SQL server.
5. Now, we can go back to NetScaler and continue with the setup. First, we need to add a DBA user. This can be done by going to System | User Administration | Database Users, and clicking on Add. Here, we need to enter a username and password for a SQL user who is allowed to log in and query the database.
6. After that is done, we can create a monitor. Go to Traffic Management | Load Balancing | Monitors, and click on Add. As the type, choose MSSQL-ECV, and then go to the Special Parameters pane. Here, we need to enter the following information:

Database: This is ns in this example.
Query: This is the SQL query that is run against the database. In our example, we type select * from test.
User Name: Here, we need to enter the name of the DBA user we created earlier. In my case, it is sa.
Rule: Here, we enter an expression that defines how NetScaler will verify whether the SQL server is up or not. In our example, it is MSSQL.RES.ATLEAST_ROWS_COUNT(0), which means that when NetScaler runs the query against the database, it should return zero rows from that table.
Protocol Version: Here, we need to choose the one that matches the SQL Server version we are running. In my case, it is SQL Server 2012.

So, the monitor now looks like the following screenshot:

It is important that the database we created earlier is created on all the SQL servers we are going to load balance using NetScaler. Now that we are done with the monitor, we can bind it to a service. When setting up the services against the SQL servers, remember to choose MSSQL as the protocol and 1433 as the port, and then bind the custom-made monitor to them.

After that, we need to create a load-balanced virtual server. An important point to note here is that we choose MSSQL as the protocol and use the same port number as before, 1433. We can use NetScaler to proxy connections between different versions of SQL Server: if our client applications are not set up to connect to SQL Server 2012, we can present the vServer as a 2008 server. For example, if we have an application that runs against SQL Server 2008, we can make some custom changes to the vServer. To create the load-balanced vServer, go to Advanced | MSSQL | Server Options. Here, we can choose different versions, as shown in the following screenshot:

After we are done with the creation of the vServer, we can test it by opening a connection to the VIP address using SQL Server Management Studio. We can verify whether the connections are load balancing properly by running the following CLI command:

stat lb vserver nameofvserver

Summary

In this article, we followed the steps for setting up a load-balanced Microsoft SQL Server running on 2008 R2, remembering that using load balancing between the end clients and SQL Server requires that the databases on the SQL servers are synchronized, to ensure that the content the user is requesting is available on all the backend servers.

Resources for Article:

Understanding Citrix Provisioning Services 7.0
Getting Started with the Citrix Access Gateway Product Family
Managing Citrix Policies
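If you prefer to script the monitoring database rather than click through Management Studio, the following T-SQL creates the same ns database and empty test table described above. It is a minimal sketch; the dbo schema is assumed, and the nchar(10) column simply mirrors the walkthrough. Remember to run it against every backend SQL server in the load-balanced pool.

-- Create the empty database and table that the NetScaler MSSQL-ECV monitor will query
CREATE DATABASE ns;
GO
USE ns;
GO
CREATE TABLE dbo.test (test NCHAR(10));
GO
-- The monitor query (select * from test) should return zero rows,
-- which satisfies the MSSQL.RES.ATLEAST_ROWS_COUNT(0) expression
SELECT * FROM dbo.test;
GO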

Designing a XenDesktop® Site

Packt
17 Apr 2014
10 min read
The core components of a XenDesktop® Site

Before we get started with the design of the XenDesktop Site, we need to understand the core components that go into building it. XenDesktop can support all types of workers, from task workers who run Microsoft Office applications, to knowledge users who host business applications, to mobile workshifting users, and to high-end 3D application users. It scales from small businesses that support five to ten users to large enterprises that support thousands of users. Please follow the steps in the guide in the order in which they are presented; do not skip steps or topics, for a successful implementation of XenDesktop.

The following is a simple diagram to illustrate the components that make up the XenDesktop architecture:

If you have experience of using XenDesktop and XenApp, you will be pleased to learn that XenDesktop and XenApp now share management and delivery components to give you a unified management experience. Now that you have a visual of how a simple Site will look when it is completed, let's take a look at each individual component so that you can understand their roles.

Terminology and concepts

We will cover some commonly used terminology and concepts used with XenDesktop.

Server side

It is important to understand the terminology and concepts as they apply to the server side of the XenDesktop architecture, so we will cover them first.

Hypervisor

A Hypervisor is an operating system that hosts multiple instances of other operating systems. XenDesktop is supported by three Hypervisors: Citrix XenServer, VMware ESX, and Microsoft Hyper-V.

Database

In XenDesktop, we use Microsoft SQL Server. The database is sometimes referred to as the data store. Almost everything in XenDesktop is database driven, and the SQL database holds all state information in addition to the session and configuration information. The XenDesktop Site is only available if the database is available. If the database server fails, existing connections to virtual desktops will continue to function until the user either logs off or disconnects from their virtual desktop; new connections cannot be established while the database server is unavailable. There is no caching in XenDesktop 7.x, so Citrix recommends that you implement SQL mirroring and clustering for high availability. The IMA data store is no longer used, and everything is now done in the SQL database for both session and configuration information. The data collector function is shared evenly across the XenDesktop controllers.

Delivery Controller

The Delivery Controller distributes desktops and applications, manages user access, and optimizes connections to applications. Each Site has one or more Delivery Controllers.

Studio

Studio is the management console that enables you to configure and manage your XenDesktop and XenApp deployment, eliminating the need for two separate management consoles to manage the delivery of desktops and applications. Studio provides you with various wizards to guide you through the process of setting up your environment, creating your workloads to host and assign applications and desktops, and assigning applications and desktops to users. Citrix Studio replaces the Delivery Services Console and the Citrix AppCenter from previous XenDesktop versions.

Director

Director is used to monitor and troubleshoot the XenDesktop deployment.
StoreFront

StoreFront authenticates users to the Site(s) hosting the XenApp and XenDesktop resources and manages the stores of desktops and applications that users access.

Virtual machines

A virtual machine (VM) is a software-implemented version of the hardware. For example, Windows Server 2012 R2 is installed as a virtual machine running on XenServer. In fact, every server and desktop will be installed as a VM, with the exception of the Hypervisor, which obviously needs to be installed on the server hardware before we can install any VMs.

The Virtual Desktop Agent

The Virtual Desktop Agent (VDA) has to be installed on the VM to which users will connect. It enables the machines to register with controllers and manages the ICA/HDX connection between the machines and the user devices. The VDA is installed on the desktop operating system VM, such as Windows 7 or Windows 8, which is served to the client. The VDA maintains a heartbeat with the Delivery Controller, updates policies, and registers the machine with the Delivery Controller.

Server OS machines

VMs or physical machines based on the Windows Server operating system are used to deliver applications or host shared desktops to users.

Desktop OS machines

VMs or physical machines based on the Windows desktop operating system are used to deliver personalized desktops to users, or applications from desktop operating systems.

Active Directory

Microsoft Active Directory is required for authentication and authorization. Active Directory can also be used for controller discovery by desktops to discover the controllers within a Site. Desktops determine which controllers are available by referring to information that controllers publish in Active Directory. Active Directory's built-in security infrastructure is used by desktops to verify that communication between controllers comes from authorized controllers in the appropriate Site. Active Directory's security infrastructure also ensures that the data exchanged between desktops and controllers is confidential. Installing XenDesktop or SQL Server on the domain controller is not supported; in fact, it is not even possible.

Desktop

A desktop is the instantiation of a complete Windows operating system, typically Windows 7 or Windows 8. In XenDesktop, we install the Windows 7 or Windows 8 desktop in a VM and add the VDA to it so that it can work with XenDesktop and can be delivered to clients. This will be the end user's virtual desktop.

XenApp®

Citrix XenApp is an on-demand application delivery solution that enables any Windows application to be virtualized, centralized, and managed in the data center and instantly delivered as a service. Prior to XenDesktop 7.x, XenApp delivered applications and XenDesktop delivered desktops. Now, with the release of XenDesktop 7.x, XenApp delivers both desktops and applications.

Edgesight®

Citrix Edgesight is a performance and availability management solution for XenDesktop, XenApp, and endpoint systems. Edgesight monitors applications, devices, sessions, license usage, and the network in real time. Edgesight will be phased out as a product.

FlexCast®

Don't let the term FlexCast confuse you. FlexCast is just a marketing term designed to encompass all of the different architectures that XenDesktop can be deployed in. FlexCast allows you to deliver virtual desktops and applications according to the diverse performance, security, and flexibility requirements of every type of user in your organization.
FlexCast is a way of describing the different ways to deploy XenDesktop. For example, task workers who use low-end thin clients in remote offices will use a different FlexCast model than a group of HDX 3D high-end graphics users. The following are the FlexCast models you may want to consider; these are also described at http://flexcast.citrix.com:

Local VM
Use case: Local VM desktops extend the benefit of centralized, single-instance management to mobile workers who need to use their laptops offline. Changes to the OS, apps, and data are synchronized when they connect to the network.
Citrix products used: XenClient

Streamed VHD
Use case: Streamed VHDs leverage the local processing power of rich clients, while providing centralized, single-image management of the desktop. It is an easy, low-cost way to get started with desktop virtualization (rarely used).
Citrix products used: Receiver, XenApp

Hosted VDI
Use case: Hosted VDI desktops offer a personalized Windows desktop experience, typically required by office workers, which can be delivered to any device. This combines the central management of the desktop with complete user personalization. The user's desktop runs in a virtual machine. Users get the same high-definition experience that they had with a local PC, but with centralized management. The VDI approach provides the best combination of security and customization. Personalization is stored in the Personal vDisk. VDI desktops can be accessed from any device, such as thin clients, laptops, PCs, and mobile devices (most common).
Citrix products used: Receiver, XenDesktop, Personal vDisk

Hosted shared
Use case: Hosted shared desktops provide a locked-down, streamlined, and standardized environment with a core set of applications. This is ideal for task workers where personalization is not required. All the users share a single desktop image. These desktops cannot be modified, except by the IT personnel. It is not appropriate for mobile workers or workers who need personalization, but it is appropriate for task workers who use thin clients.
Citrix products used: Receiver, XenDesktop

On-demand applications
Use case: This allows any Windows application to be centralized and managed in the data center, hosted on either multiuser terminal servers or virtual machines, and delivered as a service to physical and virtual desktops.
Citrix products used: Receiver, XenApp, and XenDesktop App Edition

Storage

All of the XenDesktop components use storage. Storage is managed by the Hypervisor, such as Citrix XenServer. There is a personalization feature to store personal data from virtual desktops called the Personal vDisk (PvD).

The client side

For a complete end-to-end solution, an important part of the architecture that needs to be mentioned is the end user device, or client. There isn't much to consider here; however, the client devices can range from high-powered Windows desktops to low-end thin clients and mobile devices.

Receiver

Citrix Receiver is a universal software client that provides secure, high-performance delivery of virtual desktops and applications to any device, anywhere. Receiver is platform agnostic. Citrix Receiver is device agnostic, meaning that there is a Receiver for just about every device out there, from Windows to Linux-based thin clients and to mobile devices, including iOS and Android. In fact, some thin-client vendors have worked closely with the Citrix Ready program to embed the Citrix Receiver code directly into their homegrown operating system for seamless operation with XenDesktop.
Citrix Receiver must be installed on the end user client device in order to receive the desktop and applications from XenDesktop. It must also be installed on the virtual desktop in order to receive applications from the application servers (XenApp or XenDesktop), and this is taken care of for you automatically when you install the VDA on the virtual desktop machine.

System requirements

Each component has its own requirements in terms of operating system and licensing. You will need to build these operating systems on VMs before installing each component. For help in creating VMs, look at the relevant Hypervisor documentation. We have used Citrix XenServer as the Hypervisor.

Receiver

Citrix Receiver is a universal software client that provides secure, high-performance delivery of virtual desktops and applications. Receiver is available for Windows, Mac, mobile devices such as iOS and Android, HTML5, Chromebook, and Java 10.1. You will need to install Citrix Receiver twice for a complete end-to-end connection to be made: once on the end user's client device (there are many supported devices, including iOS and Android), and once on the Windows virtual desktop that you will serve to your users. The latter is done automatically when you install the Virtual Desktop Agent (VDA) on the Windows virtual desktop. You need this Receiver to access the applications that are running on a separate application server (XenApp or XenDesktop).

StoreFront 2.1

StoreFront replaces the Web Interface. StoreFront 2.1 can also be used with XenApp and XenDesktop 5.5 and above.

The operating systems that are supported are as follows:
Windows Server 2012 R2, Standard or Datacenter
Windows Server 2012, Standard or Datacenter
Windows Server 2008 R2 SP1, Standard or Enterprise

System requirements are as follows:
RAM: 2 GB
Microsoft Internet Information Services (IIS)
Microsoft Internet Information Services Manager
.NET Framework 4.0

Firewall ports – external: As StoreFront is the gateway to the Site, you will need to open specific ports on the firewall to allow connections in:
Ports: 80 (HTTP) and 443 (HTTPS)

Firewall ports – internal: By default, StoreFront communicates with the internal XenDesktop Delivery Controller servers using the following ports:
80 (for StoreFront servers) and 8080 (for HTML5 clients)

You can specify different ports. For more information on StoreFront and how to plug it into the architecture, refer to http://support.citrix.com/article/CTX136547.
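To illustrate the external ports listed above, the following PowerShell sketch opens 80 and 443 on a StoreFront server running Windows Server 2012 R2 using the built-in NetSecurity cmdlets. The rule display names are arbitrary placeholders, and in many deployments the IIS/StoreFront installation already creates equivalent rules, so check for existing ones first.

# Look for any existing rules before adding new ones
Get-NetFirewallRule -DisplayName "*StoreFront*" -ErrorAction SilentlyContinue

# Allow inbound HTTP and HTTPS to the StoreFront server (the external-facing ports)
New-NetFirewallRule -DisplayName "StoreFront HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
New-NetFirewallRule -DisplayName "StoreFront HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow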