
How-To Tutorials - Virtualization

115 Articles

Windows 8 with VMware View

Packt
10 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Deploying VMware View on Windows 8 (Advanced)

If you want hands-on experience with Windows 8 on VMware View and want to get ready for future View deployments, this guide to deploying it is a must.

Getting ready

Make sure the following requirements are in place:

- VMware vSphere 5.1 deployed
- VMware View 5.1 Connection Server deployed
- The Windows 8 Release Preview installer and license keys for the 32-bit version

How to do it...

To create a Windows 8 virtual machine, perform the following steps (a PowerCLI sketch of the same flow appears at the end of this article):

1. Create a standard virtual hardware version 9 VM with Windows 8 as the guest operating system. As this is a testing phase, keep the memory and disk sizes modest.
2. Edit the VM's settings and, under Video card, select the Enable 3D support checkbox (this ensures that graphics and Adobe Flash content work with Windows 8 using the VMware driver).
3. Mount the Windows 8 ISO image in the virtual machine and proceed with the Windows 8 installation. Enter the Windows license keys available to you.
4. Install VMware Tools in the VM; shut down and restart the VM.
5. Install the VMware View 5.1 Agent in the VM, unchecking Persona Management during the agent installation.
6. Power on the VM, set the network option to DHCP, and disable Windows Defender.
7. Create a snapshot of the VM and power the VM down.

We now have the Windows 8 parent virtual machine ready, with a snapshot.

To create a pool for Windows 8 in the View Admin console, perform the following steps:

1. Launch the Connection Server Admin console and navigate to the Pool Creation wizard.
2. Select the Automated pool type (you can use either Dedicated or Floating).
3. Choose a View Composer linked-clone-based pool.
4. Navigate through the rest of the wizard accepting all defaults, choosing the snapshot of your Windows 8 VM with the View Agent installed.
5. Use QuickPrep for instant customization. You may need to restart the VM manually if QuickPrep doesn't initiate by itself once the VM boots.
6. Allow provisioning to proceed.

Make sure you set Allow users to choose protocol: to No, otherwise 3D rendering is disabled automatically. If you want to set Allow users to choose protocol: to Yes, make sure you stick to the RDP protocol rather than PCoIP in the Default display protocol: field, or you will end up with a black screen.

To install View Client, perform the following step:

1. Install View Client 5.1 on any iOS, Android, Linux, or Windows device. For a list of supported clients, visit http://www.vmware.com/support/viewclients/doc/viewclients_pubs.html.

How it works...

Once all the preceding steps are performed, you should have Windows 8 running on VMware View 5.x. The VM should appear with a ready status under Desktop Resources in the VMware View Admin console, and you should be able to launch Windows 8 with View Client. Note that you have to entitle users to the respective pools before they can access the VM.

More information

You can also refer to http://kb.vmware.com/kb/2033640 to learn more about installing Windows 8 in a VM.

Resources for Article:

Further resources on this subject:

- Cloning and Snapshots in VMware Workstation [Article]
- Creating an Image Profile by cloning an existing one [Article]
- Building a bar graph cityscape [Article]
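As promised above, here is a hedged PowerCLI sketch of the VM-creation steps. The vCenter, host, datastore, ISO path, and sizing are placeholders for your environment, and the Enable 3D support checkbox (step 2) is still set in the vSphere Client; treat this as a sketch under those assumptions, not the article's exact procedure.

# Connect to vCenter (server name is a placeholder).
Connect-VIServer -Server 'vcenter.lab.local'

# Hardware version 9 VM with the Windows 8 guest OS ID; modest sizing for a test VM.
$vm = New-VM -Name 'Win8-Parent' -VMHost 'esx01.lab.local' -Datastore 'datastore1' -DiskGB 24 -MemoryGB 2 -GuestId 'windows8Guest' -Version 'v9'

# Attach the Windows 8 ISO for the installation (datastore path is an example).
New-CDDrive -VM $vm -IsoPath '[datastore1] iso/Windows8-x86.iso' -StartConnected

# After Windows, VMware Tools, and the View Agent are installed (steps 3-6),
# take the parent snapshot and power down, as in step 7.
New-Snapshot -VM $vm -Name 'View-Base' -Description 'Windows 8 parent with View 5.1 Agent'
Shutdown-VMGuest -VM $vm -Confirm:$false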


Introduction to XenConvert

Packt
04 Sep 2013
3 min read
(For more resources related to this topic, see here.)

System requirements

Since XenConvert can only convert Windows-based hosts and installs on the host being converted, the requirements are essentially those of the supported platforms:

- Operating system: Windows XP, Windows Vista, Windows 7, Windows Server 2003 (SP1 or later), or Windows Server 2008 (R2)
- .NET Framework 4.0
- Disk space: 40 MB of free disk space
- XenServer version 6.0 or 6.1

Converting a physical machine to a virtual machine

Let's take a quick look at how to convert a physical machine to a virtual machine. First we need to install XenConvert on the source physical machine. We can download XenConvert from http://www.citrix.com/downloads/xenserver/tools/conversion.html. Once the standard Windows installation process is complete, launch the XenConvert tool; but before that, we need to prepare the host machine for the conversion. To learn more about XenConvert, refer to the XenConvert guide at http://support.citrix.com/article/CTX135017.

Preparing the host machine

For best results, prepare the host machine as follows:

- Enable Windows Automount on Windows Server operating systems.
- Disable Windows Autoplay.
- Remove any virtualization software before performing a conversion.
- Ensure that adequate free space exists at the destination, approximately 101 percent of the used space of all source volumes (a quick PowerShell check for this is sketched at the end of this article).
- Remove any network interface teams; they are not applicable to a virtual machine.

We run the XenConvert tool on the host machine to start the physical-to-virtual conversion. We can convert the physical machine directly to our XenServer if that host is accessible. The other options are to convert to VHD, OVF, or vDisk, which can be imported into XenServer later. These options are more useful if we don't have enough disk space or connectivity with XenServer. I chose XenServer and clicked on Next.

We can select multiple partitions to be included in the conversion, or select none from the drop-down menu in Source Volume for the disks that shouldn't be included. We can also increase or decrease the size of the new virtual partition to be allocated to this virtual machine. Click on Next.

We'll be asked to provide the details of the XenServer host. The hostname needs to be either an IP address or an FQDN of the XenServer; a username and password are standard login requirements. In the Workspace field, enter the path to the folder that will store the intermediate OVF package XenConvert uses during the conversion process. Click on Next to select among the storage repositories found on the XenServer, and continue to the last step, which presents a summary of the conversion.

Soon after the conversion completes, the new machine appears in XenCenter. We'll need to install XenServer Tools on this new virtual machine.

Summary

In this article we covered an advanced topic: the process of converting a physical Windows server to a virtual machine using XenConvert.

Resources for Article:

Further resources on this subject:

- Citrix XenApp Performance Essentials [Article]
- Defining alerts [Article]
- Publishing applications [Article]
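As referenced in the preparation list above, here is a small PowerShell sketch for the "101 percent of used space" rule. It uses only standard Windows cmdlets and makes no assumptions about XenConvert itself; run it on the source machine.

# Total the used space across local fixed volumes and apply the 101 percent rule.
$volumes = Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3"
$usedBytes = ($volumes | ForEach-Object { $_.Size - $_.FreeSpace } | Measure-Object -Sum).Sum
$requiredBytes = [math]::Ceiling($usedBytes * 1.01)
"Used space across local volumes : {0:N1} GB" -f ($usedBytes / 1GB)
"Space needed at the destination : {0:N1} GB" -f ($requiredBytes / 1GB)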


Cloning and Snapshots in VMware Workstation

Packt
26 Aug 2013
10 min read
In this article by Sander van Vugt, author of VMware Workstation – No Experience Necessary, you'll learn to work with cloning and snapshot tools. In a test environment, it is often necessary to deploy virtual machines rapidly and to revert to a previous state in an easy way. VMware Workstation provides all the tools that are required for this purpose.

(For more resources related to this topic, see here.)

Understanding when to apply which tools

A snapshot is a photo of the state of a virtual machine. As a virtual machine often requires a lot of work before a desired state is reached, it is a good idea to take a picture of that exact state. If something goes wrong at a later stage, having a snapshot makes it possible to easily revert to the previous state of the virtual machine. So the base concept of working with snapshots is to make it easier to revert to a previous state.

A clone is a copy of a virtual machine. Using clones is convenient if several virtual machines are needed with more or less the same configuration on each. By cloning a virtual machine, you make a copy of the actual state of the machine. After making the clone, you just have to modify the properties of the virtual machine that need to be unique on that machine.

In some ways, clones and snapshots are closely related. This is because you can create a clone of a snapshot of a virtual machine, but you can also clone the current state of a virtual machine, which in fact creates a snapshot of the virtual machine. To understand this, you need to understand the difference between a linked clone and a full clone.

In a linked clone, only modifications are stored. This means that if something happens to the original state of the virtual machine (for example, the VM files get corrupted), the linked clone gets corrupted as well. It is, however, an approach that is very efficient with regard to available disk space. As only modifications are stored in a linked clone, the disk space requirement is minimal. Creating a linked clone is also a very fast process.

A full clone is a complete copy of a virtual machine. Creating a full clone is a much longer process, as the entire virtual machine disk has to be copied over. It requires more disk space as well, but the benefit is that a full clone creates an independent virtual machine. Therefore, you're better off with full clones if you need the maximum amount of flexibility. In the rest of this article, you'll learn how to work with snapshots and clones.

Working with snapshots

In this section, you'll learn how to create snapshots of virtual machines. You'll also learn how to use the Snapshot Manager to manage a setup where different snapshots are used.

Creating snapshots

To create a snapshot, you don't have to do anything to the virtual machine beforehand. You can create a snapshot irrespective of the actual state of the virtual machine, so it doesn't matter whether it is currently active or not. If the machine is powered on, the current state of the virtual machine's memory is included in the snapshot as well. This is a useful feature, as it allows you to return to the exact state the machine was in when the snapshot was taken.

To create a snapshot of a virtual machine, select the virtual machine first. Then from the VM menu, navigate to Snapshot | Take snapshot. You are presented with a small dialog box where you can enter a short description of the snapshot. You should always enter a description: it may be clear now what the snapshot is being used for, but you probably won't remember anymore when you look at the virtual machine a few months later. A clear description also makes it easier to identify the right snapshot in the Snapshot Manager.

To make identifying the snapshot easier at a later stage, enter a clear description of what it is being used for

Once the snapshot process has started, it will take a while to complete. You will see a progress bar in the lower-left part of the virtual machine window if it has been activated. In theory, you can just continue working in the virtual machine; in practice, you'll notice that it is slow and sometimes very unresponsive. It's better to wait a while and give the snapshot process a few minutes to complete.

The actual snapshot files will be created in the directory where the VMDK files of the virtual machine are stored. For each VMDK file, you'll find a snapshot file as well. You'll notice that the snapshot file is smaller, as it only contains the modifications that were made since the last snapshot was taken; or, if this is the first snapshot, the differences from the original virtual machine.

For each VMDK file, a corresponding snapshot file is created
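As an aside, VMware Workstation also ships a command-line utility, vmrun, that can script the same Take Snapshot operation. The install path and vmx path below are placeholders; this is a sketch, not part of the book's procedure.

# Take and then list snapshots for a Workstation VM from PowerShell via vmrun.
$vmrun = 'C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe'
& $vmrun -T ws snapshot 'C:\VMs\Win8\Win8.vmx' 'Before View Agent install'
& $vmrun -T ws listSnapshots 'C:\VMs\Win8\Win8.vmx'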
Working with the Snapshot Manager

The Snapshot Manager allows you to work with snapshots in the most flexible way. You can use it to revert to any virtual machine state, start from there, and build a completely different configuration, so that you can create two development branches based on a specific snapshot state and decide which solution fits you best. You can find the Snapshot Manager by opening the virtual machine you want to manage and navigating to VM | Snapshot | Snapshot Manager. You'll now see the Snapshot Manager with all the snapshots that have been created for this virtual machine.

The Snapshot Manager allows you to revert to any state of a virtual machine

Working with snapshots from the Snapshot Manager is not hard to do at all. You just select the snapshot that you want to start from and restore it (irrespective of whether there are snapshots that have already been created based on the selected snapshot). Once restored, you can continue working on the virtual machine from the selected snapshot state; a command-line equivalent is sketched after this section.

By default, the Snapshot Manager doesn't show autoprotect snapshots. Even though the Snapshot Manager has an option to display autoprotect snapshots as well, it is probably not a good idea to use it. You'll typically use the snapshots in the Snapshot Manager to walk back a clearly defined path in the snapshots on your system. With autoprotect there isn't really a plan, and even more importantly, the snapshots are removed automatically. Therefore, you should make sure never to create a snapshot that is based on an autoprotect snapshot.
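For completeness, the restore operation described above can also be scripted with vmrun; the snapshot name and vmx path are the same placeholders as in the earlier sketch.

# Revert to a named snapshot, as the Snapshot Manager does in the GUI.
$vmrun = 'C:\Program Files (x86)\VMware\VMware Workstation\vmrun.exe'
& $vmrun -T ws revertToSnapshot 'C:\VMs\Win8\Win8.vmx' 'Before View Agent install'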
Creating clones

A snapshot is a virtual machine in a specific state. When working with snapshots, there is still just one virtual machine, which can easily be restored to a specific state. The major difference between a clone and a snapshot is that a clone is a new virtual machine that is independent of the original virtual machine. Even if there is some relation between a snapshot and a clone (for instance, you can create a clone based on a snapshot), a clone basically is a new virtual machine. This means that once you have created a clone of a virtual machine, you can even start creating new snapshots of that virtual machine.

When creating clones, you really need to think carefully about what you want to use them for. If you just want to use them for your own convenience, a linked clone is the best solution. It can be created very fast, and it takes the least possible amount of disk space while still offering full functionality. The major limitation, though, is that you'll never be able to copy it as an independent virtual machine to another computer. If you need to do that, you'll need a full clone.

There are different ways to create clones. No matter which method you use, you have to make sure that the virtual machine is shut down before you can make the clone. This requirement exists because the virtual machine files cannot be modified while the cloning is in process. The most important reason for this is that VMware Workstation is used on a Linux or Windows host platform, whose filesystems do not allow virtual machine files to be modified by different processes at the same time. You'd need a VMFS filesystem in a VMware ESXi environment to be able to clone a virtual machine without shutting it down first.

The most direct way of creating a clone is by using the Manage option in the VM menu. From this menu, select the Clone option to start the Clone wizard. The first step of the clone wizard asks what you want to create the clone from. This can either be the actual state of the virtual machine or a snapshot, if one has been created. If no suitable snapshot exists, the wizard will show an error, indicating that it's not possible to create a clone based on a snapshot for the selected virtual machine.

Selecting on the basis of what you want to clone

After selecting what you want to base the clone on, you'll need to choose between a linked clone and a full clone. Realize that a full clone is a complete copy of a virtual machine, so you'll need as much free disk space on the host computer as the virtual machine currently uses. If the virtual machine uses 60 gigabytes of disk space, you'll need at least 60 gigabytes of disk space on the host as well! After verifying that you have the required amount of free disk space, you can start the cloning process.

In the last step of the wizard, specify the name that you want to assign to the clone, as well as the location on the host operating system where the clone has to be stored. Once created, the clone shows up in the VMware console as a new virtual machine, and that is also how it is to be considered.

A cloned virtual machine appears as a completely new virtual machine in the VMware Workstation library

Another way of creating clones is to use the Snapshot Manager. The advantage is that in the Snapshot Manager you can easily select the snapshot that you want to create a clone of. Just select the snapshot state you want to use and click on the Clone button.

Summary

In this article, you learned how to work with cloning and snapshot tools. You learned how to create snapshots and clones, and how to work with the Snapshot Manager.

Resources for Article:

Further resources on this subject:

- VMware View 5 Desktop Virtualization [Article]
- Creating an Image Profile by cloning an existing one [Article]
- Installing Zenoss [Article]


Citrix XenApp Performance Essentials

Packt
21 Aug 2013
18 min read
(For more resources related to this topic, see here.)

Optimizing Session Startup

The most frequent complaint system administrators receive from XenApp users is that applications start slowly. Users rarely consider that, at least the first time an application published by XenApp is launched, an entire login process takes place. In this article you'll learn:

- Which steps form the login process and which systems are involved
- The most common causes of logon delays and how to mitigate them
- The use of some advanced XenApp features, like session pre-launch

The logon process

Let's briefly review the logon process that starts when a user launches an application through the Web Interface or through a link created by the Receiver. The following diagram explains the logon process:

The logon process

Resolution: The user launches an application (A) and the Web Interface queries the Data Collector (B), which returns the least-loaded server for the requested application (C). The Web Interface generates an ICA file and passes it to the client (D).

Connection: The Citrix client running on the user's PC establishes a connection to the session-host server specified in the ICA file. In the handshake process, client and server agree on the security level and capabilities.

Remote Desktop Services (RDS) license: Windows Server validates that an RDS/Terminal Server (TS) license is available.

AD authentication: Windows Server authenticates the user against Active Directory (AD). If the authentication is successful, the server queries account details from AD, including Group Policies (GPOs) and roaming profiles.

Citrix license: XenApp validates that a Citrix license is available.

Session startup: If the user has a roaming profile, Windows downloads it from the specified location (usually a file server). Windows then applies any GPOs, and XenApp applies any Citrix policies. Windows executes the applications included in the Startup menu and finally launches the requested application.

Some other steps may be necessary if other Citrix components (for example, the Citrix Access Gateway) are included in your infrastructure.

Analysing the logon process

Users perceive the overall duration of the process from the moment they click on the icon until the application appears on their desktop. To troubleshoot slowness, a system administrator must know the duration of the individual steps.

Citrix EdgeSight

Citrix EdgeSight is a performance and availability management solution for XenApp and XenDesktop. If you own an Enterprise or Platinum XenApp license, you're entitled to install EdgeSight Basic (for Enterprise licensing) or Advanced (for Platinum licensing). You can also license it as a standalone product.

If you deployed Citrix EdgeSight in your farm, you can run the Session Startup Duration Detail report, which includes information on the duration of both server-side and client-side steps. This report is available only with EdgeSight Advanced. For each session, you can drill down the report to display information about server-side and client-side startup processes. An example is shown in the following screenshot:

EdgeSight's Session Startup Duration Detail report

The columns report the time (in milliseconds) spent by the startup process in the different steps. SSD is the total server-side time, while CSD is the total client-side time.
You can find a full description of the available reports and the meaning of the different acronyms in the EdgeSight Report List at http://community.citrix.com/display/edgesight/EdgeSight+5.4+Report+List.

In the preceding example, most of the time was spent in the Profile Load (PLSD) and Login Script Execution (LSESD) steps on the server, and in the Session Creation (SCCD) step on the client.

EdgeSight is a very valuable tool for analyzing your farm. The available reports cover all the critical areas and give detailed information about the "hidden" work of Citrix XenApp. With the Session Startup Duration Detail report you can identify which steps cause a slow session startup. If you want to understand why server-side steps take so much time, like the Profile Load step in the preceding example that lasted more than 15 seconds, you need a different tool.

Windows Performance Toolkit

Windows Performance Toolkit (WPT) is a tool included in the Windows ADK, freely downloadable from the Microsoft website (http://www.microsoft.com/en-us/download/details.aspx?id=30652). You need an Internet connection to install the Windows ADK. You can run the setup on a client with Internet access and configure it to download all the required components into a folder, then move the folder to your server and perform an offline installation.

WPT has two components:

- Windows Performance Recorder, which is used to record all the performance data in an .etl file
- Windows Performance Analyzer, a graphical program to analyze the recorded data

Run the following command from the WPT install folder to capture slow logons:

C:\WPT>xperf -on base+latency+dispatcher+NetworkTrace+Registry+FileIO -stackWalk CSwitch+ReadyThread+ThreadCreate+Profile -BufferSize 128 -start UserTrace -on "Microsoft-Windows-Shell-Core+Microsoft-Windows-Wininit+Microsoft-Windows-Folder Redirection+Microsoft-Windows-User Profiles Service+Microsoft-Windows-GroupPolicy+Microsoft-Windows-Winlogon+Microsoft-Windows-Security-Kerberos+Microsoft-Windows-User Profiles General+e5ba83f6-07d0-46b1-8bc7-7e669a1d31dc+63b530f8-29c9-4880-a5b4-b8179096e7b8+2f07e2ee-15db-40f1-90ef-9d7ba282188a" -BufferSize 1024 -MinBuffers 64 -MaxBuffers 128 -MaxFile 1024

After having recorded at least one slow logon, run the following command to stop recording and save the performance data to an .etl file:

C:\WPT>xperf -stop -stop UserTrace -d merged.etl

This command creates a file called merged.etl in the WPT install folder. You can open this file with Windows Performance Analyzer. The Windows Performance Analyzer timeline is shown in the following screenshot:

Windows Performance Analyzer timeline

Windows Performance Analyzer shows a timeline with the duration of each step; for any point in time you can view the running processes, the usage of CPU and memory, the number of I/O operations, and the bytes sent or received through the network.

WPT is a great tool for identifying the reasons for delays in Windows; it has, however, no visibility of Citrix processes. This is why EdgeSight is still necessary for complete troubleshooting.

Common causes of logon delays

After having analyzed many logon problems, I found that the slowness was usually caused by one or more of the following:

- Authentication issues
- Profile issues
- GPO and logon script issues

In the next sections, you'll learn how to identify those issues and how to mitigate them.
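One lightweight alternative when neither EdgeSight nor WPT is at hand is the built-in Group Policy operational event log. The sketch below assumes event ID 8001 marks completed user logon policy processing (verify the ID on your build); run it on the session-host server.

# Show recent Group Policy logon processing results, including processing times.
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-GroupPolicy/Operational'; Id = 8001 } -MaxEvents 20 |
    Select-Object TimeCreated, Message | Format-List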
Even if you can't use the advanced tools (EdgeSight, WPT, and so on) described in the preceding sections, you can follow the guidelines and best practices in the next sections to solve or prevent most of the problems related to the logon process.

Authentication issues

During the logon process, authentication happens at multiple stages: at minimum, when a user logs on to the Web Interface and when the session-host server creates a session for launching the requested application.

Citrix XenApp integrates with Active Directory, so authentication is performed by a Domain Controller (DC) of your domain. Slowness in the Domain Controller's response, caused by an overloaded server, can slow down the entire process. Worse, if the Domain Controller is unavailable, a domain member server may try to connect for 30 seconds before timing out and choosing a different DC.

Domain member servers choose the Domain Controller for authenticating users based on their membership of Active Directory Sites. If sites are not correctly configured or don't reflect the real topology of your network, a domain member server may decide to use a remote Domain Controller across a slow WAN link instead of a Domain Controller on the same LAN.

Profile issues

Each user has a profile, which is a collection of personal files and settings. Windows offers different types of profiles, with advantages and disadvantages, as shown in the following table:

Type      | Description
Local     | The profile folder is local to each server.
Roaming   | The profile folder is saved on central storage (usually a file server).
Mandatory | A read-only profile is assigned to users; changes are not saved across sessions.

From the administrator's point of view, mandatory profiles are the best option because they are simple to maintain, allow fast logons, and prevent users from modifying Windows or application settings. This option, however, is often not feasible. I could use mandatory profiles only in specific cases, for example, when users have to run only a single application without the need to customize it.

Local profiles are almost never used in a XenApp environment because, even though they offer the fastest logon time, they are not consistent across servers and sessions. Furthermore, you'll end up with all your session-host servers storing local profiles for all your users, which is a waste of disk space. Finally, if you're provisioning your servers with Provisioning Server, this option is not applicable, as local profiles would be saved in the local cache, which is deleted every time the server reboots.

System administrators usually choose roaming profiles for their users. Roaming profiles indeed allow consistency across servers and sessions and preserve user settings. Roaming profiles are, however, the most significant cause of slow logons. Without continuous control, they can rapidly grow to a large size. A small profile with a large number of files, for example a profile with many cookies, can cause delays too.

Roaming profiles also suffer from the "last write wins" problem. In a distributed environment like a XenApp farm, it is not unlikely that users are connected to different servers at the same time. Profiles are updated when users log off, so with different sessions on different servers, some settings could be overwritten or, worse, the profile could be corrupted.

Folder redirection

To reduce the size of roaming profiles, you can redirect most of the user folders to a different location.
Instead of saving files in the user's profile, you can configure Windows to save them on a file share. The advantages of using folder redirection are:

- The data in the redirected folders is not included in the synchronization of the roaming profile, making the user logon and logoff processes faster
- Using disk quotas and redirecting folders to different disks, you can limit how much space is taken up by single folders instead of the whole profile
- The Windows Offline Files technology allows users to access their files even when no network connection is available
- You can redirect some folders (for example, the Start Menu) to a read-only share, giving all your users the same content

Folder redirection is configured through group policies, as shown in the following screenshot:

Configuring Folder Redirection

For each folder, you can choose to redirect it to a fixed location (useful if you want to provide the same content to all your users), to a subfolder (named after the username) under a fixed root path, to the user's home directory, or to the local user profile location. You may also apply different redirections based on group membership and define advanced settings for the Documents folder.

In my experience, folder redirection plays a key role in speeding up the logon process. You should enable it at least for the Desktop and My Documents folders if you're using roaming profiles.

Background upload

With Windows 2008 R2, Microsoft added the ability to perform a periodic upload of the user's profile registry file (NTUSER.DAT) to the file share. Even though this option wasn't added to address the "last write wins" problem, it may help to avoid profile corruption, and Microsoft recommends enabling it through a GPO, as shown in the following screenshot:

Enabling Background upload

Citrix Profile Management

Citrix developed its own solution for managing profiles, Citrix Profile Management. You're entitled to use Citrix Profile Management if you have an active Subscription Advantage for the following products:

- XenApp Enterprise and Platinum editions
- XenDesktop Advanced, Enterprise, and Platinum editions

You need to install the software on each computer whose user profiles you want to manage. In a XenApp farm, install it on your session-host servers.

Features

Citrix Profile Management was designed specifically to solve some of the problems Windows roaming profiles introduced. Its main features are:

- Support for multiple sessions, without the "last write wins" problem
- The ability to manage large profiles, without the need to perform a full sync when the user logs on
- Support for v1 (Windows XP/2003) and v2 (Windows Vista/Seven/2008) profiles
- The ability to define inclusion/exclusion lists
- Extended synchronization, which can include files and folders external to the profile to support legacy applications

Configuring

Citrix Profile Management is configured using Windows Group Policy. In the Profile Management package, downloadable from the Citrix website, you can find the administrative template (.admx) and its language file (.adml). Copy the ADMX file to C:\Windows\PolicyDefinitions and the ADML file to the matching language subfolder of C:\Windows\PolicyDefinitions (for example, on English operating systems the lang folder is en-US); a scripted version of this copy appears at the end of this section. A new Profile Management folder under Citrix is then available in your GPOs, as shown in the following screenshot:

Profile Management's settings in Windows GPOs

Profile Management settings are in the Computer section; therefore, link the GPO to the Organizational Unit (OU) that contains your session-host servers.
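Here is the scripted version of the template copy mentioned above. The source folder is a placeholder for wherever you unpacked the Profile Management package, and the wildcard patterns are assumptions about the template filenames.

# Copy the administrative template into the local policy store.
$src = 'C:\Temp\ProfileManagement'
Copy-Item -Path (Join-Path $src '*.admx') -Destination 'C:\Windows\PolicyDefinitions'
Copy-Item -Path (Join-Path $src 'en-US\*.adml') -Destination 'C:\Windows\PolicyDefinitions\en-US'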
Profiles priority order

If you deployed Citrix Profile Management, it takes precedence over any other profile assignment method. The priority order on a XenApp server is the following:

1. Citrix Profile Management
2. Remote Desktop Services profile assigned by a GPO
3. Remote Desktop Services profile assigned by a user property
4. Roaming profile assigned by a GPO
5. Roaming profile assigned by a user property

Troubleshooting

Citrix Profile Management includes a logging functionality you can enable via GPO, as shown in the following screenshot:

Enabling the logging functionality

With the Log settings setting, you can also enable verbose logging for specific events or actions. Logs are usually saved in %SystemRoot%\System32\Logfiles\UserProfileManager, but you can change the path with the Path to log file property. Profile Management's logs are also useful for troubleshooting slow logons: each step is logged with a timestamp, so by analyzing those logs you can find which steps take a long time.

GPO and logon script issues

In a Windows environment, it's common to apply settings and customizations via Group Policy Objects (GPOs) or logon scripts. Numerous GPOs and long-running scripts can significantly impact the speed of the logon process.

It's sometimes hard to find which GPOs or scripts are causing delays. A suggestion is to move the XenApp server or a test user account into a new Organizational Unit with no policies applied and blocked inheritance. Comparing the logon time in this scenario with the normal logon time can help you understand how GPOs and scripts affect the logon process.

The following are some best practices for working with GPOs and logon scripts:

- Reduce the number of GPOs by merging them when possible. The time Windows takes to apply 10 GPOs is much longer than the time to apply a single GPO including all their settings.
- Disable unused GPO sections. It's common to have GPOs with only computer or only user settings. Explicitly disabling the unused sections can speed up the time required to apply the GPOs.
- Use GPOs instead of logon scripts. Windows 2008 introduced Group Policy Preferences, which can be used to perform common tasks (mapping network drives, changing registry keys, and so on) previously performed by logon scripts. The following screenshot displays configuring drive maps through GPOs.

Configuring drive maps through GPO

Session pre-launch, sharing, and lingering

Setting up a session is the most time-consuming task Citrix and Windows have to perform when a user requests an application. In the latest versions of XenApp, Citrix added features to anticipate the session setup (pre-launch) and to improve the sharing of the same session between different applications (sharing and lingering).

Session pre-launch

Session pre-launch is a new feature of XenApp 6.5. Instead of waiting for the user to launch an application, you can configure XenApp to set up a session as soon as the user logs on to the farm. At the moment, session pre-launch works only if the user logs on using the Receiver, not through the Web Interface. This means that when the user requests an application, a session is already loaded and all the steps of the logon process you've learned about have already taken place. The application can start without any delay.

From my experience, this is a feature much appreciated by users, and I use it in production farms. Please note that if you enable session pre-launch, a license is consumed at the time the user logs on.
Configuring

A session pre-launch is based on a published application in your farm. A common mistake is thinking that when you configure a pre-launch application, Citrix actually launches that application when the user logs on. The application is really used as a template for the session: Citrix uses some of its settings, like users, servers/worker groups, color depth, and so on.

To create a pre-launch session, right-click on the application and choose Other Tasks | Create pre-launch application, as shown in the following screenshot:

Creating pre-launch application

To avoid confusion, I suggest renaming the configured pre-launch session, removing the actual application name; for example, you can name it Pre-launch WGProd.

A pre-launched session can be used to run applications that have the same settings as the application you chose when you created the session. For example, it can be used by applications that run on the same servers. If you published different groups of applications, usually assigned to different worker groups, you should create pre-launch sessions by choosing one application from each group, to be sure that all your users benefit from this feature.

Life cycle of a session

If you configured a pre-launch session, a new session is created when the Receiver passes the user credentials to the XenApp server. The server that will host the session is chosen in the usual way by the Data Collector. In Citrix AppCenter, you can identify pre-launched sessions from the value in the Application State column, as shown in the following screenshot:

Pre-launched session

Using Citrix policies, you can set the maximum time a pre-launch session is kept:

- Pre-launch Disconnect Timer Interval is the time after which the pre-launch session is put in the disconnected state
- Pre-launch Terminate Timer Interval is the time after which the pre-launch session is terminated

Session sharing

Session sharing occurs when a user has an open session on a server and launches another application that is published on the same server. The launch time for the second application is quicker because Citrix doesn't need to set up a new session for it.

Session sharing is enabled by default if you publish your applications in seamless window mode. In this mode, applications appear in their own windows without a containing ICA session window; they seem physically installed on the client.

Session sharing fails if applications are published with different settings (for example, color depth, encryption, and so on). Make sure to publish your applications using the same settings if possible.

Session sharing takes precedence over load balancing; the only exception is if the server reports full load. You can force XenApp to override the load check and to also use fully loaded servers for session sharing; refer to CTX126839 for the required registry changes. This is, however, not a recommended configuration, as a fully loaded server can lead to poor performance.

Session lingering

If a user closes all the applications running in a session, the session is ended too. Sometimes it would be useful to keep the session open to speed up the start of new applications. With XenApp 6.5 you can configure a lingering time, during which XenApp waits before closing a session even if all the running applications are closed.
Configuring

With Citrix user policies, you can configure two timers for session lingering:

- Linger Disconnect Timer Interval is the time after which a session without applications is put in the disconnected state
- Linger Terminate Timer Interval is the time after which a session without applications is terminated

If you're running an older version of XenApp, you can keep a session open even if users close all the running applications with the KeepMeLoggedIn tool; refer to CTX128579.

Summary

The optimization of the logon process can greatly improve the user experience. With EdgeSight and Windows Performance Toolkit you can perform a deep analysis and detect the causes of delay. If you can't use those tools, you can still implement the guidelines and best practices that will surely make users' logons faster.

Setting up a session is a time-consuming task. With XenApp 6.5, Citrix implemented some new features to improve session management. With session pre-launch and session lingering, you can maximize the reuse of existing sessions when users request an application, speeding up launch times.

Resources for Article:

Further resources on this subject:

- Managing Citrix Policies [Article]
- Getting Started with XenApp 6 [Article]
- Getting Started with the Citrix Access Gateway Product Family [Article]


Publishing applications

Packt
02 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Step 1 – connecting with AppCenter

AppCenter is installed along with the XenApp server; however, if required, it can be installed onto a client machine. Often AppCenter is run remotely by administrators who connect to the XenApp server's desktop. Use the following path to run AppCenter:

Start | Administrative Tools | Citrix | Management Consoles | Citrix AppCenter

The first time the console is run, it is necessary to discover the farm. We can do this by selecting to add the local computer (this is usual if we are running AppCenter on the XenApp server). Ensure that we only discover XenApp, and not Single Sign-on (SSO) servers. If connecting remotely from AppCenter to XenApp, instead of discovering the local computer, we would add the IP address of any XenApp server in the farm and ensure we have connectivity on TCP port 2513.

Step 2 – publishing an application using AppCenter

When users connect to the Web Interface server, they are presented with a list of applications that they may access. If you want an application to appear in the list, it must be "published". To create published applications with AppCenter:

1. Navigate to the Applications node of AppCenter, right-click, and select the option Publish application. As we run through the wizard, for our example, we will choose to publish notepad.exe.
2. Set the application name.
3. Set the application type by selecting Application | Accessed from a server | Installed application. Applications can be installed on or streamed to servers. The following screenshot shows the application type page from the wizard:

As the wizard continues, we choose the remaining options as follows:

1. Set the executable's name and location.
2. Set the servers that will host the application. This could be a list of server names from the farm, or the assignment could be to a group of servers known as a worker group.
3. Assign user access to the application by selecting, most usually, domain groups on the users page of the wizard.
4. Finally, we can choose to add the application to folders either on the web page or in the start menu, depending on the type of access.

The properties that we set here are the basic properties, and once set, the application can be published immediately.

Step 3 – publishing applications using PowerShell

Besides using the AppCenter GUI, it is also possible to add and configure published applications with PowerShell. The following steps will guide you through the process.

The PowerShell snap-ins are installed along with the XenApp server but, as with AppCenter, they can be added to client machines if required. To install the snap-ins independently of XenApp, navigate on the installation DVD to the :\Administration\Delivery Services Console\setup folder. From the Citrix.XenApp.Commands.Client.Install_x86 folder we can install the snap-ins for a 32-bit platform, and from Citrix.XenApp.Commands.Client.Install_x64 for the 64-bit architecture.
Once we have a PowerShell prompt open, we will need to load the required snap-in:

Add-PSSnapin Citrix.XenApp.Commands

With the snap-in loaded, we create the new application:

New-XAApplication -BrowserName Notepad -ApplicationType ServerInstalled -DisplayName Notepad -CommandLineExecutable "notepad.exe" -ServerNames XA1

The application can be associated with users with the following PowerShell command:

Add-XAApplicationAccount Notepad "Example\Domain Users"

Finally, the application will need to be enabled with this last line of code:

Set-XAApplication Notepad -Enabled $true
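To double-check the result from the same session, a hedged sketch follows; the read cmdlets below are part of the same snap-in family, though property names may vary slightly between SDK versions.

# Show the published application's effective settings...
Get-XAApplication -BrowserName Notepad | Format-List DisplayName, Enabled, CommandLineExecutable

# ...and list the accounts entitled to it.
Get-XAApplicationAccount -BrowserName Notepad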
Step 4 – publishing server desktops

If required, users can access a server desktop. For this, the application type would be set to Server Desktop. By default, only administrators are allowed access to published desktops in XenApp. For other users, this can be enabled using Citrix Group Policies, as shown in the following screenshot from a Citrix User Policy (Group Policies are covered in more detail later):

Step 5 – publishing content

Published content can be web URLs or files on network shares accessible by the client from the XenApp server. The content may be in the form of PDF documentation or access to intranet sites that the users need. As the following screenshot shows, we can provide shortcuts to web resources such as the licensing server's web console:

Step 6 – prelaunching applications

You will soon notice that launching an application for the first time takes a little while, as the client device has to establish a session on the server. This involves running login scripts, evaluating policies, and loading the user's profile. Once the session is established, other applications launch more quickly on the same server, as they share the existing session. The session exists until the user logs out from all applications.

When accessing applications using the Web Interface Services Site with the Citrix Receiver, it is possible to configure session prelaunch. This is achieved by creating a prelaunch application from one of our published applications. When a user logs on to the Receiver, a session is then created for them immediately or at a scheduled time. This gives faster access to the applications, as the session pre-exists. Note that a user license is consumed as soon as the user logs on to the Receiver, not just when they launch apps.

To create a prelaunch application, right-click on an existing application and from the context menu select Other Tasks | Create pre-launch application, as shown in the following screenshot:

Step 7 – accessing published applications

We are now ready to test the connection, and we will connect to the Web Interface Web Site. The Web Interface Services Site is for direct connections from the Citrix Receiver; the Web Interface Web Site is used by web browsers and online plugins. The default Web Site will be http://<yourwebinterfaceserver>/Citrix/XenApp. You will be prompted for your username and password, and possibly your domain name. Once authenticated, you can see the list of applications, content, and desktops associated with your login. In the following screenshot, we can see the published application, Notepad:

Summary

We saw how to perform one of the core tasks of XenApp: making applications available to remote users. We did this by using the consoles and from the command line using PowerShell.

Resources for Article:

Further resources on this subject:

- Getting Started with the Citrix Access Gateway Product Family [Article]
- Designing a XenApp 6 Farm [Article]
- Creating a sample C#.NET application [Article]


Creating an Image Profile by cloning an existing one

Packt
19 Jul 2013
5 min read
(For more resources related to this topic, see here.)

How to do it

The following procedure will guide you through the steps required to clone a predefined ESXi Image Profile available from an ESXi Offline Bundle. It is a four-step process:

1. Verifying the existence of a Software Depot in the current session
2. Adding a Software Depot
3. Listing available Image Profiles
4. Cloning an Image Profile to form a new one

Verifying the existence of a Software Depot in the current session

To verify whether there are any existing Software Depots defined in the current PowerCLI session, issue the following command:

$DefaultSoftwareDepots

If the command returns no values, there are no Software Depots defined in the current session. If the needed Software Depot was already added, the command output will list the depot; in that case, you can skip step 2, Adding a Software Depot, and start with step 3, Listing available Image Profiles.

Adding a Software Depot

Before you add a Software Depot, make sure that you have the Offline Bundle saved to your local disk. The Offline Bundle can be downloaded from VMware's website or from the OEM's website. The bundle can be either an ESXi image or a device driver bundle. We already have the Offline Bundle downloaded to the C:\AutoDeploy-VIBS directory. Now, let's add this to the current PowerCLI session. To add the downloaded Software Depot, issue the following command:

Add-EsxSoftwareDepot -DepotUrl C:\AutoDeploy-VIBS\ESXi500-201111001.zip

Once the Software Depot has been successfully added to the PowerCLI session, the command $DefaultSoftwareDepots should list the newly added Software Depot. You could also just issue the command Get-EsxSoftwareDepot to list all the added depots (Offline Bundles).

Listing available Image Profiles

Once the Software Depot has been added, the next step is to list all the currently available Image Profiles from the depot by issuing the following command:

Get-EsxImageProfile

We see that the ESXi Offline Bundle offers two Image Profiles: one is an ESXi image with no VMware Tools ISOs bundled with it, and the other is the standard image, with the VMware Tools ISOs bundled.

Cloning an Image Profile to form a new one

Now that we know there are two Image Profiles available, the next step is to clone the needed Image Profile to form a new one. This is done using the New-EsxImageProfile cmdlet. The cmdlet can be supplied with the name of the Image Profile as an argument. However, in most cases, remembering the names of the available Image Profiles would be difficult. The best way to work around this is to define an array variable to hold the names of the Image Profiles; the array elements (Image Profile names) can then be easily addressed individually in the command.

In this example, we will use a user-defined array variable $profiles to hold the output of the command Get-EsxImageProfile. The following expression saves the output of the Get-EsxImageProfile command to the variable $profiles:

$profiles = Get-EsxImageProfile

The $profiles variable now holds the two Image Profile names as array elements [0] and [1], sequentially. The following command can be issued to clone the array element [1], ESXi-5.1.10-799733-standard, to form a new Image Profile with the user-defined name Profile001:
New-EsxImageProfile -CloneProfile $profiles[1] -Name "Profile001" -Vendor VMware

Once the command has been successfully executed, you can issue the Get-EsxImageProfile command to list the newly created Image Profile.

How it works

The PowerCLI session keeps a list of the Image Profiles available from the added Offline Bundles. During the process of creating a new Image Profile, you first verify whether a Software Depot is already added to the PowerCLI session using the $DefaultSoftwareDepots command. If no Software Depots have been added, the command silently returns to the PowerCLI prompt. If Software Depots have been added, it lists them, showing the path to each depot's XML file; this is referred to as a depot URL.

The process of adding the Software Depot is pretty straightforward. First make sure that you have downloaded the needed Offline Bundles to the server where you have PowerCLI installed; in this case, the bundle was downloaded and saved to the C:\AutoDeploy-VIBS folder. Once the Offline Bundle is saved to an accessible location, you can issue the command Add-EsxSoftwareDepot to add the Offline Bundle as a depot in the PowerCLI session.

Once the depot has been added, you can list all the Image Profiles available from the Offline Bundle. The chosen Image Profile is then cloned to form a new Image Profile, which can be customized by adding or removing VIBs. It can then be published as an Offline Bundle or an ISO (these final steps are sketched after the resource list below).

Summary

We saw that the predefined Image Profiles available from an Offline Bundle are read-only. To customize such Image Profiles, you need to clone them to form new Image Profiles. We learned how to create a new Image Profile by cloning an existing one.

Resources for Article:

Further resources on this subject:

- Supporting hypervisors by OpenNebula [Article]
- Integration with System Center Operations Manager 2012 SP1 [Article]
- VMware View 5 Desktop Virtualization [Article]
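As referenced in the How it works section above, here is a sketch of the customization and publishing steps that follow the clone. The VIB name (net-e1000e) and output paths are placeholders for your own packages and folders.

# Add a VIB to the cloned profile.
Add-EsxSoftwarePackage -ImageProfile Profile001 -SoftwarePackage net-e1000e

# Publish the customized profile as an Offline Bundle or as an installable ISO.
Export-EsxImageProfile -ImageProfile Profile001 -ExportToBundle -FilePath C:\AutoDeploy-VIBS\Profile001.zip
Export-EsxImageProfile -ImageProfile Profile001 -ExportToIso -FilePath C:\AutoDeploy-VIBS\Profile001.iso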

Integration with System Center Operations Manager 2012 SP1

Packt
17 May 2013
9 min read
(For more resources related to this topic, see here.)

This article provides tips and techniques to allow administrators to integrate Operations Manager 2012 with Virtual Machine Manager 2012 to monitor the health and performance of virtual machine hosts and their virtual machines, as well as to use the Operations Manager reporting functionality.

In a hybrid hypervisor environment (for example, Hyper-V and VMware), using Operations Manager management packs (MPs) (for example, the Veeam MP), you can monitor both the Hyper-V hosts and the VMware hosts, which allows you to use only the System Center console to manage and monitor the hybrid hypervisor environment. You can also monitor the health and availability of the VMM infrastructure, management, database, and library servers. The following screenshot shows the diagram views of the virtualized environment in Operations Manager:

Installing System Center Operations Manager 2012 SP1

This recipe will guide you through the process of installing System Center Operations Manager for integration with VMM. Operations Manager has integrated product and company knowledge for proactive tuning. It also offers compute, OS, application, and service monitoring, out-of-the-box network monitoring, reporting, and many more features, extensible through management packs, thus providing cross-platform visibility.

The deployment used in this recipe assumes a small environment with all components installed on the same server. For datacenters and enterprise deployments, it is recommended to distribute the features and services across multiple servers to allow for scalability. For a complete design reference and complex implementations of SCOM 2012, follow the Microsoft Operations Manager deployment guide available at http://go.microsoft.com/fwlink/?LinkId=246682. When planning, use the Operations Guide for System Center 2012 – Operations Manager (http://go.microsoft.com/fwlink/p/?LinkID=207751) to determine the hardware requirements.

Getting ready

Before starting, check the system requirements and design planning for System Center Operations Manager 2012 SP1 at http://technet.microsoft.com/en-us/library/jj656654.aspx. My recommendation is to deploy on Windows Server 2012 with SQL Server 2012 SP1.

How to do it...

Carry out the following steps to install Operations Manager 2012 SP1:

1. Browse to the SCOM installation folder and click on Setup.
2. Click on Install.
3. On the Select the features to install page, select the components that apply to your environment, and then click on Next, as shown in the following screenshot. The recommendation is to have a dedicated server, but it all depends on the size of the deployment; you can select all of the components to be installed on the same server for a small deployment.
4. Type in the location where you'd like to install Operations Manager 2012 SP1, or accept the default location, and click on Next.
5. The installation will check whether your system meets all of the requirements. If any requirements are not met, a screen showing the issues will be displayed, and you will be asked to fix and verify them before continuing with the installation, as shown in the following screenshot. If all of the prerequisites are met, click on Next to proceed with the setup.
6. On the Specify an installation option page, if this is the first Operations Manager, select the Create the first Management Server in a new management group option and provide a value in the Management group name field.
Otherwise, select the Add a management server to an existing management group option, as shown in the following screenshot.

7. Click on Next to continue, accept the EULA, and click on Next.
8. On the Configure the operational database page, type in the server and instance name and the SQL Server port number. It is recommended to keep the default values in the Database name, Database size (MB), Data file folder, and Log file folder boxes. Click on Next. The installation account needs DB owner rights on the database.
9. On the SQL Server instance for Reporting Services page, select the instance where you want to host Reporting Services (SSRS). Make sure the SQL Server has the SQL Server Full-Text Search and Analysis Services components installed.
10. On the Configure Operations Manager accounts page, provide the domain account credentials (for example, lab\svc-scom) for the Operations Manager services. You can use a single domain account. For account requirements, see the Microsoft Operations Manager deployment guide at http://go.microsoft.com/fwlink/?LinkId=246682.
11. On the Help improve System Center 2012 – Operations Manager page, select the desired options and click on Next.
12. On the Installation Summary page, review the options, click on Install, and then click on Close. The Operations Manager console will open.

How it works...

When deploying SCOM 2012, it is important to consider the placement of the components. Work on the SCOM design before implementing it; see the OpsMgr 2012 Design Guide available at http://blogs.technet.com/b/momteam/archive/2012/04/13/opsmgr-2012-design-guide.aspx.

On the Configure the operational database page, if you are installing the first management server, a new operational database will be created. If you are installing additional management servers, an existing database will be used.

On the SQL Server instance for Reporting Services page, make sure you have previously configured Reporting Services during SQL setup using the Reporting Services Configuration Manager tool, and that the SQL Server Agent is running.

During the OpsMgr setup, you will be required to provide the Management Server Action Account credentials, as well as the System Center Configuration service and System Center Data Access service account credentials. The recommendation is to use a domain account so that you can use the same account for both services. The setup will automatically assign the local computer's Administrators group to the Operations Manager administrator's role.

The single-server scenario combines all roles on a single instance and supports the following services: monitoring and alerting, reporting, audit collection, agentless exception management, and data. If you are planning to monitor the network, it is recommended to move the SQL Server tempdb database to a separate disk that has multiple spindles.

There's more...

To confirm the health of the management server, carry out the following steps:

1. In the OpsMgr console, click on the Administration workspace.
2. In Device Management, select Management Servers to confirm that the installed server has a green check mark in the Health State column.

See also

- The Deploying System Center 2012 – Operations Manager article available at http://technet.microsoft.com/en-us/library/hh278852.aspx
See also

The Deploying System Center 2012 - Operations Manager article available at http://technet.microsoft.com/en-us/library/hh278852.aspx

Installing management packs

After installing Operations Manager, you need to install some management packs and agents on the Hyper-V servers and on the VMM server. This recipe will guide you through the installation, but first make sure you have installed the Operations Manager Operations console on the VMM management server.

You need to import the following management packs for the VMM 2012 SP1 integration:

Windows Server operating system
Windows Server 2008 operating system (Discovery)
Internet Information Services 2003
Internet Information Services 7
Internet Information Services library
SQL Server Core Library

Getting ready

Before you begin, make sure the correct version of PowerShell is installed, that is, PowerShell v2 for SC 2012 and PowerShell v3 for SC 2012 SP1.

How to do it...

Carry out the following steps to install the required MPs in order to integrate with VMM 2012 SP1:

1. In the OpsMgr console, click on the Administration workspace on the bottom-left pane.
2. On the left pane, right-click on Management Packs and click on Import Management Packs.
3. In the Import Management Packs wizard, click on Add, and then click on Add from catalog.
4. In the Select Management Packs from Catalog dialog box, repeat steps 5 to 7 for each of the following management packs:

Windows Server operating system
Windows Server 2008 operating system (Discovery)
Internet Information Services 2003
Internet Information Services 7
Internet Information Services library
SQL Server Core Library

There are numerous management packs for Operations Manager. You can use this recipe to install other OpsMgr MPs from the catalog web service. You can also download MPs from the Microsoft System Center Marketplace, which contains MPs and documentation from Microsoft and some non-Microsoft companies; save them to a shared folder and then import them. See http://systemcenter.pinpoint.microsoft.com/en-US/home.

5. In the Find field, type in the management pack name to search the online catalog and click on Search. The Management packs in the catalog list will show all of the packs that match the search criterion.
6. To import, select the management pack, click on Select, and then click on Add, as shown in the following screenshot:

In the View section, you can refine the search by selecting, for example, to show only those management packs released within the last three months. The default view lists all of the management packs in the catalog.

7. Click on OK after adding the required management packs.
8. On the Select Management Packs page, each MP will be listed with a green, yellow, or red icon. The green icon indicates that the MP can be imported. The yellow information icon means that it depends on other MPs that are available in the catalog; you can fix the dependency by clicking on Resolve. The red error icon indicates that it depends on other MPs that are not available in the catalog. Click on Import once all management packs show a green icon.
9. On the Import Management Packs page, the progress for each management pack will be displayed. Click on Close when the process is finished.
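If the OpsMgr server has no Internet access, or you simply want to automate the import, the same MPs can be imported from disk with the Operations Manager Shell. This is a minimal sketch, assuming the .mp files have already been downloaded to a hypothetical C:\MPs folder:

Import-Module OperationsManager

# Import every management pack file found in the folder
Get-ChildItem -Path 'C:\MPs' -Filter *.mp | ForEach-Object {
    Import-SCOMManagementPack -Fullname $_.FullName
}

# Confirm the packs are now present in the management group
Get-SCOMManagementPack | Where-Object { $_.Name -like '*InformationServices*' }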
How it works...

You can import the management packs available for Operations Manager in the following ways:

Using the OpsMgr console, from the Management Packs menu of the Administration workspace, you can:

Import directly from Microsoft's online catalog
Import from disk/share
Download a management pack from the online catalog to import at a later time

Using an Internet browser, you can download a management pack from the online catalog to import at a later time, or to install on an OpsMgr server that is not connected to the Internet.

While using the OpsMgr console, verify that all management packs show a green status. Any MP displaying the yellow information icon or the red error icon in the import list will not be imported. If there is no Internet connection on the OpsMgr server, use an Internet browser to locate and download the management pack to a folder/share, then copy the management pack to the OpsMgr server and use the option to import from disk/share.

See also

The Installing System Center Operations Manager 2012 SP1 recipe
Visit the Microsoft System Center Marketplace available at http://go.microsoft.com/fwlink/?LinkId=82105
Content Switching using Citrix Security

Packt
10 Apr 2013
8 min read
(For more resources related to this topic, see here.)

Getting ready

We will start with the packet flow of NetScaler and where content switching comes into play. The following diagram is self-explanatory (it is not the entire packet flow to the receiver's endpoint; the focus here is only on CS and LB):

The content switching vserver can be used for the HTTP/HTTPS/TCP and UDP protocols, and it can direct traffic only to another vserver, not to a backend service directly. The content switching vserver doesn't need an LB vserver to be bound to it for its status to be UP. Even with nothing bound to the CS vserver, the status will show UP (this comes in handy when you want to blackhole unwanted traffic). Hence, it is always recommended to check whether the load balancing vservers that are bound to the content switching vserver are up and running. If you want to avoid the preceding condition, the following CLI command will help you achieve it (by default, the value is disabled):

root@NetScaler> add cs vserver <name> <serviceType> (<IPAddress>) [-stateupdate ( ENABLED | DISABLED )]

Content switching can be done based on the following client attributes:

Mobile user/PC
Images/videos
Dynamic/static content
Client with/without cookies
Geographical locations
Per VLAN

Similarly, server-side differentiations can also be made based on the following attributes:

Server speed and capacity
Source/destination port
Source/destination IP
SSL/HTTP

Citrix also has an additional feature (starting from NetScaler version 9.3) that dynamically selects the load balancing vserver based on any criteria or condition provided in the CS action/policy:

>add cs action <name> -targetLBVserver <string-expression>
>add cs policy <policyName> -rule <RULEValue> -action <actionName>

The policy is then bound to the CS vserver. CS vservers can be configured to process URLs in a case-sensitive manner. By default, this option is ON:

>set cs vserver CSVserver -caseSensitive ON

The load balancing vserver bound to the CS vserver need not have any IP address configured, unless it is also accessed directly.

How to do it...

We shall focus on a few case studies that we commonly come across, and that can be solved with the help of content switching:

Case 1: Customer ABC accesses an online shopping portal and gets redirected to a secure connection at the payment gateway. For this scenario, an HTTP LB vserver is used and is bound to the CS vserver, which is on HTTPS:

The configuration in the preceding screenshot shows that a CS policy as well as a responder policy is bound to the CS vserver named testVserver. The CS policy directs the traffic to the target LB vserver (if there are no CS policies bound at all, traffic goes to the default LB vserver; this default LB vserver should be configured on the CS vserver). The responder policy, if bound to the CS vserver, works on HTTP requests before any CS policy is matched. The configuration is verified by using show cs vserver <vserver name>. A packet capture taken on NetScaler will clearly show the redirect from HTTP to HTTPS as <HTTP 302>. If there is any traffic that doesn't match any specific CS policy that is bound, then it uses the default policy. If there is no default policy, the user will get an HTTP/1.1 Service Unavailable error message.
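For reference, a Case 1 style redirect can also be expressed on the command line. The following is only a sketch with hypothetical names (act_ssl_redirect, pol_ssl_redirect, and the testVserver from the screenshot); the exact quoting and binding syntax can vary between NetScaler releases, so verify it against your firmware:

>add responder action act_ssl_redirect redirect "\"https://\" + HTTP.REQ.HOSTNAME + HTTP.REQ.URL"
>add responder policy pol_ssl_redirect HTTP.REQ.IS_VALID act_ssl_redirect
>bind cs vserver testVserver -policyName pol_ssl_redirect -priority 100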
Case 2: The customer Star Networks has a single web application that contains two domains, namely www.starnetworks.com and www.starnetworks.com.edu, and has a content switching setup that works fine when accessing www.starnetworks.com, but throws an error when accessing www.starnetworks.com.edu. This happens because the preceding domains are not the same; they are different, and the certificate bound to the CS vserver covers www.starnetworks.com only. To resolve this issue, we can bind multiple certificates to the CS vserver with the Server Name Indication (SNI) option enabled. The SNI option can be enabled in the SSL Parameters tab (this pops up only if the SSL protocol is chosen while creating the vserver). The CLI command to enable SNI is as follows:

>bind sslvserver star_cs_vserver -certkeyname <certkeyName1> -SNICert
>bind sslvserver star_cs_vserver -certkeyname <certkeyName2> -SNICert

For each domain added, NetScaler will establish a secure channel between itself and the client. With this solution, you can avoid configuring multiple CS vservers.

Case 3: A customer has a large pool of IP subnets that need categorizing, and it would be a next to impossible task to configure that number of content switching policies. How does he go about deploying this scenario? The solution is as follows:

1. A database file should be created that includes the IP address range and the domain:

>shell
# cd /var/NetScaler/locdb
# vi test.db

2. Run the following command to apply the changes made to the database file:

>add locationfile /var/NetScaler/locdb/test.db

3. Bind the CS policy with an expression stating, for example, the following:

"CLIENT.IP.SRC.MATCHES_LOCATION("star.*.*.*.*.*")"

How it works...

In all three preceding scenarios, NetScaler analyzes the incoming traffic directed to the CS VIP and parses through the bound CS policies, if any. If a match is found, the request goes to the target LB vserver. If there are any other policies bound (for example, a responder policy or a rewrite policy), then the responder policy gets executed even before the CS policy is evaluated (since responder policies are usually applied to HTTP requests). However, rewrite policies can be bound either at the CS or the LB level, depending on whether the request or the response needs to be modified.

To recap what we have seen in the case studies mentioned before, the first case helps us do a simple redirect from HTTP to HTTPS using a responder policy bound at the CS level. The second case shows us how multiple certificates with the SNI option are used to solve domain differences that would otherwise cause issues. The final case study shows us the basic but handy setting to map IP address ranges to target load balancing vservers.

An important thing to note: there are scenarios where the vserver and the services bound to it may be on different ports altogether (for example, the HTTP LB VIP would be listening on port 80, but the services would be on port 8080). In such cases, the redirectPortRewrite feature should be enabled.
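To illustrate, enabling the feature on the CS vserver from the case studies would look something like the following sketch (the vserver name is the example one used above, and the same parameter also exists at the LB vserver level):

>set cs vserver cs_star_vserver -redirectPortRewrite ENABLED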
There's more...

This section concentrates on tidbits and troubleshooting techniques.

Tips and troubleshooting

We can start with checking the output of show cs vserver and show lb vserver, to see whether the services bound to them are up and running:

root@ns> show cs vserver cs_star_vserver
1) cs_star_vserver (IP_ADDRESS_HERE:80) - HTTP Type: CONTENT
State: UP
Client Idle Timeout: 180 sec
Down state flush: ENABLED
Port Rewrite : DISABLED
Default: lb_vserver Content Precedence: RULE
Vserver IP and Port insertion: OFF
Case Sensitivity: OFF

If there are responder and rewrite policies, we can check whether the number of hits on those policies is incrementing or not. Packet captures (using Wireshark) taken on the server and on NetScaler, and in some cases on the client, will show us the packet flow in depth.

The Down state flush feature of NetScaler is useful for admins planning their downtimes in advance. This feature is enabled by default at the vserver and service levels. When the feature is enabled, connections that are already open and established will be terminated and users will have to retry their connections; only the requests that are already being processed are honored. When the feature is disabled, open and established connections are honored, and no new connections are accepted at that time. If enabled at the vserver level, and if the state of the vserver is DOWN, the vserver will flush both the client and server connections that are linked; otherwise, it terminates only the client-facing connections. At the service level, if the service is marked as DOWN, only the server-facing connections will be flushed.

There is another option, on the Advanced tab of the CS/LB vserver, to direct excess traffic to a backup vserver. In cases where the backup vserver also overflows, there is an option to use a redirect URL, which is also found on the Advanced tab of the CS/LB vserver.

Summary

This article has explained the implementation of content switching using Citrix Security.

Resources for Article :

Further resources on this subject:
Managing Citrix Policies [Article]
Getting Started with XenApp 6 [Article]
Getting Started with the Citrix Access Gateway Product Family [Article]
The XenDesktop architecture

Packt
08 Mar 2013
12 min read
(For more resources related to this topic, see here.)

Virtual desktops can reside in two locations:

Hosted at the data center: In this case, you host the desktop/application at the data center and provide users with these desktops through different types of technologies (for example, a remote desktop client or Citrix Receiver/XenDesktop).

Hosted at the client-side machine: In this case, users access the virtual desktop or application through a VM that is loaded at the client side through different types of client-side virtualization (level 1, level 2, or streaming).

Hosted desktops can be provided using:

VM or physical PC: In this type, users access the desktop on an OS that is installed on a dedicated VM or physical machine. You typically install the OS on the VM and clone the VM using hypervisor or storage cloning technology, or install the OS on a physical machine using automated installation technology, enabling users to access the desktop. The problem with this method is that storage consumption is very high, since you will have to load each VM/PC with the OS. Additionally, adding, removing, and managing the infrastructure is sometimes complex and time consuming.

Streamed OS to VM or physical PC: This type is the same as the VM or physical PC type; however, the OS is not installed on the VM or the physical machine but rather streamed to them. Machines (virtual or physical) boot from the network, load the required virtual hard disk (VHD) over the network, and boot from that VHD. This has a lot of benefits, including simplicity and storage optimization, since the OS is not installed; a single VHD can support thousands of users.

Hosted shared desktop: In this model, you provide users with access to a desktop that is hosted on a server and published to users. This method is implemented with Microsoft Terminal Services, or XenApp plus Terminal Services. It provides high scalability in terms of the number of users per server, but provides limited security and limited environment isolation for the user.

Now that we have visited the different types of DV and how they are delivered and located, let's see the pros and cons of each type:

Hosted PC - not streamed
Pros: PCs are located at the DC, allowing better control. Can be used in a consolidation scenario along with refurbished PCs. Superior performance, since physical HW provides the HW resources to the OS and the OS is preinstalled.
Cons: Management and deployment are difficult, especially in large-scale deployment scenarios. Operating system deployment and maintenance is an overwhelming process. Low storage optimization and utilization, since each OS is preinstalled. High electricity consumption, since all of those PCs (normal or blade) consume electricity and HVAC resources. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for refurbished PCs and reusing old PCs. Suitable for task workers, call center agents, and individuals with similar workloads.

Hosted VM - not streamed
Pros: VMs are located at the DC, allowing better control. Superior performance, since the OS is preinstalled, although less than what physical PCs provide. High consolidation through server virtualization; it provides green IT for your DC.
Cons: Management and deployment are difficult, especially in large-scale deployment scenarios.
Low storage optimization and utilization, since each OS is preinstalled, thus requiring a storage-side optimization/de-duplication technique that might add costs with some vendors. Hypervisor management could add costs with some vendors. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for task workers and call center agents. Some organizations use this model to give users access to server-level hardware resources, such as rendering and CAD software.

Hosted PC - streamed
Pros: PCs are located at the DC, allowing better control. Can be used in a consolidation scenario along with refurbished PCs. Superior performance, since physical HW provides the HW resources to the OS. High storage optimization and utilization, since each OS is streamed on demand.
Cons: Deployment is difficult, especially in large-scale deployment scenarios. High electricity consumption, since all of those PCs (normal or blade) consume electricity and HVAC resources. Requires high storage performance to provide write cache storage for the OS. Requires the deployment of a streaming technology provider within the architecture. PCs must be connected to the streaming servers over the network to stream the OS's VHD. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for task workers and call center agents. Some organizations use this model to give users access to server-level hardware resources, such as rendering and CAD software.

Hosted VM - streamed
Pros: VMs are located at the DC, allowing better control. Superior performance, although less than what physical PCs provide. High consolidation through server virtualization; it provides green IT for your DC. High storage optimization and utilization, since each OS is streamed on demand.
Cons: Requires high storage performance to provide write cache storage for the OS. Requires the deployment of a streaming technology provider within the architecture. Hypervisor management could add some costs with some vendors. VMs must be connected to the streaming servers over the network to stream the OS's VHD. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suited for users who need a hosted environment, but who do not need powerful resources or an isolated environment of their own.

Hosted shared desktops
Pros: The desktop is located in the DC, allowing better control. High capacity in terms of the server/user ratio. Server consolidation can be achieved by hosting the servers on top of hypervisor technology. Easy to manage, deploy, and administrate.
Cons: Different types of users will require different types of applications, HW resources, and security, which cannot be fully achieved in HSD mode. Resource isolation for users cannot be fully achieved as in other models. Customization of the OS, applications, and profile management could be a little tricky. Requires users to be connected to the DC if they wish to access the VD.
Usage: Suitable for most VDs, task workers, call center agents, and PC users.

Level 1 client hypervisors
Pros: Since VMs are hosted on the clients, the load is offloaded from the DC and less HW investment is required. Users can use their VD without being connected to the network. Client HW can be utilized, including graphics cards and sound devices.
Cons: VDs are not located in the DC, and thus could be less secure for some organizations. Might not fit organizations that allow users to use their own personal computers. Could require some HW investment at the client side to allow adequate performance.
The client is connected to central storage for the purpose of backing up the company's data. This type might fit smaller environments, but for larger environments, management could be hard or even impossible.
Usage: Suitable for mobile users and power users who are not connected to the data center at all times.

Level 2 client hypervisors
Pros: Since VMs are hosted on the clients, the load is offloaded from the DC and less HW investment is required. Users can use their VD without being connected to the network. Client HW can be utilized, including graphics cards and sound devices. Allows users to mix the usage of virtual and physical desktops.
Cons: VDs are not located in the DC, and thus could be less secure for some organizations. Might not fit organizations that allow users to use their own personal computers. Could require some HW investment at the client side to allow adequate performance. The client is connected to central storage for the purpose of backing up the company's data. This type might fit smaller environments, but for larger environments, management could be hard or even impossible.
Usage: Suitable for mobile users and power users who are not connected to the data center at all times.

XenDesktop components

Now that we have reviewed the DV delivery methods, let us see the components that XenDesktop uses to build the DV infrastructure. The XenDesktop infrastructure can be divided into two major blocks, the delivery block and the virtualization block, as shown in the next two sections.

Delivery block

This block is responsible for the VD delivery part, including authenticating users, remote access, redirecting users to the correct servers, and so on. It consists of the following servers:

The Desktop Delivery Controller

Description: The Desktop Delivery Controller (DDC) is the core service within the XenDesktop deployment. Think of it as the glue that holds everything together within the XenDesktop infrastructure. It is responsible for authenticating the user, managing the user's connections, and redirecting the user to the assigned VD.

Workload type: The DDC handles HTTP/HTTPS connections sent from the web interface services to its XML service, passing the user's credentials and querying for the user's assigned VD.

High availability: This is achieved by introducing extra DDCs into the farm. To achieve high availability and load balancing between them, these DDCs are configured in the web interface, which will load balance them.

Web interface (WI)

Description: The web interface is a very critical part of the XenDesktop infrastructure. The web interface has four very important tasks to handle within the XenDesktop infrastructure. They are as follows:

1. It receives the user credentials either explicitly, by pass-through, or by other means
2. It passes the credentials to the DDC
3. It passes the list of assigned VDs to the user after communicating with the DDC XML service
4. It passes the connection information to the client inside the .ica file so the client can establish the connection

Workload type: The WI receives HTTP/HTTPS connections from users initiating connections to the XenDesktop farm, both internally and externally.

High availability: This is achieved by installing multiple WI servers and load balancing the traffic between them, using either Windows NLB or an external hardware load balancer, such as Citrix NetScaler.
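Once the delivery block is up, it can be sanity-checked from PowerShell as well. The following is a minimal sketch run on a DDC, using the snap-in and cmdlet names from the XenDesktop 5 Broker SDK; treat the property names as examples and adjust for your own site:

# Load the XenDesktop 5 Broker snap-in (installed with the DDC/SDK)
Add-PSSnapin Citrix.Broker.Admin.V1

# List each controller in the site and its current state
Get-BrokerController | Select-Object DNSName, State

# Summarize vDesktop availability as seen by the broker
Get-BrokerDesktop | Group-Object SummaryState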
Licensing server

Description: The licensing server is responsible for supplying the DDC with the required license when the virtual desktop agent tries to acquire one from the DDC.

Workload type: When you install the DDC, a license server must be configured to provide the licenses to the DDC. When you install the license server, you import the licenses you purchased from Citrix into the licensing server.

High availability: This can be achieved in two ways: either by installing the licensing server on a Windows cluster, or by installing another offline licensing server that can be brought online if the first server fails.

Database server

Description: The database server is responsible for holding the static and dynamic data, such as configuration and session information. XenDesktop 5 no longer uses the IMA data store as the central database in which the configuration information is stored. Instead, a Microsoft SQL Server database is used as the data store for both configuration and session information.

Workload type: The database server receives DB connections from the DDCs and provisioning servers (PVS), querying for the farm/site configuration. It is important to note that XenDesktop creates a DB that is different from the provisioning services farm database; both databases could be located on the same server, in the same or different instances, or on different servers.

High availability: This can be achieved by using SQL clustering on top of Microsoft Windows clustering, or by using SQL replication.

Virtualization block

The virtualization block provides the VD service to the users through the delivery block. I often see the virtualization block referred to as the hypervisor stack but, as you saw previously, it can also be built from blade PCs or PCs located in the DC, so don't miss that. The virtualization block consists of:

Desktop hosting infrastructure

Description: This is the infrastructure that hosts the virtual desktops accessed by users. It could be either a virtual machine infrastructure, which could be XenServer, Hyper-V, or VMware vSphere ESXi servers, or a physical machine infrastructure of either regular PCs or blade ones.

Workload type: Hosts the OSs that provide the desktops to the users accessing them from the delivery network. The OS can be either installed or streamed, and can run on either physical or virtual machines.

High availability: This depends on the type of DHI used, but it can be achieved by using clustering for hypervisors, or by adding extra HW in the case of physical machines.

Provisioning servers infrastructure

Description: Provisioning servers are usually coupled with a XenDesktop deployment. Most of the organizations I have worked with never deployed preinstalled OSs to VMs or physical machines, but rather streamed the OSs to physical or virtual machines, since streaming provides huge storage optimization and utilization. Provisioning servers are the servers that actually stream the OS, in the form of a VHD, over the network to the machines requesting the streamed OS.

Workload type: Provisioning servers stream the OS as VHD files over the network to the machines, which can be either physical or virtual.

High availability: This can be achieved by adding extra PVS servers to the provisioning servers' farm and configuring them in the DHCP servers or in the bootstrap file.

XenDesktop system requirements

XenDesktop has different components; each has its own requirements.
The detailed requirements for these components can be checked and reviewed on the Citrix XenDesktop requirements website: http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-sys-reqs-wrapper-rho.html

Summary

In this article, we saw what the XenDesktop architecture consists of and its various components.

Resources for Article :

Further resources on this subject:
Application Packaging in VMware ThinApp 4.7 Essentials [Article]
Designing a XenApp 6 Farm [Article]
Managing Citrix Policies [Article]
Application Packaging in VMware ThinApp 4.7 Essentials

Packt
15 Jan 2013
19 min read
(For more resources related to this topic, see here.)

The capture and build environment

You cannot write a book about a packaging format without discussing the environment used to create the packages. The environment you use to capture an installation is of great importance. ThinApp uses a method of snapshotting when capturing an application installation. This means you create a snapshot (the pre-installation snapshot) of the current state of the machine. After modifying the environment, you create another snapshot, the post-installation snapshot. The differences between the two snapshots represent the changes made by the installer. This should be all the information you need in order to run the application.

Many packaging products use snapshotting as a method of capturing changes. The alternative would be to hook into the installer itself. Both methods have their pros and cons. Using snapshots is much more flexible: you don't even have to run an installer. You can copy files and create registry keys manually, and it will all be captured. But your starting point will decide the outcome. If your machine already contains the Java Runtime Environment (JRE) and the application you are capturing requires Java, then you will not be able to capture the JRE. Since it was already there when you took the pre-installation snapshot, it will not be part of the captured differences. This means your package would require Java to be installed locally, or it will fail to run; the package will not be self-contained. The other method, monitoring the installer, is more independent of the capturing environment, but it does not support all installer formats and does not support manual tweaking during capture.

Nothing is black or white. Snapshotting can be made a little more independent of the capture environment. When an installer discovers components already installed, it can register itself to the same components. ThinApp will recognize this, investigate which files are related to a component, and mark them as needing to be included in the package. But this is not a bulletproof method. So the general rule is to make sure your environment allows ThinApp to capture all required dependencies of the application.

ThinApp packages are able to support multiple operating systems with one single package. This is a great feature and really helps in lowering the overall administration of maintaining an application. The possibility of running the same package on your Windows XP clients, Windows 7 machines, and your XenApp servers is unique. Most other packaging formats require you to maintain one package per environment.

The easiest method to package an application is to capture it on the platform where it will run. Normally you can achieve an out-of-the-box success rate of 60 to 80 percent. This means you have not tweaked the project in any way. The package might not be ready for production, but it will run on a clean machine that does not have the application locally installed. If you want to support multiple operating systems, you should choose the lowest platform you need to support. Most of the time this would be Windows XP. From ThinApp's point of view, Windows XP and Windows Server 2003 are of the same generation, and Windows 7 and Windows Server 2008 R2 are of the same generation. Most installers are environment aware. They will investigate the targeted platform, and if an installer discovers a Windows 7 operating system, it knows that some files are already present in the same or a newer version than required.
Installing on Windows XP with no service pack forces those required files to be installed locally, and therefore captured by the capturing process. Having these files captured from an installation made on Windows XP rarely conflicts with running the application on Windows 7, and it helps you achieve multiple-OS support.

Creating a package for one single operating system is of course the easiest task. Creating a package supporting multiple operating systems, all being 32-bit systems, is a little harder; how hard depends on the application. Creating a package supporting many different OSs, in both 32-bit and 64-bit versions, is the hardest. But it is doable; it just requires a little extra packaging effort. Some applications cannot run on a 64-bit OS, but most applications offer some kind of workaround. If the application contains 16-bit code, then it's impossible to make it run in a 64-bit environment: 64-bit environments cannot handle 16-bit code. Your only workaround for those scenarios is whole-machine virtualization technology. VMware Workstation, VMware View, Citrix XenDesktop, Microsoft MED-V, and many others offer you the capability to access a virtualized 32-bit operating system on your 64-bit machine.

In general, you should use an environment that is as clean as possible. This will guarantee that all your packages include as many dependencies as possible, making them portable and robust. But it's not written in stone. If you are capturing an add-on to Microsoft Office, then Microsoft Office has to be locally installed in your capturing environment, or the add-on installer would fail to run. You must design your capture environment to allow flexibility. Sometimes you capture on Windows XP; the next application might be captured on Windows 7 64-bit. The next day you'll capture on a machine with the JRE installed, or Microsoft Office.

The use of virtual machines is a must. Physical machines are supported, but the hours spent reverting to a clean state before capturing the next application make them virtually useless. My capture environment is my MacBook Pro running VMware Fusion and several virtual machines, such as Windows XP, Windows Vista, Windows 7, Windows Server 2003, and of course Windows Server 2008. All VMs have several snapshots (states of the virtual machine), so I can easily jump back and forth between clean states, Microsoft Office installed, and different service packs and browsers. Yes, it will require some serious disk space. I'm always low on free disk space; no matter how big the disks you buy are, your project folders and virtual machines will eat it all. I have two disks in my MacBook: one SSD disk, where I keep most of my virtual machines, and one traditional hard disk, where I keep all my project folders.

The best capture environments I've ever seen have been hosted on VMware vSphere and ESX. Using server hardware to run client operating systems makes them fast as lightning. Snapshotting your VMs takes seconds, as does reverting snapshots. Access to the virtual machines hosted on VMware ESX can be achieved using a console within the vSphere Client, or basic RDP. The only downside I can see to using an ESX environment is that you cannot do packaging offline, while traveling.

The next logical question is whether my capture machine should be a domain member or standalone. It depends; I always prefer to capture on standalone machines. This way I know that group policies will not mess with my capture process.
No restrictions will block me from doing what I need to do. But again, sometimes you simply cannot capture an application without having access to a backend infrastructure. Then your capture machine must be on the corporate network, and most of the time that means it has to be a domain member. If possible, try putting the machine in a special Organizational Unit (OU) where you limit the number of restrictions.

If at all possible, make sure you don't have antivirus software installed on your capturing environment. I know that some enterprises have strict policies forcing even packaging machines to be protected by antivirus. But be careful: there is no way of telling what your antivirus may decide to do to your application's installation and the whole capture process. Most installer manuals clearly state that you should disable any antivirus during installation. They do that for a reason. Antivirus scanning logs and everything that follows will also pollute your project folder. It will probably not break your package, but I am a strong believer in delivering clean and optimized packages, so having antivirus means you will have to spend some time cleaning up your project folders. Alternatively, you can include the areas where the antivirus changes content in snapshot.ini, the Setup Capture exclusion list.

Entry points and the data container

An entry point is the doorway into the virtual environment for the end users. An entry point specifies what will be launched within the virtual environment. Mostly, an entry point points to an executable, for example, winword.exe. But an entry point doesn't have to point to an executable. You can point an entry point to whatever kind of file you want, as long as the file type has a file association made to it. Whatever is associated with the file type will be launched within the virtual environment. If no file type association exists, you will get the standard operating system dialog box asking you which application to open the file with. The name of the entry point must use an .exe extension. When the user double-clicks on an entry point, we are asking the operating system to execute the ThinApp runtime. Entry points are managed in Package.ini; you'll find them at the end of the Package.ini file.

The data container is the file where ThinApp stores the whole virtual environment and the ThinApp runtime. There can be only one data container per project. The content in the data container is an exact copy of the representation of the virtual environment found in your project folder. The data in the data container is in read-only format. It is the packager who changes the content, by rebuilding the project; an end user cannot change the content of the data container.

An entry point can be a data container. Setup Capture will recommend not using an entry point as a data container if Setup Capture believes that the package will be large (200 MB to 300 MB). The reason for this is that the icon of the entry point may take up to 20 minutes to be displayed. This is a feature of the operating system and there's nothing you can do about it. It's therefore better to store the data container in a separate file and keep your entry points small, making sure the icons are displayed quickly. Setup Capture will force you to use a separate data container when the size is calculated to be larger than 1.5 GB. Windows has a size limitation for executable files: Windows will deny executing a .exe file larger than 2 GB. The name of the data container can be anything.
You do not have to name it with a .dat extension; it doesn't have to have a file extension at all. If you're using a separate data container, you must keep the data container in the same folder as your entry points.

Let's take a closer look at the data container and entry point section of Package.ini. You'll find the data container and entry points at the end of the Package.ini file. The following is an example Package.ini file from a virtualized Mozilla Firefox:

[Mozilla Firefox.exe]
Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
;Change ReadOnlyData to bin\Package.ro.tvr to build with old versions (4.6.0 or earlier) of tools
ReadOnlyData=Package.ro.tvr
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
FileTypes=.htm.html.shtml.xht.xhtml
Protocols=FirefoxURL;ftp;http;https
Shortcuts=%Desktop%;%Programs%\Mozilla Firefox;%AppData%\Microsoft\Internet Explorer\Quick Launch

[Mozilla Firefox (Safe Mode).exe]
Disabled=1
Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
Shortcut=Mozilla Firefox.exe
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
CommandLine="%ProgramFilesDir%\Mozilla Firefox\firefox.exe" -safe-mode
Shortcuts=%Programs%\Mozilla Firefox

A step-by-step explanation of the parameters is given as follows:

[Mozilla Firefox.exe]

Within [] is the name of the entry point. This is the name the end user will see. Make sure to use .exe as your file extension.

Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe

The Source parameter points to the target of the entry point, that is, what will be launched when the user clicks on the entry point. The source can be either a virtualized or a physical file. The target will be launched within the virtual environment no matter where it lives.

ReadOnlyData=Package.ro.tvr

ReadOnlyData indicates that this entry point is in fact a data container as well.

WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox

This specifies the working directory for the launched executable. This is often a very important parameter. If you do not specify a working directory, the active working directory will be the location of your package. A lot of software depends on having its working directory set to the application's own folder in the Program Files directory.

FileTypes=.htm.html.shtml.xht.xhtml

This is used when registering the entry point. It specifies which file extensions should be associated with this entry point. The previous example registers .htm, .html, and so on to the virtualized Mozilla Firefox.

Protocols=FirefoxURL;ftp;http;https

This is used when registering the entry point. It specifies which protocols should be associated with this entry point. The previous example registers http, https, and so on to the virtualized Mozilla Firefox.

Shortcuts=%Desktop%;%Programs%\Mozilla Firefox

The Shortcuts parameter is also used when registering your entry points. It decides where shortcuts will be created. The previous example creates shortcuts to the virtualized Mozilla Firefox in a folder called Mozilla Firefox on the Start menu, as well as a shortcut on the user's desktop.

[Mozilla Firefox (Safe Mode).exe]
Disabled=1

Disabled means this entry point will not be created during the build of your project.

Source=%ProgramFilesDir%\Mozilla Firefox\firefox.exe
Shortcut=Mozilla Firefox.exe

Shortcut tells this entry point what its data container is named. If you change the data container's name, you will have to change the Shortcut parameter on all entry points using the data container.
WorkingDirectory=%ProgramFilesDir%\Mozilla Firefox
CommandLine="%ProgramFilesDir%\Mozilla Firefox\firefox.exe" -safe-mode

CommandLine allows you to specify hardcoded parameters for the executable. These are the native parameters supported by the virtualized application.

Shortcuts=%Programs%\Mozilla Firefox

There are many more parameters related to entry points. The following are some more examples with descriptions:

[Microsoft Office Enterprise 2007.dat]
Source=%ProgramFilesDir%\Microsoft Office\Office12\OSA.EXE
;Change ReadOnlyData to bin\Package.ro.tvr to build with old versions (4.6.0 or earlier) of tools
ReadOnlyData=Package.ro.tvr
MetaDataContainerOnly=1

MetaDataContainerOnly indicates that this is a separate data container.

[Microsoft Office Excel 2007.exe]
Source=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.EXE
Shortcut=Microsoft Office Enterprise 2007.dat
FileTypes=.csv.dqy.iqy.slk.xla.xlam.xlk.xll.xlm.xls.xlsb.xlshtml.xlsm.xlsx.xlt.xlthtml.xltm.xltx.xlw
Comment=Perform calculations, analyze information, and visualize data in spreadsheets by using Microsoft Office Excel.

Comment allows you to specify the text to be displayed when hovering your mouse over the shortcut to the entry point.

ObjectTypes=Excel.Addin;Excel.AddInMacroEnabled;Excel.Application;Excel.Application.12;Excel.Backup;Excel.Chart;Excel.Chart.8;Excel.CSV;Excel.Macrosheet;Excel.Sheet;Excel.Sheet.12;Excel.Sheet.8;Excel.SheetBinaryMacroEnabled;Excel.SheetBinaryMacroEnabled.12;Excel.SheetMacroEnabled;Excel.SheetMacroEnabled.12;Excel.SLK;Excel.Template;Excel.Template.8;Excel.TemplateMacroEnabled;Excel.Workspace;Excel.XLL

This specifies the object types that will be registered to the entry point when it is registered.

Shortcuts=%Programs%\Microsoft Office
StatusBarDisplayName=WordProcessor

StatusBarDisplayName lets you change the name displayed in the ThinApp splash screen. In this example, WordProcessor will be displayed as the title.

Icon=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.ico

Icon allows you to specify an icon for your entry point. Most of the time, ThinApp will display the correct icon without this parameter. You can point to an executable to use its built-in icons as well. You can select a different icon from a set by appending ,1 or ,2 and so on to the icon path, for example:

Icon=%ProgramFilesDir%\Microsoft Office\Office12\EXCEL.EXE,1

The most common entry points are cmd.exe and regedit.exe. You'll find them in all Package.ini files, but they are disabled by default. Since cmd.exe and regedit.exe most likely weren't modified during Setup Capture, they are not part of the virtual environment, so the source will be the native cmd.exe and regedit.exe. These two entry points are the packager's best friends. Using them allows a packager to investigate the environment known to the virtualized application: what you can see using cmd.exe or regedit.exe is what the application sees. This is a great help when troubleshooting.

If you package an add-on to a natively installed application (the typical example is packaging the JRE when you want the local Internet Explorer to be able to use it), creating an entry point within your Java package using the native Internet Explorer as the source is a perfect method of dealing with it. Now you can offer a separate shortcut to the user, allowing users to choose when to use native Java and when to use virtualized Java. ThinApp's isolation will allow virtualization of one Java version running on a machine with another version natively installed.
The only problem with this approach is how you educate your users on when to use which shortcut. ThinDirect, discussed later in this article in the Virtualizing Internet Explorer 6 section, will allow you to automatically point the user to the right browser. There are many use cases for launching something native within a virtualized environment. You may face troublesome Excel add-ons. Virtualizing them will protect against conflicts, but you must launch the native Excel within the environment of the add-on for it to work. Here you could use the fact that many Excel add-ons use .xla files as the typical entry point to the add-on. Create your entry point using the .xla file as the source, and you will be able to launch any Excel version that is natively installed. When you use a non-executable as your entry point source, remember that the name of your entry point must still end in .exe. The following is an example of an entry point using a text file as the source:

[ReadMe.exe]
Source=%Drive_C%\Temp\readme.txt
ReadOnlyData=Package.ro.tvr

Running ReadMe.exe will launch whatever is associated with handling .txt files. The application will run within the virtualized environment of the package.

The project folder

The project folder is where the packager spends most of his or her time. The capturing process is just a means to create the project folder. You could just as easily create your own project folder from scratch. I admit, manually creating a project folder representing a Microsoft Office installation would be far from easy, but in theory it can be done.

There is some default content in all project folders. Let's capture nothing and investigate what this is. During Setup Capture, to speed things up, disable the majority of the search locations. This way, the pre- and post-scans will take close to no time at all:

1. Run Setup Capture.
2. In the Ready to Prescan step, click on Advanced Scan Locations....
3. Exclude all but one location from the scanning, as shown in the following screenshot:

Since we want to capture nothing, there is no point in scanning all locations. Normally you don't have to modify the advanced scan locations.

4. After pressing Prescan, wait for Postscan to become available and click on it when possible, without modifying anything in your capturing environment.
5. Accept CMD.EXE as your entry point and accept all defaults throughout the wizard.

Your project folder will look like the following screenshot:

A capture bearing no changes will still create folder macros and default isolation modes in the project folder. Let's explore the defaults prepopulated by the Setup Capture wizard. This is the minimum project folder content that Setup Capture will ever generate. As a packager, you are expected to clean up unnecessary folders from the project folder, so your final project folder may very well contain a smaller number of folder macros.

Folder macros are ThinApp's variables. %ProgramFilesDir% will be translated to C:\Program Files on an English Windows installation, but when the same package runs on a Swedish OS, %ProgramFilesDir% will point to C:\Program. Folder macros are the key to the portability of ThinApp packages.

If we explore the filesystem part of the project folder, we'll see the default isolation modes prepopulated by Setup Capture. These are applied as defaults no matter which default filesystem isolation mode you choose during the Setup Capture wizard. This confuses some people. I'm often told that a certain package is using WriteCopy or Merged as the isolation mode.
Well, that's just the default used when no other isolation mode is specified. A proper project folder should have isolation modes specified on all locations of importance, basically making the default isolation mode of no importance. The prepopulated isolation modes are there to make sure most applications run out of the box when ThinApped. You are expected to change these to suit your application and environment.

Let's look at some examples of default isolation modes. %AppData%, the location where most applications store user settings, uses WriteCopy by default. This is to make sure that you sandbox all user settings. %SystemRoot% and %SystemSystem% have WriteCopy as their default isolation mode, allowing a virtualized application to see the operating system files without allowing it to modify C:\Windows and C:\Windows\System32. %SystemSystem%\spool, representing C:\Windows\System32\Spool, has Merged as its default. This way, print jobs will be spooled to the native location, allowing the printer to pick up the print job. %Desktop% (the user's desktop folder) and %Personal% (the user's documents folder) have Merged by default.

When ThinApp generates the project folder, it uses the following logic to decide which isolation mode to prepopulate other locations with (the same logic is used within the registry as well):

Modified locations will get WriteCopy as their isolation mode
New locations will get Full as their isolation mode
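If you want to audit which isolation modes a project folder actually ended up with, you can read them straight from the ##Attributes.ini files that ThinApp writes into the captured folders, each of which can carry a DirectoryIsolationMode entry. The following is a minimal PowerShell sketch; C:\Projects\Firefox is a hypothetical project folder path:

# Walk a ThinApp project folder and report every explicit isolation mode
$project = 'C:\Projects\Firefox'   # hypothetical project folder path

Get-ChildItem -Path $project -Recurse -Filter '##Attributes.ini' | ForEach-Object {
    # Pull the DirectoryIsolationMode line out of each ##Attributes.ini
    $line = Select-String -Path $_.FullName -Pattern '^DirectoryIsolationMode' | Select-Object -First 1
    if ($line) {
        '{0} -> {1}' -f $_.Directory.FullName, ($line.Line -split '=')[1]
    }
}

Folders that do not show up in the output simply fall back to the project's default isolation mode, as described above.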
Creating a sample C#.NET application

Packt
27 Jul 2012
4 min read
First, open C#.NET. Then, go to File | New Project | Windows Form Applications. Type the desired name for our project and click on the OK button.

Adding references

We need to add a reference to the System.Management.Automation.dll assembly. Adding this assembly is tricky; first, we need to copy the assembly file to our application folder using the following command:

Copy %windir%\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e35\System.Management.Automation.dll C:\Code\XA65Sample

where C:\Code\XA65Sample is the folder of our application. Then we need to add the reference to the assembly. In the Project menu, we need to select Add Reference, click on the Browse tab, and search for and select the file System.Management.Automation.dll. After referencing the assembly, we need to add the following directive statements to our code:

using System.Management.Automation;
using System.Management.Automation.Host;
using System.Management.Automation.Runspaces;

Also, adding the following directive statements will make it easier to work with the collections returned from the commands:

using System.Collections.Generic;
using System.Collections.ObjectModel;

Creating and opening a runspace

To use Microsoft Windows PowerShell and the Citrix XenApp Commands from managed code, we must first create and open a runspace. A runspace provides a way for the application to execute pipelines programmatically. Runspaces construct a logical model of execution using pipelines that contain cmdlets, native commands, and language elements.

So let's go and create a new function called ShowXAServers for the new runspace:

void ShowXAServers()

Then the following code creates a new instance of a runspace and opens it:

Runspace myRunspace = RunspaceFactory.CreateRunspace();
myRunspace.Open();

The preceding piece of code provides access only to the cmdlets that come with the default Windows PowerShell installation. To use the cmdlets included with the XenApp Commands, we must create the runspace using an instance of the RunspaceConfiguration class. The following code creates a runspace that has access to the XenApp Commands:

RunspaceConfiguration rsConfig = RunspaceConfiguration.Create();
PSSnapInException snapInException = null;
PSSnapInInfo info = rsConfig.AddPSSnapIn("Citrix.XenApp.Commands", out snapInException);
Runspace myRunSpace = RunspaceFactory.CreateRunspace(rsConfig);
myRunSpace.Open();

This code specifies that we want to use Windows PowerShell in the XenApp Commands context. This step gives us access to the Windows PowerShell cmdlets and the Citrix-specific cmdlets.

Running a cmdlet

Next, we need to create an instance of the Command class using the name of the cmdlet that we want to run. The following code creates an instance of the Command class that will run the Get-XAServer cmdlet, adds the command to the Commands collection of the pipeline, and finally runs the command by calling the Pipeline.Invoke method:

Pipeline pipeLine = myRunSpace.CreatePipeline();
Command myCommand = new Command("Get-XAServer");
pipeLine.Commands.Add(myCommand);
Collection<PSObject> commandResults = pipeLine.Invoke();

Displaying results

Now we run the command Get-XAServer on the shell and get this output:

In the left-hand side column, the properties of the cmdlet are located, and in this case, we are looking for the first one, ServerName, so we are going to redirect the output of the ServerName property to a ListBox. The next step will be to add a ListBox and a Button control. The ListBox will show the list of XenApp servers when we click the button.
Then we need to add the following code at the end of the function ShowXAServers:

foreach (PSObject cmdlet in commandResults)
{
    string cmdletName = cmdlet.Properties["ServerName"].Value.ToString();
    listBox1.Items.Add(cmdletName);
}

The full code of the sample will look like this:

And this is the final output of the application when we run it:

Passing parameters to cmdlets

We can pass parameters to cmdlets using the Parameters.Add option. We can add multiple parameters; each parameter will require a line. For example, we can add the ZoneName parameter to filter the servers that are members of the US-ZONE zone:

Command myCommand = new Command("Get-XAServer");
myCommand.Parameters.Add("ZoneName", "US-ZONE");
pipeLine.Commands.Add(myCommand);

Summary

In this article, we have learned about managing XenApp with Windows PowerShell and developed a sample .NET application in C#.NET. Specifically, we saw:

How to list all XenApp servers by using the Citrix XenApp Commands
How to add a reference to the System.Management.Automation.dll assembly
How to create and open a runspace, which helps to execute pipelines programmatically
How to create an instance of the Command class using the name of a cmdlet
How to pass parameters to cmdlets

Further resources on this subject:
Designing a XenApp 6 Farm [Article]
Getting Started with XenApp 6 [Article]
Microsoft Forefront UAG Building Blocks [Article]
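Before embedding the call in C#, it can be worth confirming the snap-in and cmdlet directly in a PowerShell session on the XenApp server. This is a minimal sketch using the same cmdlet, parameter, and example zone name shown above:

# Load the Citrix XenApp Commands snap-in and list the servers in one zone
Add-PSSnapin Citrix.XenApp.Commands
Get-XAServer -ZoneName "US-ZONE" | Select-Object ServerName

If this returns the expected server names, the same pipeline invoked from the C# runspace should return the same PSObject collection.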
VMware View 5 Desktop Virtualization

Packt
15 Jun 2012
8 min read
Core components of VMware View

This book assumes a familiarity with server virtualization, more specifically VMware vSphere (sometimes referred to as ESX by industry graybeards). Therefore, this article will focus on:

The VMware vCenter Server
The types of View Connection Server
Agent and client software

vCenter Server

VMware vCenter is a required component of a VMware View solution. This is because the View Connection Server interacts with the underlying Virtual Infrastructure (VI) through the vCenter web service (typically over port 443). vCenter is also responsible for the complementary components of a VMware View solution provided by the underlying VMware vSphere, including vMotion and DRS (used to balance the virtual desktop load on the physical hosts). When an end customer purchases VMware View bundles, VMware vCenter is automatically included and does not need to be purchased via a separate Stock Keeping Unit (SKU). In environments leveraging vSphere for server virtualization, vCenter Server is likely to already exist. To ensure a level set on the capabilities that VMware vCenter Server provides, the key terminology is listed as follows:

vMotion: The ability to live migrate a running virtual machine from one physical server to another with no downtime.
DRS: The vCenter Server capability that balances virtual machines across the physical servers participating in the same vCenter Server cluster.
Cluster: A collection of physical servers that have access to the same networks and shared storage. The physical servers participating in a vCenter cluster have their resources (for example, CPU, memory, and so on) logically pooled for virtual machine consumption.
HA: The vCenter Server capability that protects against the failure of a physical server. HA will power up the virtual machines that resided on the failed physical server on the available physical servers in the same cluster.
Folder: A logical grouping of virtual machines, displayed within the vSphere Client.
vSphere Client: The client-side software used to connect to vCenter Servers (or physical servers running vSphere) for management, monitoring, configuration, and other related tasks.
Resource pool: A logical pool of resources (for example, CPU, memory, and so on). The virtual machines (or groups of virtual machines) residing in the same resource pool share a predetermined amount of resources.

Designing a VMware View solution often touches on typical server virtualization design concepts, such as proper cluster design. Owing to this overlap in design concepts between server virtualization and VDI, many server virtualization engineers apply exactly the same principles from one solution to the other. The first misstep that a VDI architect can take is to treat VDI as server virtualization: it is not, and should not be treated as such. Server virtualization is the virtualization of server operating systems. While it is true that VDI does use some server virtualization (for the connection infrastructure, for example), there are many concepts that are new and critical to understand for success.

The second misstep a VDI architect can make is in underestimating the pure scale of some VDI solutions. The average server virtualization administrator with no VDI in their environment may be tasked with managing a dozen physical servers with a few hundred virtual machines.
The authors of this book have been involved in VDI solutions involving tens of thousands of vDesktops, well beyond the limits of a traditional VMware vSphere design; VDI is often performed on a different scale. The concepts of architectural scaling are covered later in this book, but many of them revolve around the limits of VMware vCenter Server. It should be noted that VMware vCenter Server was originally designed to be the central management point for enterprise server virtualization environments. While VMware continues to work on its ability to scale, designing around the limits of VMware vCenter Server remains important.

So why does the VDI architect need VMware vCenter in the first place? VMware vCenter is the gateway for all virtual machine tasks in a VMware View solution. This includes the following tasks:

- The creation of virtual machine folders to organize vDesktops
- The creation of resource pools to segregate physical resources for different groups of vDesktops
- The creation of vDesktops
- The creation of snapshots

VMware vCenter is not used to broker the connection of an end device to a vDesktop. Therefore, an outage of VMware vCenter should not impact inbound connections to already-provisioned vDesktops, although it will prevent additional vDesktops from being built, refreshed, or deleted. Because of vCenter Server's importance in a VDI solution, additional steps are often taken to ensure its availability, even beyond the considerations made in a typical server virtualization solution. Later in this book, there is a question asking whether an incumbent vCenter Server should be used for an organization's VDI or whether a secondary vCenter Server infrastructure should be built.

View Connection Server

View Connection Server is the primary component of a VMware View solution; if VMware vCenter Server is the gateway for management communication to the virtual infrastructure and the underlying physical servers, the VMware View Connection Server is the gateway that end users pass through to connect to their vDesktops. In classic VDI terms, it is VMware's broker, connecting end users with workspaces (physical or virtual). View Connection Server is the central point of management for the VDI solution and is used to manage almost the entire solution infrastructure. However, there will be times when the architect will need to make considerations for vCenter cluster configurations, as discussed later in this book. In addition, there may be times when the VMware View administrator will need access to the vCenter Server.

The types of VMware View Connection Servers

There are several options available when installing the VMware View Connection Server, so it is important to understand the different types of View Connection Servers and the role each plays in a given VDI solution. Following are the three configurations in which View Connection Server can be installed:

- Full: This option installs all the components of View Connection Server, including a fresh Lightweight Directory Access Protocol (LDAP) instance.
- Security: This option installs only the components necessary for the View Connection portal. View Security Servers do not need to belong to an Active Directory domain (unlike the View Connection Server) as they do not access any authentication components (for example, Active Directory).
- Replica: This option creates a replica of an existing View Connection Server instance for load balancing or high availability purposes. The authentication/LDAP configuration is copied from the existing View Connection Server.
Our goal is to design solutions that are highly available for our end customers. Therefore, all the designs will leverage two or more View Connection Servers (for example, one Full and one Replica).

The following services are installed during a Full installation of View Connection Server:

- VMware View Connection Server
- VMware View Framework Component
- VMware View Message Bus Component
- VMware View Script Host
- VMware View Security Gateway Component
- VMware View Web Component
- VMware VDMDS (provides the LDAP directory services)

View Agent

View Agent is a component installed on the target desktop, whether physical (seldom) or virtual (almost always). View Agent allows the View Connection Server to establish a connection to the desktop. View Agent also provides the following capabilities:

- USB redirection: Making a USB device that is connected locally appear to be connected to the vDesktop
- Single Sign-On (SSO): Intelligent credential handling that requires only one secured and successful authentication login request, as opposed to logging in multiple times (for example, at the connection server, vDesktop, and so on)
- Virtual printing via ThinPrint technology: The ability to streamline printer driver management through the use of ThinPrint (OEM)
- PCoIP connectivity: The purpose-built VDI protocol made by Teradici and used by VMware in their VMware View solution
- Persona management: The ability to manage a user profile across an entire desktop landscape; the technology comes via VMware's acquisition of RTO Software
- View Composer support: The ability to use linked clones and thin provisioning to drastically reduce the operational effort of managing a mid-to-large-scale VMware View environment

View Client

View Client is a component installed on the end device (for example, the user's laptop). View Client allows the device to connect to a View Connection Server, which then directs the device to an available desktop resource. Following are the two types of View Clients:

- View Client
- View Client with Local Mode

These separate versions have their own unique installation bits (only one may be installed at a time). View Client provides all of the functionality needed for an online and connected worker. If Local Mode will be leveraged in the solution, View Client with Local Mode should be installed. VMware View Local Mode is the ability to securely check out a vDesktop to a local device for use in disconnected scenarios (for example, in the middle of the jungle). There is roughly an 80 MB difference between the installed packages (View Client with Local Mode being the larger). For most scenarios, 80 MB of disk space will not make or break the solution, as even flash drives are well beyond an 80 MB threshold.

In addition to providing the functionality of being able to connect to a desktop, View Client talks to View Agent to perform tasks such as USB redirection and Single Sign-On.

Tips and Tricks on Microsoft Application Virtualization 4.6

Packt
25 Jan 2011
7 min read
Getting Started with Microsoft Application Virtualization 4.6

- Virtualize your application infrastructure efficiently using Microsoft App-V
- Publish, deploy, and manage your virtual applications with App-V
- Understand how Microsoft App-V can fit into your company
- Guidelines for planning and designing an App-V environment
- Step-by-step explanations to plan and implement the virtualization of your application infrastructure

Advantages of the sequencing process

Sequencing is the process in which the App-V Sequencer monitors and captures the files and environment changes (such as registry modifications) created by an application installation. Once the capturing process is complete, the sequencing process ends by building an App-V package ready to be delivered to clients by a streaming method or simply via an MSI. To achieve this, the sequencing process creates a virtual environment that is isolated from the operating system, avoiding most conflicts with other applications or components existing on the client's operating system.

Application Virtualization quick facts

Here are some facts about Application Virtualization:

- Applications are not installed on clients; they are published.
- With Application Virtualization we can achieve the co-existence of otherwise incompatible applications, like Microsoft Office 2007 and Microsoft Office 2010.
- Applications are installed only once, on a reference computer, where the package is captured and prepared.
- You can capture a set of interconnected applications into a single package.
- The capturing process is in most cases transparent; it identifies the environment that the application requires to work, like files and registry keys.
- Application Virtualization offers the possibility of centralized management: there is one point from which we handle virtualized applications and their distribution behavior in our environment.
- Even though you can create a package of almost any software, not all applications can be virtualized. Some can be quite tricky to pack into one bundle, and applications that require deep operating system integration can generate known issues.

App-V Management Server sizing

An App-V Management Server can maintain 12,000 publishing refreshes per minute. If your requirements are higher, you need to set up additional Management Servers, either manually separating the applications to be distributed between them (remember, multiple App-V Management Servers can use the same database) or deploying your servers with load-balancing features (hardware or software load balancing). A back-of-the-envelope sizing check appears at the end of this section.

Implementing Dynamic Suite Composition (DSC)

Dynamic Suite Composition gives us the possibility of "one-to-many" scenarios, where you have one primary application with several secondary applications. But Dynamic Suite Composition is not in charge of managing and controlling the interaction between all these applications. That's why you must be careful about which applications you select as secondary, as not all are suited for this role. When you discuss implementing DSC in your organization, you must always remember that DSC is only in charge of sharing the virtual environment between the App-V packages.

Level of dependency supported in DSC

An important thing to note when using Dynamic Suite Composition in Microsoft Application Virtualization is that a primary application can have more than one secondary application, but only one level of dependency is supported. You cannot define a secondary package as dependent on another secondary package.
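To put the 12,000-refreshes-per-minute figure to work, here is a minimal sizing sketch in plain Python (this is not an App-V tool; the assumption that most refreshes land in a short logon window, and the example numbers, are hypothetical):

```python
import math

# Publishing-refresh capacity per Management Server, as quoted in this article.
REFRESHES_PER_MINUTE_PER_SERVER = 12_000

def management_servers_needed(users: int, peak_window_minutes: int) -> int:
    """Estimate servers required if all users refresh within a peak logon window."""
    peak_refreshes_per_minute = users / peak_window_minutes
    return math.ceil(peak_refreshes_per_minute / REFRESHES_PER_MINUTE_PER_SERVER)

# Hypothetical example: 150,000 users logging on across a 10-minute morning peak.
print(management_servers_needed(150_000, 10))  # -> 2
```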
Deploying 16-bit applications to 64-bit clients

Microsoft App-V 4.6 includes, among several other improvements and changes, the possibility of virtualizing 64-bit applications and using 64-bit operating system clients. But there's one disclaimer: sequencing and deploying 16-bit applications to 64-bit clients is not supported. This is a restriction of 64-bit operating systems in general, not only of virtual applications.

Sequencing and deploying applications on different operating systems

Even though Microsoft officially requires the same operating system for sequencing and deployment, you can find several examples of applications that work normally across different operating systems.

SQL database size

The size of the App-V database depends principally on application launches and retained reporting information. Microsoft provides a small equation to calculate the approximate growth of the database (a quick worked check appears at the end of this section):

(560 bytes per launch and shutdown) x (number of launches per day) x (user population) = daily database growth

For example, 10,000 users who launch and shut down one application per hour every day translate to roughly 125 MB per day.

Streaming Servers and network bandwidth

RTSP/S does not include tools to limit the use of network bandwidth. This is why it is highly recommended that you only stream applications between networks with a high-speed link. Even though, for Streaming Servers, the process of delivering applications does not translate to high processor or memory usage, using secure communications with RTSPS or HTTPS introduces a minimum of overhead you should consider.

App-V client cache

The client cache is another option you can combine with the selected streaming strategy. Having a large cache on each client will translate to lower network usage. You should also evaluate this when you start sequencing applications: the size of your App-V packages will let you estimate the proper amount of cache needed.

Software licenses

Application Virtualization touches a significant matter in many organizations: application licensing. Application Virtualization can also maintain a central point for software licenses, allowing you to keep track of the current licensing situation of all your applications. Using named licenses on each App-V package, you can guarantee that only users who have the appropriate license can run the application. And if you are using concurrent licenses for an application, App-V license management will only let the application run the number of times that is permitted. But you must also be cautious with the acquired licenses, as not all licensed applications support virtualization; for example, some applications depend on, and are attached to, particular hardware components, like a MAC address.

Virtualization support by the application vendor

Not all applications are suitable for virtualization. Each App-V package generates its own virtual environment, but some applications require a high degree of integration with the operating system, making the virtualized application unstable or incapable of working. A good example is antivirus software.

Installing the App-V Management Console on a different machine

Installing the App-V Management Console on a different machine is possible but not simple. The App-V team created a configuration guide to achieve this, which you can access at the official Microsoft App-V blog: http://blogs.technet.com/b/appv/archive/2009/04/21/app-v-4-5-remote-consoleconfiguration-guide.aspx.
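As that quick worked check, here is the database-growth formula in plain Python (a sketch; the 560-byte constant comes from the guidance quoted above, and the example inputs mirror the article's scenario):

```python
# Bytes written to the reporting database per application launch-and-shutdown pair.
BYTES_PER_LAUNCH_AND_SHUTDOWN = 560

def daily_growth_mb(launches_per_user_per_day: int, users: int) -> float:
    """Estimated daily App-V database growth, in megabytes."""
    daily_bytes = BYTES_PER_LAUNCH_AND_SHUTDOWN * launches_per_user_per_day * users
    return daily_bytes / (1024 ** 2)

# 10,000 users, one launch/shutdown per hour across a 24-hour day:
print(f"{daily_growth_mb(24, 10_000):.0f} MB/day")  # ~128 MB, in line with the article's ~125 MB
```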
An online assessment tool to achieve Dynamic IT

Once you run this wizard-like tool, you receive a complete report on how to optimize your infrastructure in areas like Identity and Access; Desktop, Device, and Server Management; Security and Networking; Data Protection; and IT Process. Access the online tool at http://www.microsoft.com/infrastructure/about/assessment-start.aspx.

IT Compliance Management Series

Guidelines oriented to IT governance, risk, and compliance requirements. Download the series from the Microsoft Download Center at http://www.microsoft.com/downloads/en/default.aspx.

Windows Optimized Desktop Scenarios Solution Accelerator

A guideline for properly planning and designing applications and operating systems in your organization. This accelerator will be useful when you start thinking about App-V. More information is available at http://www.microsoft.com/infrastructure/resources/desktop-accelerators.aspx.

Infrastructure Planning and Design Guides for Virtualization

Complete references for designing a virtualization strategy; you will find specialist guides for App-V, Remote Desktop Services (formerly known as Terminal Services), System Center Virtual Machine Manager, Windows Server Virtualization, Desktop Virtualization, and so on. More information is available at http://technet.microsoft.com/en-us/solutionaccelerators/ee395429.aspx.

FAQ on Virtualization and Microsoft App-V

Packt
24 Jan 2011
8 min read
Getting Started with Microsoft Application Virtualization 4.6

Q: What is the need for virtualization?

A: With virtual environments we rapidly start gaining the agility, scalability, cost savings, and security that almost any business today requires. Following are some advantages:

- Reduces infrastructure downtime by several hours
- Saves the time and resources that are spent deploying and providing operating systems to users
- Saves troubleshooting time for application installations

Q: How do cloud service models assist virtualization?

A: The cloud service model is all around us and presents several new ways of thinking about technology:

- Software as a Service (SaaS or S+S): Delivering applications over the network without any local installations or maintenance.
- Platform as a Service (PaaS): Providing solutions, like an Active Directory solution, as a service, avoiding the deployment tasks.
- Infrastructure as a Service (IaaS): Supplying computer infrastructure as a service. Instead of companies thinking about buying new hardware and the maintenance costs that implies, the infrastructure is provided (typically in virtual machines) as they need it.

Q: Where do we stand today with regard to virtualization?

A: Fortunately, today's demand for virtualization is incredibly high, which is why the possibilities and offerings are even higher. We can virtualize servers, appliances, desktops, and applications, achieving presentation and profile virtualization; you name it and there's probably already a bunch of products and technologies you can use to virtualize it. Application virtualization is still one of the emerging platforms but is growing rapidly in the IT world. More and more of the dynamic aspects of isolating and scaling application deployments are being implemented, and Microsoft's App-V represents one of the strongest technologies we can rely on.

Q: How does virtualization achieve faster and more dynamic deployments?

A: Handling server or desktop deployments is always a painful thing to do, requiring hours of deployment, tuning, and troubleshooting; all of these aspects are inherent in any operating system lifecycle. Having virtual machines as baselines reduces OS deployment from several hours to a few minutes. The desktop virtualization concept provides the end user the same environment as a local desktop computer while working with remote computer resources. Taking this strategy will enhance the provisioning of desktop environments: more resources can be added on demand, and the deployment will no longer depend on specific hardware. Ready-to-go virtual machine templates, and self-service portals to provision virtual machines for our power users whenever they need a virtual environment to test an application, are some of the other features that a virtualization platform can include.

Q: How does virtualization achieve cost savings?

A: Listed below are two major factors that achieve cost savings:

- Lower power consumption: Large datacenters also mean large electricity consumption; removing the physical layer from your servers will translate yearly to a nicely reduced number on the electrical bills. This is no small matter; most capacity and cost planning for implementing virtualization also includes the "power consumption" variable. It won't be long until "Green Datacenters" and "Green IT" are a requirement for every mid-size and large business.
- Hardware cost savings: Before thinking you need expensive servers to host your entire infrastructure, let me ask you this: did you know that average hardware resource usage is around 5% to 7%? That means we are currently wasting around 90% of the money invested in that hardware. Virtualization will optimize and protect your investment; we can guarantee that the consolidation of your servers will be not only effective but also efficient.

Q: How does virtualization improve efficiency?

A: There's a common scenario in several organizations where some users depend only on, for example, the Office suite for their work, but the cost of applying a different hardware baseline that fits their needs exactly is extremely high. That is why efficiency is also an important variable in your desktop strategy: using desktop virtualization you can be certain that you are not over- or under-resourcing any end-user workstation. You can easily provide all the necessary resources to a user for as long as they are needed.

Q: How does virtualization achieve scalable and easy-to-manage platforms?

A: A new contingency layer for every machine, taking a snapshot of an operating system, is a concept that didn't exist before virtualization. Whenever you introduce a change in your platform (like a new service pack release) there's always a risk that things won't be just as fine as they were before. Having a quick, immediate, and safe restore point for a server or desktop can represent a cost-saving solution. Virtual machines and snapshots will give you the features necessary to manage and easily maintain your labs for testing updates or environment changes, and even the facility to add or remove memory, CPUs, hard drives, and other devices on a machine in just a few seconds.

Q: How does virtualization enhance backup and recovery?

A: Virtual environments will let you redesign your disaster recovery plan and minimize any disruption to the services you are providing. The possibilities around hot backups and straightforward recoveries of virtual machines will give you the chance to arrange and define different service level agreements (SLAs) with your customers and company. The virtualization model offers you the possibility of removing the hardware dependencies of your roles, services, and applications; a hardware failure can then present only a minor issue for the continuity of your business, simply by moving the virtual machines to different physical servers without major disruption.

Q: How is the application deployment incompatibility issue addressed?

A: Inserting a virtualized environment into our application deployments reduces the time invested in maintaining and troubleshooting operating system and application incompatibilities. Allowing applications to run in a virtualized and isolated environment every time they are deployed removes possible conflicts with other applications. It is also a common scenario for most organizations to face incompatibility issues with their business applications whenever there's a change: a new operating system, new hardware, or even problems with the development of the application that start generating issues in particular environments. You can say goodbye to those problems, facilitating real-time and secure deployments of applications that are decoupled from tons of requirements.

Q: What is Application Virtualization?
A: Just as virtual machines abstract the hardware layer from physical servers, application virtualization abstracts an application and its dependencies from the operating system, effectively isolating the application from the OS and from other applications.

Application Virtualization, in general terms, represents a set of components and tools that remove the complexity of deploying and maintaining applications for desktop users, leaving only a small footprint on the operating system. Getting more specific, Application Virtualization is a process for packaging (or virtualizing) an application and the environment in which the application works, and distributing this package to end users. The use of this package (which can contain more than one application) is completely decoupled from the common requirements (like the installation and uninstallation processes) attached to applications. The Technical Overview of Application Virtualization offered by Microsoft presents a fine graphical explanation of how normal applications interact with the operating system and its components, and how virtualized applications do the same. Take a look at http://www.microsoft.com/systemcenter/appv/techoverview.mspx.

Q: What are the drawbacks of a normal business application scenario?

A: The three major aspects are:

- Special configurations every time the application is deployed: customizing files or setting special values within the application's configuration environment.
- Interconnections with other applications (for example, Java Runtime Environment, a local database engine, or some other particular requirement).
- Several hours demanded every week to support end-user deployments and troubleshoot configurations.

Application Virtualization offers us the possibility to guarantee that end users always have the same configuration deployed, no matter when or where, as you only need to configure it once and then wrap the entire set of applications into one package.

Q: How does Application Virtualization differ from running normal applications?

A: In standard OS environments, applications install their settings onto the host operating system, hard-coding the entire system to fit that application's needs. Other applications' settings can be overwritten, possibly causing them to malfunction or break. A common example is two applications co-existing in the same operating system: if these applications share some registry values, the usability of an application (or even the operating system) can be compromised. With Application Virtualization, each application brings down its own set of configurations on demand and executes in a way such that it only sees its own settings. Each virtual application can read and write information in its application profile, and can access operating system settings in the registry or DLLs, but cannot change them. (The toy sketch below illustrates the idea.)
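To make the "sees only its own settings" behavior concrete, here is a toy sketch in Python. This is not App-V code, just an illustration of the principle: a per-application write layer over shared, read-only OS settings.

```python
# Toy model of a virtualized application's view of OS settings:
# reads fall through to the shared OS layer, writes land in a private layer.

class VirtualSettings:
    def __init__(self, os_settings: dict):
        self._os = os_settings      # shared OS settings, read-only from the app's view
        self._private = {}          # this application's own write layer

    def read(self, key):
        # The app sees its own writes first, then the underlying OS value.
        return self._private.get(key, self._os.get(key))

    def write(self, key, value):
        # Writes never touch the real OS settings.
        self._private[key] = value

os_registry = {"PDFHandler": "SystemReader"}
app_a = VirtualSettings(os_registry)
app_b = VirtualSettings(os_registry)

app_a.write("PDFHandler", "ReaderA")
print(app_a.read("PDFHandler"))   # ReaderA - app A sees its own setting
print(app_b.read("PDFHandler"))   # SystemReader - app B is unaffected
print(os_registry["PDFHandler"])  # SystemReader - the OS itself is untouched
```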

Microsoft Application Virtualization: Managing Dynamic Suite Composition

Packt
12 Jan 2011
6 min read
With Dynamic Suite Composition, you get the chance to avoid large packages or redundant sets of components virtualized separately in different applications. You can virtualize one application normally, like Microsoft Office, and in a different package a Microsoft Office plugin that will only be streamed down to the clients whenever it is required.

Having separate environments not only reduces the chance of application components being captured more than once in different packages, but also gives us control of these dependencies and lets you distribute the applications with more accuracy to users, achieving a one-to-many strategy for applications; for example, having several web application packages but only one Java Runtime Environment (JRE) used by all of them.

The dependencies mentioned in DSC are nothing but parameters in the virtual environment of each application. Manually editing the OSD file represents the main task in DSC preparation but, of course, the risks increase, as editing some of these parameters could end up causing erratic functionality in the App-V packages. Fortunately, Microsoft provides the Dynamic Suite Composition tool, with which you can easily establish the necessary communication channels between applications.

How Dynamic Suite Composition works

Dynamic Suite Composition represents the way in which administrators can define dependencies between App-V packages and guarantee end users transparent operability of applications. In normal use of the operating system, you can find several applications that depend on other applications. Probably the best example is web applications interacting constantly (from a browser, of course) with Java Runtime Environment, Silverlight, and other applications like a PDF reader. DSC is also suitable for any plugin scenario for other large applications like Microsoft Office.

Dynamic Suite Composition always identifies two types of applications:

- Primary application: This is the main application and is usually the full software product. This should be identified as the application users execute primarily, before needing a second application.
- Secondary application: This is usually a plugin attached to the primary application. It can be streamed in the normal way and does not need to be fully cached to run.

An important note is that a primary application can have more than one secondary application, but only one level of dependency is supported. You cannot define a secondary package as dependent on another secondary package. The App-V Sequencing Guide by Microsoft includes an example showing the "many-to-one" and "one-to-many" relationships in Dynamic Suite Composition.

These application dependencies are customizable in the OSD file, the App-V configuration file for virtual applications, where you are going to use the <DEPENDENCIES> tag in the primary application, adding the identifiers of the secondary application(s). Every time the main application needs a secondary application (like a plugin), it can find the right path and execute it without any additional user intervention.

In some environments you can find secondary applications that are a requirement for normal use of a primary application. With DSC, you can also use the variable MANDATORY=TRUE in the primary application's OSD file. This value is added at the end of the secondary application reference.
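As a sketch only, the dependency entry in the primary application's OSD file might look like the following. The element layout follows the description above; the server variable, GUID, and package paths are placeholders, and the exact schema should be verified against the App-V Sequencing Guide for your version:

```xml
<!-- Hypothetical fragment of the primary application's OSD file.
     HREF, GUID, and SYSGUARDFILE values are placeholders. -->
<DEPENDENCIES>
  <CODEBASE HREF="RTSP://%SFT_SOFTGRIDSERVER%:554/AdobeReader9/AdobeReader9.sft"
            GUID="01234567-89AB-CDEF-0123-456789ABCDEF"
            SYSGUARDFILE="AdobeReader9.1\osguard.cp"
            MANDATORY="TRUE" />
</DEPENDENCIES>
```

Without MANDATORY="TRUE", the secondary package is treated as optional, so the primary application can still launch when the dependency is unavailable.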
DSC does not control the interaction

When you discuss implementing Dynamic Suite Composition in your organization, you must always remember that DSC does not control the interaction between the two applications; it is only in charge of sharing the virtual environment between the App-V packages.

SystemGuard, the virtual environment created for each App-V application, is stored in one file, OSGuard.cp (you can find it in the OSD file description under the name SYSGUARDFILE). Once the application is distributed, every change made by the client operating system and/or the user is stored in Settings.cp. DSC works by sharing the Settings.cp file between the primary and secondary applications, while each always maintains its own OSGuard.cp. This guarantees the interaction between the two applications, but Dynamic Suite Composition does not control how this interaction occurs or which components are involved.

The consequence is that, in a complex DSC scenario where a primary application has several secondary applications using the same shared environment, conflicts may appear in the virtual environment: secondary applications overriding DLLs or registry keys that were already being used by another secondary application in the same environment. If a conflict occurs, the last application to load wins. Note that user data, which is saved in PKG files, is kept within the secondary application's package.

Dynamic Suite Composition was designed to be used in simple dependency situations, and not when you want full, large software packages interacting as secondary packages. Microsoft Office is a good example of software that must be used as a "primary application" but never as a "secondary application".

Configuring DSC manually

Now you are going to take a closer look at configuring Dynamic Suite Composition. When you are working with DSC, using virtual machine snapshots is highly recommended, as both applications, the primary and the secondary, must be captured separately.

This example will use an environment familiar to most, integrating an Internet browser with another program: Mozilla Firefox 3.6 with Adobe Reader 9. The scenario is well known to most users: a PDF file needs to be opened from within the browser while surfing around (or arrives as an attachment via web mail). If a PDF reader is not a requirement on the client machines, you would otherwise be obligated to capture and deliver one to all possible users, even though they only use it when a PDF file appears in their browsing session. Using Dynamic Suite Composition you can easily configure the browser, Mozilla Firefox, with a secondary application, Adobe Reader, which will only be streamed down to clients if the browser accesses a PDF file.

These are the steps to follow:

1. Log on to the sequencer machine using a clean image; install and capture the primary application.
2. Import the primary application in the App-V Management Server.
3. Set the permissions needed for the selected users.
4. Restore the sequencer operating system to the base clean image.
5. Install the primary application locally again; do not capture this installation.
6. Install and capture the secondary application with the App-V Sequencer.
7. Import the secondary application in the App-V Management Server.
8. Set the permissions needed for the selected users.
9. Modify the dependencies in the OSD file of the primary application.

Here is a detailed look at this process.
Install and capture the primary application, Mozilla Firefox. This example uses an already captured application; you can review the process of sequencing Mozilla Firefox in the previous chapter. Here's a quick look at the procedure:

1. Start the App-V Sequencer application and begin the capture.
2. Start the Mozilla Firefox installation (if using Windows 7, you may need to use compatibility mode for the installer).
3. Select the installation folder on Q:, using an 8.3 name for the folder.
4. Complete the installation.
5. Stop the capturing process and launch the application. It is recommended to disable automatic updates in the Mozilla Firefox options.
6. Complete the package customization and save the App-V project.