
How-To Tutorials - Virtualization

115 Articles

Moving from Windows to Appliance

Packt
12 Oct 2016
7 min read
In this article, Daniel Langenhan, author of VMware vRealize Orchestrator Cookbook, Second Edition, shows you how to move an existing Windows Orchestrator installation to the appliance. With vRO 7, the Windows installation of Orchestrator no longer exists.

Getting ready

We need an Orchestrator installed on Windows. Download the same version of the Orchestrator appliance as you have installed in the Windows version. If needed, upgrade the Windows version to the latest possible one.

How to do it...

There are three ways: using the migration tool, repointing to an external database, or exporting/importing the packages.

Migration tool

There is a migration tool that comes with vRO7 that allows you to pack up your vRO5.5 or 6.x install and deploy it into a vRO7. The migration tool works on Windows and Linux. It collects the configuration, the plug-ins as well as their configuration, certificates, and licensing into a file. Follow these steps to use the migration tool (a consolidated command-line sketch of the export steps appears after the External database section below):

1. Deploy a new vRO7 appliance.
2. Log in to your Windows Orchestrator OS.
3. Stop the VMware vCenter Orchestrator Service (Windows services).
4. Open a web browser, log in to your new vRO7 Control Center, and go to Export/Import Configuration. Select Migrate Configuration and click on the here link. The link points to https://[vRO7]:8283/vco-controlcenter/api/server/migration-tool.
5. Stop the vRO7 Orchestrator service.
6. Unzip migration-tool.zip and copy the subfolder called migration-cli into the Orchestrator directory, for example, C:\Program Files\VMware\Infrastructure\Orchestrator\migration-cli\bin.
7. Open a command prompt. If you have a Java install, make sure your path points to it. Try java -version. If that works, continue; if not, set the PATH environment variable to the Java install that comes with Orchestrator: set PATH=%PATH%;C:\Program Files\VMware\Infrastructure\Orchestrator\Uninstall_vCenter Orchestrator\uninstall-jre\bin
8. CD to the directory ..\Orchestrator\migration-cli\bin.
9. Execute the command vro-migrate.bat export. There may be errors showing about SLF4J; you can ignore those.
10. In the main directory (..\Orchestrator) you should now find an orchestrator-config-export-VC55-[date].zip file.
11. Go back to the web browser and upload the ZIP file into Migrate Configuration by clicking on Browse and selecting the file.
12. Click on Import. You now see what can be imported. You can unselect items you don't wish to migrate.
13. Click Finish Migration.
14. Restart the Orchestrator service.
15. Check the settings.

External database

If you have an external database, things are pretty easy. If you are using the initial internal database, please see the additional steps in the There's more... section of this recipe.

1. Back up the external database.
2. Connect to the Windows Orchestrator Configurator. Write down all the plugins you have installed as well as their versions.
3. Shut down the Windows version and deploy the appliance; this way you can use the same IP and hostname if you want.
4. Log in to the appliance version's Configurator.
5. Stop the Orchestrator service.
6. Install all plugins you had in the Windows version.
7. Attach the external database.
8. Make sure that all trusted SSL certificates are still there, such as vCenter and SSO.
9. Check that authentication is still working. Use the test login.
10. Check your licensing.
11. Force a plugin reinstall (Troubleshooting | Reinstall the plug-ins when the server starts).
12. Start the Orchestrator service and try to log in. Make a complete sanity check.
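For reference, here is a minimal sketch of what the export portion (steps 7 to 10 of the migration tool procedure) might look like in a Windows command prompt. The paths assume a default Orchestrator installation directory and are placeholders; adjust them to your environment.

REM Use the JRE bundled with Orchestrator only if "java -version" fails
set PATH=%PATH%;C:\Program Files\VMware\Infrastructure\Orchestrator\Uninstall_vCenter Orchestrator\uninstall-jre\bin
java -version

REM Run the export from the migration-cli\bin directory (SLF4J warnings can be ignored)
cd "C:\Program Files\VMware\Infrastructure\Orchestrator\migration-cli\bin"
vro-migrate.bat export

REM The export file is written to the main Orchestrator directory
dir "C:\Program Files\VMware\Infrastructure\Orchestrator\orchestrator-config-export-*.zip"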
Package transfer

This is the method that will only pull your packages across. It is the only easy method to use when you are transitioning between different databases, such as between MS SQL and PostgreSQL.

1. Connect to your Windows version.
2. Create a package of all the workflows, actions, and other items you need.
3. Shut down Windows and deploy the appliance.
4. Configure the appliance with the database, authentication, and all the plugins you previously had.
5. Import the package.

How it works...

Moving from the Windows version of Orchestrator to the appliance version isn't such a big thing. The worst-case scenario is using the package transfer. The only really important thing is to use the same version of the Windows Orchestrator as the appliance version. You can download a lot of old versions, including 5.5, from http://www.vmware.com/in.html. If you can't find the same version, upgrade your existing vCenter Orchestrator to one you can download. After you have transferred the data to the appliance, you need to make sure that everything works correctly, and then you can upgrade to vRO7.

There's more...

If you just run Orchestrator from your Windows vCenter installation and didn't configure an external database, Orchestrator uses the vCenter database and mixes the Orchestrator tables with the vCenter tables. In order to export only the Orchestrator ones, we will use MS SQL Server Management Studio (a free download from www.microsoft.com, called Microsoft SQL Server 2008 R2 RTM). To transfer only the Orchestrator database tables from the vCenter MS SQL to an external SQL server, do the following:

1. Stop the VMware vCenter Orchestrator Service (Windows Services) on your Windows Orchestrator.
2. Start the SQL Server Management Studio on your external SQL server.
3. Connect to the vCenter DB. For SQL Express use [vcenter]\VIM_SQLEXP with Windows Authentication.
4. Right-click on your vCenter database (SQL Express: VIM_VCDB) and select Tasks | Export Data.
5. In the wizard, select your source, which should be the correct one already, and click Next.
6. Choose SQL Server Native Client 10.0 and enter the name of your new SQL server. Click on New to create a new database on that SQL server (or use an empty one you created already). Click Next.
7. Select Copy data from one or more tables or views and click Next.
8. Now select every table that starts with VMO_ and then click Next.
9. Select Run immediately and click Finish.

Now you have the Orchestrator database extracted as an external database. You still need to configure a user and rights. Then proceed with the External database section in this recipe.

Orchestrator client and 4K display scaling

This recipe shows a hack to make the Orchestrator client scale on 4K displays.

Getting ready

We need to download the program Resource Tuner (http://www.restuner.com/); the trial version will work, however, consider buying it if it works for you. You need to know the path to your Java install, which should be something like C:\Program Files (x86)\Java\jre1.x.xx\bin.

How to do it...

Before you start, please be careful, as this impacts your whole Java environment. This worked for me very well with Java 1.8.0_91-b14.

1. Download and install Resource Tuner.
2. Run Resource Tuner as administrator.
3. Open the file javaws.exe in your Java directory.
4. Expand manifest and then click on the first entry (the name can change due to localization).
5. Look for the line <dpiAware>true</dpiAware>.
6. Exchange the true for a false (see the manifest sketch that follows these steps).
7. Save and exit.
8. Repeat the same for all the other java*.exe in the same directory, as well as j2launcher.exe.
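To make steps 5 and 6 concrete, this is roughly what the relevant fragment of the executable's embedded manifest looks like before and after the edit; the exact element names and namespaces can vary between Java builds, so treat this as an illustrative sketch rather than the literal contents of your javaws.exe.

<!-- Before: Java declares itself DPI aware and scales itself -->
<dpiAware>true</dpiAware>

<!-- After: Java is no longer DPI aware, so Windows 10 applies its own scaling -->
<dpiAware>false</dpiAware>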
Finally, start the Client.jnlp (the file that downloads when you start the web application).

How it works...

In Windows 10 you can set the scaling of applications when you are using high-definition monitors (4K displays). What you are doing is telling Java that it is not DPI aware, meaning it will use the Windows 10 default scaler instead of an internal scaler.

There's more...

For other applications such as Snagit or Photoshop, I found that this solution works quite well: http://www.danantonielli.com/adobe-app-scaling-on-high-dpi-displays-fix/.

Summary

In this article we discussed moving an existing Windows Orchestrator installation to the appliance and a hack to make the Orchestrator client scale on 4K displays.

Further resources on this subject:
Working with VMware Infrastructure [article]
vRealize Automation and the Deconstruction of Components [article]
FAQ on Virtualization and Microsoft App-V [article]


OpenStack Networking in a Nutshell

Packt
22 Sep 2016
13 min read
Information technology (IT) applications are rapidly moving from dedicated infrastructure to cloud-based infrastructure. This move to the cloud started with server virtualization, where a hardware server runs as a virtual machine on a hypervisor. The adoption of cloud-based applications has accelerated due to factors such as globalization and outsourcing, where diverse teams need to collaborate in real time. Server hardware connects to network switches using Ethernet and IP to establish network connectivity. However, as servers move from physical to virtual, the network boundary also moves from the physical network to the virtual network.

Traditionally, applications, servers, and networking were tightly integrated. But modern enterprises and IT infrastructure demand flexibility in order to support complex applications. The flexibility of cloud infrastructure requires networking to be dynamic and scalable. Software Defined Networking (SDN) and Network Functions Virtualization (NFV) play a critical role in data centers in order to deliver the flexibility and agility demanded by cloud-based applications. By providing practical management tools and abstractions that hide the underlying physical network's complexity, SDN allows operators to build complex networking capabilities on demand.

OpenStack is an open source cloud platform that helps build public and private clouds at scale. Within OpenStack, the name for the OpenStack Networking project is Neutron. The functionality of Neutron can be classified as core and service. This article by Sriram Subramanian and Sreenivas Voruganti, authors of the book Software Defined Networking (SDN) with OpenStack, aims to provide a short introduction to OpenStack Networking. We will cover the following topics in this article:

- Understanding traffic flows between virtual and physical networks
- Neutron entities that support Layer 2 (L2) networking
- Layer 3 (L3) or routing between OpenStack Networks
- Securing OpenStack network traffic
- Advanced networking services in OpenStack
- OpenStack and SDN

The terms Neutron and OpenStack Networking are used interchangeably throughout this article.

Virtual and physical networking

Server virtualization led to the adoption of virtualized applications and workloads running inside physical servers. While physical servers are connected to the physical network equipment, modern networking has pushed the boundary of networking into the virtual domain as well. Virtual switches, firewalls, and routers play a critical role in the flexibility provided by cloud infrastructure.

Figure 1: Networking components for server virtualization

The preceding figure describes a typical virtualized server and the various networking components. The virtual machines are connected to a virtual switch inside the compute node (or server). The traffic is secured using virtual routers and firewalls. The compute node is connected to a physical switch, which is the entry point into the physical network.

Let us now walk through different traffic flow scenarios using the picture above as the background. In Figure 2, traffic from one VM to another on the same compute node is forwarded by the virtual switch itself. It does not reach the physical network. You can even apply firewall rules to traffic between the two virtual machines.

Figure 2: Traffic flow between two virtual machines on the same server

Next, let us have a look at how traffic flows between virtual machines across two compute nodes.
In Figure 3, the traffic comes out of the first compute node and reaches the physical switch. The physical switch forwards the traffic to the second compute node, and the virtual switch within the second compute node steers the traffic to the appropriate VM.

Figure 3: Traffic flow between two virtual machines on different servers

Finally, here is the depiction of traffic flow when a virtual machine sends or receives traffic from the Internet. The physical switch forwards the traffic to the physical router and firewall, which is presumed to be connected to the Internet.

Figure 4: Traffic flow from a virtual machine to an external network

As seen from the above diagrams, the physical and the virtual network components work together to provide connectivity to virtual machines and applications.

Tenant isolation

As a cloud platform, OpenStack supports multiple users grouped into tenants. One of the key requirements of a multi-tenant cloud is to provide isolation of data traffic belonging to one tenant from the rest of the tenants that use the same infrastructure. OpenStack supports different ways of achieving isolation, and it is the responsibility of the virtual switch to implement it.

Layer 2 (L2) capabilities in OpenStack

The connectivity to a physical or virtual switch is also known as Layer 2 (L2) connectivity in networking terminology. Layer 2 connectivity is the most fundamental form of network connectivity needed for virtual machines. As mentioned earlier, OpenStack supports core and service functionality. The L2 connectivity for virtual machines falls under the core capability of OpenStack Networking, whereas Router, Firewall, and so on fall under the service category. The L2 connectivity in OpenStack is realized using two constructs called Network and Subnet. Operators can use the OpenStack CLI or the web interface to create Networks and Subnets. As virtual machines are instantiated, operators can associate them with the appropriate Networks.

Creating a Network using the OpenStack CLI

A Network defines the Layer 2 (L2) boundary for all the instances that are associated with it. All the virtual machines within a Network are part of the same L2 broadcast domain. The Liberty release has introduced a new OpenStack CLI (Command Line Interface) for different services. We will use the new CLI to create a Network (a hedged command sketch appears at the end of this section).

Creating a Subnet using the OpenStack CLI

A Subnet is a range of IP addresses that are assigned to virtual machines on the associated network. OpenStack Neutron configures a DHCP server with this IP address range, and it starts one DHCP server instance per Network by default. Note: Unlike Network, for Subnet we need to use the regular neutron CLI command in the Liberty release.

Associating a Network and Subnet to a virtual machine

To give a complete perspective, we will create a virtual machine using the OpenStack web interface and show you how to associate a Network and Subnet to a virtual machine. In your OpenStack web interface, navigate to Project | Compute | Instances. Click on the Launch Instance action on the right-hand side. In the resulting window, enter the name for your instance and specify how you want to boot it. To associate a network and a subnet with the instance, click on the Networking tab. If you have more than one tenant network, you will be able to choose the network you want to associate with the instance. If you have exactly one network, the web interface will automatically select it.
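The exact commands are not reproduced above, so here is a minimal, hedged sketch of how creating a Network and a Subnet typically looks with the Liberty-era clients; the names Net1 and Subnet1 and the CIDR are placeholders rather than values from the book.

# Create a Network with the new unified OpenStack CLI
openstack network create Net1

# Create a Subnet on that Network with the regular neutron CLI
neutron subnet-create Net1 192.168.10.0/24 --name Subnet1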
As mentioned earlier, providing isolation for tenant network traffic is a key requirement for any cloud. OpenStack Neutron uses Network and Subnet to define the boundaries and isolate data traffic between different tenants. Depending on the Neutron configuration, the actual isolation of traffic is accomplished by the virtual switches. VLAN and VXLAN are common networking technologies used to isolate traffic.

Layer 3 (L3) capabilities in OpenStack

Once L2 connectivity is established, the virtual machines within one Network can send or receive traffic between themselves. However, two virtual machines belonging to two different Networks will not be able to communicate with each other automatically. This is done to provide privacy and isolation for tenant networks. In order to allow traffic from one Network to reach another Network, OpenStack Networking supports an entity called Router. The default implementation of OpenStack uses namespaces to support L3 routing capabilities.

Creating a Router using the OpenStack CLI

Operators can create Routers using the OpenStack CLI or web interface. They can then add one or more Subnets as interfaces to the Router. This allows the Networks associated with the Router to exchange traffic with one another. The command to create a Router simply takes the desired Router name (a hedged sketch of these commands appears at the end of this section).

Associating a Subnet to a Router

Once a Router is created, the next step is to associate one or more sub-networks to the Router. After running the corresponding command, the Subnet represented by subnet1 is associated to the Router router1.

Securing network traffic in OpenStack

The security of network traffic is critical, and OpenStack supports two mechanisms to secure network traffic. Security groups allow traffic within a tenant's network to be secured. Linux iptables on the compute nodes is used to implement OpenStack security groups. The traffic that goes outside of a tenant's network, to another Network or the Internet, is secured using the OpenStack Firewall Service functionality. Like routing, Firewall is a service within Neutron. The Firewall service also uses iptables, but the scope of iptables is limited to the OpenStack Router used as part of the Firewall service.

Using security groups to secure traffic within a network

In order to secure traffic going from one VM to another within a given Network, we must first create a security group. The next step is to create one or more rules within the security group. As an example, let us create a rule that allows only incoming UDP traffic on port 8080 from any source IP address. The final step is to associate this security group and its rules with a virtual machine instance, using the nova boot command. Once the virtual machine instance has a security group associated with it, the incoming traffic will be monitored and, depending upon the rules inside the security group, data traffic may be blocked or permitted to reach the virtual machine. Note: it is possible to block ingress or egress traffic using security groups.

Using the Firewall service to secure traffic

We have seen that security groups provide fine-grained control over what traffic is allowed to and from a virtual machine instance. Another layer of security supported by OpenStack is Firewall as a Service (FWaaS). FWaaS enforces security at the Router level, whereas security groups enforce security at a virtual machine interface level.
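Before looking at FWaaS in more detail, here is a hedged sketch of the router and security group commands referenced above, as they typically look with the neutron and nova clients; router1, subnet1, web-sg, and the flavor and image names are placeholders rather than values from the book.

# Create a Router and attach an existing Subnet to it
neutron router-create router1
neutron router-interface-add router1 subnet1

# Create a security group and a rule allowing incoming UDP traffic on port 8080 from any source
neutron security-group-create web-sg
neutron security-group-rule-create --direction ingress --protocol udp --port-range-min 8080 --port-range-max 8080 --remote-ip-prefix 0.0.0.0/0 web-sg

# Boot an instance with the security group attached
nova boot --flavor m1.small --image cirros --security-groups web-sg vm1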
The main use case of FWaaS is to protect all virtual machine instances within a Network from threats and attacks from outside the Network. This could be virtual machines that are part of another Network in the same OpenStack cloud, or some entity on the Internet trying to make an unauthorized access. Let us now see how FWaaS is used in OpenStack.

In FWaaS, a set of firewall rules is grouped into a firewall policy, and then a firewall is created that implements one policy at a time. This firewall is then associated with a Router. A firewall rule can be created using the neutron firewall-rule-create command; as an example, a rule can block the ICMP protocol, so applications like ping will be blocked by the firewall. The next step is to create a firewall policy. In real-world scenarios the security administrators will define several rules and consolidate them under a single policy. For example, all rules that block various types of traffic can be combined into a single policy. The final step is to create a firewall and associate it with a Router (a hedged sketch of these commands appears at the end of this section). If we do not specify any Routers, the OpenStack behavior is to associate the firewall (and in turn the policy and rules) with all the Routers available for that tenant. The neutron firewall-create command supports an option to pick a specific Router as well.

Advanced networking services

Besides routing and firewall, there are a few other commonly used networking technologies supported by OpenStack. Let's take a quick look at these without delving deep into the respective commands.

Load Balancing as a Service (LBaaS)

Virtual machine instances created in OpenStack are used to run applications. Most applications are required to support redundancy and concurrent access. For example, a web server may be accessed by a large number of users at the same time. One of the common strategies to handle scale and redundancy is to implement load balancing for incoming requests. In this approach, a Load Balancer distributes incoming service requests onto a pool of servers, which process the requests, thus providing higher throughput. If one of the servers in the pool fails, the Load Balancer removes it from the pool and the subsequent service requests are distributed among the remaining servers. Users of the application use the IP address of the Load Balancer to access the application and are unaware of the pool of servers. OpenStack implements the Load Balancer using HAProxy software and a Linux namespace.

Virtual Private Network as a Service (VPNaaS)

As mentioned earlier, tenant isolation requires data traffic to be segregated and secured within an OpenStack cloud. However, there are times when external entities need to be part of the same Network without removing the firewall-based security. This can be accomplished using a Virtual Private Network (VPN). A VPN connects two endpoints on different networks over a public Internet connection, such that the endpoints appear to be directly connected to each other. VPNs also provide confidentiality and integrity of transmitted data. Neutron provides a service plugin that enables OpenStack users to connect two networks using a VPN. The reference implementation of the VPN plugin in Neutron uses Openswan to create an IPSec-based VPN. IPSec is a suite of protocols that provides a secure connection between two endpoints by encrypting each IP packet transferred between them.
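Here is a hedged sketch of the FWaaS workflow just described, using the neutron CLI; the rule, policy, and firewall names are placeholders.

# Create a rule that blocks ICMP, so ping traffic is dropped
neutron firewall-rule-create --protocol icmp --action deny --name deny-icmp

# Consolidate one or more rules into a policy
neutron firewall-policy-create --firewall-rules "deny-icmp" block-policy

# Create the firewall; with no Router specified it applies to all of the tenant's Routers
neutron firewall-create block-policy --name fw1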
OpenStack and SDN context

So far in this article we have seen the different networking capabilities provided by OpenStack. Let us now look at two capabilities in OpenStack that enable SDN to be leveraged effectively.

Choice of technology

OpenStack, being an open source platform, bundles open source networking solutions as the default implementation for these networking capabilities. For example, routing is supported using namespaces, security using iptables, and load balancing using HAProxy. Historically, these networking capabilities were implemented using customized hardware and software, most of them being proprietary solutions. These custom solutions are capable of much higher performance and are well supported by their vendors, and hence they have a place in the OpenStack and SDN ecosystem. From its initial releases, OpenStack has been designed for extensibility. Vendors can write their own extensions and then easily configure OpenStack to use their extension instead of the default solutions. This allows operators to deploy the networking technology of their choice.

OpenStack API for networking

One of the most powerful capabilities of OpenStack is the extensive support for APIs. All services of OpenStack interact with one another using well-defined RESTful APIs. This allows custom implementations and pluggable components to provide powerful enhancements for practical cloud implementation. For example, when a Network is created using the OpenStack web interface, a RESTful request is sent to the Horizon service. This in turn invokes a RESTful API to validate the user using the Keystone service. Once validated, Horizon sends another RESTful API request to Neutron to actually create the Network (a hedged example of such a request appears at the end of this article).

Summary

As seen in this article, OpenStack supports a wide variety of networking functionality right out of the box. The importance of isolating tenant traffic and the need to allow customized solutions require OpenStack to support flexible configuration. We also highlighted some key aspects of OpenStack that will play a key role in deploying Software Defined Networking in datacenters, thereby supporting powerful cloud architecture and solutions.

Further resources on this subject:
Setting Up a Network Backup Server with Bacula [article]
Jenkins 2.0: The impetus for DevOps Movement [article]
Integrating Accumulo into Various Cloud Platforms [article]
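As a rough illustration of the kind of RESTful request Neutron receives to create a Network, here is a hedged curl sketch against the Networking v2.0 API; the endpoint, port, and token are placeholders and will differ in your deployment.

# Create a Network named Net1 via the Neutron REST API (values are placeholders)
curl -s -X POST http://controller:9696/v2.0/networks -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"network": {"name": "Net1", "admin_state_up": true}}'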


An Overview of Horizon View Architecture and Components

Packt
06 Sep 2016
18 min read
In this article by Peter von Oven, author of the book Mastering VMware Horizon 7 - Second Edition, we will introduce you to the architecture and infrastructure components that make up the core VMware Horizon solution, concentrating on the virtual desktop elements of Horizon with the Horizon Standard edition, plus the Instant Clone technology that is available in the Horizon Enterprise edition. We are going to concentrate on the core Horizon View functionality of brokering virtual desktop machines that are hosted on a VMware vSphere platform.

Throughout the sections of this article we will discuss the role of each of the Horizon View components, explaining how they fit into the overall infrastructure, their role, and the benefits they bring. Once we have explained the high-level concept, we will then take a deeper dive into how each particular component works. As we work through the sections we will also highlight some best practices as well as useful hints and tips along the way. We will also cover some of the third-party technologies that integrate with and complement Horizon View, such as antivirus solutions, storage acceleration technologies, and high-end graphics solutions that help deliver a complete end-to-end solution. After reading this article, you will be able to describe each of the components, what part they play within the solution, and why you would use them.

Introducing the key Horizon components

To start with, we are going to introduce, at a high level, the core infrastructure components and the architecture that make up the Horizon View product. We will start with the high-level architecture, as shown in the following diagram, before drilling down into each part in greater detail. All of the VMware Horizon components described are included as part of the licensed product, and the features that are available to you depend on whether you have the Standard Edition, the Advanced Edition, or the Enterprise Edition. It's also worth remembering that Horizon licensing includes ESXi and vCenter licensing to support the ability to deploy the core hosting infrastructure. You can deploy as many ESXi hosts and vCenter Servers as you require to host the desktop infrastructure.

High-level architectural overview

In this section, we will cover the core Horizon View features and functionality for brokering virtual desktop machines that are hosted on the VMware vSphere platform. The Horizon View architecture is pretty straightforward to understand, as its foundations lie in the standard VMware vSphere products (ESXi and vCenter). So, if you have the necessary skills and experience of working with this platform, then you are already nearly halfway there. Horizon View builds on the vSphere infrastructure, taking advantage of some of the features of the ESXi hypervisor and vCenter Server. Horizon View requires adding a number of virtual machines to perform the various View roles and functions. An overview of the View architecture for delivering virtual desktops is shown in the following diagram:

View components run as applications that are installed on the Microsoft Windows Server operating system, with the exception of the Access Point, which is a hardened Linux appliance, so they could actually run on physical hardware as well. However, there are a great number of benefits available when you run them as virtual machines, such as delivering HA and DR, as well as the typical cost savings that can be achieved through virtualization.
The following sections will cover each of these roles/components of the View architecture in greater detail, starting with the Horizon View Connection Server.

The Horizon View Connection Server

The Horizon View Connection Server, sometimes referred to as the Connection Broker or View Manager, is the central component of the View infrastructure. Its primary role is to connect a user to their virtual desktop by means of performing user authentication and then delivering the appropriate desktop resources based on the user's profile and entitlement. When logging on to your virtual desktop, it is the Connection Server that you are communicating with.

How does the Connection Server work?

A user will typically connect to their virtual desktop machine from their endpoint device by launching the View Client, but equally they could use browser-based access. So how does the login process work? Once the View Client has launched (shown as 1 in the following diagram), the user enters the address details of the View Connection Server, which in turn responds (2) by asking them to provide their network login details (their Active Directory (AD) domain username and password). It's worth noting that Horizon View now supports the following AD Domain functional levels:

- Windows Server 2003
- Windows Server 2008 and 2008 R2
- Windows Server 2012 and 2012 R2

Based on the user's entitlements, these credentials are authenticated with AD (3) and, if successful, the user is able to continue the logon process. Depending on what they are entitled to, the user could see a launch screen that displays a number of different virtual desktop machine icons that are available for them to log in to. These desktop icons represent the desktop pools that the user has been entitled to use. A pool is basically a collection of like virtual desktop machines; for example, it could be a pool for the marketing department where the virtual desktop machines contain specific applications/software for that department.

Once authenticated, the View Manager or Connection Server makes a call to the vCenter Server (4) to create a virtual desktop machine, and then vCenter makes a call (5) to either View Composer (if you are using linked clones) or creates an Instant Clone using the VM Fork feature of vSphere to start the build process of the virtual desktop if there is not one already available for the user to log in to. When the build process has completed, and the virtual desktop machine is available to the end user, it is displayed/delivered within the View Client window (6) using the chosen display protocol (PCoIP, Blast, or RDP). This process is described pictorially in the following diagram:

There are other ways to deploy VDI solutions that do not require a connection broker, although you could argue that, strictly speaking, this is not a true VDI solution. This is actually what the first VDI solutions looked like, and they just allowed a user to connect directly to their own virtual desktop via RDP. If you think about it, there are actually some specific use cases for doing just this. For example, if you have a large number of remote branches or offices, you could deploy local infrastructure allowing users to continue working in the event of a WAN outage or poor network communication between the branch and head office. The infrastructure required would be a subset of what you deploy centrally in order to keep costs minimal.
It just so happens that VMware has also thought of this use case and has a solution referred to as Brokerless View, which uses the VMware Horizon View Agent Direct-Connection Plugin. However, don't forget that, in a Horizon View environment, the View Connection Server provides greater functionality and does much more than just connecting users to desktops, as we will see later in this article.

As we previously touched on, the Horizon View Connection Server runs as an application on a Windows Server, which could be either a physical or a virtual machine. Running as a virtual machine has many advantages; for example, it means that you can easily add high-availability features, which are critical in this environment, as you could potentially have hundreds or maybe even thousands of virtual desktop machines running on a single host server. Along with brokering the connections between the users and virtual desktop machines, the Connection Server also works with vCenter Server to manage the virtual desktop machines. For example, when using Linked Clones or Instant Clones and powering on virtual desktops, these tasks are initiated by the Connection Server, but they are executed at the vCenter Server level. Now that we have covered what the Connection Server is and how it works, in the next section we are going to look at the requirements you need for it to run.

Minimum requirements for the Connection Server

To install the View Connection Server, you need to meet the following minimum requirements to run on physical or virtual machines:

- Hardware requirements: The following table shows the hardware required:
- Supported operating systems: The View Connection Server must be installed on one of the operating systems listed in the table below:

In the next section we are going to look at the Horizon View Security Server.

The Horizon View Security Server

The Horizon View Security Server is another component in the architecture and is essentially another version of the View Connection Server but, this time, it sits within your DMZ so that you can allow end users to securely connect to their virtual desktop machine from an external network or the Internet.

How does the Security Server work?

To start with, the user login process is the same as when connecting to a View Connection Server, essentially because the Security Server is just another version of the Connection Server running a subset of the features. The difference is that you connect to the address of the Security Server. The Security Server sits inside your DMZ and communicates with a Connection Server sitting on the internal network that it is paired with. So now we have added an extra security layer, as the internal Connection Server is not exposed externally, with the idea being that users can now access their virtual desktop machines externally without needing to first connect to a VPN on the network. The Security Server should not be joined to the domain. This process is described pictorially in the following diagram:

We mentioned previously that the Security Server is paired with a Connection Server. The pairing is configured by the use of a one-time password during installation. It's a bit like pairing your smartphone with the hands-free kit in your car using Bluetooth. When the user logs in from the View Client, they now use the external URL of the Security Server to access the Connection Server, which in turn authenticates the user against AD.
If the Connection Server is configured as a PCoIP gateway, then it will pass the connection and addressing information to the View Client. This connection information will allow the View Client to connect to the Security Server using PCoIP. This is shown in the diagram by the green arrow (1). The Security Server will then forward the PCoIP connection to the virtual desktop machine (2), creating the connection for the user. The virtual desktop machine is displayed/delivered within the View Client window (3) using the chosen display protocol (PCoIP, Blast, or RDP).

The Horizon View Replica Server

The Horizon View Replica Server, as the name suggests, is a replica or copy of a View Connection Server and serves two key purposes. The first is that it is used to enable high availability in your Horizon View environment. Having a replica of your View Connection Server means that, if the Connection Server fails, users are still able to connect to their virtual desktop machines. Secondly, adding Replica Servers allows you to scale up the number of users and virtual desktop connections. An individual instance of a Connection Server can support 2,000 connections, so adding additional Connection Servers allows you to add another 2,000 users at a time, up to the maximum of five Connection Servers and 10,000 users per Horizon View Pod. When deploying a Replica Server, you will need to change the IP address or update the DNS record to match this server if you are not using a load balancer.

How does the Replica Server work?

So, the first question is, what actually gets replicated? The Connection Broker stores all its information relating to the end users, desktop pools, virtual desktop machines, and other View-related objects in an Active Directory Application Mode (ADAM) database. Then, using the Lightweight Directory Access Protocol (LDAP) (a method similar to the one AD uses for replication), this View information gets copied from the original Connection Server to the Replica Server. As both the Connection Server and the Replica Server are now identical to each other, if your Connection Server fails, then you essentially have a backup that steps in and takes over so that end users can still continue to connect to their virtual desktop machines. Just like with the other components, you cannot install the Replica Server role on the same machine that is running as a Connection Server or any of the other Horizon View components.

The Horizon View Enrollment Server and True SSO

The Horizon View Enrollment Server is the final component that is part of the Horizon View Connection Server installation options, and is selected from the drop-down menu on the installation options screen. So what does the Enrollment Server do? Horizon 7 sees the introduction of a new feature called True SSO. True SSO is a solution that allows a user to authenticate to a Microsoft Windows environment without having to enter their AD credentials. It integrates with another VMware product, VMware Identity Manager, which forms part of both the Horizon 7 Advanced and Enterprise Editions. Its job is to sit between the Connection Server and the Microsoft Certificate Authority and to request temporary certificates from the certificate store. This process is described pictorially in the following diagram:

A user first logs in to VMware Identity Manager, either using their credentials or other authentication methods such as smartcards or biometric devices.
Once successfully authenticated, the user will be presented with the virtual desktop machines or hosted applications that they are entitled to use. They can launch any of these by simply double-clicking, which will launch the Horizon View Client, as shown by the red arrow (1) in the previous diagram. The user's credentials will then be passed to the Connection Server (2), which in turn will verify them by sending a Security Assertion Markup Language (SAML) assertion back to Identity Manager (3). If the user's credentials are verified, then the Connection Server passes them on to the Enrollment Server (4). The Enrollment Server then makes a request to the Microsoft Certificate Authority (CA) to generate a short-lived, temporary certificate for that user to use (5). With the certificate now generated, the Connection Server presents it to the operating system of the virtual desktop machine (6), which in turn validates with Active Directory whether or not the certificate is authentic (7). When the certificate has been authenticated, the user is logged on to their virtual desktop machine, which will be displayed/delivered to the View Client using the chosen display protocol (8). True SSO is supported with all Horizon 7 supported desktop operating systems, as well as Windows Server 2008 R2 and Windows Server 2012 R2. It also supports the PCoIP, HTML, and Blast Extreme delivery protocols.

VMware Access Point

VMware Access Point performs exactly the same function as the View Security Server, as shown in the following diagram, however with one key difference. Instead of being a Windows application and another role of the Connection Server, the Access Point is a separate virtual appliance that runs a hardened, locked-down Linux operating system. Although the Access Point appliance delivers pretty much the same functionality as the Security Server, it does not yet completely replace it; especially if you already have a production deployment that uses the Security Server for external access, you can continue to use this architecture. If you are using the secure tunnel function, the PCoIP Secure Gateway, or the Blast Secure Gateway features of the Connection Server, then these features will need to be disabled on the Connection Server if you are using the Access Point. They are all enabled by default on the Access Point appliance.

A key difference between the Access Point appliance and the Security Server is in the way it scales. Previously, you had to pair a Security Server with a Connection Server, which was a limitation, but this is no longer the case. As such, you can now scale to as many Access Point appliances as you need for your environment, with the maximum limit being around 2,000 sessions for a single appliance. Adding additional appliances is simply a case of deploying the appliance; appliances do not depend on or communicate with other appliances. They communicate directly with the Connection Servers.

Persistent or non-persistent desktops

In this section, we are going to talk about the different types of desktop assignments and the way a virtual desktop machine is delivered to an end user. This is an important design consideration, as the chosen method could potentially impact the storage requirements (covered in the next section), the hosting infrastructure, and also which technology or solution is used to provision the desktop to the end users.
One of the questions that always gets asked is whether you should deploy a dedicated (persistent) desktop assignment or a floating (non-persistent) desktop assignment. Desktops can either be individual virtual machines, which are dedicated to a user on a 1:1 basis (as we have in a physical desktop deployment, where each user effectively owns their own desktop), or a user can have a new, vanilla desktop that gets provisioned, built, personalized, and then assigned at the time of login. In the latter case, the virtual desktop machine is chosen at random from a pool of available desktops that the end user is entitled to use. The two options are described in more detail as follows:

- Persistent desktop: Users are allocated a desktop that retains all of their documents, applications, and settings between sessions. The desktop is statically assigned the first time that the user connects and is then used for all subsequent sessions. No other user is permitted access to the desktop.
- Non-persistent desktop: Users might be connected to different desktops from the pool each time that they connect. Environmental or user data does not persist between sessions and instead is delivered as the user logs on to their desktop. The desktop is refreshed or reset when the user logs off.

In most use cases, a non-persistent configuration is the best option; the key reason is that, in this model, you don't need to build all the desktops upfront for each user. You only need to power on a virtual desktop as and when it's required. All users start with the same basic desktop, which then gets personalized before delivery. This helps with concurrency rates. For example, you might have 5,000 people in your organization, but only 2,000 ever log in at the same time; therefore, you only need to have 2,000 virtual desktops available. Otherwise, you would have to build a desktop for each one of the 5,000 users that might ever log in, resulting in more server infrastructure and certainly a lot more storage capacity. We will talk about storage in the next section.

The one thing that used to be a bit of a show-stopper for non-persistent desktops was how to deliver the applications to the virtual desktop machine. Now that application layering solutions such as VMware App Volumes are becoming more mainstream, applications can be delivered on demand as the desktop is built and the user logs in.

Another thing that we often see some confusion over is the difference between dedicated and floating desktops, and how linked clones fit in. Just to make it clear, linked clones, full clones, and Instant Clones are not what we are talking about when we refer to dedicated and floating desktops. Cloning operations refer to how a desktop is built and provisioned, whereas the terms persistent and non-persistent refer to how a desktop is assigned to an end user. Dedicated and floating desktops are purely about user assignment and whether a user has a dedicated desktop or one allocated from a pool on demand. Linked clones and full clones are features of Horizon View, which uses View Composer to create the desktop image for each user from a master or parent image. This means that, regardless of having a floating or dedicated desktop assignment, the virtual desktop machine could still be a linked or full clone. So, here's a summary of the benefits:

- It is operationally efficient: All users start from a single or smaller number of desktop images. Organizations reduce the amount of image and patch management.
- It is efficient storage-wise: The amount of storage required to host the non-persistent desktop images will be smaller than keeping separate instances of unique user desktop images.

In the next sections, we are going to cover an in-depth overview of the cloning technologies available in Horizon 7, starting with Horizon View Composer and linked clones, and the advantages the technology delivers.

Summary

In this article, we discussed the Horizon View architecture and the different components that make up the complete solution. We covered the key technologies, such as how linked clones and Instant Clones work to optimize storage, and then introduced some of the features that go toward delivering a great end user experience, such as delivering high-end graphics, unified communications, and profile management, and how the protocols deliver the desktop to the end user.

Further resources on this subject:
An Introduction to VMware Horizon Mirage [article]
Upgrading VMware Virtual Infrastructure Setups [article]
Backups in the VMware View Infrastructure [article]


Introducing vSphere vMotion

Packt
16 Aug 2016
5 min read
In this article by Abhilash G B and Rebecca Fitzhugh, authors of the book Learning VMware vSphere, we are going to talk about vSphere vMotion, a VMware technology used to migrate a running virtual machine from one host to another without altering its power state. The beauty of the whole process is that it is transparent to the applications running inside the virtual machine. In this section we will understand the inner workings of vMotion and learn how to configure it. There are different types of vMotion, such as:

- Compute vMotion
- Storage vMotion
- Unified vMotion
- Enhanced vMotion (X-vMotion)
- Cross vSwitch vMotion
- Cross vCenter vMotion
- Long Distance vMotion

Compute vMotion is the default vMotion method and is employed by other features such as DRS, FT, and Maintenance Mode. When you initiate a vMotion, it starts an iterative copy of all memory pages. After the first pass, all the dirtied memory pages are copied again in another pass, and this is done iteratively until the amount of pages left to be copied is small enough to be transferred and to switch over the state of the VM to the destination host. During the switchover, the virtual machine's device state is transferred and resumed at the destination host. You can initiate up to 8 simultaneous vMotion operations on a single host.

Storage vMotion is used to migrate the files backing a virtual machine (virtual disks, configuration files, logs) from one datastore to another while the virtual machine is still running. When you initiate a Storage vMotion, it starts a sequential copy of the source disk in 64 MB chunks. While a region is being copied, all the writes issued to that region are deferred until the region is copied. An already copied source region is monitored for further writes; if there is a write I/O, it will be mirrored to the destination disk as well. This process of mirroring writes to the destination virtual disk continues until the sequential copy of the entire source virtual disk is complete. Once the sequential copy is complete, all subsequent READs/WRITEs are issued to the destination virtual disk. Keep in mind, though, that while the sequential copy is still in progress, all the READs are issued to the source virtual disk. Storage vMotion is used by Storage DRS. You can initiate up to 2 simultaneous Storage vMotion operations on a single host.

Unified vMotion is used to migrate both the running state of a virtual machine and the files backing it from one host and datastore to another. Unified vMotion uses a combination of both Compute and Storage vMotion to achieve the migration. First, the configuration files and the virtual disks are migrated, and only then does the migration of the live state of the virtual machine begin. You can initiate up to 2 simultaneous Unified vMotion operations on a single host.

Enhanced vMotion (X-vMotion) is used to migrate virtual machines between hosts that do not share storage. Both the virtual machine's running state and the files backing it are transferred over the network to the destination. The migration procedure is the same as for Compute and Storage vMotion; in fact, Enhanced vMotion uses Unified vMotion to achieve the migration. Since the memory and disk states are transferred over the vMotion network, ESXi hosts maintain a transmit buffer at the source and a receive buffer at the destination.
The transmit buffer collects and places data onto the network, while the receive buffer collects data received via the network and flushes it to storage. You can initiate up to 2 simultaneous X-vMotion operations on a single host.

Cross vSwitch vMotion allows you to choose a destination port group for the virtual machine. It is important to note that unless the destination port group supports the same L2 network, the virtual machine will not be able to communicate over the network. Cross vSwitch vMotion allows changing from a Standard vSwitch to a VDS, but not from a VDS to a Standard vSwitch. vSwitch to vSwitch and VDS to VDS are supported.

Cross vCenter vMotion allows migrating virtual machines beyond a vCenter's boundary. This is a new enhancement with vSphere 6.0. However, for this to be possible, both vCenter Servers should be in the same SSO domain and should be in Enhanced Linked Mode. The infrastructure requirements for Cross vCenter vMotion are detailed in VMware Knowledge Base article 2106952 at the following link: http://kb.vmware.com/kb/2106952.

Long Distance vMotion allows migrating virtual machines over distances with a latency not exceeding 150 milliseconds. Prior to vSphere 6.0, the maximum supported network latency for vMotion was 10 milliseconds.

Using the provisioning interface

You can configure a provisioning interface to send all non-active data of the virtual machine being migrated. Prior to vSphere 6.0, vMotion used the vmkernel interface that has the default gateway configured on it (which in most cases is the management interface vmk0) to transfer non-performance-impacting vMotion data. Non-performance-impacting vMotion data includes the virtual machine's home directory, older deltas in the snapshot chain, base disks, and so on. Only the live data will hit the vMotion interface. The provisioning interface is nothing but a vmkernel interface with provisioning traffic enabled on it. The procedure to do this is very similar to how you would configure a vmkernel interface for Management or vMotion traffic: you edit the settings of the intended vmk interface and set Provisioning traffic as the enabled service (a command-line sketch follows at the end of this article).

It is important to keep in mind that the provisioning interface is not just meant for vMotion data; if enabled, it will also be used for cold migrations, cloning operations, and virtual machine snapshots. The provisioning interface can be configured to use a gateway other than the vmkernel's default gateway.

Further resources on this subject:
Cloning and Snapshots in VMware Workstation [article]
Essentials of VMware vSphere [article]
Upgrading VMware Virtual Infrastructure Setups [article]
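If you prefer the command line over the vSphere Web Client, here is a minimal sketch of enabling the provisioning service on a vmkernel interface with esxcli; vmk1 is a placeholder for whichever interface you intend to dedicate to this traffic, and this assumes an ESXi 6.0 or later host.

# Tag an existing vmkernel interface for provisioning traffic (vmk1 is a placeholder)
esxcli network ip interface tag add -i vmk1 -t vSphereProvisioning

# Verify which services are enabled on the interface
esxcli network ip interface tag get -i vmk1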


Creating Horizon Desktop Pools

Packt
16 May 2016
17 min read
A Horizon desktop pool is a collection of desktops that users select when they log in using the Horizon Client. A pool can be created based on a subset of users, such as finance, but this is not explicitly required unless you will be deploying multiple virtual desktop master images. The pool can be thought of as a central point of desktop management within Horizon; from it you create, manage, and entitle access to Horizon desktops. This article by Jason Ventresco, author of the book Implementing VMware Horizon View 6.X, will discuss how to create a desktop pool using the Horizon Administrator console, an important administrative task.

Creating a Horizon desktop pool

This section will provide an example of how to create two different Horizon dedicated assignment desktop pools, one based on Horizon Composer linked clones and another based on full clones. Horizon Instant Clone pools only support floating assignment, so they have fewer options compared to the other types of desktop pools. Also discussed will be how to use the Horizon Administrator console and the vSphere Client to monitor the provisioning process. The examples provided for full clone and linked clone pools create dedicated assignment pools, although floating assignment pools may be created as well. The options will be slightly different for each, so refer to the information provided in the Horizon documentation (https://www.vmware.com/support/pubs/view_pubs.html) to understand what each setting means. Additionally, the Horizon Administrator console often explains each setting within the desktop pool configuration screens.

Creating a pool using Horizon Composer linked clones

The following steps outline how to use the Horizon Administrator console to create a dedicated assignment desktop pool using Horizon Composer linked clones. As discussed previously, it is assumed that you already have a virtual desktop master image that you have created a snapshot of. During each stage of the pool creation process, a description of many of the settings is displayed on the right-hand side of the Add Desktop Pool window. In addition, a question mark appears next to some of the settings; click on it to read important information about the specified setting.

1. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon.
2. Open the Catalog | Desktop Pools window within the console.
3. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window.
4. In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button as shown in the following screenshot, and then click on Next >:
5. In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button and check the Enable automatic assignment checkbox as shown in the following screenshot, and then click on Next >:
6. In the Desktop Pool Definition | vCenter Server window, select the View Composer linked clones radio button, highlight the vCenter Server as shown in the following screenshot, and then click on Next >:
7. In the Setting | Desktop Pool Identification window, populate the pool ID: field as shown in the following screenshot, and optionally configure the Display Name: field. When finished, click on Next >:
8. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired.
When finished, click on Next >: In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool that include the desktop naming format, the number of desktops, and the number of desktops that should remain available during Horizon Composer maintenance operations. When finished, click on Next >: When creating a desktop naming pattern, use a {n} to instruct Horizon to insert a unique number in the desktop name. For example, using Win10x64{n} as shown in the preceding screenshot will name the first desktop Win10x641, the next Win10x642, and so on. In the Setting | View Composer Disks window, configure the settings for your optional linked clone disks. By default, both a Persistent Disk for user data and a non-persistent disk for Disposable File Redirection are created. When finished, click on Next >: In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN, and if not whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. In our example, we have checked the Use VMware Virtual SAN radio button as that is what our destination vSphere cluster is using. When finished, click on Next >: As all-flash storage arrays or all-flash or flash-dependent Software Defined Storage (SDS) platforms become more common, there is less of a need to place the shared linked clone replica disks on separate, faster datastores than the individual desktop OS disks. In the Setting | vCenter Settings window, we will need to configure six different options that include selecting the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: field to begin the process and open the Select Parent VM window: In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from, as shown in the following screenshot. Click on OK when the image is selected to return to the previous window: The virtual machine will only appear if a snapshot has been created. In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot, as shown in the following screenshot, and click on OK to return to the previous window: In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window, as shown in the following screenshot. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window: In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window, as shown in the following screenshot. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window: In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window, as shown in the following screenshot. If you intend to place the desktops within a resource pool you would select that here; if not select the same cluster or ESXi server you chose in the previous step. 
Once finished, click on OK to return to the previous window: In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Linked Clone Datastores window, as shown in the following screenshot. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window: If you were using storage other than VMware Virtual SAN, and had opted to use separate datastores for your OS and replica disks in step 11, you would have had to select unique datastores for each here instead of just one. Additionally, you would have had the option to configure the storage overcommit level. The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >: In the Setting | Advanced Storage Options window, if desired select and configure the Use View Storage Accelerator and Other Options check boxes to enable those features. In our example, we have enabled both the Use View Storage Accelerator and Reclaim VM disk space options, and configured Blackout Times to ensure that these operations do not occur between 8 A.M. (08:00) and 5 P.M. (17:00) on weekdays. When finished, click on Next >: The Use native NFS snapshots (VAAI) feature enables Horizon to leverage features of the a supported NFS storage array to offload the creation of linked clone desktops. If you are using an external array with your Horizon ESXi servers, consult the product documentation to understand if it supports this feature. Since we are using VMware Virtual SAN, this and other options under Other Options are greyed out as these settings are not needed. Additionally, if View Storage Accelerator is not enabled in the vCenter Server settings the option to use it would be greyed out here. In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, whether to Use QuickPrep or Use a customization specification (Sysprep), and any other options as required. When finished, click on Next >: In the Setting | Ready to Complete window, verify that the settings we selected were correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The Horizon desktop pool and virtual desktops will now be created. Creating a pool using Horizon Instant Clones The process used to create an Instant Clone desktop pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that has the Instant Clone option enabled in the Horizon agent, and that you have taken a snapshot of that master image. A master image can have either the Horizon Composer (linked clone) option or Instant Clone option enabled in the Horizon agent, but not both. To get around this restriction you can configure one snapshot of the master image with the View Composer option installed, and a second with the Instant Clone option installed. The following steps outline the process used to create the Instant Clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon. 
Open the Catalog | Desktop Pools window within the console. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window. In the Desktop Pool Definition | Type window, select the Automated Desktop Pool radio button as shown in the following screenshot, and then click on Next >. In the Desktop Pool Definition | User Assignment window, select the Floating radio button (mandatory for Instant Clone desktops), and then click on Next >. In the Desktop Pool Definition | vCenter Server window, select the View Composer linked clones radio button as shown in the following screenshot, highlight the vCenter server, and then click on Next >: If Instant Clones is greyed out here, it is usually because you did not select Floating in the previous step. In the Setting | Desktop Pool Identification window, populate the pool ID:, and then click on Next >. Optionally, configure the Display Name: field. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >. In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool that include the desktop naming format, the number of desktops, and the number of desktops that should remain available during maintenance operations. When finished, click on Next >. Instant Clones are required to always be powered on, so some options available to linked clones will be greyed out here. In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN, and if not whether or not to separate our Horizon desktop replica disks from the individual desktop OS disks. When finished, click on Next >. In the Setting | vCenter Settings window, we will need to configure six different options that include selecting the parent virtual machine, which snapshot of that virtual machine to use, what vCenter folder to place the desktops in, what vSphere cluster and resource pool to deploy the desktops to, and what datastores to use. Click on the Browse… button next to the Parent VM: Horizon Instant Clone field to begin the process and open the Select Parent VM window. In the Select Parent VM window, highlight the virtual desktop master image that you wish to deploy desktops from. Click on OK when the image is selected to return to the previous window. In the Setting | vCenter Settings window, click on the Browse… button next to the Snapshot: field to open the Select default image window. Select the desired snapshot, and click on OK to return to the previous window. In the Setting | vCenter Settings window, click on the Browse… button next to the VM folder location: field to open the VM Folder Location window. Select the folder within vCenter where you want the desktop virtual machines to be placed, and click on OK to return to the previous window. In the Setting | vCenter Settings window, click on the Browse… button next to the Host or cluster: field to open the Host or Cluster window. Select the cluster or individual ESXi server within vCenter where you want the desktop virtual machines to be created, and click on OK to return to the previous window. In the Setting | vCenter Settings window, click on the Browse… button next to the Resource pool: field to open the Resource Pool window. 
If you intend to place the desktops within a resource pool you would select that here; if not select the same cluster or ESXi server you chose in the previous step. Once finished, click on OK to return to the previous window. In the Setting | vCenter Settings window, click on the Browse… button next to the Datastores: field to open the Select Instant Clone Datastores window. Select the datastore or datastores where you want the desktops to be created, and click on OK to return to the previous window. The Setting | vCenter Settings window should now have all options selected, enabling the Next > button. When finished, click on Next >. In the Setting | Guest Customization window, select the Domain: where the desktops will be created, the AD container: where the computer accounts will be placed, and any other options as required. When finished, click on Next >. Instant Clones only support ClonePrep for customization, so there are fewer options here than seen when deploying a linked clone desktop pool. In the Setting | Ready to Complete window, verify that the settings we selected were correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The Horizon desktop pool and Instant Clone virtual desktops will now be created. Creating a pool using full clones The process used to create full clone desktops pool is similar to that used to create a linked clone pool. As discussed previously, it is assumed that you already have a virtual desktop master image that you have converted to a vSphere template. In addition, if you wish for Horizon to perform the virtual machine customization, you will need to create a Customization Specification using the vCenter Customization Specifications Manager. The Customization Specification is used by the Windows Sysprep utility to complete the guest customization process. Visit the VMware vSphere virtual machine administration guide (http://pubs.vmware.com/vsphere-60/index.jsp) for instructions on how to create a Customization Specification. The following steps outline the process used to create the full clone desktop pool. Screenshots are included only when the step differs significantly from the same step in the Creating a pool using Horizon Composer linked clones section. Log on to the Horizon Administrator console using an AD account that has administrative permissions within Horizon. Open the Catalog | Desktop Pools window within the console. Click on the Add… button in the Desktop Pools window to open the Add Desktop Pool window. In the Desktop Pool Definition | Type window select the Automated Pool radio button and then click on Next. In the Desktop Pool Definition | User Assignment window, select the Dedicated radio button, check the Enable automatic assignment checkbox, and then click on Next. In the Desktop Pool Definition | vCenter Server window, click the Full virtual machines radio button, highlight the desired vCenter server, and then click on Next. In the Setting | Desktop Pool Identification window, populate the pool ID: and Display Name: fields and then click on Next. In the Setting | Desktop Pool Settings window, configure the various settings for the desktop pool. These settings can also be adjusted later if desired. When finished, click on Next >. In the Setting | Provisioning Settings window, configure the various provisioning options for the desktop pool that include the desktop naming format and number of desktops. 
When finished, click on Next >. In the Setting | Storage Optimization window, we configure whether or not our desktop storage is provided by VMware Virtual SAN. When finished, click on Next >. In the Setting | vCenter Settings window, we will need to configure settings that set the virtual machine template, what vSphere folder to place the desktops in, which ESXi server or cluster to deploy the desktops to, and which datastores to use. Other than the Template setting described in the next step, each of these settings is identical to those seen when creating a Horizon Composer linked clone pool. Click on the Browse… button next to each of the settings in turn and select the appropriate options. To configure the Template: setting, select the vSphere template that you created from your virtual desktop master image as shown in the following screenshot, and then click OK to return to the previous window: A template will only appear if one is present within vCenter. Once all the settings in the Setting | vCenter Settings window have been configured, click on Next >. In the Setting | Advanced Storage Options window, if desired select and configure the Use View Storage Accelerator radio buttons and configure Blackout Times. When finished, click on Next >. In the Setting | Guest Customization window, select either the None | Customization will be done manually or Use this customization specification radio button, and if applicable select a customization specification. When finished, click on Next >. In the following screenshot, we have selected the Win10x64-HorizonFC customization specification that we previously created within vCenter: Manual customization is typically used when the template has been configured to run Sysprep automatically upon start up, without requiring any interaction from either Horizon or VMware vSphere. In the Setting | Ready to Complete window, verify that the settings we selected were correct, using the < Back button if needed to go back and make changes. If all the settings are correct, click on Finish to initiate the creation of the desktop pool. The desktop pool and virtual desktops will now be created. Summary In this article, we have learned about Horizon desktop pools. In addition to learning how to create three different types of desktop pools, we were introduced to a number of key concepts that are part of the pool creation process. Resources for Article: Further resources on this subject: Essentials of VMware vSphere [article] Cloning and Snapshots in VMware Workstation [article] An Introduction to VMware Horizon Mirage [article]
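As a follow-up to the walkthroughs above, it can save time to confirm the vSphere prerequisites from PowerCLI before starting the wizards: a snapshot on the master image for linked clone pools, and a template for full clone pools. This is only a hedged sketch; the server and object names below are placeholders rather than values from this article:

# Connect to the vCenter Server that Horizon uses (placeholder name)
Connect-VIServer -Server "vcenter.corp.local"

# Linked clone pools: the virtual desktop master image must have at least one snapshot
Get-VM -Name "Win10x64-Master" | Get-Snapshot | Select-Object Name, Created

# Full clone pools: a vSphere template created from the master image must exist
Get-Template -Name "Win10x64-Template" | Select-Object Name

Disconnect-VIServer -Server "vcenter.corp.local" -Confirm:$false

If either check returns nothing, the corresponding option simply will not appear in the Add Desktop Pool wizard, which matches the behavior noted in the steps above.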


Understanding Proxmox VE and Advanced Installation

Packt
13 Apr 2016
12 min read
In this article by Wasim Ahmed, the author of the book Mastering Proxmox - Second Edition, we will see that virtualization as we know it today is a decades-old technology that was first implemented in the mainframes of the 1960s. Virtualization was a way to logically divide the mainframe's resources for different application processing. With the rise in energy costs, running under-utilized server hardware is no longer a luxury. Virtualization enables us to do more with less, saving energy and money while creating a virtual green data center without geographical boundaries. (For more resources related to this topic, see here.) A hypervisor is a piece of software, hardware, or firmware that creates and manages virtual machines. It is the underlying platform or foundation that allows a virtual world to be built upon. In a way, it is the very building block of all virtualization. A bare metal hypervisor acts as a bridge between physical hardware and the virtual machines by creating an abstraction layer. Because of this unique feature, an entire virtual machine can be moved over a vast distance over the Internet and still function exactly the same. A virtual machine does not see the hardware directly; instead, it sees the layer of the hypervisor, which is the same no matter what hardware the hypervisor has been installed on. The Proxmox Virtual Environment (VE) is a cluster-based hypervisor and one of the best kept secrets in the virtualization world. The reason is simple. It allows you to build an enterprise business-class virtual infrastructure at a small business-class price tag without sacrificing stability, performance, and ease of use. Whether it is a massive data center serving millions of people, a small educational institution, or a home serving important family members, Proxmox can be configured to suit any situation. If you have picked up this article, no doubt you will be familiar with virtualization and perhaps well versed with other hypervisors, such as VMware, Xen, Hyper-V, and so on. In this article and upcoming articles, we will see the mighty power of Proxmox from the inside out. We will examine scenarios and create a complex virtual environment. We will tackle some heavy day-to-day issues and show resolutions, which might just save the day in a production environment. So, strap yourself in and let's dive into the virtual world with the mighty hypervisor, Proxmox VE. Understanding Proxmox features Before we dive in, it is necessary to understand why one should choose Proxmox over the other mainstream hypervisors. Proxmox is not perfect, but it stands out among other contenders with some hard to beat features. The following are some of the features that make Proxmox a real game changer. It is free! Yes, Proxmox is free! To be more accurate, Proxmox has several subscription levels, among which the community edition is completely free. One can simply download the Proxmox ISO at no cost and raise a fully functional cluster without missing a single feature and without paying anything. The main difference between the paid and community subscription levels is that the paid subscriptions receive updates that go through additional testing and refinement. If you are running a production cluster with real workloads, it is highly recommended that you purchase support and licensing from Proxmox or Proxmox resellers.
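One practical note on the subscription levels: a freshly installed node is configured to use the enterprise package repository, which requires a valid subscription key, so community edition users commonly switch to the no-subscription repository. The following is a minimal sketch only; the file name and repository line are assumptions based on a PVE 4.x install on Debian Jessie, so verify them against the official Proxmox wiki before use:

# Disable the enterprise repository (it requires a valid subscription key)
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# Add the no-subscription repository (assumed URL and suite for PVE 4.x on Jessie)
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" >> /etc/apt/sources.list

# Refresh package lists and apply updates
apt-get update && apt-get dist-upgrade

Production clusters should stay on the enterprise repository with a valid subscription, since those packages receive the additional testing and refinement mentioned above.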
Built-in firewall Proxmox VE comes with a robust firewall ready to be configured out of the box. This firewall can be configured to protect the entire Proxmox cluster down to a single virtual machine. The per-VM firewall option gives you the ability to configure each VM individually by creating individualized firewall rules, a prominent feature in a multi-tenant virtual environment. Open vSwitch Licensed under the Apache 2.0 license, Open vSwitch is a virtual switch designed to work in a multi-server virtual environment. All hypervisors need a bridge between VMs and the outside network. Open vSwitch enhances the features of the standard Linux bridge in an ever-changing virtual environment. Proxmox fully supports Open vSwitch, which allows you to create an intricate virtual environment while reducing virtual network management overhead. For details on Open vSwitch, refer to http://openvswitch.org/. The graphical user interface Proxmox comes with a fully functional graphical user interface, or GUI, out of the box. The GUI allows an administrator to manage and configure almost all aspects of a Proxmox cluster. The GUI has been designed with simplicity in mind, with functions and features separated into menus for easier navigation. The following screenshot shows an example of the Proxmox GUI dashboard: KVM virtual machines KVM, or Kernel-based Virtual Machine, is a kernel module that is added to Linux for full virtualization, creating isolated, fully independent virtual machines. KVM VMs are not dependent on the host operating system in any way, but they do require the virtualization feature in the BIOS to be enabled. KVM allows a wide variety of operating systems for virtual machines, such as Linux and Windows. Proxmox provides a very stable environment for KVM-based VMs. Linux containers or LXC Introduced recently in Proxmox VE 4.0, Linux containers allow multiple Linux instances on the same Linux host. All the containers are dependent on the host Linux operating system, and only Linux flavors can be virtualized as containers. There are no containers for the Windows operating system. LXC replaces the OpenVZ containers that were the primary container virtualization method in previous Proxmox versions. If you are not familiar with LXC, refer to https://linuxcontainers.org/ for details. Storage plugins Out of the box, Proxmox VE supports a variety of storage systems to store virtual disk images, ISO templates, backups, and so on. All plug-ins are quite stable and work great with Proxmox. Being able to choose different storage systems gives an administrator the flexibility to leverage the existing storage in the network. As of Proxmox VE 4.0, the following storage plug-ins are supported:
Local directory mount points
iSCSI
LVM group
NFS share
GlusterFS
Ceph RBD
ZFS
Vibrant culture Proxmox has a growing community of users who are always helping others to learn Proxmox and troubleshoot various issues. With so many active users around the world, and through the active participation of Proxmox developers, the community has now become a culture of its own. Feature requests are continuously being worked on, and the existing features are being strengthened on a regular basis. With so many users supporting Proxmox, it is surely here to stay.
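Relating to the storage plug-ins listed above, storage back ends are defined centrally in /etc/pve/storage.cfg or through the GUI under Datacenter | Storage. The following excerpt is only an illustrative sketch of what an NFS entry might look like; the storage ID, server address, and export path are invented for the example, so adapt them to your environment and verify the key names against the Proxmox storage documentation:

# /etc/pve/storage.cfg (excerpt)
nfs: nfs-backup
        path /mnt/pve/nfs-backup
        server 192.168.1.50
        export /srv/proxmox/backup
        content iso,backup
        options vers=3
        maxfiles 3

Because /etc/pve is replicated by the Proxmox cluster file system, a storage definition added on one node becomes available to every node in the cluster.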
The basic installation of Proxmox The installation of a Proxmox node is very straightforward. Simply accept the default options, select the localization, and enter the network information to install Proxmox VE. We can summarize the installation process in the following steps:
Download the ISO from the official Proxmox site (http://proxmox.com/en/downloads) and prepare a disc with the image.
Boot the node with the disc and hit Enter to start the installation from the installation GUI. We can also install Proxmox from a USB drive.
Progress through the prompts to select options or type in information.
After the installation is complete, access the Proxmox GUI dashboard using the IP address, as follows: https://<proxmox_node_ip>:8006
In some cases, it may be necessary to open the firewall port to allow access to the GUI over port 8006. The advanced installation option Although the basic installation works in all scenarios, there may be times when the advanced installation option is necessary. Only the advanced installation option provides you the ability to customize the main OS drive. A common practice for the operating system drive is to use a mirrored RAID array using a controller interface. This provides drive redundancy if one of the drives fails. The same level of redundancy can also be achieved using a software-based RAID array, such as ZFS. Proxmox now offers options to select ZFS-based arrays for the operating system drive right at the beginning of the installation. If you are not familiar with ZFS, refer to https://en.wikipedia.org/wiki/ZFS for details. It is a common question to ask why one should choose ZFS software RAID over tried and tested hardware-based RAID. The simple answer is flexibility. A hardware RAID array is locked to, or fully dependent on, the hardware RAID controller interface that created it, whereas a ZFS software-based array is not dependent on any particular hardware and can easily be ported to different hardware nodes. Should a RAID controller failure occur, the entire array created by that controller is lost unless an identical controller interface is available as a replacement. A ZFS array is only lost when all the drives, or more than the maximum tolerable number of drives, are lost from the array. Besides ZFS, we can also select other filesystem types, such as ext3, ext4, or xfs, from the same advanced option. We can also set custom disk or partition sizes through the advanced option. The following screenshot shows the installation interface with the Target Hard disk selection page: Click on Options, as shown in the preceding screenshot, to open the advanced options for the hard disk. The following screenshot shows the option window after clicking on the Options button: In the preceding screenshot, we selected ZFS RAID1 for mirroring and the two drives, Harddisk 0 and Harddisk 1, on which to install Proxmox. If we pick one of the filesystems such as ext3, ext4, or xfs instead of ZFS, the Hard disk Option dialog box will look like the following screenshot, with a different set of options: Selecting a filesystem gives us the following advanced options:
hdsize: This is the total drive size to be used by the Proxmox installation.
swapsize: This defines the swap partition size.
maxroot: This defines the maximum size to be used by the root partition.
minfree: This defines the minimum free space that should remain after the Proxmox installation.
maxvz: This defines the maximum size for the data partition. This is usually /var/lib/vz.
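If you selected ZFS RAID1 during the advanced installation, it is worth confirming the health of the mirror once the node has booted. The following is a quick sketch; the pool name rpool is what the installer typically creates, but treat that as an assumption and check your own system:

# Show the state of the ZFS pool created by the installer (usually named rpool)
zpool status rpool

# List the datasets and the space they use
zfs list

# Show overall pool capacity and health at a glance
zpool list

A healthy mirror reports the pool state as ONLINE, with both drives listed under the mirror vdev.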
Debugging the Proxmox installation Debugging features are part of any good operating system. Proxmox has debugging features that will help you during a failed installation. Some common reasons for failure are unsupported hardware, conflicts between devices, ISO image errors, and so on. Debugging mode logs and displays installation activities in real time. When the standard installation fails, we can start the Proxmox installation in debug mode from the main installation interface, as shown in the following screenshot: The debug installation mode will drop us at the following prompt. To start the installation, we need to press Ctrl + D. When there is an error during the installation, we can simply press Ctrl + C to get back to this console to continue with our investigation: From the console, we can check the installation log using the following command: # cat /tmp/install.log From the main installation menu, we can also press e to enter edit mode to change the loader information, as shown in the following screenshot: At times, it may be necessary to edit the loader information when normal booting does not function. This is a common case when Proxmox is unable to show the video output due to UEFI or an unsupported resolution. In such cases, the booting process may hang. One way to continue with booting is to add the nomodeset argument by editing the loader. The loader will look as follows after editing: linux/boot/linux26 ro ramdisk_size=16777216 rw quiet nomodeset Customizing the Proxmox splash screen When building a custom Proxmox solution, it may be necessary to change the default blue splash screen to something more appealing in order to identify the company or department the server belongs to. In this section, we will see how easily we can integrate any image as the splash screen background. The splash screen image must be in the .tga format and must have a fixed standard size, such as 640 x 480, 800 x 600, or 1024 x 768. If you do not have any image software that supports the .tga format, you can easily convert a jpg, gif, or png image to the .tga format using a free online image converter (http://image.online-convert.com/convert-to-tga). Once the desired image is ready in the .tga format, the following steps will integrate the image as the Proxmox splash screen:
Copy the .tga image to the /boot/grub directory on the Proxmox node.
Edit the grub file in /etc/default/grub to add the following line, and save the file: GRUB_BACKGROUND=/boot/grub/<image_name>.tga
Run the following command to update the grub configuration: # update-grub
Reboot.
The following screenshot shows an example of how the splash screen may look after we add a custom image to it: Picture courtesy of www.techcitynews.com We can also change the font color to make it properly visible, depending on the custom image used. To change the font color, edit the Debian theme file in /etc/grub.d/05_debian_theme and find the following line of code: set_background_image "${GRUB_BACKGROUND}" || set_default_theme Edit the line to add the font color, as shown in the following format. In our example, we have changed the font color to black and the highlight color to light blue: set_background_image "${GRUB_BACKGROUND}" "black/black" "light-blue/black" || set_default_theme After making the necessary changes, update grub and reboot to see the changes. Summary In this article, we looked at why Proxmox is a better option as a hypervisor, what advanced installation options are available during an installation, and why we might choose software RAID for the operating system drive. We also looked at the cost of Proxmox, the storage options, and the network flexibility provided by Open vSwitch.
We also learned about the built-in debugging features and the customization options for the Proxmox splash screen. In the next article, we will take a closer look at the Proxmox GUI and see how easy it is to centrally manage a Proxmox cluster from a web browser. Resources for Article: Further resources on this subject: Proxmox VE Fundamentals [article] Basic Concepts of Proxmox Virtual Environment [article]

Proxmox VE Fundamentals

Packt
04 Apr 2016
12 min read
In this article, written by Rik Goldman, author of the book Learning Proxmox VE, we introduce you to Proxmox Virtual Environment (PVE), a mature, complete, well-supported, enterprise-class virtualization environment for servers. It is an open source tool—based on the Debian GNU/Linux distribution—that manages containers, virtual machines, storage, virtualized networks, and high-availability clustering through a well-designed, web-based interface or via the command-line interface. (For more resources related to this topic, see here.) Developers provided the first stable release of Proxmox VE in 2008; four years and eight point releases later, ZDNet's Ken Hess boldly, but quite sensibly, declared Proxmox VE the ultimate hypervisor in Proxmox: The Ultimate Hypervisor (http://www.zdnet.com/article/proxmox-the-ultimate-hypervisor/). Four years later, PVE is on version 4.1, in use on at least 90,000 hosts, with more than 500 commercial customers in 140 countries; the web-based administrative interface itself is translated into nineteen languages. This article will explore the fundamental technologies underlying PVE's hypervisor features: LXC, KVM, and QEMU. To do so, we will develop a working understanding of virtual machines, containers, and their appropriate use. We will cover the following topics:
Proxmox VE in brief
Virtualization and containerization with PVE
Proxmox VE virtual machines, KVM, and QEMU
Containerization with PVE and LXC
Proxmox VE in brief With Proxmox VE, Proxmox Server Solutions GmbH (https://www.proxmox.com/en/about) provides us with an enterprise-ready, open source type II hypervisor. Later, you'll find some of the features that make Proxmox VE such a strong enterprise candidate. The license for Proxmox VE is very deliberately the GNU Affero General Public License (V3) (https://www.gnu.org/licenses/agpl-3.0.html). From among the many free and open source compatible licenses available, this is a significant choice because it is "specifically designed to ensure cooperation with the community in the case of network server software." PVE is primarily administered from an integrated web interface, or from the command line locally or via SSH. Consequently, there is no need for a separate management server and the associated expenditure. In this way, Proxmox VE significantly contrasts with alternative enterprise virtualization solutions by vendors such as VMware. Proxmox VE instances/nodes can be incorporated into PVE clusters and centrally administered from a unified web interface. Proxmox VE provides for live migration—the movement of a virtual machine or container from one cluster node to another without any disruption of services. This is a feature rather unique to PVE and not common among competing products. The following comparison contrasts Proxmox VE and VMware vSphere features:
Hardware requirements: Proxmox VE - flexible; vSphere - strict compliance with the HCL.
Integrated management interface: Proxmox VE - web- and shell-based (browser and SSH); vSphere - no, it requires a dedicated management server at additional cost.
Simple subscription structure: Proxmox VE - yes, based on the number of premium support tickets per year and the CPU socket count; vSphere - no.
High availability: Proxmox VE - yes; vSphere - yes.
VM live migration: Proxmox VE - yes; vSphere - yes.
Supports containers: Proxmox VE - yes; vSphere - no.
Virtual machine OS support: Proxmox VE - Windows and Linux; vSphere - Windows, Linux, and Unix.
Community support: Proxmox VE - yes; vSphere - no.
Live VM snapshots: Proxmox VE - yes; vSphere - yes.
For a complete catalog of features, see the Proxmox VE datasheet at https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Datasheet.pdf.
Like its competitors, PVE is a hypervisor: a typical hypervisor is software that creates, runs, configures, and manages virtual machines based on an administrator or engineer's choices. PVE is known as a type II hypervisor because the virtualization layer is built upon an operating system; it runs directly over that operating system. In Proxmox VE's case, the operating system is Debian; since the release of PVE 4.0, the underlying operating system has been Debian "Jessie." By contrast, a type I hypervisor (such as VMware's ESXi) runs directly on bare metal without the mediation of an operating system. It has no additional function beyond managing virtualization and the physical hardware. As a type II hypervisor, Proxmox VE is built on the Debian project. Debian is a GNU/Linux distribution renowned for its reliability, commitment to security, and its thriving and dedicated community of contributing developers. Debian-based GNU/Linux distributions are arguably the most popular GNU/Linux distributions for the desktop. One characteristic that distinguishes Debian from competing distributions is its release policy: Debian releases only when its development community can stand behind it for its stability, security, and usability. Debian does not distinguish between long-term support releases and regular releases as some other distributions do. Instead, all Debian releases receive strong support and critical updates through the first year following the next release. (Since 2007, a major release of Debian has been made about every two years. Debian 8, Jessie, was released just about on schedule in 2015.) Proxmox VE's reliance on Debian is thus a testament to its commitment to these values: stability, security, and usability over scheduled releases that favor cutting-edge features. PVE provides its virtualization functionality through three open technologies, and through the efficiency with which they're integrated by its administrative web interface:
LXC
KVM
QEMU
To understand how this foundation serves Proxmox VE, we must first be able to clearly understand the relationship between virtualization (or, specifically, hardware virtualization) and containerization (OS virtualization). As we proceed, their respective use cases should become clear. Virtualization and containerization with Proxmox VE It is correct to ultimately understand containerization as a type of virtualization. However, here, we'll look first to conceptually distinguish a virtual machine from a container by focusing on contrasting characteristics. Simply put, virtualization is a technique through which we provide fully functional computing resources without a demand for the resources' physical organization, locations, or relative proximity. Briefly put, virtualization technology allows you to share and allocate the resources of a physical computer into multiple execution environments. Without context, virtualization is a vague term, encapsulating the abstraction of such resources as storage, networks, servers, desktop environments, and even applications from their concrete hardware requirements through software implementation solutions called hypervisors.
Virtualization thus affords us more flexibility, more functionality, and a significant positive impact on our budgets—often realized with merely the resources we have at hand. In terms of PVE, virtualization most commonly refers to the abstraction of all aspects of a discrete computing system from its hardware. In this context, virtualization is the creation, in other words, of a virtual machine or VM, with its own operating system and applications. A VM may be initially understood as a computer that has the same functionality as a physical machine. Likewise, it may be incorporated and communicated with via a network exactly as a machine with physical hardware would. Put yet another way, from inside a VM, we will experience no difference from which we can distinguish it from a physical computer. The virtual machine, moreover, hasn't the physical footprint of its physical counterparts. The hardware it relies on is, in fact, provided by software that borrows from the hardware resources from a host installed on a physical machine (or bare metal). Nevertheless, the software components of the virtual machine, from the applications to the operating system, are distinctly separated from those of the host machine. This advantage is realized when it comes to allocating physical space for resources. For example, we may have a PVE server running a web server, database server, firewall, and log management system—all as discrete virtual machines. Rather than consuming the physical space, resources, and labor of maintaining four physical machines, we simply make physical room for the single Proxmox VE server and configure an appropriate virtual LAN as necessary. In a white paper entitled Putting Server Virtualization to Work, AMD articulates well the benefits of virtualization to businesses and developers (https://www.amd.com/Documents/32951B_Virtual_WP.pdf): Top 5 business benefits of virtualization: Increases server utilization Improves service levels Streamlines manageability and security Decreases hardware costs Reduces facility costs The benefits of virtualization with a development and test environment: Lowers capital and space requirements. Lowers power and cooling costs Increases efficiencies through shorter test cycles Faster time-to-market To these benefits, let's add portability and encapsulation: the unique ability to migrate a live VM from one PVE host to another—without suffering a service outage. Proxmox VE makes the creation and control of virtual machines possible through the combined use of two free and open source technologies: Kernel-based Virtual Machine (or KVM) and Quick Emulator (QEMU). Used together, we refer to this integration of tools as KVM-QEMU. KVM KVM has been an integral part of the Linux kernel since February, 2007. This kernel module allows GNU/Linux users and administrators to take advantage of an architecture's hardware virtualization extensions; for our purposes, these extensions are AMD's AMD-V and Intel's VT-X for the x86_64 architecture. To really make the most of Proxmox VE's feature set, you'll therefore very much want to install on an x86_64 machine with a CPU with integrated virtualization extensions. For a full list of AMD and Intel processors supported by KVM, visit Intel at http://ark.intel.com/Products/VirtualizationTechnology or AMD at http://support.amd.com/en-us/kb-articles/Pages/GPU120AMDRVICPUsHyperVWin8.aspx. QEMU QEMU provides an emulation and virtualization interface that can be scripted or otherwise controlled by a user. 
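Proxmox VE drives this KVM-QEMU pairing through its web interface and through its qm command-line tool. As a rough, hedged sketch of what that looks like from the PVE shell (the VM ID, storage names, bridge, and ISO path below are illustrative assumptions, not values taken from this article):

# Create a KVM virtual machine with ID 100, 2 GB of RAM, and 2 cores
qm create 100 --name debian-test --memory 2048 --cores 2 --ostype l26 \
  --net0 virtio,bridge=vmbr0 \
  --virtio0 local-lvm:32 \
  --ide2 local:iso/debian-8-netinst.iso,media=cdrom

# Start the VM and confirm it is running
qm start 100
qm status 100

Contrast this with the raw qemu-system invocation shown a little further on to appreciate how much of the bookkeeping PVE handles on our behalf.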
Visualizing the relationship between KVM and QEMU Without Proxmox VE, we could essentially define the hardware, create a virtual disk, and start and stop a virtualized server from the command line using QEMU. Alternatively, we could rely on any one of an array of GUI frontends for QEMU (a list of GUIs available for various platforms can be found at http://wiki.qemu.org/Links#GUI_Front_Ends). Of course, working with these solutions is productive only if you're interested in what goes on behind the scenes in PVE when virtual machines are defined. Proxmox VE's management of virtual machines is itself managing QEMU through its API. Managing QEMU from the command line can be tedious. The following is a line from a script that launched Raspbian, a Debian remix intended for the architecture of the Raspberry Pi, on an x86 Intel machine running Ubuntu. When we see how easy it is to manage VMs from Proxmox VE's administrative interfaces, we'll sincerely appreciate that relative simplicity: qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda ./$raspbian_img -hdb swap If you're familiar with QEMU's emulation features, it's perhaps important to note that we can't manage emulation through the tools and features Proxmox VE provides—despite its reliance on QEMU. From a bash shell provided by Debian, it's possible. However, the emulation can't be controlled through PVE's administration and management interfaces. Containerization with Proxmox VE Containers are a class of virtual machines (as containerization has enjoyed a renaissance since 2005, the term OS virtualization has become synonymous with containerization and is often used for clarity). However, by way of contrast with VMs, containers share operating system components, such as libraries and binaries, with the host operating system; a virtual machine does not. Visually contrasting virtual machines with containers The container advantage This arrangement potentially allows a container to run leaner and with fewer hardware resources borrowed from the host. For many authors, pundits, and users, containers also offer a demonstrable advantage in terms of speed and efficiency. (However, it should be noted here that as resources such as RAM and more powerful CPUs become cheaper, this advantage will diminish.) The Proxmox VE container is made possible through LXC from version 4.0 (it's made possible through OpenVZ in previous PVE versions). LXC is the third fundamental technology serving Proxmox VE's ultimate interest. Like KVM and QEMU, LXC (or Linux Containers) is an open source technology. It allows a host to run, and an administrator to manage, multiple operating system instances as isolated containers on a single physical host. Conceptually then, a container very clearly represents a class of virtualization, rather than an opposing concept. Nevertheless, it's helpful to maintain a clear distinction between a virtual machine and a container as we come to terms with PVE. The ideal implementation of a Proxmox VE guest is contingent on our distinguishing and choosing between a virtual-machine solution and a container solution. Since Proxmox VE containers share components of the host operating system and can offer advantages in terms of efficiency, this text will guide you through the creation of containers whenever the intended guest can be fully realized with Debian Jessie as our hypervisor's operating system without sacrificing features. 
When our intent is a guest running a Microsoft Windows operating system, for example, a Proxmox VE container ceases to be a solution. In such a case, we turn, instead, to creating a virtual machine. We must rely on a VM precisely because the operating system components that Debian can share with a Linux container are not components a Microsoft Windows operating system can make use of. Summary In this article, we have come to terms with the three open source technologies that provide Proxmox VE's foundational features: containerization and virtualization with LXC, KVM, and QEMU. Along the way, we've come to understand that containers, while being a type of virtualization, have characteristics that distinguish them from virtual machines. These differences will be crucial as we determine which technology to rely on for a virtual server solution with Proxmox VE. Resources for Article: Further resources on this subject: Deploying App-V 5 in a Virtual Environment[article] Setting Up a Spark Virtual Environment[article] Basic Concepts of Proxmox Virtual Environment[article]


Architectural and Feature Overview

Packt
22 Feb 2016
12 min read
In this article by Giordano Scalzo, the author of Learning VMware App Volumes, we are going to look a little deeper into the different component parts that make up an App Volumes solution. Then, once you are familiar with these different components, we will discuss how they fit and work together. (For more resources related to this topic, see here.) App Volumes Components We are going to start by covering an overview of the different core components that make up the complete App Volumes solution, a glossary if you like. These are either the component parts of the actual App Volumes solution or additional components that are required to build your complete environment. App Volumes Manager The App Volumes Manager is the heart of the solution. Installed on a Windows Server operating system, the App Volumes Manager controls the application delivery engine and also provides you with access to a web-based dashboard and console from where you can manage your entire App Volumes environment. You will get your first glimpse of the App Volumes Manager when you complete the installation process and start the post-installation tasks, where you will configure details about your virtual host servers, storage, Active Directory, and other environment variables. Once you have completed the installation tasks, you will use the App Volumes Manager to perform tasks such as creating new and updating existing AppStacks, creating Writable Volumes, and assigning both AppStacks and Writable Volumes to end users or virtual desktop machines. The App Volumes Manager also manages any virtual desktop machine that has the App Volumes Agent installed. Once a virtual desktop machine has the agent installed, it will appear within the App Volumes Manager inventory so that you are able to configure assignments. In summary, the App Volumes Manager provides the following functionality:
Orchestrates the key infrastructure components, such as Active Directory, AppStack or Writable Volume attachments, and the virtual hosting infrastructure (ESXi hosts and vCenter Servers)
Manages assignments of AppStacks or Writable Volumes to users, groups, and virtual desktop machines
Collates AppStack and Writable Volume usage
Provides a history of administrative actions
Acts as a broker for the App Volumes agents for the automated assignment of AppStacks and Writable Volumes as virtual desktop machines boot up and end users log in
Provides a web-based graphical interface from which to manage the entire environment
Throughout this article you will see the following icon used in any drawings or schematics to denote the App Volumes Manager. App Volumes Agent The App Volumes Agent is installed onto a virtual desktop machine on which you want to be able to attach AppStacks or Writable Volumes, and runs as a service on that machine. As such, it is invisible to the end user. When you attach an AppStack or Writable Volume to a virtual machine, the agent acts as a filter driver and takes care of any application calls and file system redirects between the operating system and the AppStack or Writable Volume. Rather than seeing your AppStack, which appears as an additional hard drive within the operating system, the agent makes the applications appear as if they were natively installed. So, for example, the icons for your applications will automatically appear on your desktop/taskbar. The App Volumes Agent is also responsible for registering the virtual machine with the App Volumes Manager.
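Because the agent has to register with, and pull its assignments from, the App Volumes Manager, a simple connectivity check from the virtual desktop machine is a useful first troubleshooting step. The following PowerShell sketch assumes the Manager answers on HTTPS port 443 (it may be HTTP port 80, depending on how it was installed), uses a made-up hostname, and assumes the agent service name, so adjust all three to match your environment:

# Confirm the App Volumes Agent service is present and running (service name assumed)
Get-Service -Name "svservice" -ErrorAction SilentlyContinue

# Test TCP connectivity from this desktop to the App Volumes Manager
Test-NetConnection -ComputerName "appvol-mgr.corp.local" -Port 443

If the port test fails, check the firewall ports discussed later in the article before looking any deeper at the agent itself.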
Throughout this article, you will see the following icon used in any drawings or schematics to denote the App Volumes Agent. The App Volumes Agent can also be installed onto an RDSH host server to allow the attaching of AppStacks within a hosted applications environment. AppStacks An AppStack is a read-only volume that contains your applications, which is mounted as a Virtual Machine Disk file (VMDK) for VMware environments, or as a Virtual Hard Disk file (VHD) for Citrix and Microsoft environments on your virtual desktop machine, or RDSH host server. An AppStack is created using a provisioning machine, which has the App Volumes Agent installed on it. Then, as a part of the provisioning process, you mount an empty container (VMDK or VHD file) and then install the application(s) as you would do normally. The App Volumes Agent redirects the installation files, file system, and registry settings to the AppStack. Once completed, AppStack is set to read-only, which then allows one AppStack to be used for multiple users. This not only helps you reduce the storage requirements (an App Stack is also thin provisioned) but also allows any application that is delivered via AppStack to be centrally managed and updated. AppStacks are then delivered to the end users either as individual user assignments or via group membership using Active Directory. Throughout this article, you will see the following icon used in any drawings or schematics to denote AppStack. Writable Volumes One of the many use cases that was not best suited to a virtual desktop environment was that of developers, where they would need to install various different applications and other software. To cater for this use case, you would need to deploy a dedicated, persistent desktop to meet their requirements. This method of deployment is not necessarily the most cost-effective method, which potentially requires additional infrastructure resources, and management. With App Volumes, this all changes with the Writable Volumes feature. In the same way as you assign AppStack containing preinstalled and configured applications to an end user, with Writable Volumes, you attach an empty container as a VMDK file to their virtual desktop machine into which they can install their own applications. This virtual desktop machine will be running the App Volumes Agent, which provides the filter between any applications that the end user installs into the Writable Volume and the native operating system of the virtual desktop machine. The user then has their own drive onto which they can install applications. Now you can deploy nonpersistent, floating desktops for these users and attach not only their corporate applications via AppStacks, but also their own user-installed applications via a Writable Volume. Throughout this article, you will see the following icon used in any drawings or schematics to denote a Writable Volume. Provisioning virtual machine Although not an actual part of the App Volumes software, a key component is to have a clean virtual desktop machine to use as reference point from which to create your AppStacks from. This is known as the provisioning machine. Once you have your provisioning virtual desktop machine, you first install the App Volumes Agent onto it. Then, from the App Volumes Manager, you initiate the provisioning process, which attaches an empty VMDK file to the provisioning virtual desktop machine, and then prompts you, as the IT admin, to install the application. 
Before you start the installation of the application(s) that you are going to capture as an AppStack, it's good practice to take a snapshot of the provisioning machine. In this way, you can roll back to your clean virtual desktop machine state from before the installation, ready to create the next AppStack. Throughout this article, you will see the following icon used in any drawings or schematics to denote a provisioning machine. The Broker Integration service The Broker Integration service is installed on a VMware Horizon View Connection Server, and it provides faster logon times for the end users who have access to a Horizon View virtual desktop machine. Throughout this article, you will see the following icon used in any drawings or schematics to denote the Broker Integration service. Storage Groups Again, although not a specific component of App Volumes, you have the ability to define Storage Groups to store your AppStacks and Writable Volumes. Storage Groups are primarily used to provide replication of AppStacks and to distribute Writable Volumes across multiple datastores. With AppStack storage groups, you can define a group of datastores that will be used to store the same AppStacks, enabling replication to be automatically deployed on those datastores. With Writable Volumes, only some of the storage group settings apply, for example, the template location and the distribution strategy. The distribution strategy defines how Writable Volumes are distributed across the storage group. There are two settings for this, as described here:
Spread: This will distribute files evenly across all the storage locations. When a file is created, the storage with the most available space is used.
Round-Robin: This works by distributing the Writable Volume files sequentially, using the storage location that was used the longest amount of time ago.
In this article, you will see the following icon used in any drawings or schematics to denote storage groups. We have now introduced you to the core components that make up an App Volumes deployment. App Volumes Architecture Now that you understand what each of the individual components is used for, the next step is to look at how they all fit together to form the complete solution. We are going to break the architecture down into two parts. The first part will be focused on application delivery and the virtual desktop machines from an end user's perspective. In the second part, we will look more at the supporting and underlying infrastructure, that is, the view from an IT administrator's point of view. Finally, in the infrastructure section, we will put a networking hat on and illustrate the various network ports we are going to require to be available to us. So let's go back and look at our first part, what the end user will see. In this example, we have a virtual desktop machine running a Windows operating system as the starting point of our solution. Onto that virtual desktop machine, we have installed the App Volumes Agent. We also have some core applications already installed onto this virtual desktop machine as part of the core/parent image. These would be applications that are delivered to every user, such as Adobe Reader, for example. This is exactly the same best practice as we would normally follow in any other virtual desktop environment. The updates here would be taken care of by updating the parent image and then using the recompose feature of linked clones in Horizon View.
With the agent installed, the virtual desktop machine will appear in the App Volumes Manager console, from where we can start to assign AppStacks to our Active Directory users and groups. When a user who has been assigned an AppStack or Writable Volume logs in to a virtual desktop machine, the AppStack that has been assigned to them will be attached to that virtual desktop machine, and the applications within that AppStack will seamlessly appear on the desktop. Users will also have access to their Writable Volume. The following diagram illustrates an example deployment from the virtual desktop machine's perspective, as we have just described. Moving on to the second part of our focus on the architecture, we are now going to look at the underlying/supporting infrastructure. As a starting point, all of our infrastructure components have been deployed as virtual machines. They are hosted on the VMware vSphere platform. The following diagram illustrates the infrastructure components and how they fit together to deliver the applications to the virtual desktop machines. In the top section of the diagram, we have the virtual desktop machine running our Windows operating system with the App Volumes Agent installed. Along with acting as the filter driver, the agent talks to the App Volumes Manager (1) to read user assignment information for who can access which AppStacks and Writable Volumes. The App Volumes Manager also communicates with Active Directory (2) to read user, group, and machine information to assign AppStacks and Writable Volumes. The virtual desktop machine also talks to Active Directory to authenticate user logins (3). The App Volumes Manager also needs access to a SQL database (4), which stores the information about the assignments, AppStacks, Writable Volumes, and so on. A SQL database is also a requirement for vCenter Server (5), and if you are using the linked clone function of Horizon View, then a database is required for the View Composer. The final part of this diagram shows the App Volumes storage groups that are used to store the AppStacks and the Writable Volumes. These get mounted to the virtual desktop machines as virtual disks or VMDK files (6). Following on from the architecture and how the different components fit together and communicate, we are now going to cover which ports need to be open to allow communication between the various services and components.

Network ports

Now, we are going to cover the firewall ports that are required to be open in order for the App Volumes components to communicate with the other infrastructure components. The diagram here shows the port numbers (highlighted in the boxes) that are required to be open for each component to communicate. It's worth ensuring that these ports are configured before you start the deployment of App Volumes.

Summary

In this article, we introduced you to the individual components that make up the App Volumes solution and what task each of them performs. We then went on to look at how those components fit into the overall solution architecture, as well as how the architecture works.

Resources for Article: Further resources on this subject: Elastic Load Balancing [article] Working With CEPH Block Device [article] Android and IOs Apps Testing At A Glance [article]


The Configuration Manager Troubleshooting Toolkit

Packt
18 Jan 2016
8 min read
In this article by Peter Egerton and Gerry Hampson, the authors of the book Troubleshooting System Center Configuration Manager, you will be able to dive deeper into Configuration Manager troubleshooting concepts. In order to successfully troubleshoot Configuration Manager, there are a number of tools that are recommended to be always kept in your troubleshooting toolkit. These include a mixture of Microsoft-provided tools, third-party tools, and some community-developed tools. Best of all, they are free. As you might expect with the broad scope of functionality within Configuration Manager, there are also quite a variety of different utilities out there, so we need to know how to use the right tool for the problem. We are going to take a look at some commonly used tools and some not so commonly used ones and see what they do and where we can use them. These are not necessarily the be-all and end-all, but they will certainly help us get on the way to solving problems and undoubtedly save some time. In this article, we are going to cover the following:

Registry editor
Group policy tools
Log file viewer
PowerShell
Community tools

(For more resources related to this topic, see here.)

Registry Editor

Also worth a mention is the Registry Editor that is built into Microsoft Windows on both server and client operating systems. Most IT administrators know this as regedit.exe, and it is the default tool of choice for making any changes to, or simply viewing the contents of, a registry key or value. Many of the Configuration Manager roles and clients allow us to make changes that enable features such as extended logging, or to manually change policy settings, by using the registry to do so. It should be noted that changing the registry is not something that should be taken lightly, however, as making incorrect changes can result in creating more problems, not just in Configuration Manager but in the operating system as a whole. If we stick to the published settings though, we should be fine, and this can be a fine tool while troubleshooting oddities and problems in a Configuration Manager environment.

Group Policy Tools

As Configuration Manager is a client management tool, there are certain features and settings on a client, such as software updates, that may conflict with settings defined in Group Policy. In particular, in larger organizations, it can often be useful to compare and contrast the settings that may conflict between Group Policy and Configuration Manager. Using integrated tools such as Resultant Set of Policy (RSoP) and Group Policy Result (gpresult.exe), or the Group Policy Management Console that is part of the Remote Server Administration Tools (RSAT), can help identify where and why clients are not functioning as expected. We can then move forward and amend group policies as and where required using the Group Policy object editor. Used in combination, these tools can prove essential while dealing with Configuration Manager clients in particular.

Log file viewer

Those who have spent any time at all working with Configuration Manager will know that it contains quite a few log files, literally hundreds. We will go through the log files in more detail in the next chapter, but we will need to use something to read the logs. We can use something as simple as Notepad, and to an extent there are some advantages with using this as it is a no-nonsense text reader.
Having said that, generally speaking most people want a little more when it comes to reading Configuration Manager logs, as they can often be long, complex, and frequently refreshed. We have already seen one example of a log viewer as part of the Configuration Manager Support Center, but Configuration Manager includes its own log file viewer that is tailored to the needs of troubleshooting the product logs. In Configuration Manager 2012 versions, we are provided with CMTrace.exe. The previous versions provided us with Trace32.exe or SMSTrace.exe. They are very similar tools, but we will highlight some of the features of CMTrace, which is the more modern of the two. To begin with, we can typically find CMTrace in the following locations:

<INSTALL DRIVE>\Program Files\Microsoft Configuration Manager\Tools\CMTrace.exe
<INSTALL MEDIA>\SMSSETUP\TOOLS\CMTrace.exe

Those that are running Configuration Manager 2012 R2 and up also have CMTrace available out of the box in WinPE when running Operating System Deployments. We can simply hit F8 if we have command support enabled in the WinPE image and type CMTrace. This can also be added to the later stages of a task sequence when running in the full operating system by copying the file onto the hard disk. The single biggest advantage of using CMTrace over a standard text reader is that it is a tail reader, which by default refreshes every 500 milliseconds or, in other words, updates the window as new lines are logged in the log file; we also have the functionality to pause the file. CMTrace also allows filtering of the log based on certain conditions, and there is a highlight feature which can highlight a whole line in yellow if a word we are looking for is found on the line. The program automatically highlights lines if certain words such as error or warning are found, which is useful but can also be a red herring at times, so this is something to be aware of if we come across logs with these keywords. We can also merge log files; this is particularly useful when looking at time-critical incidents, as we can analyze data from multiple sources in the order they happened and understand the flow of information between the different components.

PowerShell

PowerShell is here to stay. A phrase often heard recently is "Learn PowerShell or learn golf", and like it or not, you cannot get away from the emphasis on this home-grown product from Microsoft. This is evident in just about all the current products, as PowerShell is so deeply embedded. Configuration Manager is no exception to this, and although we cannot quite do everything you can in the console, there are an increasing number of cmdlets becoming available, more than 500 at the time of writing. So the question we may ask is where does this come into troubleshooting? Well, for the uninitiated in PowerShell, maybe it won't be the first tool they turn to, but with some experience, we can soon find that performing things like WMI queries and typical console tasks can be made quicker and slicker with PowerShell. If we prefer, we can also read log files from PowerShell and make remote changes to the machines. PowerShell can be a one-stop shop for our troubleshooting needs if we spend the time to pick up the skills.
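As a quick illustration of where PowerShell fits into day-to-day troubleshooting, the following is a minimal sketch, assuming the Configuration Manager client is installed with its default log location; the log file chosen and the remote computer name are illustrative examples only:

Get-Content -Path "C:\Windows\CCM\Logs\PolicyAgent.log" -Tail 20 -Wait
Get-WmiObject -Namespace "root\ccm" -Class "SMS_Client" -ComputerName "PC001"

The first command tails a client log in real time, much like CMTrace does, while the second reads basic client information from the ConfigMgr client WMI namespace on a remote machine.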
Community tools

Finally, as user group community leaders, we couldn't leave this section out of the troubleshooting toolkit. Configuration Manager has such a great collection of community contributors who have likely been through our troubleshooting pain before us and either blog about it, post it on a forum, or create a fix for it. There is such an array of free tools out there that people share that we cannot ignore them. Outside of troubleshooting specifically, some of the best add-ons available for Configuration Manager are community contributions, whether from individuals or businesses. There are many utilities which are ever evolving, and not all will suit your needs, but if we browse the Microsoft TechNet Galleries, Codeplex, and GitHub, you are sure to find a great resource to meet your requirements. Why not get involved with a user group too? In terms of troubleshooting, this is probably one of the best things I personally could recommend. It gives access to a network of people who work on the same product as us and are often using it in the same way, so it is quite likely that someone has seen our problem before and can fast-forward us to a solution.

Microsoft TechNet Galleries: https://gallery.technet.microsoft.com/
Codeplex: https://www.codeplex.com/
GitHub: https://www.github.com/

Summary

In this article, you learned about various Configuration Manager troubleshooting tools such as the Registry Editor, Group Policy tools, log file viewers, PowerShell, and community tools.

Resources for Article: Further resources on this subject: Basic Troubleshooting Methodology [article] Monitoring and Troubleshooting Networking [article] Troubleshooting your BAM Applications [article]


NSX Core Components

Packt
05 Jan 2016
16 min read
In this article by Ranjit Singh Thakurratan, the author of the book Learning VMware NSX, we have discussed some of the core components of NSX. The article begins with a brief introduction of the NSX core components, followed by a detailed discussion of each of them. We will go over the three different planes and see how each of the NSX core components fits into this architecture. Next, we will cover the VXLAN architecture and the transport zones that allow us to create and extend overlay networks across multiple clusters. We will also look at NSX Edge and the distributed firewall in greater detail and take a look at the newest NSX feature of multi-vCenter or cross-vCenter NSX deployment. By the end of this article, you will have a thorough understanding of the NSX core components and also their functional inter-dependencies. In this article, we will cover the following topics:

An introduction to the NSX core components
NSX Manager
NSX Controller clusters
VXLAN architecture overview
Transport zones
NSX Edge
Distributed firewall
Cross-vCenter NSX

(For more resources related to this topic, see here.)

An introduction to the NSX core components

The foundational core components of NSX are divided across three different planes. The core components of a NSX deployment consist of the NSX Manager, Controller clusters, and hypervisor kernel modules. Each of these is crucial for your NSX deployment; however, they are decoupled to a certain extent to allow resiliency during the failure of multiple components. For example, if your controller clusters fail, your virtual machines will still be able to communicate with each other without any network disruption. You have to ensure that the NSX components are always deployed in a clustered environment so that they are protected by vSphere HA. The high-level architecture of NSX primarily describes three different planes wherein each of the core components fits in. They are the Management plane, the Control plane, and the Data plane. The following figure represents how the three planes are interlinked with each other. The management plane is how an end user interacts with NSX as a centralized access point, while the data plane consists of north-south or east-west traffic. Let's look at some of the important components in the preceding figure:

Management plane: The management plane primarily consists of NSX Manager. NSX Manager is a centralized network management component and primarily provides a single point of management. It also provides the REST API that a user can use to perform all the NSX functions and actions. During the deployment phase, the management plane is established when the NSX appliance is deployed and configured. This management plane directly interacts with the control plane and also with the data plane. The NSX Manager is then managed via the vSphere web client and CLI. The NSX Manager is configured to interact with vSphere and ESXi, and once configured, all of the NSX components are then configured and managed via the vSphere web GUI.

Control plane: The control plane consists of the NSX Controller that manages the state of virtual networks. NSX Controllers also enable overlay networks (VXLAN) that are multicast-free and make it easier to create new VXLAN networks without having to enable multicast functionality on physical switches. The controllers also keep track of all the information about the virtual machines, hosts, and VXLAN networks and can perform ARP suppression as well.
No data passes through the control plane, and a loss of controllers does not affect network functionality between virtual machines. Overlay networks and VXLANs can be used interchangeably. They both represent L2 over L3 virtual networks.

Data plane: The NSX data plane primarily consists of the NSX logical switch. The NSX logical switch is a part of the vSphere distributed switch and is created when a VXLAN network is created. The logical switch and other NSX services such as logical routing and the logical firewall are enabled at the hypervisor kernel level after the installation of the hypervisor kernel modules (VIBs). This logical switch is the key to enabling overlay networks that are able to encapsulate and send traffic over existing physical networks. It also allows gateway devices that provide L2 bridging between virtual and physical workloads. The data plane receives its updates from the control plane, as hypervisors maintain local virtual machine and VXLAN (logical switch) mapping tables as well. A loss of the data plane will cause a loss of the overlay (VXLAN) network, as virtual machines that are part of a NSX logical switch will not be able to send and receive data.

NSX Manager

NSX Manager, once deployed and configured, can deploy Controller cluster appliances and prepare the ESXi hosts, which involves installing various vSphere installation bundles (VIBs) that enable network virtualization features such as VXLAN, logical switching, the logical firewall, and logical routing. NSX Manager can also deploy and configure Edge gateway appliances and their services. The NSX version as of this writing is 6.2, which only supports 1:1 vCenter connectivity. NSX Manager is deployed as a single virtual machine and relies on VMware's HA functionality to ensure its availability. There is no NSX Manager clustering available as of this writing. It is important to note that a loss of NSX Manager will lead to a loss of management and API access, but does not disrupt virtual machine connectivity. Finally, the NSX Manager's configuration UI allows an administrator to collect log bundles and also to back up the NSX configuration.

NSX Controller clusters

NSX Controller provides the control plane functionality to distribute logical routing and VXLAN network information to the underlying hypervisors. Controllers are deployed as virtual appliances, and they should be deployed in the same vCenter to which NSX Manager is connected. In a production environment, it is recommended to deploy a minimum of three controllers. For better availability and scalability, we need to ensure that DRS anti-affinity rules are configured to deploy the controllers on separate ESXi hosts. Control plane traffic to the management and data planes is secured by certificate-based authentication. It is important to note that controller nodes employ a scale-out mechanism, where each controller node uses a slicing mechanism that divides the workload equally across all the nodes. This renders all the controller nodes active at all times. If one controller node fails, then the other nodes are reassigned the tasks that were owned by the failed node to ensure operational status. The VMware NSX Controller uses a Paxos-based algorithm within the NSX Controller cluster. The Controller removes the dependency on multicast routing/PIM in the physical network. It also suppresses broadcast traffic in VXLAN networks. NSX version 6.2 only supports three controller nodes.

VXLAN architecture overview

One of the most important functions of NSX is enabling virtual networks.
These virtual networks or overlay networks have become very popular due to the fact that they can leverage existing network infrastructure without the need to modify it in any way. The decoupling of logical networks from the physical infrastructure allows users to scale rapidly. Overlay networks or VXLAN was developed by a host of vendors that include Arista, Cisco, Citrix, Red Hat, and Broadcom. Due to this joint effort in developing its architecture, the VXLAN standard can be implemented by multiple vendors. VXLAN is a layer 2 over layer 3 tunneling protocol that allows logical network segments to extend over routable networks. This is achieved by encapsulating the Ethernet frame with additional UDP, IP, and VXLAN headers. Consequently, this increases the size of the packet by 50 bytes. Hence, VMware recommends increasing the MTU size to a minimum of 1600 bytes for all the interfaces in the physical infrastructure and any associated vSwitches. When a virtual machine generates traffic meant for another virtual machine on the same virtual network, the hosts on which these source and destination virtual machines run are called VXLAN Tunnel End Points (VTEPs). VTEPs are configured as separate VMkernel interfaces on the hosts. The outer IP header block in the VXLAN frame contains the source and destination IP addresses of the source and destination hypervisors. When a packet leaves the source virtual machine, it is encapsulated at the source hypervisor and sent to the target hypervisor. The target hypervisor, upon receiving this packet, decapsulates the Ethernet frame and forwards it to the destination virtual machine. Once the ESXi host is prepared from NSX Manager, we need to configure the VTEPs. NSX supports multiple VXLAN vmknics per host for uplink load balancing features. In addition to this, guest VLAN tagging is also supported.

A sample packet flow

We face a challenging situation when a virtual machine generates traffic—Broadcast, Unknown Unicast, or Multicast (BUM)—meant for another virtual machine on the same virtual network (VNI) on a different host. Control plane modes play a crucial role in optimizing the VXLAN traffic, depending on the mode selected for the Logical Switch/Transport Scope:

Unicast
Hybrid
Multicast

By default, a Logical Switch inherits its replication mode from the transport zone. However, we can set this on a per-Logical-Switch basis. A segment ID is needed for the Multicast and Hybrid modes. The following is a representation of the VXLAN-encapsulated packet showing the VXLAN headers: As indicated in the preceding figure, the outer IP header identifies the source and the destination VTEPs. The VXLAN header also has the Virtual Network Identifier (VNI), which is a 24-bit unique network identifier. This allows the scaling of virtual networks beyond the 4094 VLAN limitation placed by the physical switches. Two virtual machines that are a part of the same virtual network will have the same virtual network identifier, similar to how two machines on the same VLAN share the same VLAN ID.

Transport zones

A group of ESXi hosts that are able to communicate with one another over the physical network by means of VTEPs are said to be in the same transport zone. A transport zone defines the extension of a logical switch across multiple ESXi clusters that span across multiple virtual distributed switches. A typical environment has more than one virtual distributed switch that spans across multiple hosts.
A transport zone enables a logical switch to extend across multiple virtual distributed switches, and any ESXi host that is a part of this transport zone can have virtual machines as a part of that logical network. A logical switch is always created as part of a transport zone, and ESXi hosts can participate in them. The following is a figure that shows a transport zone that defines the extension of a logical switch across multiple virtual distributed switches:

NSX Edge Services Gateway

The NSX Edge Services Gateway (ESG) offers a feature-rich set of services that include NAT, routing, firewall, load balancing, L2/L3 VPN, and DHCP/DNS relay. The NSX API allows each of these services to be deployed, configured, and consumed on demand. The ESG is deployed as a virtual machine from NSX Manager, which is accessed using the vSphere web client. Four different form factors are offered for differently sized environments. It is important that you factor in enough resources for the appropriate ESG when building your environment. The ESG can be deployed in different sizes. The following are the available size options for an ESG appliance:

X-Large: The X-Large form factor is suitable for high performance firewall, load balancer, and routing, or a combination of multiple services. When an X-Large form factor is selected, the ESG will be deployed with six vCPUs and 8GB of RAM.
Quad-Large: The Quad-Large form factor is ideal for a high performance firewall. It will be deployed with four vCPUs and 1GB of RAM.
Large: The Large form factor is suitable for medium performance routing and firewall. It is recommended that, in production, you start with the Large form factor. The Large ESG is deployed with two vCPUs and 1GB of RAM.
Compact: The Compact form factor is suitable for DHCP and DNS relay functions. It is deployed with one vCPU and 512MB of RAM.

Once deployed, a form factor can be upgraded by using the API or the UI. The upgrade action will incur an outage. Edge gateway services can also be deployed in an Active/Standby mode to ensure high availability and resiliency. A heartbeat network between the Edge appliances ensures state replication and uptime. If the active gateway goes down and the "declared dead time" passes, the standby Edge appliance takes over. The default declared dead time is 15 seconds and can be reduced to 6 seconds. Let's look at some of the Edge services as follows:

Network Address Translation: The NSX Edge supports both source and destination NAT, and NAT is allowed for all traffic flowing through the Edge appliance. If the Edge appliance supports more than 100 virtual machines, it is recommended that a Quad-Large instance be deployed to allow high performance translation.
Routing: The NSX Edge allows centralized routing so that the logical networks deployed in the NSX domain can be routed to the external physical network. The Edge supports multiple routing protocols including OSPF, iBGP, and eBGP. The Edge also supports static routing.
Load balancing: The NSX Edge also offers load balancing functionality that allows the load balancing of traffic between the virtual machines. The load balancer supports different balancing mechanisms including IP Hash, least connections, URI-based, and round robin.
Firewall: NSX Edge provides a stateful firewall functionality that is ideal for north-south traffic flowing between the physical and the virtual workloads behind the Edge gateway.
The Edge firewall can be deployed alongside the hypervisor kernel-based distributed firewall that is primarily used to enforce security policies between workloads in the same logical network.

L2/L3 VPN: The Edge also provides L2 and L3 VPNs, which make it possible to extend L2 domains between two sites. An IPSEC site-to-site connection between two NSX Edges or other VPN termination devices can also be set up.
DHCP/DNS relay: NSX Edge also offers DHCP and DNS relay functions that allow you to offload these services to the Edge gateway. The Edge only supports DNS relay functionality and can forward any DNS requests to the DNS server. The Edge gateway can be configured as a DHCP server to provide and manage IP addresses, default gateway, DNS servers, and search domain information for workloads connected to the logical networks.

Distributed firewall

NSX provides L2-L4 stateful firewall services by means of a distributed firewall that runs in the ESXi hypervisor kernel. Because the firewall is a function of the ESXi kernel, it provides massive throughput and performs at a near line rate. When the ESXi host is initially prepared by NSX, the distributed firewall service is installed in the kernel by deploying the kernel VIB – the VMware Internetworking Service insertion platform, or VSIP. VSIP is responsible for monitoring and enforcing security policies on all the traffic flowing through the data plane. The distributed firewall (DFW) throughput and performance scale horizontally as more ESXi hosts are added. DFW instances are associated with each vNIC, and every vNIC requires one DFW instance. A virtual machine with two vNICs has two DFW instances associated with it, each monitoring its own vNIC and applying security policies to it. DFW is ideally deployed to protect virtual-to-virtual or virtual-to-physical traffic. This makes DFW very effective in protecting east-west traffic between workloads that are a part of the same logical network. DFW policies can also be used to restrict traffic between virtual machines and external networks because it is applied at the vNIC of the virtual machine. Any virtual machine that does not require firewall protection can be added to the exclusion list. A diagrammatic representation is shown as follows: DFW fully supports vMotion, and the rules applied to a virtual machine always follow the virtual machine. This means any manual or automated vMotion triggered by DRS does not cause any disruption in its protection status. The VSIP kernel module also adds spoofguard and traffic redirection functionalities. The spoofguard function maintains a VM name and IP address mapping table and protects against IP spoofing. Spoofguard is disabled by default and needs to be manually enabled per logical switch or virtual distributed switch port group. Traffic redirection allows traffic to be redirected to a third-party appliance that can do enhanced monitoring, if needed. This allows third-party vendors to interface with DFW directly and offer custom services as needed.

Cross-vCenter NSX

With NSX 6.2, VMware introduced an interesting feature that allows you to manage multiple vCenter NSX environments using a primary NSX Manager. This allows for easy management and also enables lots of new functionality, including extending networks and other features such as distributed logical routing. Cross-vCenter NSX deployment also allows centralized management and eases disaster recovery architectures.
In a cross-vCenter deployment, multiple vCenters are all paired with their own NSX Manager per vCenter. One NSX Manager is assigned as the primary while the other NSX Managers become secondary. This primary NSX Manager can now deploy a universal controller cluster that provides the control plane. Unlike a standalone vCenter-NSX deployment, secondary NSX Managers do not deploy their own controller clusters. The primary NSX Manager also creates objects whose scope is universal. This means that these objects extend to all the secondary NSX Managers. These universal objects are synchronized across all the secondary NSX Managers and can be edited and changed by the primary NSX Manager only. This does not prevent you from creating local objects on each of the NSX Managers. Similar to local NSX objects, a primary NSX Manager can create global objects such as universal transport zones, universal logical switches, universal distributed routers, universal firewall rules, and universal security objects. There can be only one universal transport zone in a cross-vCenter NSX environment. After it is created, it is synchronized across all the secondary NSX Managers. When a logical switch is created inside a universal transport zone, it becomes a universal logical switch that spans the layer 2 network across all the vCenters. All traffic is routed using the universal logical router, and any traffic that needs to be routed between a universal logical switch and a logical switch (local scope) requires an ESG.

Summary

We began the article with a brief introduction of the NSX core components and looked at the management, control, and data planes. We then discussed NSX Manager and the NSX Controller clusters. This was followed by a VXLAN architecture overview, where we looked at the VXLAN packet. We then discussed transport zones and NSX Edge gateway services. We ended the article with NSX distributed firewall services and also an overview of cross-vCenter NSX deployment.

Resources for Article: Further resources on this subject: vRealize Automation and the Deconstruction of Components [article] Monitoring and Troubleshooting Networking [article] Managing Pools for Desktops [article]

Installing Neutron

Packt
04 Nov 2015
15 min read
We will learn about OpenStack networking in this article by James Denton, who is the author of the book Learning OpenStack Networking (Neutron) - Second Edition. OpenStack Networking, also known as Neutron, provides a network infrastructure as-a-service platform to users of the cloud. In this article, I will guide you through the installation of Neutron networking services on top of the OpenStack environment. Components to be installed include: Neutron API server Modular Layer 2 (ML2) plugin By the end of this article, you will have a basic understanding of the function and operation of various Neutron plugins and agents, as well as a foundation on top of which a virtual switching infrastructure can be built. (For more resources related to this topic, see here.) Basic networking elements in Neutron Neutron constructs the virtual network using elements that are familiar to all system and network administrators, including networks, subnets, ports, routers, load balancers, and more. Using version 2.0 of the core Neutron API, users can build a network foundation composed of the following entities: Network: A network is an isolated layer 2 broadcast domain. Typically reserved for the tenants that created them, networks could be shared among tenants if configured accordingly. The network is the core entity of the Neutron API. Subnets and ports must always be associated with a network. Subnet: A subnet is an IPv4 or IPv6 address block from which IP addresses can be assigned to virtual machine instances. Each subnet must have a CIDR and must be associated with a network. Multiple subnets can be associated with a single network and can be noncontiguous. A DHCP allocation range can be set for a subnet that limits the addresses provided to instances. Port: A port in Neutron represents a virtual switch port on a logical virtual switch. Virtual machine interfaces are mapped to Neutron ports, and the ports define both the MAC address and the IP address to be assigned to the interfaces plugged into them. Neutron port definitions are stored in the Neutron database, which is then used by the respective plugin agent to build and connect the virtual switching infrastructure. Cloud operators and users alike can configure network topologies by creating and configuring networks and subnets, and then instruct services such as Nova to attach virtual devices to ports on these networks. Users can create multiple networks, subnets, and ports, but are limited to thresholds defined by per-tenant quotas set by the cloud administrator. Extending functionality with plugins Neutron introduces support for third-party plugins and drivers that extend network functionality and implementation of the Neutron API. Plugins and drivers can be created that use a variety of software- and hardware-based technologies to implement the network built by operators and users. There are two major plugin types within the Neutron architecture: Core plugin Service plugin A core plugin implements the core Neutron API and is responsible for adapting the logical network described by networks, ports, and subnets into something that can be implemented by the L2 agent and IP address management system running on the host. A service plugin provides additional network services such as routing, load balancing, firewalling, and more. The Neutron API provides a consistent experience to the user despite the chosen networking plugin. For more information on interacting with the Neutron API, visit http://developer.openstack.org/api-ref-networking-v2.html. 
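To make the network, subnet, and port entities a little more concrete, here is a minimal sketch of how they might be created with the neutron client once the services installed later in this article are up and running; the names and the CIDR used are purely illustrative:

# neutron net-create demo-net
# neutron subnet-create demo-net 192.168.100.0/24 --name demo-subnet
# neutron port-create demo-net --name demo-port

Creating the port explicitly is optional in day-to-day use, as Nova will normally request a port on behalf of an instance when it is booted on the network.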
Modular Layer 2 plugin Prior to the inclusion of the Modular Layer 2 (ML2) plugin in the Havana release of OpenStack, Neutron was limited to using a single core plugin at a time. The ML2 plugin replaces two monolithic plugins in its reference implementation: the LinuxBridge plugin and the Open vSwitch plugin. Their respective agents, however, continue to be utilized and can be configured to work with the ML2 plugin. Drivers The ML2 plugin introduced the concept of type drivers and mechanism drivers to separate the types of networks being implemented and the mechanisms for implementing networks of those types. Type drivers An ML2 type driver maintains type-specific network state, validates provider network attributes, and describes network segments using provider attributes. Provider attributes include network interface labels, segmentation IDs, and network types. Supported network types include local, flat, vlan, gre, and vxlan. Mechanism drivers An ML2 mechanism driver is responsible for taking information established by the type driver and ensuring that it is properly implemented. Multiple mechanism drivers can be configured to operate simultaneously, and can be described using three types of models: Agent-based: This includes LinuxBridge, Open vSwitch, and others Controller-based: This includes OpenDaylight, VMWare NSX, and others Top-of-Rack: This includes Cisco Nexus, Arista, Mellanox, and others The LinuxBridge and Open vSwitch ML2 mechanism drivers are used to configure their respective switching technologies within nodes that host instances and network services. The LinuxBridge driver supports local, flat, vlan, and vxlan network types, while the Open vSwitch driver supports all of those as well as the gre network type. The L2 population driver is used to limit the amount of broadcast traffic that is forwarded across the overlay network fabric. Under normal circumstances, unknown unicast, multicast, and broadcast traffic floods out all tunnels to other compute nodes. This behavior can have a negative impact on the overlay network fabric, especially as the number of hosts in the cloud scales out. As an authority on what instances and other network resources exist in the cloud, Neutron can prepopulate forwarding databases on all hosts to avoid a costly learning operation. When ARP proxy is used, Neutron prepopulates the ARP table on all hosts in a similar manner to avoid ARP traffic from being broadcast across the overlay fabric. ML2 architecture The following diagram demonstrates at a high level how the Neutron API service interacts with the various plugins and agents responsible for constructing the virtual and physical network: Figure 3.1 The preceding diagram demonstrates the interaction between the Neutron API, Neutron plugins and drivers, and services such as the L2 and L3 agents. For more information on the Neutron ML2 plugin architecture, refer to the OpenStack Neutron Modular Layer 2 Plugin Deep Dive video from the 2013 OpenStack Summit in Hong Kong available at https://www.youtube.com/watch?v=whmcQ-vHams. Third-party support Third-party vendors such as PLUMGrid and OpenContrail have implemented support for their respective SDN technologies by developing their own monolithic or ML2 plugins that implement the Neutron API and extended network services. Others, including Cisco, Arista, Brocade, Radware, F5, VMWare, and more, have created plugins that allow Neutron to interface with OpenFlow controllers, load balancers, switches, and other network hardware. 
For a look at some of the commands related to these plugins, refer to Appendix, Additional Neutron Commands. The configuration and use of these plugins is outside the scope of this article. For more information on the available plugins for Neutron, visit http://docs.openstack.org/admin-guide-cloud/content/section_plugin-arch.html. Network namespaces OpenStack was designed with multitenancy in mind and provides users with the ability to create and manage their own compute and network resources. Neutron supports each tenant having multiple private networks, routers, firewalls, load balancers, and other networking resources. It is able to isolate many of those objects through the use of network namespaces. A network namespace is defined as a logical copy of the network stack with its own routes, firewall rules, and network interface devices. When using the open source reference plugins and drivers, every network, router, and load balancer that is created by a user is represented by a network namespace. When network namespaces are enabled, Neutron is able to provide isolated DHCP and routing services to each network. These services allow users to create overlapping networks with other users in other projects and even other networks in the same project. The following naming convention for network namespaces should be observed: DHCP namespace: qdhcp-<network UUID> Router namespace: qrouter-<router UUID> Load Balancer namespace: qlbaas-<load balancer UUID> A qdhcp namespace contains a DHCP service that provides IP addresses to instances using the DHCP protocol. In a reference implementation, dnsmasq is the process that services DHCP requests. The qdhcp namespace has an interface plugged into the virtual switch and is able to communicate with instances and other devices in the same network or subnet. A qdhcp namespace is created for every network where the associated subnet(s) have DHCP enabled. A qrouter namespace represents a virtual router and is responsible for routing traffic to and from instances in the subnets it is connected to. Like the qdhcp namespace, the qrouter namespace is connected to one or more virtual switches depending on the configuration. A qlbaas namespace represents a virtual load balancer and may run a service such as HAProxy that load balances traffic to instances. The qlbaas namespace is connected to a virtual switch and can communicate with instances and other devices in the same network or subnet. The leading q in the name of the network namespaces stands for Quantum, the original name for the OpenStack Networking service. Network namespaces of the types mentioned earlier will only be seen on nodes running the Neutron DHCP, L3, and LBaaS agents, respectively. These services are typically configured only on controllers or dedicated network nodes. The ip netns list command can be used to list available namespaces, and commands can be executed within the namespace using the following syntax: ip netns exec NAMESPACE_NAME <command> Commands that can be executed in the namespace include ip, route, iptables, and more. The output of these commands corresponds to data specific to the namespace they are executed in. For more information on network namespaces, see the man page for ip netns at http://man7.org/linux/man-pages/man8/ip-netns.8.html. Installing and configuring Neutron services In this installation, the various services that make up OpenStack Networking will be installed on the controller node rather than a dedicated networking node. 
The compute nodes will run L2 agents that interface with the controller node and provide virtual switch connections to instances. Remember that the configuration settings recommended here and online at docs.openstack.org may not be appropriate for production systems. To install the Neutron API server, the DHCP and metadata agents, and the ML2 plugin on the controller, issue the following command:

# apt-get install neutron-server neutron-dhcp-agent neutron-metadata-agent neutron-plugin-ml2 neutron-common python-neutronclient

On the compute nodes, only the ML2 plugin is required:

# apt-get install neutron-plugin-ml2

Creating the Neutron database

Using the mysql client, create the Neutron database and associated user. When prompted for the root password, use openstack:

# mysql -u root -p

Enter the following SQL statements in the MariaDB [(none)] > prompt:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
quit;

Update the [database] section of the Neutron configuration file at /etc/neutron/neutron.conf on all nodes to use the proper MySQL database connection string based on the preceding values rather than the default value:

[database]
connection = mysql://neutron:neutron@controller01/neutron

Configuring the Neutron user, role, and endpoint in Keystone

Neutron requires that you create a user, role, and endpoint in Keystone in order to function properly. When executed from the controller node, the following commands will create a user called neutron in Keystone, associate the admin role with the neutron user, and add the neutron user to the service project:

# openstack user create neutron --password neutron
# openstack role add --project service --user neutron admin

Create a service in Keystone that describes the OpenStack Networking service by executing the following command on the controller node:

# openstack service create --name neutron --description "OpenStack Networking" network

The service create command will result in the following output: Figure 3.2

To create the endpoint, use the following openstack endpoint create command:

# openstack endpoint create --publicurl http://controller01:9696 --adminurl http://controller01:9696 --internalurl http://controller01:9696 --region RegionOne network

The resulting endpoint is as follows: Figure 3.3

Enabling packet forwarding

Before the nodes can properly forward or route traffic for virtual machine instances, there are three kernel parameters that must be configured on all nodes:

net.ipv4.ip_forward
net.ipv4.conf.all.rp_filter
net.ipv4.conf.default.rp_filter

The net.ipv4.ip_forward kernel parameter allows the nodes to forward traffic from the instances to the network. The default value is 0 and should be set to 1 to enable IP forwarding. Use the following command on all nodes to implement this change:

# sysctl -w "net.ipv4.ip_forward=1"

The net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter kernel parameters are related to reverse path filtering, a mechanism intended to prevent certain types of denial of service attacks. When enabled, the Linux kernel will examine every packet to ensure that the source address of the packet is routable back through the interface in which it came. Without this validation, a router can be used to forward malicious packets from a sender who has spoofed the source address to prevent the target machine from responding properly.
In OpenStack, anti-spoofing rules are implemented by Neutron on each compute node within iptables. Therefore, the preferred configuration for these two rp_filter values is to disable them by setting them to 0. Use the following sysctl commands on all nodes to implement this change: # sysctl -w "net.ipv4.conf.default.rp_filter=0" # sysctl -w "net.ipv4.conf.all.rp_filter=0" Using sysctl –w makes the changes take effect immediately. However, the changes are not persistent across reboots. To make the changes persistent, edit the /etc/sysctl.conf file on all hosts and add the following lines: net.ipv4.ip_forward = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.all.rp_filter = 0 Load the changes into memory on all nodes with the following sysctl command: # sysctl -p Configuring Neutron to use Keystone The Neutron configuration file found at /etc/neutron/neutron.conf has dozens of settings that can be modified to meet the needs of the OpenStack cloud administrator. A handful of these settings must be changed from their defaults as part of this installation. To specify Keystone as the authentication method for Neutron, update the [DEFAULT] section of the Neutron configuration file on all hosts with the following setting: [DEFAULT] auth_strategy = keystone Neutron must also be configured with the appropriate Keystone authentication settings. The username and password for the neutron user in Keystone were set earlier in this article. Update the [keystone_authtoken] section of the Neutron configuration file on all hosts with the following settings: [keystone_authtoken] auth_uri = http://controller01:5000 auth_url = http://controller01:35357 auth_plugin = password project_domain_id = default user_domain_id = default project_name = service username = neutron password = neutron Configuring Neutron to use a messaging service Neutron communicates with various OpenStack services on the AMQP messaging bus. Update the [DEFAULT] and [oslo_messaging_rabbit] sections of the Neutron configuration file on all hosts to specify RabbitMQ as the messaging broker: [DEFAULT] rpc_backend = rabbit The RabbitMQ authentication settings should match what was previously configured for the other OpenStack services: [oslo_messaging_rabbit] rabbit_host = controller01 rabbit_userid = openstack rabbit_password = rabbit Configuring Nova to utilize Neutron networking Before Neutron can be utilized as the network manager for Nova Compute services, the appropriate configuration options must be set in the Nova configuration file located at /etc/nova/nova.conf on all hosts. Start by updating the following sections with information on the Neutron API class and URL: [DEFAULT] network_api_class = nova.network.neutronv2.api.API [neutron] url = http://controller01:9696 Then, update the [neutron] section with the proper Neutron credentials: [neutron] auth_strategy = keystone admin_tenant_name = service admin_username = neutron admin_password = neutron admin_auth_url = http://controller01:35357/v2.0 Nova uses the firewall_driver configuration option to determine how to implement firewalling. As the option is meant for use with the nova-network networking service, it should be set to nova.virt.firewall.NoopFirewallDriver to instruct Nova not to implement firewalling when Neutron is in use: [DEFAULT] firewall_driver = nova.virt.firewall.NoopFirewallDriver The security_group_api configuration option specifies which API Nova should use when working with security groups. 
For installations using Neutron instead of nova-network, this option should be set to neutron as follows: [DEFAULT] security_group_api = neutron Nova requires additional configuration once a mechanism driver has been determined. Configuring Neutron to notify Nova Neutron must be configured to notify Nova of network topology changes. Update the [DEFAULT] and [nova] sections of the Neutron configuration file on the controller node located at /etc/neutron/neutron.conf with the following settings: [DEFAULT] notify_nova_on_port_status_changes = True notify_nova_on_port_data_changes = True nova_url = http://controller01:8774/v2 [nova] auth_url = http://controller01:35357 auth_plugin = password project_domain_id = default user_domain_id = default region_name = RegionOne project_name = service username = nova password = nova Summary Neutron has seen major internal architectural improvements over the last few releases. These improvements have made developing and implementing network features easier for developers and operators, respectively. Neutron maintains the logical network architecture in its database, and network plugins and agents on each node are responsible for configuring virtual and physical network devices accordingly. With the introduction of the ML2 plugin, developers can spend less time implementing the core Neutron API functionality and more time developing value-added features. Now that OpenStack Networking services have been installed across all nodes in the environment, configuration of a layer 2 networking plugin is all that remains before instances can be created. Resources for Article: Further resources on this subject: Installing OpenStack Swift [article] Securing OpenStack Networking [article] The orchestration service for OpenStack [article]


Setting Up the Citrix Components

Packt
03 Nov 2015
4 min read
In this article by Sunny Jha, the author of the book Mastering XenApp, we are going to implement the Citrix XenApp infrastructure components, which are going to work together to deliver the applications. The components we will be implementing are as follows:

Setting up Citrix License Server
Setting up Delivery Controller
Setting up Director
Setting up StoreFront
Setting up Studio

Once you complete this article, you will understand how to install the Citrix XenApp infrastructure components for the effective delivery of applications. (For more resources related to this topic, see here.)

Setting up the Citrix infrastructure components

You must be aware of the fact that Citrix reintroduced Citrix XenApp in version 7.5 with the new FMA-based architecture, replacing IMA. In this article, we will be setting up the different Citrix components so that they can deliver the applications. As this is a proof of concept, I will be setting up almost all the Citrix components on a single Microsoft Windows 2012 R2 machine, whereas it is recommended that, in a production environment, you keep the Citrix components such as License Server, Delivery Controller, and StoreFront on separate servers to avoid a single point of failure and for better performance. The components that we will be setting up in this article are:

Delivery Controller: This Citrix component will act as the broker, and its main function is to assign users to a server, based on their selection of published applications.
License Server: This will assign licenses to the Citrix components, as every Citrix product requires a license in order to work.
Studio: This will act as the control panel for Citrix XenApp 7.6 delivery. Inside Citrix Studio, the administrator makes all the configuration changes.
Director: This component is basically for monitoring and troubleshooting, and is a web-based application.
StoreFront: This is the frontend of the Citrix infrastructure by which users connect to their applications, either via Receiver or the web.

Installing the Citrix components

In order to start the installation, we need the Citrix XenApp 7.6 DVD or ISO image. You can always download it from the Citrix website; all you need is a MyCitrix account. Follow these steps:

Mount the disc/ISO you have downloaded.
When you double-click on the mounted disc, it will bring up a screen where you have to make a selection between XenApp Deliver applications or XenDesktop Deliver applications and desktops:
Once you have made the selection, it will show you the next option related to the product. Here, we need to select XenApp. Choose Delivery Controller from the options:
The next screen will show you the License Agreement. You can go through it, accept the terms, and click on Next:
As described earlier, this is a proof of concept. We will install all the components on a single server, but it is recommended to put each component on a different server for better performance. Select all the components and click on Next:
The next screen will show you the features that can be installed. As we have already installed the SQL server, we don't have to select SQL Express, but we will choose Install Windows Remote Assistance. Click on Next:
The next screen will show you the firewall ports that need to be allowed to communicate, and these can be adjusted by Citrix as well. Click on Next:
The next screen will show you the summary of your selection.
Here, you can review your selection and click on Install to install the components: After you click on Install, it will go through the installation procedure, and once the installation is complete, click on Next. By following these steps, we completed the installation of the Citrix components such as the Delivery Controller, Studio, Director, and StoreFront. We also adjusted the firewall ports as per the Citrix XenApp requirements.

Summary

In this article, you learned about setting up the Citrix infrastructure components, including how to install the Delivery Controller, License Server, Citrix Studio, Citrix Director, and Citrix StoreFront.

Resources for Article: Further resources on this subject: Getting Started – Understanding Citrix XenDesktop and its Architecture [article] High Availability, Protection, and Recovery using Microsoft Azure [article] A Virtual Machine for a Virtual World [article]


Monitoring and Troubleshooting Networking

Packt
21 Oct 2015
21 min read
This article by Muhammad Zeeshan Munir, author of the book VMware vSphere Troubleshooting, covers troubleshooting vSphere virtual distributed switches, vSphere standard virtual switches, VLANs, uplinks, DNS, and routing, which are among the core issues a seasoned system engineer has to deal with on a daily basis. This article will cover all these topics and give you hands-on, step-by-step instructions to manage and monitor your network resources. The following topics will be covered in this article:

Different network troubleshooting commands
VLAN troubleshooting
Verification of physical trunks and VLAN configuration
Testing of VM connectivity
VMkernel interface troubleshooting
Configuration commands (vicfg-vmknic and esxcli network ip interface)
Use of the Direct Console User Interface (DCUI) to verify configuration

(For more resources related to this topic, see here.)

Network troubleshooting commands

Some of the commands that can be used for network troubleshooting include net-dvs, esxcli network, vicfg-route, vicfg-vmknic, vicfg-dns, vicfg-nics, and vicfg-vswitch. You can use the net-dvs command to troubleshoot VMware distributed switches (dvSwitches). The command shows all the information regarding the VMware distributed dvSwitch configuration. The net-dvs command reads the information from the /etc/vmware/dvsdata.db file and displays all the data in the console. A vSphere host updates its dvsdata.db file every five minutes. Connect to a vSphere host using PuTTY. Enter your user name and password when prompted. Type the following command in the CLI:

net-dvs

You will see something similar to the following screenshot: In the preceding screenshot, you can see that the first line represents the UUID of a VMware distributed switch. The second line shows the maximum number of ports a distributed switch can have. The line com.vmware.common.alias = dvswitch-Network-Pools represents the name of a distributed switch. The next line, com.vmware.common.uplinkPorts: dvUplink1 to dvUplinkn, shows the uplink ports a distributed switch has. The distributed switch MTU is set to 1,600, and you can see the information about CDP just below it. CDP information can be useful to troubleshoot connectivity issues. You can see com.vmware.common.respools.list listing networking resource pools, while com.vmware.common.host.uplinkPorts shows the port numbers assigned to uplink ports. Further details are then shown for each uplink port by its port number. You can also see the port statistics as displayed in the following screenshot. When you perform troubleshooting, these statistics can help you to check the behavior of the distributed switch and its ports. From these statistics, you can diagnose whether the data packets are going in and out. As you can see in the following screenshot, all the metrics regarding packet drops are zero. If you find in your troubleshooting that packets are being dropped, you can start finding the root cause of the problem: Unfortunately, the net-dvs command is very poorly documented, and it is usually hard to find useful references. Moreover, it is not supported by VMware. However, you can use it with the -h switch to display more options.

Repairing a dvsdata.db file

Sometimes, the dvsdata.db file of a vSphere host becomes corrupted and you face different types of distributed switch errors, for example, unable to create proxy DVS. In this case, when you try to run the net-dvs command on a vSphere host, it will fail with an error as well.
As mentioned earlier, the net-dvs command reads data from the /etc/vmware/dvsdata.db file, so it fails when it is unable to read that file. Possible causes for the corruption of the dvsdata.db file are a network outage, or a vSphere host that was disconnected from vCenter and deleted while it still held stale information in its cache. You can resolve the issue by restoring the dvsdata.db file with the following steps:
Through PuTTY, connect to a functioning vSphere host in your infrastructure.
Copy the dvsdata.db file from that vSphere host. The file can be found at /etc/vmware/dvsdata.db.
Transfer the copied dvsdata.db file to the corrupted vSphere host and overwrite the existing file.
Restart the vSphere host.
Once the vSphere host is up and running, use PuTTY to connect to it.
Run the net-dvs command. The command should now execute successfully without any errors.
ESXCLI network
The esxcli network command is a longtime friend of system administrators and support staff for troubleshooting network-related issues. It is used to examine different network configurations and to troubleshoot problems. You can type esxcli network to quickly see a help reference and the different options that can be used with the command. Let's walk through some useful esxcli network troubleshooting commands.
Type the following command into your vSphere CLI to list all the virtual machines and the networks they are on. You can see that the command returns the World ID, virtual machine name, number of ports, and the network:
esxcli network vm list
World ID  Name                                              Num Ports  Networks
--------  ------------------------------------------------  ---------  ---------------
14323012  cluster08_(5fa21117-18f7-427c-84d1-c63922199e05)          1  dvportgroup-372
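If a virtual machine does not show up in this list, or you want to cross-check its World ID, the following hedged sketch shows two related commands that can help (cluster08 is simply the example VM name from the output above; column layouts differ slightly between ESXi versions):
esxcli vm process list                        # lists running VMs along with their World ID and Process ID
esxcli network vm list | grep -i cluster08    # filter the networking list for a specific VM name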
Now use the World ID returned by the last command to list all the ports the virtual machine is currently using. You can see the virtual switch name, the MAC address of the NIC, the IP address, and the uplink port ID:
esxcli network vm port list -w 14323012
Port ID: 50331662
vSwitch: dvSwitch-Network-Pools
Portgroup: dvportgroup-372
DVPort ID: 1063
MAC Address: 00:50:56:01:00:7e
IP Address: 0.0.0.0
Team Uplink: all(2)
Uplink Port ID: 0
Active Filters:
Type the following command in the CLI to list the statistics for the port; replace the port ID after the -p flag with the one returned by the previous command:
esxcli network port stats get -p 50331662
Packet statistics for port 50331662
Packets received: 10787391024
Packets sent: 7661812086
Bytes received: 3048720170788
Bytes sent: 154147668506
Broadcast packets received: 17831672
Broadcast packets sent: 309404
Multicast packets received: 656
Multicast packets sent: 52
Unicast packets received: 10769558696
Unicast packets sent: 7661502630
Receive packets dropped: 92865923
Transmit packets dropped: 0
Type the following command to list complete statistics for a network card of the vSphere host:
esxcli network nic stats get -n vmnic0
NIC statistics for vmnic0
Packets received: 2969343419
Packets sent: 155331621
Bytes received: 2264469102098
Bytes sent: 46007679331
Receive packets dropped: 0
Transmit packets dropped: 0
Total receive errors: 78507
Receive length errors: 0
Receive over errors: 22
Receive CRC errors: 0
Receive frame errors: 0
Receive FIFO errors: 78485
Receive missed errors: 0
Total transmit errors: 0
Transmit aborted errors: 0
Transmit carrier errors: 0
Transmit FIFO errors: 0
Transmit heartbeat errors: 0
Transmit window errors: 0
A complete reference of the esxcli network commands can be found at https://goo.gl/9OMbVU.
All the vicfg-* commands are helpful and easy to use, and I encourage you to learn them to make your life easier. Here are some of the vicfg-* commands relevant to network troubleshooting:
vicfg-route: We will use this command to add or remove IP routes and to create and delete default IP gateways.
vicfg-vmknic: We will use this command to perform different operations on the VMkernel NICs of vSphere hosts.
vicfg-dns: This command will be used to manipulate DNS information.
vicfg-nics: We will use this command to manipulate vSphere physical NICs.
vicfg-vswitch: We will use this command to create, delete, and modify vSwitch information.
Troubleshooting uplinks
We will use the vicfg-nics command to manage the physical network adapters of vSphere hosts. The vicfg-nics command can also be used to set the speed and duplex of the uplink adapters and to display driver information and link state for each NIC.
Connect to your vMA appliance console and set up the target vSphere host:
vifptarget --set crimv3esx001.linxsol.com
List all the network cards available in the vSphere host. See the following screenshot for the output:
vicfg-nics -l
You can see that my vSphere host has several network cards, vmnic0 to vmnic5, along with their PCI and driver information. The link state for all the network cards is up, and two different network card speeds are shown: 1000 Mbps and 9000 Mbps. There is also a card name in the Description field, as well as the MTU and the MAC address for each network card.
You can set a network card to auto-negotiate as follows:
vicfg-nics --auto vmnic0
Now let's set the speed of vmnic0 to 1000 and its duplex setting to full:
vicfg-nics --duplex full --speed 1000 vmnic0
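After changing the speed or duplex, it is a good idea to confirm that the link came back up with the expected settings. A minimal sketch, either from the vMA appliance or directly in the ESXi shell (the esxcli form assumes ESXi 5.x or later):
vicfg-nics -l                      # from vMA: the Speed, Duplex, and Link columns should reflect the change
esxcli network nic get -n vmnic0   # from the ESXi shell: shows driver details and the negotiated speed and duplex for vmnic0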
Troubleshooting virtual switches
The last command we will discuss in this article is vicfg-vswitch. The vicfg-vswitch command is a very powerful command that can be used for the day-to-day operations of a virtual switch. I will show you how to create and configure port groups and virtual switches.
Set up the vSphere host in the vMA appliance for which you want to get information about virtual switches:
vifptarget --set crimv3esx001.linxsol.com
Type the following command to list all the information about the switches the vSphere host has. You can see the command output in the screenshot that follows:
vicfg-vswitch -l
You can see that the vSphere host has one standard virtual switch and two virtual NICs carrying traffic for the management network and for vMotion. The virtual switch has 128 ports, 7 of which are in use. There are two uplinks to the switch, the MTU is set to 1500, and two VLANs are being used: one for the management network and one for vMotion traffic. You can also see three distributed switches named OpenStack, dvSwitch-External-Networks, and dvSwitch-Network-Pools. Prefixing the name of a distributed switch with dv is a common practice, and it helps you recognize a distributed switch at a glance.
I will now go through adding a new virtual switch:
vicfg-vswitch --add vSwitch002
This creates a virtual switch with 128 ports and an MTU of 1500. You can use the --mtu flag to specify a different MTU.
Now add the uplink adapter vmnic0 to the newly created virtual switch vSwitch002:
vicfg-vswitch --link vmnic0 vSwitch002
To add a port group to the virtual switch, use the following command:
vicfg-vswitch --add-pg portgroup002 vSwitch002
Now add an uplink adapter to the port group:
vicfg-vswitch --add-pg-uplink vmnic0 --pg portgroup002 vSwitch002
We have discussed all the commands needed to create a virtual switch, add its port groups, and add uplinks. Now we will see how to delete and edit the configuration of a virtual switch. An uplink NIC can be removed from a port group using the -N flag. Remove vmnic0 from portgroup002:
vicfg-vswitch --del-pg-uplink vmnic0 --pg portgroup002 vSwitch002
You can delete the recently created port group as follows:
vicfg-vswitch --del-pg portgroup002 vSwitch002
To delete a switch, you first need to remove the uplink adapter from the virtual switch. You can use the -U flag, which unlinks the uplink from the switch:
vicfg-vswitch --unlink vmnic0 vSwitch002
You can then delete the virtual switch using the -d flag. Here is how you do it:
vicfg-vswitch --delete vSwitch002
You can check the Cisco Discovery Protocol (CDP) settings by using the --get-cdp flag with the vicfg-vswitch command. The following command returned listen, which indicates that the vSphere host is configured to receive CDP information from the physical switch:
vi-admin@vma:~[crimv3esx001.linxsol.com]> vicfg-vswitch --get-cdp vSwitch0
listen
You can set the CDP mode of a vSwitch to down, listen, advertise, or both. In the listen mode, the vSphere host discovers and displays information received from the Cisco switch port, but information about the vSwitch is not visible to the Cisco device. In the advertise mode, the vSphere host does not discover information about the Cisco switch; instead, it publishes information about its vSwitch to the Cisco switch device.
vicfg-vswitch --set-cdp both vSwitch0
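If you prefer to work directly in the ESXi shell rather than through the vMA appliance, roughly equivalent esxcli commands are available on ESXi 5.x and later. Here is a hedged sketch of the same create-and-verify sequence (vSwitch002 and portgroup002 are just the example names used above):
esxcli network vswitch standard add -v vSwitch002                             # create the standard vSwitch
esxcli network vswitch standard uplink add -u vmnic0 -v vSwitch002            # attach an uplink
esxcli network vswitch standard portgroup add -p portgroup002 -v vSwitch002   # add a port group
esxcli network vswitch standard list                                          # verify the ports, MTU, uplinks, and CDP status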
Troubleshooting VLANs
Virtual LANs, or VLANs, are used to divide a physical switching segment into different logical switching segments in order to segregate broadcast domains. VLANs not only provide network segmentation but also provide us with a method of effective network management. They also increase overall network security and are very commonly used in today's infrastructures. If not set up correctly, however, a VLAN can leave your vSphere host with no connectivity, and you can face very common problems where you are no longer able to ping or resolve host names, with errors such as Destination host unreachable and Connection failed.
A Private VLAN (PVLAN) is an extension of the VLAN concept that divides a logical broadcast domain into further segments, forming private groups. PVLANs are divided into primary and secondary PVLANs. The primary PVLAN is the original VLAN that is divided into smaller segments, and it hosts all the secondary PVLANs within it. Secondary PVLANs live within a primary PVLAN, and each secondary PVLAN is recognized by the VLAN ID linked to it. Just like regular VLANs, packets that travel within a secondary PVLAN are tagged with its associated ID, and the physical switch then recognizes whether the packets are tagged as isolated, community, or promiscuous.
As network troubleshooting involves taking care of many different aspects, one aspect you will come across in the troubleshooting cycle is troubleshooting VLANs. vSphere Enterprise Plus licensing is a requirement for connecting a host using a vSphere distributed switch and VLANs. You can see three different network segments in the following screenshot: VLAN A connects all the virtual machines on the different vSphere hosts, VLAN B carries the management network traffic, and VLAN C carries vMotion-related traffic. In order to create PVLANs on your vSphere hosts, you also need support on the physical switch:
For detailed information about vSphere networking, refer to the official VMware networking guide for vSphere 5.5 at http://goo.gl/SYySFL.
Verifying physical trunks and VLAN configuration
The first and most important step in troubleshooting a VLAN problem is to look into the VLAN configuration of your vSphere host; you should always start by verifying it. Let's walk through how to verify the network configuration of the management network and the VLAN configuration from the vSphere client:
Open and log in to your vSphere client.
Click on the vSphere host you are trying to troubleshoot.
Click on the Configuration menu, choose Networking, and then open Properties for the switch you are troubleshooting.
Choose the network you are troubleshooting from the list and click on Edit. This will open a new window.
Verify the VLAN ID for Management Network. Match it against the VLAN ID provided by your network administrator.
Verifying VLAN configuration from CLI
Following are the steps for verifying the VLAN configuration from the CLI:
Log in to the vSphere CLI.
Type the following command in the console:
esxcfg-vswitch -l
Alternatively, in the vMA appliance, type the vicfg-vswitch command; the output is similar for both commands:
vicfg-vswitch -l
The output of the esxcfg-vswitch -l command is as follows:
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         7           128               1500    vmnic3,vmnic2
PortGroup Name        VLAN ID  Used Ports  Uplinks
vMotion               2231     1           vmnic3,vmnic2
Management Network    2230     1           vmnic3,vmnic2
---Omitted output---
The output of the vicfg-vswitch -l command is as follows:
Switch Name     Num Ports       Used Ports      Configured Ports    MTU     Uplinks
vSwitch0        128             7               128                 1500    vmnic2,vmnic3
PortGroup Name                VLAN ID   Used Ports      Uplinks
vMotion                       2231      1               vmnic2,vmnic3
Management Network            2230      1               vmnic3,vmnic2
---Omitted output---
Match this with your network configuration. If the VLAN ID is incorrect or missing, you can add or edit it with the following command from the vSphere CLI:
esxcfg-vswitch -v 2233 -p "Management Network" vSwitch0
To add or edit the VLAN ID from the vMA appliance, use the following command:
vicfg-vswitch --vlan 2233 --pg "Management Network" vSwitch0
Verifying VLANs from PowerCLI
Verifying VLAN information from PowerCLI is fairly simple. Type the following command into the console after connecting to vCenter using Connect-VIServer:
Get-VirtualPortGroup -VMHost crimv3esx001.linxsol.com | select Name, VirtualSwitch, VlanId
Name                    VirtualSwitch       VlanId
----                    -------------       ------
vMotion                 vSwitch0            2231
Management Network      vSwitch0            2233
Verifying PVLANs and secondary PVLANs
When you have configured PVLANs or secondary PVLANs in your vSphere infrastructure, you may arrive at a situation where you need to troubleshoot them. The following steps show how to obtain and view information about PVLANs and secondary PVLANs:
Log in to the vSphere client and click on Networking.
Select a distributed switch and right-click on it.
From the menu, choose Edit Settings. This opens the Distributed Switch Settings window.
Click on the third tab, named Private VLAN.
In the section on the left, named Primary private VLAN ID, verify the VLAN ID provided by your network engineer. You can verify the VLAN ID of the secondary PVLAN in the section on the right.
Testing virtual machine connectivity
Whenever you are troubleshooting, virtual-machine-to-virtual-machine testing is very important; it helps you isolate the problem domain to a smaller scope. When performing virtual-machine-to-virtual-machine testing, you should always move the virtual machines to a single vSphere host first. You can then start troubleshooting the network using basic commands, such as ping. If ping works, you can test further and move the virtual machines to other hosts; if it still doesn't work there, the cause is most likely a configuration problem on a physical switch or a mismatched physical trunk configuration. The most common problem in this scenario is a problematic physical switch configuration.
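As a concrete illustration, the test itself only needs basic tooling inside the guest operating systems. The sketch below assumes two Linux guests with the example addresses 192.168.10.21 and 192.168.10.22 on the same port group; the addresses are placeholders rather than values from this environment:
ping -c 4 192.168.10.22   # from VM1 to VM2 while both run on the same vSphere host
# vMotion VM2 to another host and repeat; a failure that appears only now points at the physical trunk or VLAN configuration
ping -c 4 192.168.10.22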
Troubleshooting VMkernel interfaces
In this section, we will see how to troubleshoot VMkernel interfaces. The general approach is as follows:
Confirm the VLAN tagging
Ping to check connectivity
Use vicfg-vmknic for configuration
Use esxcli network ip interface for local configuration:
esxcli network ip interface list
esxcli network ip interface add or remove
esxcli network ip interface set
esxcli network ip interface ipv4 get
You should know how to use these commands to test whether everything is working, and you should be able to ping to ensure that connectivity exists. We will use the vicfg-vmknic command to configure vSphere VMkernel NICs. Let's create a new VMkernel NIC on a vSphere host using the following steps:
Log in to your VMware vSphere CLI.
Type the following command to create a new VMkernel NIC:
vicfg-vmknic -h crimv3esx001.linxsol.com --add --ip 10.2.0.10 -n 255.255.255.0 'portgroup01'
You can enable vMotion on a VMkernel NIC using the vicfg-vmknic command as follows (you will not be able to enable vMotion from ESXCLI):
vicfg-vmknic --enable-vmotion
vMotion migrates your virtual machines with zero downtime.
You can delete an existing VMkernel NIC as follows:
vicfg-vmknic -h crimv3esx001.linxsol.com --delete 'portgroup01'
Now check which VMkernel NICs are available in the system by typing the following command:
vicfg-vmknic -l
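The esxcli equivalents listed above can be used directly in the ESXi shell. A hedged sketch follows; vmk1, portgroup01, and the IP addresses are example values only, and the option names should be double-checked against your ESXi release:
esxcli network ip interface list                                                     # show all VMkernel NICs and their port groups
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=portgroup01   # create a new VMkernel NIC on portgroup01
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.2.0.10 --netmask=255.255.255.0 --type=static   # assign a static IPv4 address
esxcli network ip interface ipv4 get                                                 # confirm the address, netmask, and address type
vmkping 10.2.0.1                                                                     # test connectivity from the VMkernel stack (10.2.0.1 is a placeholder gateway)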
Verifying configuration from DCUI
When you successfully install vSphere, the first yellow screen that you see is the vSphere DCUI. The DCUI is a frontend management interface that lets you perform some basic system administration tasks. It also offers the best way to troubleshoot problems that may be difficult to address through vMA, vCLI, or PowerCLI, and it is very useful when your host becomes unresponsive from vCenter or is not accessible from any of the management tools. Some useful tasks that can be performed using the DCUI are as follows:
Configuring the Lockdown mode
Checking the connectivity of the management network with ping
Configuring and restarting network settings
Restarting the management agents
Viewing logs
Resetting the vSphere configuration
Changing the root password
Verifying network connectivity from DCUI
The vSphere host automatically assigns the first network card available in the system to the management network. Moreover, the default installation of the vSphere host does not let you set up VLAN tags until the VMkernel has been loaded. Verifying network connectivity from the DCUI is important but easy. To do so, follow these steps:
Press F2 and enter your root user name and password. Click OK.
Use the cursor keys to go down to the Test Management Network option and press Enter; you will see a new screen. Here you can enter up to three IP addresses and a host name to be resolved. You can also type your gateway address on this screen to see whether you are able to reach your gateway. In the host name field, you can enter your DNS server name to test whether the name resolves successfully.
Press Esc to go back and Esc again to log off from the vSphere DCUI.
Verifying management network from DCUI
You can also verify the settings of your management network from the DCUI:
Press F2 and enter your root user name and password. Click OK.
Use the cursor keys to go down to the Configure Management Network option and press Enter.
Press Enter again after selecting the first option, Network Adapters. On the next screen, you will see a list of all the network adapters your system has. It shows the Device Name, Hardware Type, Label, MAC Address of each network card, and its status as Connected or Disconnected. From the given network cards, you can select or deselect any of them by pressing the Spacebar on your keyboard.
Press Esc to go back and Esc again to log off from the vSphere DCUI.
As you can see in the preceding screenshot, you can also configure the IP address and DNS settings for your vSphere host, and you can use the DCUI to configure VLANs and the DNS suffix as well.
Summary
In this article, we took a deep dive into the troubleshooting commands and some of the tools used to monitor network performance. Having several platforms on which to execute the commands helps you tailor your troubleshooting approach: for troubleshooting a single vSphere host you may prefer esxcli, but for a larger number of vSphere hosts you will want to automate scripting tasks from PowerCLI or from a vMA appliance.
Resources for Article:
Further resources on this subject:
Upgrading VMware Virtual Infrastructure Setups [article]
VMware vRealize Operations Performance and Capacity Management [article]
Working with Virtual Machines [article]
article-image-optimizing-netscaler-traffic
Packt
14 Oct 2015
11 min read
Save for later

Optimizing NetScaler Traffic

Packt
14 Oct 2015
11 min read
This article by Marius Sandbu, author of the book Implementing NetScaler VPX™ - Second Edition, explains that the purpose of NetScaler is to act as a logistics department; it serves content to different endpoints using different protocols across different types of media, and it can be either a physical device or a device on top of a hypervisor within a private cloud infrastructure. Since many factors are at play here, there is room for tuning and improvement. Some of the topics we will go through in this article are as follows:
Tuning for virtual environments
Tuning TCP traffic
(For more resources related to this topic, see here.)
Tuning for virtual environments
When setting up a NetScaler in a virtual environment, we need to keep in mind that many factors influence how it will perform, for instance, the underlying CPU of the virtual host, NIC throughput and capabilities, vCPU overallocation, NIC teaming, MTU size, and so on. It is therefore always important to remember the hardware requirements when setting up a NetScaler VPX on a virtualization host.
Another important factor to keep in mind when setting up a NetScaler VPX is the concept of packet engines. By default, when we set up or import a NetScaler, it is configured with two vCPUs. The first of these is dedicated to management purposes, and the second vCPU is dedicated to all the packet processing, such as content switching, SSL offloading, ICA proxy, and so on. Note that the second vCPU might be shown as 100% utilized in the hypervisor performance monitoring tools; the correct way to check how busy it really is, is by using the CLI command stat system.
By default, VPX 10 and VPX 200 only support one packet engine; because of their bandwidth limitations, they do not require more packet engine CPUs to process the packets. On the other hand, VPX 1000 and VPX 3000 support up to three packet engines, which in most cases is needed to process all the packets going through the system if the bandwidth is to be utilized to its fullest. In order to add a new packet engine, we need to assign more vCPUs and more memory to the VPX. Packet engines also have the benefit of load balancing the processing between them, so instead of having one vCPU that is 100% utilized, we can even out the load between multiple vCPUs and get better performance and bandwidth. The following chart shows the different editions and their support for multiple packet engines:
License/Memory   2 GB   4 GB   6 GB   8 GB   10 GB   12 GB
VPX 10           1      1      1      1      1       1
VPX 200          1      1      1      1      1       1
VPX 1000         1      2      3      3      3       3
VPX 3000         1      2      3      3      3       3
It is important to remember that multiple packet engines are only available on VMware, XenServer, and Hyper-V, but not on KVM.
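To see whether the packet engines, rather than the management CPU, are actually the bottleneck, you can check the utilization from the NetScaler CLI. A minimal sketch; the exact counters shown differ slightly between NetScaler releases:
stat system
stat cpu
The first command gives an overall view, including packet CPU usage as opposed to management CPU usage, while the second gives a per-CPU view, which is useful on a VPX 1000/3000 running several packet engines.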
If we plan on using NIC teaming on the underlying virtualization host, there are some important aspects to consider. Most vendors have guidelines that describe the load balancing techniques available in the hypervisor; for instance, Microsoft has a guide that describes its features, which you can find at http://www.microsoft.com/en-us/download/details.aspx?id=30160. One of the NIC teaming options, called Switch Independent Dynamic Mode, has an interesting side effect: it replaces the source MAC address of the virtual machine with that of the primary NIC on the host, hence we might experience packet loss on a VPX. Therefore, it is recommended in most cases that we use LACP/LAG or, in the case of Hyper-V, the Hyper-V Port distribution mode instead.
Note that features such as SR-IOV or PCI passthrough are not supported for NetScaler VPX.
NetScaler 11 also introduced support for jumbo frames on the VPX, which allows for a much higher payload in an Ethernet frame. Instead of the traditional 1500 bytes, we can scale up to 9000 bytes of payload. This gives a much lower overhead, since each frame carries more data. It requires that the underlying NIC on the hypervisor supports the feature and has it enabled as well, and in most cases it only works for communication with backend resources and not with users accessing public resources, because most routers and ISPs block such a high MTU. The feature can be configured at the interface level in NetScaler under System | Network | Interface; choose Select Interface and click on Edit. Here we have an option called Maximum Transmission Unit, which can be adjusted up to 9,216 bytes. Note that NetScaler can communicate with backend resources using jumbo frames and then adjust the MTU when communicating back with clients, or it can use jumbo frames on both paths in case the NetScaler is set up as a backend load balancer. Also note that NetScaler only supports jumbo frame load balancing for the following protocols:
TCP
TCP-based protocols, such as HTTP
SIP
RADIUS
TCP tuning
Much of the traffic going through NetScaler is based on the TCP protocol, whether it is ICA proxy, HTTP, and so on. TCP is a protocol that provides reliable, error-checked delivery of packets back and forth; this ensures that data is successfully transferred before being processed further. TCP has many features for adjusting bandwidth during transfer, congestion checking, adjusting segment sizing, and so on. We will delve a little into all of these features in this section.
We can adjust the way NetScaler uses TCP by means of TCP profiles. By default, all services and vServers created on the NetScaler use the default TCP profile, nstcp_default_profile. Note that these profiles can be found by navigating to System | Profiles | TCP Profiles. Make sure not to alter the default TCP profile without properly consulting the network team, as this affects the way TCP works for all default services on the NetScaler.
This default profile has most of the TCP features turned off to ensure compatibility with most infrastructures, and it has not been adjusted much since it was first added to NetScaler. Citrix also ships a number of other profiles for different use cases, so we are going to look a bit closer at the options we have here. For instance, the profile nstcp_default_XA_XD_profile, which is intended for ICA proxy traffic, differs from the default profile in the following settings:
Window Scaling
Selective Acknowledgement
Forward Acknowledgement
Use of Nagle's algorithm
Window Scaling is a TCP option that allows the receiving endpoint to accept more data than the TCP RFC window size would normally allow before an acknowledgement is required. By default, the window size is set to accept 65,536 bytes; with window scaling enabled, the window size is essentially bit-shifted. This option needs to be enabled on both endpoints in order to be used, and it is only negotiated in the initial three-way handshake.
Selective Acknowledgement (SACK) is a TCP option that allows for better handling of TCP retransmission. Consider two hosts communicating without SACK: if one of the hosts briefly drops off the network and loses some packets, then when it comes back online it will simply ACK the last packet it received before it dropped out. With SACK enabled, it notifies the other host of the last packet it received before it dropped out as well as of the packets it received after coming back online. This allows for faster recovery of the communication, since the other host only needs to resend the missing packets rather than all of them.
Forward Acknowledgement (FACK) is a TCP option that works in conjunction with SACK and helps avoid TCP congestion by measuring the total number of data bytes that are outstanding in the network. Using the information from SACK, it can calculate more precisely how much data it can retransmit.
Nagle's algorithm is a TCP feature that tries to cope with the small-packet problem. Applications such as Telnet often send each keystroke in its own packet, creating multiple small packets that contain only 1 byte of data, which results in a 41-byte packet for a single keystroke. The algorithm works by combining a number of small outgoing messages into one message, thereby avoiding the overhead.
Since ICA is a protocol that operates with many small packets, which might create congestion, Nagle is enabled in this TCP profile. Also, since many users connect over 3G or Wi-Fi, which can be unreliable when switching channels, we need options that allow the clients to re-establish a connection quickly, which is why SACK and FACK are used. Note that Nagle might hurt the performance of applications that have their own buffering mechanism and operate inside a LAN.
If we take a look at another profile, such as nstcp_default_lan, we can see that FACK is disabled; this is because the resources needed to calculate the amount of outstanding data in a high-speed network might be too high.
Another important aspect of these profiles is the TCP congestion algorithm. For instance, nstcp_default_mobile uses the Westwood congestion algorithm because it is much better at handling large bandwidth-delay paths, such as wireless networks. The following congestion algorithms are available in NetScaler:
Default (based on TCP Reno)
Westwood (based on TCP Westwood+)
BIC
CUBIC
Nile (based on TCP Illinois)
Westwood is aimed at 3G/4G connections or other slow wireless connections. BIC is aimed at high-bandwidth connections with high latency, such as WAN connections. CUBIC is almost like BIC, but not as aggressive when it comes to fast-ramp and retransmissions; it is worth noting that CUBIC has been the default TCP algorithm in Linux kernels since 2.6.19. Nile is a new algorithm created by Citrix, based on TCP Illinois and targeted at high-speed, long-distance networks. It achieves higher throughput than standard TCP while remaining compatible with standard TCP. So here we can pick the algorithm that is best suited for a service; for instance, if we have a vServer that serves content to mobile devices, we could use the nstcp_default_mobile TCP profile.
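If you prefer the CLI over the GUI, the same kind of customization can be sketched roughly as follows. This is a hedged example: tcp_mobile_custom and vsrv_mobile_app are made-up names, the parameters mirror the settings discussed above, and the exact parameter names should be verified against your NetScaler firmware:
add ns tcpProfile tcp_mobile_custom -WS ENABLED -SACK ENABLED -nagle ENABLED -flavor Westwood
set lb vserver vsrv_mobile_app -tcpProfileName tcp_mobile_custom
show ns tcpProfile tcp_mobile_custom
The first command creates a new profile with window scaling, SACK, and Nagle enabled and the Westwood congestion flavor selected, the second binds it to the example vServer, and the third displays the resulting settings.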
There are also some other parameters that are important to keep in mind while working with TCP profiles. One of these is multipath TCP. This is a feature that allows an endpoint with multiple paths to reach a service. Typically this is a mobile device with both WLAN and 3G capabilities, and multipath TCP allows the device to communicate with a service on the NetScaler using both channels at the same time. It requires that the device supports communication over both paths and that the service or application on the device supports multipath TCP.
So let's take an example of what a TCP profile might look like. Say we have a vServer on the NetScaler that serves an application to mobile devices, meaning that the most common way users access this service is over 3G or Wi-Fi. The web service has its own buffering mechanism, so it tries not to send small packets over the link, and the application is multipath TCP aware. In this scenario, we can leverage the nstcp_default_mobile profile, since it has most of the sensible defaults for a mobile scenario, but we can also enable multipath TCP, save the result as a new profile, and bind it to the vServer. In order to bind a TCP profile to a vServer, we go to the particular vServer | Edit | Profiles | TCP Profiles, as shown in the following screenshot:
Note that AOL gave a presentation on their own TCP customization on NetScaler; the presentation can be found at http://www.slideshare.net/masonke/net-scaler-tcpperformancetuningintheaolnetwork. It is also important to note that TCP tuning should always be done in cooperation with the network team.
Summary
In this article, we have learned about tuning for virtual environments and TCP tuning.
Resources for Article:
Further resources on this subject:
Designing, Sizing, Building, and Configuring Citrix VDI-in-a-Box [article]
XenMobile™ Solutions Bundle [article]
Load balancing MSSQL [article]

article-image-managing-pools-desktops
Packt
07 Oct 2015
14 min read
Save for later

Managing Pools for Desktops

Packt
07 Oct 2015
14 min read
In this article by Andrew Alloway, the author of VMware Horizon View High Availability, we will review strategies for providing High Availability for the various types of VMware Horizon View desktop pools.
(For more resources related to this topic, see here.)
Overview of pools
VMware Horizon View provides administrators with the ability to automatically provision and manage pools of desktops. As part of provisioning desktops, we must also consider how we will continue service for the individual users in the event of a host or storage failure. Generally, the High Availability requirements for each pool fall into two categories: stateless desktops, where the user information is not stored on the VM between sessions, and stateful desktops, where the user information is stored on the desktop between sessions.
Stateless desktops
In a stateless configuration, we are not required to store data on the virtual desktops between user sessions. This allows us to use local storage instead of shared storage for our HA strategies, as we can tolerate host failures without the use of a shared disk. We can achieve a stateless desktop configuration using roaming profiles and/or View Persona profiles. This can greatly reduce the cost and maintenance requirements of View deployments. Stateless desktops are typical in the following environments:
Task Workers: A group of workers whose tasks are well known and who all share a common set of core applications. Task workers can use roaming profiles to maintain data between user sessions. In a multi-shift environment, having stateless desktops means we only need to provision as many desktops as will be used concurrently. Task worker setups are typically found in the following scenarios:
Data entry
Call centers
Finance, Accounts Payable, Accounts Receivable
Classrooms (in some situations)
Laboratories
Healthcare terminals
Kiosk Users: A group of users that do not log in; logins are typically automatic or without credentials. Kiosk users are typically untrusted users, so kiosk VMs should be locked down and restricted to only the core applications that need to run. Kiosks are typically refreshed after logoff or at scheduled times after hours. Kiosks can be found in situations such as the following:
Airline check-in stations
Library terminals
Classrooms (in some situations)
Customer service terminals
Customer self-serve
Digital signage
Stateful desktops
Stateful desktops are desktops that require user data to be stored on the VM or desktop host between user sessions. They have some advantages, such as reduced IOPS and higher disk performance, due to the ability to choose thick provisioning. These machines are typically required by users who extensively customize their desktops in non-trivial ways, require complex or unique applications that are not shared by a large group, or require the ability to modify their VM. Stateful desktops are typically used in the following situations:
Users who require the ability to modify the installed applications
Developers
IT administrators
Unique or specialized users
Department managers
VIP staff/managers
Dedicated pools
Dedicated pools are View desktops provisioned using thin or thick provisioning, and they are typically used for stateful desktop deployments. Each desktop can be provisioned with a dedicated persistent disk used for storing the user profile and data. Once assigned a desktop, a user will always log into the same desktop, ensuring that their profile stays consistent.
During OS refresh, rebalance, and recompose operations, the OS disk is reverted back to the base image. Dedicated pools with persistent disks offer simplicity for managing desktops, as minimal profile management takes place; it is all handled by the View Composer/View Connection Server. This also ensures that applications that store profile data will almost always be able to retrieve that data on the next login, meaning that the administrator doesn't have to track down applications that incorrectly store data outside the roaming profile folder.
HA considerations for dedicated pools
Dedicated pools unfortunately have very difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for dedicated pools, so that in the event of a host outage other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware Virtual SAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers always being available. Backups are also a must with this design, as there are very few recovery options in the event of a storage array failure.
Floating pools
Floating pools are pools of desktops where any user can be assigned to any desktop in the pool upon login. Floating pools are generally used for stateless desktop deployments and can be combined with roaming profiles or View Persona to provide a consistent user experience at login. Since floating pool desktops are treated as disposable VMs, we open up additional options for HA. Floating pool desktops are given two local disks: the OS disk, which is a replica of the assigned base VM, and the disposable disk, where the page file, hibernation file, and temp drive are located. When floating pools are refreshed, recomposed, or rebalanced, all changes made to the desktops by the users are lost. This is due to the disposable disk being discarded between refreshes and the OS disk being reverted back to the base image. As such, any session information, such as the profile, temp directory, and software changes, is lost between refreshes. Refreshes can be scheduled to occur after logoff or every X days, or they can be triggered manually.
HA considerations for floating pools
Floating pools can be protected in several ways depending on the environment. Since floating pools can be deployed on local storage, we can protect against a host failure by provisioning the floating pool VMs across multiple separate hosts. In the event of a host failure, the remaining virtual desktops will be used to log users in, and if there is free capacity in the cluster, more virtual desktops will be provisioned on other hosts. For environments with shared storage, floating pools can still be deployed on the shared storage, but it is a good idea to have a secondary shared storage device or a highly available storage device, so that in the event of a storage failure the VMs can be started on the secondary storage device. VMware Virtual SAN is inherently HA-safe, and there is no need for a secondary datastore when using Virtual SAN.
Many floating pool environments will utilize a profile management solution such as Roaming Profiles or View Persona Management. In these situations, it is essential to set up a redundant storage location for the View Persona profiles and/or roaming profiles. In practice, a Windows DFS share is a convenient and easy way to guard profiles against loss in the event of an outage. DFS can be configured to replicate changes made to the profiles in real time between hosts. If the Windows DFS servers are provisioned as VMs on shared storage, make sure to create a DRS rule to separate the VMs onto different hosts (see the PowerCLI sketch after this paragraph). Where possible, DFS servers should be stored on separate disk arrays to ensure their data is preserved in the event of a disk array or storage processor failure. For more information regarding Windows DFS, you can visit the link below:
https://technet.microsoft.com/en-us/library/jj127250.aspx
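The DRS separation rule mentioned above can be created from the vSphere client or with PowerCLI. Here is a minimal, hedged sketch; the cluster name ViewCluster and the VM names DFS01 and DFS02 are placeholders for your own environment:
# Connect to vCenter first with Connect-VIServer, then create a VM anti-affinity rule for the DFS servers
$vms = Get-VM -Name DFS01, DFS02
New-DrsRule -Cluster (Get-Cluster -Name ViewCluster) -Name "Separate-DFS-Servers" -KeepTogether $false -VM $vms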
Manual pools
Manual pools are custom, dedicated desktops for each user: a VM is manually built for each user of the manual pool. Manual pools are stateful pools that generally do not utilize profile management technologies such as View Persona or Roaming Profiles. Like dedicated pools, once a user is assigned to a VM they will always log into the same VM, so the HA requirements for manual pools are very similar to those for dedicated pools. Manual desktops can be configured in almost any manner desired by the administrator, and there is no requirement for more than one disk to be attached to a manual pool desktop.
Manual pools can also be configured to use physical hardware as the desktop, such as blade servers, desktop computers, or even laptops. In this situation, there are limited high availability options without investing in exotic and expensive hardware. As a best practice, the physical hosts should be built with redundant power supplies, ECC RAM, and mirrored hard disks, depending on budget and HA requirements. There should also be a good backup strategy for the physical hosts connected to the manual pools.
HA considerations for manual pools
Manual pools, like dedicated pools, have difficult HA requirements. Storing the user profile with the VM means that the VM has to be stored and maintained in an HA-aware fashion. This almost always results in a shared disk solution being required for manual pools, so that in the event of a host outage other hosts connected to the same storage can start up the VM. For shared storage, we can use NFS, iSCSI, Fibre Channel, or VMware Virtual SAN storage. Consider investing in storage systems with primary and backup controllers, as we will be dependent on the disk controllers always being available. Backups are also a must with this design, as there are very few recovery options in the event of a storage array failure. Virtual SAN deployments are inherently HA-safe and are excellent candidates for manual pool storage.
Manual pools, given their static nature, also have the option of using replication technology to back up the VMs onto another disk. You can use VMware vSphere Replication for automatic replication, or use one of the variety of storage replication solutions offered by storage and backup vendors. In some cases, it may even be possible to use Fault Tolerance on the virtual desktops for true high availability; note that this limits the individual VMs to a single vCPU, which may be undesirable.
Remote Desktop Services pools
Remote Desktop Services pools (RDS pools) are pools where the remote session or application is hosted on a Windows Remote Desktop Server. The application or remote session is run under the user's credentials. Usually, all the user data is stored locally on the Remote Desktop Server, but it can also be stored remotely using Roaming Profiles or View Persona profiles. Folder Redirection to a central network location is also commonly used with RDS pools.
Typical uses for Remote Desktop Services include migrating users off legacy RDS environments, hosting applications, and providing access to troublesome applications or applications with large memory footprints. The Windows Remote Desktop Server can be either a VM or a standalone physical host. It can be combined with Windows clustering technology to provide scalability and high availability, and you can also deploy a load balancer solution to manage connections between multiple Windows Remote Desktop Servers.
Remote Desktop Services pool HA considerations
Remote Desktop Services HA revolves around protecting individual RDS VMs or provisioning a cluster of RDS servers. When RDS is deployed on a single VM, it is generally best to use vSphere HA and clustering features to protect the VM. If the RDS resource requirements are larger than is practical for a VM, then we must focus on protecting the individual host or on clustering multiple hosts.
When the Windows Remote Desktop Server is deployed as a VM, the following options are available (a PowerCLI sketch for the first option appears at the end of this section):
Protect the VM with VMware HA, using shared storage: This allows vCenter to fail the VM over to another host in the event of a host failure. vSphere will be responsible for starting the VM on another host, and the VM will resume from a crashed state.
Replicate the virtual machine to separate disks on separate hosts using VMware Virtual SAN: Same as above, but in this case the VM has been replicated to another host using Virtual SAN technology. The remote VM will be started from a crashed state, using the last consistent hard drive image that was replicated.
Use replication technologies such as vSphere Replication: The VM will be periodically synchronized to a remote host. In the event of a host failure, we can manually activate the remotely synchronized VM.
Use a vendor's storage-level replication: In this case, we let our storage vendor's replication technology provide a redundant copy. This protects us in the event of a storage or host failure, and failover can be automated or manual. Consult your storage vendor for more information.
Protect the VM using backup technologies: This provides redundancy in the sense that we won't lose the VM if it fails. Unfortunately, you are at the mercy of your restore process to bring the VM back to life, and the VM will resume from a crashed state. Always keep backups of production servers.
For RDS running on a dedicated physical server, we could utilize the following:
Redundant power supplies: Redundant power supplies will keep the server going while a PSU is being replaced or becomes defective. It is also a good idea to have two separate power sources for each power supply; simple things like a faulty power bar or a tripped breaker could bring down the server if there are not two independent power sources.
Uninterruptible Power Supply: Battery backups are always a must for production-level equipment. Make sure to size the UPS to provide adequate power and duration for your environment.
Redundant network interfaces: In rare circumstances, a NIC can go bad or a cable can be damaged; redundant NICs will prevent a server outage in these cases. Remember that to protect against a switch outage, we should plug the NICs into separate switches.
Mirrored or redundant disks: Hard drives are one of the most common failure points in computers. Mirrored hard drives or RAID configurations are a must for production-level equipment.
Two or more hosts: Clustering physical servers will ensure that host failures won't cause downtime. Consider multi-site configurations for even more redundancy.
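For the first VM-based option above, the relevant cluster and VM settings can also be checked and adjusted from PowerCLI. A hedged sketch; ViewCluster and RDS01 are placeholder names, and the cmdlets assume PowerCLI 5.x or later:
# Make sure vSphere HA is enabled on the cluster hosting the RDS VM
Get-Cluster -Name ViewCluster | Set-Cluster -HAEnabled:$true -Confirm:$false
# Give the RDS VM a high restart priority so it is brought back early after a host failure
Get-VM -Name RDS01 | Set-VM -HARestartPriority High -Confirm:$false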
Shared strategies for VMs and hardware:
Provide High Availability to the RDS servers using the Microsoft Network Load Balancer (NLB): Microsoft NLB can provide load balancing for the RDS servers directly. In this situation, the clients connect to a single IP managed by the NLB, which assigns each connection to one of the servers.
Provide High Availability using a load balancer to manage sessions between RDS servers: A hardware or software load balancer can be used instead of Microsoft Network Load Balancing. Load balancer vendors provide a wide variety of capabilities and features for their load balancers; consult your load balancer vendor for best practices.
Use DNS round robin to alternate between RDS hosts: This is one of the most cost-effective load balancing methods. It has the drawback of not being able to balance the load or to direct clients away from failed hosts, and updating DNS may delay adding new capacity to the cluster or removing a failed host from it.
Remote Desktop Connection Broker with High Availability: We can provide RDS failover using the Connection Broker feature of our RDS server. For more information regarding Remote Desktop Connection Broker with High Availability, see:
https://technet.microsoft.com/en-us/library/ff686148%28WS.10%29.aspx
Here is an example topology using physical or virtual Microsoft RDS servers. We use a load balancing technology for the View Connection Servers as described in the previous chapter, and we then connect to the RDS servers via either a load balancer, DNS round robin, or a cluster IP.
Summary
In this article, we covered the concept of stateful and stateless desktops and the consequences and techniques for supporting each in a highly available environment.
Resources for Article:
Further resources on this subject:
Working with Virtual Machines [article]
Storage Scalability [article]
Upgrading VMware Virtual Infrastructure Setups [article]