
How-To Tutorials - Cloud Computing

Determining resource utilization requirements

Packt
17 Feb 2014
11 min read
For those hoping to find a magical catch-all formula that works in every scenario, you'll have to keep looking. Every environment is unique, and even where similarities arise, your organization's use case will most likely differ from another's. Beyond your specific VM resource requirements, the hosts you install ESXi on will also vary; the hardware available to you affects your consolidation ratio (the number of virtual machines you can fit on a single host). For example, if you have 10 servers that you want to virtualize and have determined each requires 4 GB of RAM, you could easily virtualize all 10 on a host with 48 GB of memory. However, if your host has only 16 GB of memory, you may need two or three hosts to achieve the required performance.

Another important aspect to consider is when to collect resource utilization statistics about your servers. Think about the requirements for a specific server; let's use your finance department as an example. You can certainly collect resource statistics over a period in the middle of the month, and that might work just fine; however, the people in your finance department are likely to use the system most heavily during the first few days of the month, while they work through their month-end processes. If you collect resource statistics on the 15th, you might miss a huge increase in resource utilization requirements, which could lead to the system not performing as expected and to unhappy users.
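The consolidation arithmetic above can be sketched in a few lines. This is a rough sizing aid, not a capacity planner: it assumes memory is the only binding constraint, and the `overhead_gb` parameter is a hypothetical knob (the article's example ignores hypervisor overhead).

```python
import math

def hosts_needed(vm_count, gb_per_vm, host_gb, overhead_gb=0):
    """Rough host count when memory is the binding constraint.

    overhead_gb is a hypothetical reservation for the hypervisor itself;
    set it to 0 to match the article's back-of-the-envelope example.
    """
    vms_per_host = (host_gb - overhead_gb) // gb_per_vm
    return math.ceil(vm_count / vms_per_host)

# The example from the text: 10 servers needing 4 GB of RAM each.
print(hosts_needed(10, 4, 48))  # 48 GB host: all 10 fit on one host -> 1
print(hosts_needed(10, 4, 16))  # 16 GB hosts: the load must be split -> 3
```

In practice you would run the same calculation per resource (CPU, memory, IOPS) and take the largest host count.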
One last thing before we jump into some example statistics: you should consider collecting these statistics over at least two periods for each server:

1. During the normal business hours of your organization or the specific department, at a time when systems are likely to be heavily utilized.
2. Over an entire day or week, so you are aware of the impact of after-hours tasks such as backups and antivirus scans on your environment.

It's important to have a strong understanding of the use cases for all the systems you will be virtualizing. If you run your test in the middle of the month, you might miss the increased traffic for systems utilized heavily only at month end, for example, accounting systems. The more information you collect, the better prepared you will be to determine your resource utilization requirements.

There are quite a few commercial tools available to help determine the specific resource requirements for your environment. In fact, if you have an active project and/or budget, check with your server and storage vendors, as they can most likely provide tools to assess your environment over a period of time and help you collect this information. If you work with a VMware Partner or the VMware Professional Services Organization (PSO), you could also work with them to run a tool called VMware Capacity Planner. This tool is only available to partners who have passed the corresponding partner exams. For the purposes of this article, however, we will look at the statistics we can capture natively within an operating system, using Performance Monitor on Windows and the sar command on Linux. If you are an OS X user, you might be wondering why we are not touching OS X. This is because while Apple allows virtualizing OS X 10.5 and later, it is only supported on Apple hardware and is not likely an everyday use case.
If your organization requires virtualizing OS X, ESXi 5.1 is supported on specific Mac Pro desktops with Intel Xeon 5600 series processors, and ESXi 5.0 is supported on Xserve with Xeon 5500 series processors. The current Apple license agreement allows virtualizing OS X 10.5 and up; of course, you should check the latest agreement to ensure you are adhering to its terms.

Monitoring common resource statistics

From a statistics perspective, there are four main types of resources you generally monitor: CPU, memory, disk, and network. Unless you have a very chatty application, network utilization is generally very low. This doesn't mean we won't check on it, but we probably won't dedicate as much time to it as we do to CPU, memory, and disk. For CPU and memory, we generally look at utilization in terms of percentages. When we look at example servers, you will see that having an accurate inventory of the physical server is important so we can properly gauge the virtual CPU and memory requirements when we virtualize. If a physical server has dual quad-core CPUs and 16 GB of memory, it does not necessarily mean we want to provide the same amount of virtual resources.

Disk performance is where many people spend the least amount of time, and those people generally have the most headaches after they have virtualized. Disk performance is probably the most critical aspect to think about when you are planning your virtualization project. Most people only think of storage in terms of capacity, generally gigabytes (GB) or terabytes (TB). From a server perspective, however, we are mostly concerned with the amount of input and output per second, otherwise known as IOPS, and with throughput. We break IOPS down into reads and writes per second, and then into their ratio by comparing one with the other. Understanding your I/O patterns will help you design your storage architecture to properly support all your applications.
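The read/write breakdown just described is simple arithmetic; a minimal sketch, using hypothetical per-second figures rather than measured data:

```python
def iops_profile(reads_per_sec, writes_per_sec):
    """Summarize a disk workload as total IOPS and its read/write split."""
    total = reads_per_sec + writes_per_sec
    read_pct = 100.0 * reads_per_sec / total
    return total, read_pct, 100.0 - read_pct

# Hypothetical workload: 60 reads/s and 20 writes/s observed during sampling.
total, reads, writes = iops_profile(60, 20)
print(f"{total} IOPS, {reads:.0f}% read / {writes:.0f}% write")
```

A 75/25 read-heavy profile like this one and a write-heavy profile of the same total IOPS can call for quite different storage designs (cache sizing, RAID level), which is why the ratio matters as much as the total.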
Storage design and understanding is an art and a science by itself.

Sample workload

Let's break this down into a practical example so we can see how these concepts apply. In this example, we will look at two different types of servers that are likely to have different resource requirements: a Windows Active Directory domain controller and a CentOS Apache web server. In this scenario, let's assume that each of these server operating systems and applications is running on dedicated hardware, that is, they are not yet virtual machines. The first step you should take, if you have not done so already, is to document the physical systems, their components, and other relevant information such as computer or DNS name, IP address(es), location, and so on. For larger environments, you may also want to document installed software, user groups or departments, and so on.

Collecting statistics on Windows

On Windows servers, your first step is to start performance monitoring. Perform the following steps to do so:

1. Navigate to Start | Run and enter perfmon.
2. Once the Performance Monitor window opens, expand Monitoring Tools and click on Performance Monitor. Here, you could start adding various counters; however, as of Windows 2008/Windows 7, Performance Monitor includes Data Collector Sets.
3. Expand the Data Collector Sets folder and then the System folder; right-click on System Performance and select Start. Performance Monitor will start to collect key statistics about your system and its resource utilization.
4. When you are satisfied that you have collected an appropriate amount of data, click on System Performance and select Stop.
5. Your reports are saved in the Reports folder; navigate to Reports | System, click on the System Performance folder, and finally double-click on the report to view it.
In the following screenshot for our domain controller, you can see we were using 10 percent of the total CPU resources available, 54 percent of the memory, a low 18 IOPS, and 0 percent of the available network resources (this is not really uncommon; I have had busy application servers that barely break 2 percent). Now let's compare what we are utilizing with the actual physical resources available to the server. This server has two dual-core processors (four total cores) running at 2 GHz per core (8 GHz total available), 4 GB of memory, two 200 GB SAS drives configured in a RAID 1, and a 1 Gbps network card.

Performance Monitor shows averages here, but you should also investigate peak usage. If you scroll down in the report, you will find a menu labeled CPU. Navigate to CPU | Process. Here you will see quite a bit of data, more than we have space to review in this book; however, if you scroll down, you will see a section called Processor User Time by CPU. Your mean (that is, average) column should match fairly closely to the report overview provided for the total, but we also want to look at any spikes we may have encountered. As you can see, this CPU had one core that reached a maximum of 35 percent utilization, slightly more than the average suggested. If we take the average CPU utilization at 10 percent of the total CPU, it means we will theoretically require only 800 MHz of CPU power, something a single physical core could easily support. Memory is also using only half of what is available, so we can most likely reduce the amount of memory to 3 GB and still have room for various changes in operating conditions we might not have encountered during our collection window. Finally, having only 18 IOPS used means that we have plenty of performance left in the drives; even a SATA 7200 RPM drive can provide around 80 IOPS.
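The 800 MHz figure above follows directly from the averages; a one-line sketch of the same calculation, using the domain controller's numbers from the text:

```python
def required_cpu_mhz(cores, mhz_per_core, avg_util_pct):
    """Translate average utilization of a physical box into a CPU budget."""
    return cores * mhz_per_core * avg_util_pct / 100.0

# The domain controller above: 4 cores at 2 GHz each, averaging 10 percent busy.
print(required_cpu_mhz(cores=4, mhz_per_core=2000, avg_util_pct=10))  # 800.0 (MHz)
```

Remember the caveat from the text: size against peaks as well as averages, since a single core here spiked to 35 percent even though the overall average was 10 percent.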
Collecting statistics on Linux

Now let's look at the Linux web server to see how we can collect the same set of information using sar, part of the sysstat package, which can monitor resource utilization over time. This is similar to what you might get from top or iotop. The sysstat package can easily be added to your system by running yum install sysstat, as it is part of the base repository. Once the sysstat package is installed, it will start collecting information about resource utilization every 10 minutes and keep this information for a period of seven days. To see the information, you just need to run the sar command; there are different options to display different sets of information, which we will look at next.

Here, we can see that our system is idle right now by viewing the %idle column. A simple way to generate some load on your system is to run dd if=/dev/zero of=/dev/null, which will spike your CPU load to 100 percent, so don't do this on production systems! Let's look at the output with some CPU load. In this example, you can see that the CPU was under load for about half of the 10-minute collection window. One problem here is that unless a CPU spike, in this case to 100 percent, is sustained for most of a 10-minute window, we could miss it entirely when sampling at that interval. This is easily changed by editing /etc/cron.d/sysstat, which is what tells the system to collect every 10 minutes; during a collection window, an interval of one or two minutes may provide more valuable detail. In this example, you can see I am now logging at a five-minute interval instead of ten, so I have a better chance of finding the maximum CPU usage during my monitoring period.

We are not only concerned with the CPU; we also want to see memory and disk utilization. To access those statistics, run sar with the following options: the sar -r command will show RAM (memory) statistics.
At a basic level, the item we are concerned with here is the percentage of memory used, which we can use to determine how much memory is actually being utilized. The sar -b command will show disk I/O: it reports the total number of transactions per second (tps), read transactions per second (rtps), and write transactions per second (wtps). As you can see, you are able to natively collect quite a bit of relevant data about resource utilization on your systems. However, without the help of a vendor or VMware PSO with access to VMware Capacity Planner, another commercial tool, or a good automation system, this can become difficult (though certainly not impossible) to do on a large scale of hundreds or thousands of servers.
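Once you have collected sar output across many servers, summarizing it is straightforward to script. A minimal sketch of extracting the peak CPU busy percentage from captured `sar -u` text; the sample lines below are illustrative, not real collected data, and the parsing assumes sar's default `-u` layout where %idle is the last column:

```python
# Illustrative capture of `sar -u` interval lines (time, AM/PM, CPU id,
# %user, %nice, %system, %iowait, %steal, %idle).
SAMPLE = """\
12:00:01 AM     all      2.10      0.00      1.05      0.20      0.00     96.65
12:10:01 AM     all     48.70      0.00      3.10      0.55      0.00     47.65
12:20:01 AM     all      1.90      0.00      0.95      0.10      0.00     97.05
"""

def peak_busy(sar_text):
    """Return the highest (100 - %idle) value across sampling intervals."""
    peaks = []
    for line in sar_text.splitlines():
        fields = line.split()
        idle = float(fields[-1])   # %idle is the last column of `sar -u`
        peaks.append(100.0 - idle)
    return max(peaks)

print(f"peak CPU busy: {peak_busy(SAMPLE):.2f}%")
```

In a real script you would feed this from `sar -u -f /var/log/sa/saNN` per host and flag any server whose peak busy percentage approaches its core count's capacity.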
Platform as a Service and CloudBees

Packt
19 Dec 2013
10 min read
Platform as a Service (PaaS) is a crossover between IaaS and SaaS. This is a fuzzy definition, but it describes the existing actors in this industry well, along with the possible confusions. A general presentation of PaaS uses a pyramid. Depending on what the graphic tries to demonstrate, the pyramid can be drawn upside down, as shown in the following diagram:

Cloud pyramids

The pyramid on the left-hand side shows XaaS platforms based on the target users' profiles. It demonstrates that IaaS is the basis for all Cloud services. It provides the required flexibility for PaaS to support applications that are exposed as SaaS to the end users. Some SaaS actually don't use a PaaS and rely directly on IaaS, but that doesn't really matter here. The pyramid on the right-hand side represents the providers, and the three levels suggest the number of providers in each category. IaaS only makes sense for highly concentrated, large-scale providers. PaaS can have more actors, probably focused on some ecosystem, but the need is for a neutral and standard platform that is actually attractive to developers. SaaS is about all the possible applications running in the Cloud; the top-level shape should thus be far larger than what the graphic shows.

So, which platform?

With the previous definition of a platform, you have just a faint idea: PaaS is more than IaaS and less than SaaS. The missing piece is knowing what the platform is about. A platform is a standardization of the runtime that a developer expects in order to do his/her job. This depends on the software ecosystem you're considering. For a Java EE developer, a platform means having at least a servlet container, managing DataSources to access the database, and having a few comparable resources wrapped as standard Java EE APIs. A Play! framework developer will consider this overweight and only ask for a JVM with WebSocket support.
A PHP developer will expect a Linux/Apache/MySQL/PHP (LAMP) stack, similar to the one he/she has been using for years with a traditional server hosting service. So, depending on the development ecosystem you're considering, platforms don't have exactly the same meaning, but they all share a common principle: a platform is the common denominator for a software language ecosystem, where the application is all that a specific developer will write or choose on their own. Java EE developers will ask for a container, and Ruby developers will ask for an RVM environment; what they run on top is their own choice. With this definition, you understand that a platform is about the standardization of the runtime for a software ecosystem. Maybe some of you have patched OpenJDK to enable some magic features in the JVM (really?), but most of us just use the standard Oracle Java distribution. Such standardization makes it possible to share resources and engineering skills on a large scale, to reduce cost, and to provide a reliable runtime.

Cloud and clustering

Another consideration for a platform is clustering. Cloud is based on slicing resources into small virtual elements and letting users select as many as they need. In most cases, this requires the application to support a clustering mode, as using more resources will require you to scale out on multiple hosts. Clustering has never been a trivial thing, and many developers aren't familiar with the related constraints. The platform can help them by providing specialized services to distribute the load across the cluster's nodes. Some PaaS, such as CloudBees or Google App Engine, provide such features, while some don't. This is the major difference between PaaS offers: some are IaaS-like preinstalled middleware services, while others offer a highly integrated platform. A typical issue faced is that of state management. Java EE developers rely on HttpSession to store users' data and retrieve it on subsequent interactions.
Modern frameworks tend to be stateless, but the state needs to be managed anyway. PaaS has to provide options to developers so that they can choose the best strategy to match their own business requirements. This is a typical clustering issue that is well addressed by PaaS, because the technical solutions (sticky sessions, session replication, distributed storage engines, and so on) have been implemented once, with all the required skills to do it right, and can be used by all platform users. Thanks to a PaaS, you don't need to be a clustering guru. This doesn't mean that it will magically let your legacy application scale out, but it gives you adequate tools to design the application for scalability.

Private versus public Clouds

Many companies are interested in the Cloud, thanks to the press publishing every product announcement as the new revolution, and would like to benefit from it, but as a private resource. If you go back to the Preface's comparison with electricity production, a private power plant may make sense if you're a very large, well-established player; Amazon or Google having private power plants to supply their giant datacenters could make sense (anyway, it doesn't seem that they do, except as backup). For most companies, this would be a surprising choice. The main reason is that the principle of the Cloud relies on the last letter of XaaS, the S that stands for Service. You can install an OpenStack or VMware farm in your datacenter, but then you won't have an IaaS. You will have some virtualization and flexibility that is probably far better than traditional dedicated hardware, but you will miss the major change: you still have to hire operators to administer the servers and software stack. You will even have a more complex software stack (search for an OpenStack administrator and you'll understand).
Using the Cloud makes sense because there are thousands of users all around the world sharing the same lower-level resources, with a centralized, highly specialized team to manage them all. Building your own private PaaS is yet another challenge. This is not a simple middleware stack. This is not about providing virtual machine images with a preinstalled Tomcat server. What about maintenance, application scalability, deployment APIs, clustering, backup, data replication, high availability, monitoring, and support? Support is a major added value of Cloud services (and I'm not just saying this because I'm a support engineer): when something fails, you need someone to help; you can't just wait on the promise of a patch provided by the community. The people running your application need significant knowledge of the platform. That's one reason CloudBees is focusing on Java first, as this is the ecosystem and environment we know best (even though we have some Erlang and Ruby engineers whose preferred game is to troll about this displeasing language). With a private Cloud, you can probably provide level-one support with an internal team, but you can't handle all the issues, nor benefit from the concentration of resources that lets a public provider build an impressive knowledge base. All these topics are ignored in most cases, as people focus only on the app:deploy automation, as opposed to old-style deployments to dedicated hardware. If this is what you're looking for, you should know that Maven has been able to do this for years on all the Java EE containers using Cargo; you can check it out at http://cargo.codehaus.org. The Cloud isn't just about abstracting the runtime behind an API; it's about changing the way developers manage and access the runtime, so that it becomes a service they can consume without any need to worry about what's happening behind the scenes.

Security

The reason companies claim to prefer a private Cloud solution is security.
Amazon datacenters are far more secure than most private datacenters, due to both a strong security policy and the anonymity of user data. Security is not about exploiting encryption algorithms, as in Hollywood movies, but about social attacks, which are far harder to defend against; few companies take care of administrative, financial, familial, or personal safety. Thanks to the combination of VPN, HTTPS, fixed IPs, and firewall filters, you can safely deploy an application on the Amazon Cloud as an extension of your own network, to access data from your legacy Oracle or SAP mainframe hosted in your datacenter. As mobile applications demonstrate, your data is already leaving your private network; there's no concrete reason why your backend application can't be hosted outside your walls.

CloudBees – embrace the development stack

The CloudBees PaaS has something special in its DNA that you won't find in other PaaS: focusing on the Java ecosystem first, even with polyglot support, CloudBees understands well the Java ecosystem's complexity and its underlying practices. Heroku was one of the first successful PaaS, focusing on the Ruby runtime. Deployment of a Ruby application is just about sending source code to the platform using the following command:

git push heroku master

Ruby is a pleasant ecosystem because there are no long debates about build and provisioning tools like the ones we know in the Java world; Gemfile and Rake, period. In the Java ecosystem, there is a need to generate and compile the source code, and then sometimes post-process the classes, hence a large set of build tools. There's also a need to provision the runtime with dozens of dependencies, so a set of dependency management tools, inter-project relations, and so on are required. With Agile development practices, automated testing has introduced a huge set of test frameworks that developers want to integrate into the deployment process.
The Java platform is not just about hosting a JVM or a servlet container. It's about managing Ant, Maven, SBT, or Gradle builds, as well as Grails-, Play!-, Clojure-, and Scala-specific tooling. It's about hosting dependency repositories. It's about handling complex build processes that include multiple levels of testing and code analysis. The CloudBees platform has two major components:

- RUN@cloud is a PaaS, as described earlier, to host applications and provide high-level runtime services
- DEV@cloud is a continuous integration and deployment SaaS based on Jenkins

Jenkins is not the subject of this article, but it is the de facto standard for (though not limited to) continuous integration in the Java ecosystem. With a large set of plugins, it can be extended to support a large set of tools, processes, and views of your project. The CloudBees team includes major Jenkins committers (including myself #selfpromotion), so it has deep knowledge of the Jenkins ecosystem and is best placed to offer it as a Cloud service. We can also help you diagnose your project workflow by applying the best continuous integration and deployment practices. This helps you get more efficient, focused results for your actual business development. The following screenshot displays the continuous Cloud delivery concept in CloudBees:

With some CloudBees-specific plugins to help, DEV@cloud Jenkins creates a smooth code-build-deploy pipeline, comparable to Heroku's Git push, but with full control over the intermediary process that converts your source code into a runnable application. This is such a significant component of a full stack for Java developers that CloudBees is the official provider of the continuous integration service for Google App Engine (http://googleappengine.blogspot.fr/2012/10/jenkins-meet-google-app-engine.html), Cloud Foundry (http://blog.cloudfoundry.com/2013/02/28/continuous-integration-to-cloud-foundry-com-using-jenkins-in-the-cloud/), and Amazon.
Summary

This article introduced the Cloud principles and benefits, and compared CloudBees to its competitors.
An Introduction to VMware Horizon Mirage

Packt
17 Dec 2013
12 min read
What is Horizon Mirage?

Horizon Mirage is a Windows desktop image management solution that centralizes a copy of each desktop image onto the Horizon Mirage Server infrastructure hosted in the datacenter. Apart from copying the image, Horizon Mirage also separates and categorizes it into logical layers; it's like adding a level of abstraction, but without a hypervisor. These layers fall into two categories: those owned and managed by the end user, and those owned and managed by the IT department. This allows the IT-managed layers, such as the operating system, to be updated independently while leaving the end users' files, profiles, and self-installed applications intact. These layers are continuously synchronized with the image in the datacenter, creating either full desktop snapshots or snapshots based on the upload policy applied to that particular device. These snapshots are backed up and ready for recovery or rollback in case of failure of the endpoint device.

So, the question I get asked all the time is, "How is this different from VDI, and does it mean I don't need Horizon View anymore?" To answer this, there are a couple of points to raise. Firstly, Horizon Mirage does not require a hypervisor; it's an image management tool that can manage an image on a physical desktop PC or laptop. In fact, one of the use cases for Mirage is to provide a user with an IT-managed Windows desktop when there is limited connectivity or bandwidth between the datacenter and the local site, or when the end user needs to work offline. The latest version of Horizon Mirage (4.3), launched on 19 November 2013, also supports a managed image running as a persistent VDI desktop on Horizon View.
To summarize, Horizon Mirage delivers against the following three key areas:

- Horizon View desktops, allowing management of VDI-based desktop images
- Physical desktop and laptop image management, and backup and recovery for Microsoft Windows-based endpoints
- The Bring Your Own Device (BYOD) feature, delivering a virtual Windows desktop machine onto a device running VMware Fusion Professional or VMware Workstation

The following diagram illustrates the three areas of Horizon Mirage integration:

The second and biggest difference is where the image executes; by that I mean, where does the desktop actually run? In a VDI environment, the desktop runs as a virtual machine centrally on a server in the datacenter: central management and central execution. In a Horizon Mirage deployment, the desktop is a physical endpoint device, so it runs natively on that device: central management and local execution. The following diagram shows how Horizon Mirage executes locally while Horizon View executes centrally:

Horizon Mirage terminologies and naming conventions

Before we start to describe some of the use cases and how Horizon Mirage works, it's worth spending five minutes covering some of the terminology used.

Endpoint

In this context, an endpoint refers to an end user's device that is being managed by Mirage; it can be either a Windows desktop PC/laptop or a virtual instance of Windows running inside VMware Workstation or Fusion Professional.

Centralized Virtual Desktop (CVD)

The name CVD is a bit misleading really, because the desktop is not actually a virtual machine in the true sense of a VM running on a hypervisor; it more likely refers to the level of abstraction used in creating the different layers of a desktop image. A CVD is a centralized copy or backup of a desktop or laptop machine that has been copied onto the Horizon Mirage Server in the datacenter.
The CVD comprises the following core components:

- Base layers
- Driver profiles
- User-installed applications
- Machine states
- User settings and data
- Application layers

Collections

A collection is a logical grouping of similar endpoints or CVDs. This could be based on departmental machines, for example, the machines used by the Sales and Marketing departments. Collections can be static or dynamic.

Reference CVD

A reference CVD, or reference machine, is effectively the centralized copy of the endpoint that you use to build your "gold image" or "master image". It is used to create your base layers. These base layers are then deployed to a user's CVD, which in turn synchronizes with the endpoint. All of your endpoints will then be running the same base layer, that is, the core operating system and applications.

Base layer

A base layer is effectively a template from which to build a desktop. By removing any existing identity information, you are able to centrally deploy a base layer to multiple endpoints. A base layer comprises the core operating system, service packs, patches, and any core applications. It is captured from a reference CVD. The best practice is to have as few base layers as possible; the ideal solution is a single base layer for all endpoints.

Application layer

An application layer is a feature introduced in Version 4.0 that allows applications to be captured and deployed as separate layers. This allows you to manage and deliver applications independently of the base layer by assigning them to a user's CVD.

Driver library / driver profile

The driver library is where the Horizon Mirage Server stores the drivers for the different hardware vendors, makes, and models of endpoints in your environment. The best practice is to download these directly from the vendor's website or driver DVD. A driver profile contains details of all the drivers required for a particular piece of hardware.
For example, you may have a driver profile for a Dell Latitude E6400 containing all the relevant drivers for that particular hardware platform. Managing drivers separately effectively decouples the base layer from the hardware, allowing you to build base layers that are independent of the endpoint hardware. It also means that driver conflicts are prevented when you refresh or migrate users to new or different hardware platforms. Hardware-specific device drivers are stored in the driver library, and the correct drivers are automatically applied to endpoints based on a set of rules that you create to match drivers to endpoints. The driver library can also detect missing or broken drivers on endpoints and fix them; however, it does not upgrade or take other actions on existing healthy drivers.

Single Instance Store (SIS)

The SIS contains the deduplicated data from all the uploaded CVDs, including operating systems, applications, and user files/data. It significantly reduces the storage requirements for CVDs, because it only holds one copy of each unique file. For example, if you centralize a Windows 7 endpoint that is 100 GB in size, the first upload will be 100 GB. Subsequent uploads only store the files that are different; so, if the second endpoint is also Windows 7, the files that are already stored will not be copied and, instead, pointers to those files will be created. In this example, maybe only 200 MB will be copied. The Horizon Mirage Management Server will show you the dedupe ratio for a centralized CVD.

Horizon Mirage use cases

In this section, we are going to cover, at a high level, some of the key use cases for Horizon Mirage. These are reflected on the Horizon Mirage Common Wizards page in the Management Server. I have broken them down into three categories: manage, migrate, and protect.

Manage

These features and components are all about delivering image management to your endpoint devices.
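Before looking at the use cases in detail, the Single Instance Store behaviour described earlier is easy to see in miniature. This toy sketch (not Mirage's actual implementation, which works at the file level over real images) keys storage on content hashes, so identical files from different endpoints are stored once:

```python
import hashlib

def sis_store_size(files):
    """Bytes a single-instance store would hold: one copy per unique content."""
    unique = {}
    for content in files:
        digest = hashlib.sha256(content).hexdigest()
        unique[digest] = len(content)          # identical content -> same key
    return sum(unique.values())

# Two toy 'endpoints' sharing mostly identical OS files (illustrative data).
endpoint_a = [b"ntoskrnl" * 1000, b"excel" * 500, b"user-doc-a"]
endpoint_b = [b"ntoskrnl" * 1000, b"excel" * 500, b"user-doc-b"]
raw = sum(len(c) for c in endpoint_a + endpoint_b)
dedup = sis_store_size(endpoint_a + endpoint_b)
print(raw, dedup)  # the deduplicated store is roughly half the raw size
```

Scaled up, this is why the second identical Windows 7 endpoint in the example above adds only a few hundred megabytes rather than another 100 GB.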
Endpoint repair
As the user's desktop is backed up in the datacenter, it's very easy to restore missing or corrupt files from the backed-up image down to the endpoint device. This could mean fixing a single file, application, or directory folder, or performing a complete restore of the endpoint. For example, a user accidentally deletes the Excel executable file and Excel now fails to load. The IT helpdesk can compare the image in the datacenter with the current state of the endpoint and work out that it's the Excel.exe file that is missing. This file is then copied back to the endpoint and Excel is up and running again.

Single image management
With Horizon Mirage, you have the ability to create a layered approach to the operating environment by abstracting operating systems, applications, drivers, user data, and so on. This allows you to manage fewer images; in fact, the idea (in Horizon Mirage-speak) is that you have only one core operating system image or base layer. To build a complete endpoint, you assign any additional layers to a user's CVD, for example, an application layer or a driver profile. The layers are merged together and then synchronized with the endpoint device. It's like ordering a pizza: you start with the base, choose your toppings, bake it, and then it gets delivered to you.

Application layering
A new feature in Horizon Mirage is the ability to deliver an application as an individual layer. Application layers can be added to a user's CVD to create a complete desktop with the correct applications for the user, based on their entitlement or the department they work in. For those familiar with VMware ThinApp and its capturing process, Horizon Mirage captures an application layer in a similar way; however, Mirage does not build a virtual bubble like ThinApp. A Horizon Mirage application layer natively installs the application onto the endpoint device, so you first need to make sure the application is compatible with the operating system.
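The endpoint repair comparison described above can be pictured as a checksum diff between the backed-up CVD and the current state of the endpoint. The following Ruby sketch is purely illustrative: the manifest format, the file names, and the find_broken_files helper are invented for this example and are not Mirage's actual internals.

```ruby
require 'digest'

# Hypothetical manifest format: { "path" => sha256 }.
# The CVD copy in the datacenter is the reference; the endpoint
# is compared against it to find missing or corrupt files.
def find_broken_files(cvd_manifest, endpoint_manifest)
  cvd_manifest.reject do |path, checksum|
    endpoint_manifest[path] == checksum
  end.keys
end

cvd = {
  "C:/Program Files/Office/Excel.exe" => Digest::SHA256.hexdigest("excel-binary"),
  "C:/Windows/notepad.exe"            => Digest::SHA256.hexdigest("notepad-binary")
}

# The endpoint is missing Excel.exe entirely.
endpoint = {
  "C:/Windows/notepad.exe" => Digest::SHA256.hexdigest("notepad-binary")
}

puts find_broken_files(cvd, endpoint).inspect
# => ["C:/Program Files/Office/Excel.exe"]
```

Anything flagged by such a comparison would then be copied back down to the endpoint, which is the essence of the Excel.exe repair example above.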
Remote office management with Mirage Branch Reflectors
Remote office locations have always been a problem for a traditional VDI solution because connectivity is usually limited or, in some cases, non-existent. So how can you deliver a centralized image over limited connectivity or slow WAN connections? With Branch Reflectors, you can nominate a desktop PC in the remote location to act as an intermediary between the local endpoints and the datacenter. The local endpoints connect locally to a Branch Reflector rather than to the Horizon Mirage Server and, in turn, the Branch Reflector synchronizes with the datacenter, downloading only the differences between the IT-managed layers and the local endpoints. The newly built layers can then be distributed to the local endpoints. One thing to remember is that this feature does not back up the endpoint devices; it is purely for updating an image and so, for example, is ideal for remotely migrating an operating system.

Migrate
The second category, migrate, covers two migration areas: one for the desktop operating system, and the second for hardware refresh projects or migrating between platforms.

Windows XP / Windows Vista to Windows 7
Probably the most important feature of Horizon Mirage today is its ability to migrate operating systems, especially as we rapidly approach April 8, 2014, when Windows XP goes end-of-support. By using the layered approach to desktop images, Mirage is able to manage the operating system as a separate layer and can therefore update this layer while leaving the user data, profile, and settings all in place. The migration process is also less intrusive for the end user because the files are copied in the background, keeping the user downtime needed to complete the migration to a minimum.

Hardware refresh
Similar to the way Horizon Mirage can migrate an operating system using a layered approach, it can also manage drivers as a separate layer.
This means that Horizon Mirage can also be used if you are refreshing your hardware platform, allowing you to build specific driver profiles to match different hardware models from multiple vendors. It also means that, if your endpoint device became corrupt, unusable, or was even stolen, you could replace it with something entirely different, yet still use the same image.

Protect
The third and final category is protection. This is something that customers don't typically deploy for their desktop and laptop estates; it's usually left to the user to make sure their machine is backed up and protected.

Centralized endpoint backup
When you install the Horizon Mirage Client onto an endpoint, meaning that the endpoint will now be managed by Horizon Mirage, the first thing that happens is that the endpoint is copied to the Horizon Mirage Server in the datacenter in the form of a CVD for that endpoint/user. In addition to this, Horizon Mirage can create snapshots of the image, creating points in time to roll back to.

Desktop recovery
Backing up desktops is only half the story; you also need to think about restoring. Horizon Mirage offers the option of restoring specific layers while preserving the remaining layers. You can restore an endpoint to a previous snapshot without overwriting user data. If a computer is stolen, damaged, or lost, you can restore the entire image to a replacement desktop computer or laptop; it doesn't even need to be the same make and model. Or you could select which layers to restore: for example, if a particular application became corrupted, you could replace just that application layer. This use case doesn't apply only to physical machines. You could restore a physical computer to a virtual desktop machine, either temporarily or as a permanent migration, perhaps as a stepping-stone to deploying a full VDI solution.
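The pizza analogy from earlier (a base layer plus assigned layers merged into a complete desktop, with user data preserved on top) can be sketched in a few lines of Ruby. This is a conceptual illustration only; the layer contents and the build_desktop helper are hypothetical, not the product's actual merge logic.

```ruby
# Hypothetical sketch of merging layers into a complete desktop image.
# Each layer is a hash of path => content; later layers win on
# conflicts, and the user layer is applied last so user data survives
# a base-layer or application-layer update.
def build_desktop(base_layer, app_layers, driver_profile, user_layer)
  [base_layer, *app_layers, driver_profile, user_layer].reduce({}) do |image, layer|
    image.merge(layer)
  end
end

base    = { "windows/system32" => "Windows 7 SP1", "office" => "Office 2010" }
apps    = [{ "acrobat" => "Reader X" }]
drivers = { "drivers/nic" => "Dell E6400 NIC driver" }
user    = { "users/jane/docs" => "Jane's files" }

image = build_desktop(base, apps, drivers, user)
puts image.keys.length   # => 5
```

Because the user layer is merged last, replacing the base or an application layer (as in the desktop recovery scenario above) leaves the user's files untouched.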
Summary
In this article, we learned what Horizon Mirage is and the terminology used, along with the three use case categories: manage, migrate, and protect.
Further Developments
Packt
16 Dec 2013
11 min read
(For more resources related to this topic, see here.) This article zooms out a level of abstraction to take a little peek into the future by discussing platform services on top of IBM® SmartCloud® Enterprise, acquisitions, and other valuable sources of information. With IBM® SmartCloud® Enterprise, you gain access to both infrastructure services (IaaS) and platform services (PaaS), all in one cloud solution. The platform services, available through the SCE management console under Service Instances, are all bundled under the name of IBM SmartCloud Application Services. Although we cannot fully predict the future, there are some valuable sources of information that I want to share with you so you can stay updated on the latest changes and announcements. Second, there are some patterns and trends that allow us to look into our crystal ball.

IBM® SmartCloud® Application Services
IBM® SmartCloud® Application Services, the platform services layer on top of IBM® SmartCloud® Enterprise, was introduced in December 2012. It delivers a collaborative, cloud-based environment that supports the full lifecycle of accelerated application development, deployment, and delivery. It provides two separate, but complementary, services:

Collaborative Lifecycle Management Service (CLMS) is a set of seamlessly integrated Rational® tools which provides a real-time, cloud-based, collaborative environment for accelerated application development and delivery as a platform service. It is designed to help coordinate software development activities throughout the lifecycle of an application, from requirements tracking through design, implementation, build, test, deployment, and maintenance.

IBM® SmartCloud® Application Workload Service (SCAWS) allows us to use design patterns.
A pattern consists of proven best practices and expertise for complex system tasks that have been captured, lab-tested, and optimized into a deployable form. These patterns can be very powerful because they can include policy-based automated scaling and easy duplication between IBM public and private cloud environments. The IBM SmartCloud® Enterprise Monthly Cost Estimator supports all IBM® SmartCloud® Application Services prices and content. To learn more about its features and how to get started running your first service instances, see pic.dhe.ibm.com/infocenter/scasic/v1r0m0/index.jsp.

Improving software delivery with DevOps
To support DevOps, collaborative tools are needed to support the agile service delivery approach, accelerating application deployment from weeks to minutes. When we combine the functionality offered by CLMS and SCAWS, this is exactly what we get: one integrated DevOps solution that promotes communication, collaboration, and integration between software developers and IT operations to more rapidly produce quality software products and services. The name DevOps is derived from a combination of the two words development and operations. DevOps is more than a new development methodology like agile software development; it's about communication and collaboration between these two stakeholders and the business. It is mainly targeted at product delivery, quality testing, feature development, and maintenance releases, in order to improve reliability, security, and the speed of development and deployment cycles.
Download this complimentary e-book to gain a better understanding of DevOps and learn how it can improve the IT processes in your organization: ow.ly/lCGGz

Valuable sources of information
IBM® SmartCloud® Enterprise and Application Services
Staying up to date with the latest IBM® SmartCloud® Enterprise (and Application Services) developments and announcements is vital for knowing what you can expect in the near future. The three most important sources of information are:

The SCE management console, specifically the Support page, where many resources are directly available or just one click away in the Documentation Library, Video Library, and Asset Catalog.

The developerWorks® website, the IBM technical resource and professional network for developers and IT professionals, which offers great in-depth articles on many of the capabilities of IBM® SmartCloud® Enterprise. It can be accessed at www.ibm.com/developerworks.

The Thoughts on Cloud blog, which is also a great place to see what's happening and what can be done with IBM and IBM Business Partner solutions. As with the developerWorks website, the information provided here covers more than IBM® SmartCloud® Enterprise alone, which gives us a broader perspective on what's happening with IBM and cloud computing. It is available at thoughtsoncloud.com.

There is also the IBM SmartCloud Enterprise newsletter, which contains a wealth of information on new features, use cases, and other news. You can subscribe to this, and more, newsletters via the IBM eNewsletter Subscription Services webpage. Then there is the IBM SmartCloud Enterprise Developers Group, a technical community composed of individuals interested in the application programming interfaces (APIs) of IBM® SmartCloud® Enterprise. The group includes IBMers and non-IBMers and covers how the APIs can be used to automate processes and build solutions that integrate with IBM® SmartCloud® Enterprise. It is available at www.ibm.com/developerworks/community/blogs/iaas_cloud.
IBM Innovation Center events
Additionally, there are the IBM Innovation Center events, which offer a wide range of no-charge workshops, seminars, and briefings conducted by highly trained subject matter experts. These events help you build technical skills, learn how to market and sell more effectively with IBM, and connect with Business Partners. Some of the virtual events focus on the new possibilities of IBM® SmartCloud® solutions, and are complimentary.

Global Technology Outlook
The Global Technology Outlook (GTO) is IBM Research's vision of the future of information technology (IT) and its implications for industries. This annual exercise highlights emerging software, hardware, and services technology trends that are expected to significantly impact the IT sector in the next 3-10 years. The research document can be downloaded from www.zurich.ibm.com/pdf/isl/infoportal/Global_Technology_Outlook_2013.pdf.

IBM Academy of Technology
Lastly, there is the IBM Academy of Technology (AoT) which, as the name suggests, consists of almost one thousand of IBM's technical leaders. The academy develops a rich technical agenda each year, consisting of studies, conferences, and consultancies. More importantly, the academy also produces a series of TechNotes which explore various areas of current and emerging technology. The TechNotes can be found at www-03.ibm.com/ibm/academy/technotes/technotes.shtml.

A glimpse into our crystal ball
When looking at the version history of IBM® SmartCloud®, we can clearly see a pattern of delivering major releases twice a year, more specifically in May and December. Since the May 2013 release has just been announced and implemented, at the time of writing, it can be expected that the next major release will be announced and implemented in December 2013. In terms of functionality, let's look at what announcements there have been on the IBM® SmartCloud® strategy and which trends there are in the marketplace.
Platform services
Platform as a service is clearly expanding the range of options you have in the cloud. We can see these IBM product and service developments on IBM's landing page for platform services, www.ibm.com/cloud-computing/us/en/paas.html, but also at almost all other major cloud service providers. Specifically, we want to mention the platform services, IBM® SmartCloud® Application Services and IBM® SmartCloud® for SAP Applications, that IBM is building on top of the infrastructure services, IBM® SmartCloud® Enterprise and IBM® SmartCloud® Enterprise+. It can be expected that the range of platform services will grow over the years, expanding the possibilities you have of getting managed and hosted middleware services right out of the box (as a service).

Open cloud standards
Commitment to open standards has been a long-running focus for IBM, which became even more visible with the March 2013 announcement that IBM cloud software and services will be based on open standards. This move will ensure that innovation in cloud computing is not hampered by locking businesses into proprietary islands of insecure and difficult-to-manage offerings. As a first step, IBM will base a new private cloud solution, part of the IBM® SmartCloud® Foundation portfolio segment, on the open source OpenStack® software. This will allow organizations to build a private cloud without the fear of being locked in, as well as allowing easier integration with public cloud solutions. It can be expected that this will resonate in further IBM® SmartCloud® Enterprise developments as well. Apart from the infrastructure-focused OpenStack standard, IBM is also working on platform-focused standards such as OASIS® TOSCA and data interface standards such as W3C Linked Data. Not to forget the standardization bodies such as The Open Group® and user groups such as the Cloud Standards Customer Council (CSCC) and the Cloud Computing Use Case Discussion Group.
An overview of all the open cloud standards, with links to each individual standard, can be found at www.ibm.com/cloud-computing/us/en/open-standards.html, and a more in-depth view of the more technical open standards at www.ibm.com/developerworks/cloud/library/cl-open-architecture.

SoftLayer Technologies Inc.
On July 8th, IBM announced that it had completed the acquisition of SoftLayer Technologies Inc. (SoftLayer), which joins IBM's new cloud services division and will be combined with IBM® SmartCloud® into a global platform. The press release, found at www-03.ibm.com/press/us/en/pressrelease/41430.wss, describes the potential of the acquisition:

"SoftLayer will enable IBM to deliver an industry first: marrying the security, privacy and reliability of private clouds with the economy and speed of a public cloud. SoftLayer offers a breakthrough capability that provides a cloud "on ramp" for born-on-the-web companies, government, and the Fortune 500."

An interesting analyst view on the SoftLayer acquisition and the growth potential for the IBM® SmartCloud® portfolio can be seen in the video at www.youtube.com/watch?v=a3uscmcQVTI.

Hybrid cloud
Hybrid cloud, combining both public and private cloud services into one solution, looks like the most powerful delivery form for the years to come: powerful in the sense that you get to choose from the mix of characteristics that both private and public cloud solutions offer. To underline the trend, many analysts believe that 2013 will be the year that hybrid cloud implementations truly gain traction. With IBM's broad and integrated family of cloud technologies, the IBM® SmartCloud® portfolio, IBM is perfectly placed to deliver hybrid cloud solutions. IBM, for instance, has solutions for integrating multi-source, multi-vendor cloud solutions, allowing application portability and central system and server management.
Another example of application portability in a hybrid cloud is the use of design patterns, allowing you to define your infrastructure topology, including application code and scalability characteristics, in a reusable asset, in both IBM's public and private cloud solutions. The developerWorks article series Inside the hybrid cloud takes you through all aspects of the value and impact of implementing and using a hybrid cloud:

The first article, Redefine services and delivery methods, covers the basics of a hybrid cloud implementation, takes you through the services it makes available, and provides a point of view on the potential business value. It is available at www.ibm.com/developerworks/cloud/library/cl-hybridcloud1.

Federation is key to XaaS, the second article, describes hybrid cloud in more detail and focuses on the principle of a federated cloud: orchestrating multiple cloud solutions as if they were one solution. It is available at www.ibm.com/developerworks/cloud/library/cl-hybridcloud2.

The article Administration peeks under the hood of hybrid cloud to see what it takes to make the hybrid powerhouse a reality. It can be viewed at www.ibm.com/developerworks/cloud/library/cl-hybridcloud3.

Implementation considerations, the final article of the series, looks into the aspects of implementing and consuming a hybrid cloud setup, such as governance, network connectivity, and access control. It is available at www.ibm.com/developerworks/cloud/library/cl-hybridcloud4.

Summary
This article helped us close the loop from the IBM cloud strategy and IBM® SmartCloud® portfolio, through fine-grained information on IBM® SmartCloud® Enterprise, to a high-level view of where IBM® SmartCloud® might be heading in the near future.
Troubleshooting Storage Contention
Packt
14 Nov 2013
6 min read
Now that we have learned about the various tools we can use to troubleshoot vSphere storage and tackled the common issues that appear when we are trying to connect to our datastores, it's time for us to look at another type of storage issue: contention. Storage contention is one of the most common causes of problems inside a virtual environment and is almost always the cause of slowness and performance issues. One of the biggest benefits of virtualization is consolidation: the ability to take multiple workloads and run them on a smaller number of systems, clustered with shared resources and with one management interface. That said, as soon as we begin to share these resources, contention is sure to occur. This article will help with some of the common issues we face pertaining to storage contention and performance.

Identifying storage contention and performance issues
Poor storage performance is quite often the result of high I/O latency values. Latency, in its simplest definition, is a measure of how long a single I/O request takes from the standpoint of your virtualized applications. As we will find out later, vSphere further breaks the latency values down into more detailed and precise values based on individual components of the stack, in order to aid us with troubleshooting. But is storage latency always a bad thing? The answer is "it depends". Obviously, a high latency value is one of the least desirable metrics in terms of storage devices, but in terms of applications, it really depends on the type of workload we are running. Heavily utilized databases, for instance, are usually very sensitive when it comes to latency, often requiring very low latency values before exhibiting timeouts and degradation of performance.
There are, however, other applications, usually requiring throughput, which will not be as sensitive to latency and have a higher latency threshold. In all cases, we as vSphere administrators will always want to do our best to minimize storage latency and should be able to quickly identify issues related to latency. We need to be able to monitor latency in our vSphere environment, and this is where esxtop can be our number one tool. We will focus on three counters, DAVG/cmd, KAVG/cmd, and GAVG/cmd, all of which are explained in the following table:

Counter    | Description                                                        | Threshold
DAVG/cmd   | Average device latency per command: time spent in the storage      | 25 ms
           | device and array                                                   |
KAVG/cmd   | Average kernel latency per command: time an I/O spends in the      | 2 ms
           | VMkernel                                                           |
GAVG/cmd   | Average guest latency per command: total latency as seen by the    | 25 ms
           | virtual machine (DAVG + KAVG)                                      |

When looking at the thresholds outlined in the preceding table, we have to understand that these are developed as more of a recommendation than a hard rule. Certainly, 25 ms of device latency isn't good, but it will affect our applications in different ways, sometimes badly, sometimes not at all. The following sections will outline how we can view latency statistics as they pertain to disk adapters, disk devices, and virtual machines.

Disk adapter latency statistics
By activating the disk adapter display in esxtop, we are able to view our latency statistics as they relate to our HBAs and paths. This is helpful in terms of troubleshooting as it allows us to determine whether the issue resides only on a single HBA or a single path to our storage array, as shown in the following screenshot. Use the following steps to activate the disk adapter latency display:

1. Start esxtop by executing the esxtop command.
2. Press d to switch to the disk adapter display.
3. Press f to select which columns you would like to display.
4. Toggle the fields by pressing their corresponding letters.

In order to view latency statistics effectively, we need to ensure that we have turned on Adapter Name (A), Path Name (B), and Overall Latency Stats (G) at the very least. esxtop counters are also available specifically for read and write latency, along with the overall latency statistics.
This can be useful when troubleshooting storage latency, as you may be experiencing quite a bit more write latency than read latency, which can help you isolate the problem to different storage components.

Disk device latency statistics
The disk device display is crucial when troubleshooting storage contention and latency issues, as it allows us to segregate any issues that may be occurring on a LUN-by-LUN basis. Use the following steps to activate the disk device latency display:

1. Start esxtop by executing the esxtop command.
2. Press u to switch to the disk device display.
3. Press f to select which columns you would like to display.
4. Toggle the fields by pressing their corresponding letters.

In order to view latency statistics effectively, we need to ensure that we have turned on Device Name (A) and Overall Latency Stats (I) at the very least. By default, the Device column is not long enough to display the full ID of each device. For troubleshooting, we will need the complete device ID, so we can enlarge this column by pressing L and entering the desired length as an integer.

Virtual machine latency statistics
The latency statistics in the virtual machine display do not use the same column headers as the previous two views. Instead, they are displayed as LAT/rd and LAT/wr. These counters are measured in milliseconds and represent the amount of time it takes to issue an I/O request from the virtual machine. This is a great view that can be used to determine a couple of things. One, is it just one virtual machine that is experiencing latency? And two, is the latency observed mostly on reads or on writes? Use the following steps to activate the virtual machine latency display:

1. Start esxtop by executing the esxtop command.
2. Press v to switch to the virtual machine disk display.
3. Press f to select which columns you would like to display.
4. Toggle the fields by pressing their corresponding letters.
In order to view latency statistics effectively, we need to ensure that we have turned on VM Name (B), Read Latency Stats (G), and Write Latency Stats (H).

Summary
Storage contention and performance issues are among the most common causes of slowness and outages within vSphere. Due to the number of software and hardware components involved in the vSphere storage stack, it's hard to pinpoint exactly where the root cause of a storage contention issue lies. Using some of the tools, examples, features, and common causes explained in this article, we should be able to isolate issues, making it easier to troubleshoot and resolve problems.
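As a quick recap, the counters and rule-of-thumb thresholds covered in this article can be expressed as a small Ruby sketch. The threshold values are the commonly cited guidance figures (recommendations, not hard limits), and the latency_warnings helper and sample data are invented for illustration:

```ruby
# Commonly cited rule-of-thumb thresholds for the esxtop latency
# counters discussed above, in milliseconds.
THRESHOLDS_MS = { "DAVG" => 25, "KAVG" => 2, "GAVG" => 25 }

# Return the counters in a sample that exceed their threshold.
def latency_warnings(sample)
  sample.select { |counter, value| value > THRESHOLDS_MS.fetch(counter) }.keys
end

# A hypothetical sample for one LUN: device latency is fine,
# but kernel latency suggests queuing inside the VMkernel.
sample = { "DAVG" => 12.0, "KAVG" => 4.5, "GAVG" => 16.5 }
puts latency_warnings(sample).inspect   # => ["KAVG"]
```

Note that GAVG in the sample equals DAVG plus KAVG, mirroring the relationship between the counters: guest latency is the total the virtual machine observes.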
Creating an Application from Scratch
Packt
13 Nov 2013
5 min read
Creating an application
Using the Command Line Tool, we are going to create a brand new application. This application is going to be a Sinatra application that displays some basic date and time information. First, navigate to a new directory that will contain the code. Everything in this directory will be uploaded to AppFog when we create the new application.

$ mkdir insideaf4
$ cd insideaf4

Now, create a new file called insideaf4.rb. The contents of the file should look like the following:

require 'sinatra'

get '/' do
  erb :index
end

This tells Sinatra to listen for requests to the base URL of / and then render the index page that we will create next. If you are using Ruby 1.8.7, you may need to add the following line at the top:

require 'rubygems'

Next, create a new directory called views under the insideaf4 directory:

$ mkdir views
$ cd views

Now we are going to create a new file under the views directory called index.erb. This file will be the one that displays the date and time information for our example. The following are the contents of the index.erb file:

<html><head>
<title>Current Time</title>
</head><body>
<% time = Time.new %>
<h1>Current Time</h1>
<table border="1" cellpadding="5">
  <tr>
    <td>Name</td>
    <td>Value</td>
  </tr>
  <tr>
    <td>Date (M/D/Y)</td>
    <td><%= time.strftime('%m/%d/%Y') %></td>
  </tr>
  <tr>
    <td>Time</td>
    <td><%= time.strftime('%I:%M %p') %></td>
  </tr>
  <tr>
    <td>Month</td>
    <td><%= time.strftime('%B') %></td>
  </tr>
  <tr>
    <td>Day</td>
    <td><%= time.strftime('%A') %></td>
  </tr>
</table>
</body></html>

This file creates a table that shows a number of different ways to format the date and time. Embedded in the HTML are Ruby snippets that look like <%= %>. Inside these snippets, we use Ruby's strftime method to display the current date and time in a number of different string formats.
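If you want to experiment with these formats outside of the application, the same strftime calls can be run in plain Ruby. This standalone sketch uses a fixed Time (rather than the current time used in index.erb) so that the output is deterministic:

```ruby
# Check the strftime formats used in index.erb with a fixed time:
# 2:30 PM on Wednesday, November 13, 2013.
time = Time.new(2013, 11, 13, 14, 30, 0)

puts time.strftime('%m/%d/%Y')   # => 11/13/2013
puts time.strftime('%I:%M %p')   # => 02:30 PM
puts time.strftime('%B')         # => November
puts time.strftime('%A')         # => Wednesday
```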
At the beginning of the file, we create a new instance of a Time object, which is automatically set to the current time. Then we use the strftime method to display different values in the table. For more information on using Ruby dates and times, please see the documentation available at http://www.ruby-doc.org/core-2.0.0/Time.html.

Testing the application
Before creating an application in AppFog, it is useful to test it out locally first. To do this, you will again need the Sinatra gem installed; if you need to do that, refer to Appendix, Installing the AppFog Gem. The following is the command to run your small application:

$ ruby insideaf4.rb

You will see the Sinatra application start, and then you can navigate to http://localhost:4567/ in a browser. You should see a page that shows the current date and time information, like the following screenshot: To terminate the application, return to the command line and press Control+C.

Publishing to AppFog
Now that you have a working application, you can publish it to AppFog and create the new AppFog application. Before you begin, make sure you are in the root directory of your project. For this example, that was the insideaf4 directory. Next, you will need to log in to AppFog:

$ af login
Attempting login to [https://api.appfog.com]
Email: matt@somecompany.com
Password: ********
Successfully logged into [https://api.appfog.com]

You may be asked for your e-mail and password again, but the tool may remember your session if you logged in recently. Now you can push your application to AppFog, which will create a new application for you. Make sure you are in the correct directory and use the push command. You will be prompted for a number of settings during the publishing process. In each case, there will be a list of options along with a default. The default value will be listed with a capital letter or listed by itself in square brackets. For our purposes, you can just press Enter at each prompt to accept the default value.
The only exception is the step that prompts you to choose an infrastructure; in that step, you will need to make a selection.

$ af push insideaf4
Would you like to deploy from the current directory? [Yn]:
Detected a Sinatra Application, is this correct? [Yn]:
1: AWS US East - Virginia
2: AWS EU West - Ireland
3: AWS Asia SE - Singapore
4: HP AZ 2 - Las Vegas
Select Infrastructure:
Application Deployed URL [insideaf4.aws.af.cm]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Bind existing services to insideaf4? [yN]:
Create services to bind to insideaf4? [yN]:
Would you like to save this configuration? [yN]:
Creating Application: OK
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (1K): OK
Push Status: OK
Staging Application insideaf4: OK
Starting Application insideaf4: OK

Summary
In this article, we learned how to create an application using AppFog.
Photo Stream with iCloud
Packt
12 Nov 2013
5 min read
Photo Stream
The way that Photo Stream works is really simple. First, you take some pictures using your iOS device. These pictures are then automatically uploaded to the iCloud server, and other devices that have Photo Stream enabled receive them immediately. Photo Stream only stores pictures that are taken using the Camera app on iOS devices. Of course, your devices need to be connected to the Internet via cellular data or Wi-Fi. For those who use Wi-Fi-only iOS devices, such as the iPod touch and the iPad with Wi-Fi, pictures are uploaded later, when the device is connected to the Internet. Photo Stream lets you upload unlimited pictures, and they won't count against your iCloud storage, but they are stored in Photo Stream for only 30 days. After that, your pictures are automatically deleted, so make sure that you have stored all pictures on your Mac or PC so that you don't lose any. All pictures are uploaded in full resolution but, when they are downloaded to iOS devices, the resolution is reduced and optimized for the devices.

Setting up Photo Stream
All Mac computers with OS X Lion or higher, PCs with the iCloud Control Panel, and iOS devices with iOS 5 or higher are able to store and receive pictures from Photo Stream. To use Photo Stream, you need to activate it on each device, so you can also decide on which devices you want to store and receive pictures using Photo Stream.

Photo Stream on iOS
It would be hard for me (and maybe for you too) not to enable Photo Stream on my iPhone, because Photo Stream is the easiest way to share pictures and screenshots across iOS devices. You don't need to share them by e-mail or using other apps; just let iCloud stream them to all devices. To enable Photo Stream on iOS, navigate to Settings | Photos & Camera. Set the My Photo Stream toggle to the ON position, as shown in the preceding screenshot, and that's all! Photo Stream is now ready to serve you. You can also enable Photo Stream by navigating to Settings | iCloud | Photo Stream and setting the My Photo Stream toggle to the ON position. To view all pictures stored in Photo Stream, open the Photos app, tap on the Albums tab, and then tap on the My Photo Stream tab at the bottom of the screen. You can browse and view the pictures just like browsing pictures in Albums or Events, as shown in the following screenshot: You can share pictures from Photo Stream to Mail, Message, Twitter, or Facebook; many other actions are available as well. You can also delete pictures individually from Photo Stream: tap on Select, tap on the pictures that you want to delete, and then tap on the Delete icon to execute the process. Saving pictures from Photo Stream to Camera Roll is really easy. Just tap on Select and choose the pictures that you want to save. When you're finished, tap on the Share icon and choose the Save to Camera Roll icon at the bottom of the screen to store all of the chosen pictures in Camera Roll. You must choose whether to keep the pictures in an existing album or in a new album.

Photo Stream on Mac
Photo Stream on Mac is really great. It's integrated with iPhoto, one of the applications in the iLife suite for managing photos and videos. You will have it installed by default when you purchase a new Mac, or you can purchase it yourself via the Mac App Store. With iPhoto, you can add and delete pictures in Photo Stream. One big advantage of having Photo Stream enabled on your Mac is that you don't need to plug your iOS device into your Mac just to copy the pictures taken with it. To enable Photo Stream on Mac, navigate to System Preferences | iCloud and check the Photos checkbox to enable it. You can manage and view your Photo Stream in iPhoto or Aperture. You can also use pictures from Photo Stream in iMovie as part of the Media Browser.

Viewing Photo Stream on iPhoto
After you've enabled Photo Stream from the iCloud preference pane, launch iPhoto on your Mac.
You can also enable Photo Stream by navigating to Settings | iCloud | Photo Stream and setting the My Photo Stream toggle to the ON position.

To view all pictures stored in Photo Stream, open the Photos app, tap on the Albums tab, and then tap on the My Photo Stream tab at the bottom of the screen. You can browse and view the pictures just as you would browse Albums or Events, as shown in the following screenshot.

You can share pictures from Photo Stream to Mail, Message, Twitter, or Facebook; many other actions are available as well. You can also delete pictures individually from Photo Stream: tap on Select, tap on the pictures that you want to delete, and then tap on the Delete icon to execute the process.

Saving pictures from Photo Stream to Camera Roll is really easy. Just tap on Select and choose the pictures that you want to save. When you're finished, tap on the Share icon and choose the Save to Camera Roll icon at the bottom of the screen to store all chosen pictures in Camera Roll. You must choose whether to keep the pictures in an existing album or in a new album.

Photo Stream on Mac

Photo Stream on Mac is really great. It's integrated with iPhoto, one of the applications in the iLife suite for managing photos and videos. iPhoto is installed by default when you purchase a new Mac, or you can purchase it yourself via the Mac App Store. With iPhoto, you can add and delete pictures in Photo Stream. One big advantage of having Photo Stream enabled on your Mac is that you don't need to plug your iOS device into your Mac just to copy the pictures taken with it.

To enable Photo Stream on Mac, navigate to System Preferences | iCloud and check the Photos checkbox. You can manage and view your Photo Stream in iPhoto or Aperture, and you can also use pictures from Photo Stream in iMovie as part of Media Browser.

Viewing Photo Stream on iPhoto

After you've enabled Photo Stream from the iCloud preference pane, launch iPhoto on your Mac.
You'll see the iCloud icon in the left sidebar. Click on it and iPhoto shows a welcome screen, as shown in the following screenshot. Click on Turn On iCloud to enable Photo Stream, and iPhoto will download all pictures stored in Photo Stream automatically. It usually takes longer to download Photo Stream pictures the first time.

Photo Stream on iPhoto behaves differently from Photo Stream on iOS. All Photo Stream pictures that have been downloaded to iPhoto are automatically stored in your iPhoto Library. They are not only stored but also organized by iPhoto as Events, so you'll see something like "Jan 2013 Photo Stream", which contains the Photo Stream pictures from January 2013. Pictures from Photo Stream behave like other pictures in iPhoto: they are available in Media Browser, which is connected with other apps on your Mac. Everything is organized, with no more dragging and dropping from your mobile device to your Mac.

By default, every new picture added to your iPhoto Library is uploaded to Photo Stream. You can disable this by navigating to iPhoto | Preferences | iCloud and unchecking Automatic Upload, as shown in the following screenshot.

Summary

This article showed how Photo Stream works with iCloud, including setting up Photo Stream and using it on Apple platforms such as iOS and the Mac, as well as how Photo Stream integrates with iPhoto.
Troubleshooting

Packt
25 Oct 2013
20 min read
OpenStack is a complex suite of software that can make tracking down issues and faults quite daunting to beginners and experienced system administrators alike. While there is no single approach to troubleshooting systems, understanding where OpenStack logs vital information and what tools are available to help track down bugs will help resolve the issues we may encounter. However, OpenStack, like all software, will have bugs that we are not able to solve ourselves. In that case, we will show you how to gather the required information so that the OpenStack community can identify bugs and suggest fixes, which is important in ensuring those bugs or issues are dealt with quickly and efficiently.

Understanding logging

Logging is important in all computer systems, but the more complex the system, the more you rely on logging to be able to spot problems and cut down on troubleshooting time. Understanding logging in OpenStack is important to ensure your environment is healthy and you are able to submit relevant log entries back to the community to help fix bugs.

Getting ready

Log in as the root user onto the appropriate servers where the OpenStack services are installed. This makes troubleshooting easier, as root privileges are required to view all the logs.

How to do it...

OpenStack produces a large number of logs that help troubleshoot our OpenStack installations. The following details outline where these services write their logs.

OpenStack Compute services logs

Logs for the OpenStack Compute services are written to /var/log/nova/, which is owned by the nova user by default. To read these, log in as the root user (or use sudo privileges when accessing the files). The following is a list of services and their corresponding logs. Note that not all logs exist on all servers.
For example, nova-compute.log exists on your compute hosts only:

nova-compute: /var/log/nova/nova-compute.log
Log entries regarding the spinning up and running of the instances.

nova-network: /var/log/nova/nova-network.log
Log entries regarding network state, assignment, routing, and security groups.

nova-manage: /var/log/nova/nova-manage.log
Log entries produced when running the nova-manage command.

nova-conductor: /var/log/nova/nova-conductor.log
Log entries regarding services making requests for database information.

nova-scheduler: /var/log/nova/nova-scheduler.log
Log entries pertaining to the scheduler, its assignment of tasks to nodes, and messages from the queue.

nova-api: /var/log/nova/nova-api.log
Log entries regarding user interaction with OpenStack as well as messages regarding interaction with other components of OpenStack.

nova-cert: /var/log/nova/nova-cert.log
Entries regarding the nova-cert process.

nova-console: /var/log/nova/nova-console.log
Details about the nova-console VNC service.

nova-consoleauth: /var/log/nova/nova-consoleauth.log
Authentication details related to the nova-console service.

nova-dhcpbridge: /var/log/nova/nova-dhcpbridge.log
Network information regarding the dhcpbridge service.

OpenStack Dashboard logs

OpenStack Dashboard (Horizon) is a web application that runs through Apache by default, so any errors and access details will be in the Apache logs. These can be found in /var/log/apache2/*.log, which will help you understand who is accessing the service as well as report on any errors seen with the service.

OpenStack Storage logs

OpenStack Object Storage (Swift) writes logs to syslog by default. On an Ubuntu system, these can be viewed in /var/log/syslog. On other systems, these might be available at /var/log/messages. The OpenStack Block Storage service, Cinder, produces logs in /var/log/cinder by default.
The following list is a breakdown of the log files:

cinder-api: /var/log/cinder/cinder-api.log
Details about the cinder-api service.

cinder-scheduler: /var/log/cinder/cinder-scheduler.log
Details related to the operation of the Cinder scheduling service.

cinder-volume: /var/log/cinder/cinder-volume.log
Log entries related to the Cinder volume service.

OpenStack Identity logs

The OpenStack Identity service, Keystone, writes its logging information to /var/log/keystone/keystone.log. Depending on how you have Keystone configured, the information in this log file can range from very sparse to extremely verbose, including complete plaintext requests.

OpenStack Image Service logs

The OpenStack Image Service, Glance, stores its logs in /var/log/glance/*.log, with a separate log file for each service. The following is a list of the default log files:

api: /var/log/glance/api.log
Entries related to the Glance API.

registry: /var/log/glance/registry.log
Log entries related to the Glance registry service. Things like metadata updates and access will be stored here, depending on your logging configuration.

OpenStack Network Service logs

The OpenStack Networking Service, formerly Quantum, now Neutron, stores its log files in /var/log/quantum/*.log, with a separate log file for each service. The following is a list of the corresponding logs:

dhcp-agent: /var/log/quantum/dhcp-agent.log
Log entries pertaining to the dhcp-agent.

l3-agent: /var/log/quantum/l3-agent.log
Log entries related to the l3 agent and its functionality.

metadata-agent: /var/log/quantum/metadata-agent.log
Log entries related to requests Quantum has proxied to the Nova metadata service.

openvswitch-agent: /var/log/quantum/openvswitch-agent.log
Entries related to the operation of Open vSwitch. When implementing OpenStack Networking, if you use a different plugin, its log file will be named accordingly.
server: /var/log/quantum/server.log
Details and entries related to the Quantum API service.

OpenVSwitch Server: /var/log/openvswitch/ovs-vswitchd.log
Details and entries related to the Open vSwitch daemon.

Changing log levels

By default, each OpenStack service has a sane level of logging, determined by the default log level, WARNING. That is, it will log enough information to tell you the status of the running system as well as provide basic troubleshooting information. However, there will be times when you need to adjust the logging verbosity either up or down to help diagnose an issue or reduce logging noise. As each service can be configured similarly, we will show you how to make these changes on the OpenStack Compute service.

Log-level settings in OpenStack Compute services

To do this, log into the box where the OpenStack Compute service is running and execute the following command:

sudo vim /etc/nova/logging.conf

Change the log levels to DEBUG, INFO, or WARNING for any of the services listed.

Log-level settings in other OpenStack services

Other services, such as Glance and Keystone, currently have their log-level settings within their main configuration files, such as /etc/glance/glance-api.conf. Adjust the log levels by altering the relevant lines to achieve INFO or DEBUG levels, and restart the relevant service to pick up the log-level change.

How it works...

Logging is an important activity in any software, and OpenStack is no different. It allows an administrator to track down problematic activity that can be used in conjunction with the community to help provide a solution. Understanding where the services log, and managing those logs so that problems can be identified quickly and easily, is important.

Checking OpenStack services

OpenStack provides tools to check on its services. In this section, we'll show you how to check the operational status of these services.
We will also use common system commands to check whether our environment is running as expected.

Getting ready

To check our OpenStack Compute host, we must log into that server, so do this now before following the given steps.

How to do it...

To check that OpenStack Compute is running the required services, we invoke the nova-manage tool and ask it various questions about the environment, as follows:

Checking OpenStack Compute Services

To check our OpenStack Compute services, issue the following command:

sudo nova-manage service list

You will see an output similar to the following. The :-) indicates that everything is fine. The fields are defined as follows:

Binary: This is the name of the service that we're checking the status of.
Host: This is the name of the server or host where this service is running.
Zone: This refers to the OpenStack Zone that is running that service. A zone can run different services. The default zone is called nova.
Status: This states whether or not an administrator has enabled or disabled that service.
State: This refers to whether that running service is working or not.
Updated_At: This indicates when that service was last checked.

If OpenStack Compute has a problem, you will see XXX in place of :-), as the following shows:

nova-compute compute.book nova enabled XXX 2013-06-18 16:47:35

If you do see XXX, the answer to the problem will be in the logs at /var/log/nova/. If you get intermittent XXX and :-) for a service, first check whether the clocks are in sync.

OpenStack Image Service (Glance)

The OpenStack Image Service, Glance, while critical to the ability of OpenStack to provision new instances, does not contain its own tool to check the status of the service. Instead, we rely on some built-in Linux tools.
We can use the following system commands to check that Glance is running:

ps -ef | grep glance
netstat -ant | grep 9292.*LISTEN

The first should return process information for Glance to show it's running, and 9292 is the default port that should be open in the LISTEN mode on your server, ready for use. To check that the correct port is in use, issue the following command:

netstat -ant | grep 9292

tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN

Other services that you should check

Should Glance be having issues while the above services are in working order, you will want to check the following services as well:

rabbitmq: For rabbitmq, run the following command:

sudo rabbitmqctl status

If rabbitmq isn't working as expected, you will see output indicating that the rabbitmq service or node is down.

ntp: For ntp (Network Time Protocol, for keeping nodes in time sync), run the following command:

ntpq -p

ntp is required for multi-host OpenStack environments but may not be installed by default. Install the ntp package with sudo apt-get install -y ntp. The ntpq command should return output regarding contacting NTP servers.

MySQL Database Server: For MySQL Database Server, run the following commands:

PASSWORD=openstack
mysqladmin -uroot -p$PASSWORD status

This will return some statistics about MySQL, if it is running.

Checking OpenStack Dashboard (Horizon)

Like the Glance service, the OpenStack Dashboard service, Horizon, does not come with a built-in tool to check its health. However, Horizon relies on the Apache web server to serve pages.
To check the status of the service, then, we check the health of the web service. To check the health of the Apache web service, log into the server running Horizon and execute the following command:

ps -ef | grep apache

To check that Apache is running on the expected port, TCP port 80, issue the following command:

netstat -ano | grep :80

This command should show the following output:

tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN off (0.00/0/0)

To test access to the web server from the command line, issue the following command:

telnet localhost 80

This command should show the following output:

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Checking OpenStack Identity (Keystone)

Keystone comes with a client-side implementation called python-keystoneclient. We use this tool to check the status of our Keystone services. To check that Keystone is running the required services, we invoke the keystone command:

# keystone user-list

Additionally, you can use the following commands to check the status of Keystone. The following command checks the status of the service:

# ps -ef | grep keystone

This should show output similar to the following:

keystone 5441 1 0 Jun20 ? 00:00:04 /usr/bin/python /usr/bin/keystone-all

Next, you can check that the service is listening on the network, using the following command:

netstat -anlp | grep 5000

This command should show output like the following:

tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 5441/python

Checking OpenStack Networking (Neutron)

When running the OpenStack Networking service, Neutron, there are a number of services that should be running on various nodes.
These are depicted in the following diagram:

On the Controller node, check that the Quantum Server API service is running on TCP port 9696, as follows:

sudo netstat -anlp | grep 9696

The command brings back output like the following:

tcp 0 0 0.0.0.0:9696 0.0.0.0:* LISTEN 22350/python

On the Compute nodes, check that the following services are running, using the ps command:

ovsdb-server
ovs-vswitchd
quantum-openvswitch-agent

For example, run the following command:

ps -ef | grep ovsdb-server

On the Network node, check that the following services are running:

ovsdb-server
ovs-vswitchd
quantum-openvswitch-agent
quantum-dhcp-agent
quantum-l3-agent
quantum-metadata-agent

To check that our Neutron agents are running correctly, issue the following command from the Controller host when you have the correct OpenStack credentials sourced into your environment:

quantum agent-list

Checking OpenStack Block Storage (Cinder)

To check the status of the OpenStack Block Storage service, Cinder, you can use the following commands:

Use the following command to check if Cinder is running:

ps -ef | grep cinder

Use the following command to check if the iSCSI target is listening:

netstat -anp | grep 3260

This command produces output like the following:

tcp 0 0 0.0.0.0:3260 0.0.0.0:* LISTEN 10236/tgtd

Use the following command to check that the Cinder API is listening on the network:

netstat -an | grep 8776

This command produces output like the following:

tcp 0 0 0.0.0.0:8776 0.0.0.0:* LISTEN

To validate the operation of the Cinder service, if all of the above is functional, you can try to list the volumes Cinder knows about:

cinder list

Checking OpenStack Object Storage (Swift)

The OpenStack Object Storage service, Swift, has a few built-in utilities that allow us to check its health.
To do so, log into your Swift node and run the following commands:

Check the Swift service using swift stat:

swift stat

Using ps, there will be a service for each configured container, account, and object store:

ps -ef | grep swift

Check the Swift API with the following command:

ps -ef | grep swift-proxy

Check whether Swift is listening on the network with the following command:

netstat -anlp | grep 8080

This should produce output like the following:

tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 9818/python

How it works...

We have used some basic commands that communicate with OpenStack services to show they're running. This elementary level of checking helps with troubleshooting our OpenStack environment.

Troubleshooting OpenStack Compute services

OpenStack Compute services are complex, and being able to diagnose faults is an essential part of ensuring the smooth running of the services. Fortunately, OpenStack Compute provides some tools to help with this process, along with tools provided by Ubuntu to help identify issues.

How to do it...

Troubleshooting OpenStack Compute services can be a complex task, but working through problems methodically and logically will help you reach a satisfactory outcome. Carry out the following suggested steps when encountering the different problems presented.

Steps for when you cannot ping or SSH to an instance

When launching instances, we specify a security group. If none is specified, a security group named default is used. These mandatory security groups ensure security is enabled by default in our cloud environment, and as such, we must explicitly state that we require the ability to ping our instances and SSH to them. For such a basic activity, it is common to add these abilities to the default security group.
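Many of the service checks in this article boil down to confirming that a TCP port shows up in the LISTEN state, and the same check helps when triaging connectivity. Wrapped as a helper, it looks like the sketch below; the function only parses netstat-style text, and it is demonstrated here against a captured sample line (rather than a live netstat run) so the example is self-contained:

```shell
# Return success if the given port number appears in LISTEN state in
# the input (netstat -ant output on a real host).
check_listen() {
  grep -q ":$1 .*LISTEN"
}

# A captured sample line in netstat -ant format; on a live server you
# would pipe the real command in: netstat -ant | check_listen 9292
SAMPLE='tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN'

if echo "$SAMPLE" | check_listen 9292; then
  echo "port 9292: LISTEN"
else
  echo "port 9292: not listening"
fi
```

The same helper works for any of the ports quoted in this article (9292 for Glance, 5000 for Keystone, 8776 for Cinder, 8080 for Swift, and so on).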
Network issues may prevent us from accessing our cloud instances. First, check that the compute hosts are able to forward packets from the public interface to the bridged interface, using the following command:

sysctl -A | grep ip_forward

net.ipv4.ip_forward should be set to 1. If it isn't, check that /etc/sysctl.conf has the following option uncommented:

net.ipv4.ip_forward=1

Then, run the following command to pick up the change:

sudo sysctl -p

Other network issues could be routing issues. Check that we can communicate with the OpenStack Compute nodes from our client and that any routing to get to these instances has the correct entries.

We may have a conflict with IPv6, if IPv6 isn't required. If this is the case, try adding --use_ipv6=false to your /etc/nova/nova.conf file, and restart the nova-compute and nova-network services. We may also need to disable IPv6 in the operating system, which can be achieved using something like the following line in /etc/modprobe.d/ipv6.conf:

install ipv6 /bin/true

If using OpenStack Neutron, check the status of the Neutron services on the host and that the correct IP namespace is being used (see Troubleshooting OpenStack Networking). Reboot your host.

Methods for viewing the Instance Console log

When using the command line, issue the following commands:

nova list
nova console-log INSTANCE_ID

For example:

nova console-log ee0cb5ca-281f-43e9-bb40-42ffddcb09cd

When using Horizon, carry out the following steps: Navigate to the list of instances and select an instance. You will be taken to an Overview screen. Along the top of the Overview screen is a Log tab. This is the console log for the instance.

When viewing the logs directly on a nova-compute host, the console logs are owned by root, so only an administrator can read them. They are placed at /var/lib/nova/instances/<instance_id>/console.log.
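Reading the console log straight off the disk can be scripted as well. The sketch below tails an instance's console.log, reusing the instance ID from the nova console-log example above; a mock directory stands in for /var/lib/nova/instances so the example runs anywhere (on a real compute host, point BASE at the real path and run as root):

```shell
ID="ee0cb5ca-281f-43e9-bb40-42ffddcb09cd"
BASE=/tmp/instances-demo            # stand-in for /var/lib/nova/instances
mkdir -p "$BASE/$ID"
printf 'cloud-init: SSH keys injected\ncloud-init boot finished\n' \
  > "$BASE/$ID/console.log"

# Show the last lines of the instance's console output.
tail -n 20 "$BASE/$ID/console.log"
```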
Instance fails to download meta information

If an instance fails to communicate to download the extra information that can be supplied to the instance metadata, we can end up in a situation where the instance is up but you're unable to log in, as the SSH key information is injected using this method.

If you are not using Neutron, ensure the following:

nova-api is running on the Controller host (in a multi_host environment, ensure there's a nova-api-metadata and a nova-network package installed and running on the Compute host).

Perform the following iptables check on the Compute node:

sudo iptables -L -n -t nat

If the expected NAT rule is missing, restart your nova-network services and check again.

Sometimes there are multiple copies of dnsmasq running, which can cause this issue. Ensure that there is only one instance of dnsmasq running:

ps -ef | grep dnsmasq

This will bring back two process entries, the parent dnsmasq process and a spawned child (verify by the PIDs). If there are any other instances of dnsmasq running, kill them. Once killed, restart nova-network, which will spawn dnsmasq again without any conflicting processes.

If you are using Neutron, the first place to look is /var/log/quantum/metadata_agent.log on the Network host. Here you may see Python stack traces that could indicate a service isn't running correctly. A connection refused message may appear here, suggesting the metadata agent running on the Network host is unable to talk to the metadata service on the Controller host via the Metadata Proxy service (also running on the Network host).

The metadata service runs on port 8775 on our Controller host, so checking that it is running involves checking that the port is open and that it's running the metadata service.
To do this on the Controller host, run the following:

sudo netstat -antp | grep 8775

This will bring back the following output if everything is OK:

tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN

If nothing is returned, check that the nova-api service is running and, if not, start it.

Instance launches; stuck at Building or Pending

Sometimes, a little patience is needed before assuming the instance has not booted, because the image is copied across the network to a node that has not seen the image before. At other times, though, if the instance has been stuck in booting or a similar state for longer than normal, it indicates a problem. The first place to look will be for errors in the logs. A quick way of doing this from the controller server is by issuing the following command:

sudo nova-manage logs errors

A common error is one related to AMQP being unreachable. Generally, these errors can be ignored unless the timestamps show they are appearing now; you tend to see a number of these messages from when the services first started up, so look at the timestamp before reaching conclusions. This command brings back any log line with ERROR as the log level, but you will need to view the logs in more detail to get a clearer picture.

A key log file when troubleshooting instances that are not booting properly is on the controller host at /var/log/nova/nova-scheduler.log. This file tends to reveal the reason why an instance is stuck in the Building state. Further information is available on the compute host at /var/log/nova/nova-compute.log; look there at the time you launch the instance. In a busy environment, you will want to tail the log file and parse for the instance ID.

Check /var/log/nova/nova-network.log (for Nova Network) and /var/log/quantum/*.log (for Neutron) for any reason why instances aren't being assigned IP addresses.
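Parsing the logs for an instance ID can be wrapped up as a single grep across every Nova log file. The sketch below fabricates a small log directory so it runs anywhere; on a real controller you would grep /var/log/nova/*.log instead, and the instance ID here is the one from the console-log example earlier:

```shell
ID="ee0cb5ca-281f-43e9-bb40-42ffddcb09cd"
DEMO=/tmp/nova-sched-demo           # stand-in for /var/log/nova
mkdir -p "$DEMO"
cat > "$DEMO/nova-scheduler.log" <<EOF
2013-06-18 16:50:01 WARNING nova.scheduler.driver No valid host found for instance $ID
EOF

# Every line, in any log, that mentions the instance, prefixed with
# the file it came from:
grep -H "$ID" "$DEMO"/*.log
```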
It could be issues around DHCP preventing address allocation or quotas being reached.

Error codes such as 401, 403, 500

The majority of the OpenStack services are web services, meaning the responses from the services are well defined:

40X: This refers to a service that is up but responding to an event produced by some user error. For example, a 401 is an authentication failure, so check the credentials used when accessing the service.

500: These errors mean a connecting service is unavailable or has caused an error that the service has interpreted as a failure. Common problems here are services that have not started properly, so check for running services.

If all avenues have been exhausted when troubleshooting your environment, reach out to the community, using the mailing list or IRC, where there is a raft of people willing to offer their time and assistance. See the Getting help from the community recipe at the end of this article for more information.

Listing all instances across all hosts

From the OpenStack controller node, you can execute the following command to get a list of the running instances in the environment:

sudo nova-manage vm list

To view all instances across all tenants, as a user with an admin role, execute the following command:

nova list --all-tenants

These commands are useful in identifying any failed instances and the hosts on which they are running. You can then investigate further.

How it works...

Troubleshooting OpenStack Compute problems can be quite complex, but looking in the right places can help solve some of the more common problems. Unfortunately, as with troubleshooting any computer system, there isn't a single command that can identify all the problems you may encounter, but OpenStack provides some tools to help you identify some of them. Having an understanding of managing servers and networks will help troubleshoot a distributed cloud environment such as OpenStack.
There's more than one place to go to identify the issues, as they can stem from the environment down to the instances themselves. Methodically working your way through the problems, though, will help lead you to a resolution.
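One way to start that methodical pass is to sweep every log location listed earlier for ERROR lines in one go. The sketch below builds a sample log tree so it is runnable as-is; on a real host you would point the grep at /var/log/nova, /var/log/glance, /var/log/cinder, and /var/log/quantum and run it as root:

```shell
# Build a sample tree (stand-in for the real /var/log service dirs).
for d in nova glance; do mkdir -p "/tmp/oslog-demo/$d"; done
echo '2013-06-18 17:01:02 ERROR nova.compute.manager Instance failed to spawn' \
  > /tmp/oslog-demo/nova/nova-compute.log
echo '2013-06-18 17:01:05 INFO glance.api Request served' \
  > /tmp/oslog-demo/glance/api.log

# Sweep every service log for ERROR lines, printing the source file
# alongside each match.
grep -rH "ERROR" /tmp/oslog-demo
```

As noted above for nova-manage logs errors, check the timestamps on what this returns before drawing conclusions, since startup noise often lingers in the logs.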
Installation and Deployment of Citrix Systems®' CPSM

Packt
23 Oct 2013
7 min read
Metrics to verify before CPSM deployment

Until now, you have learned about the most obvious requirements to install CPSM; in the upcoming session, we will look at how to verify the essentials for CPSM deployment.

Verification of environment prerequisites

We will look at the core components that should be verified right at the outset, before the installation. The first component that needs to be verified is the Active Directory (AD) schema, which is necessary to accommodate Citrix CloudPortal Services Manager. As you are aware, the operation can be performed using the Microsoft Exchange installation tools. The following steps need to be performed:

Open the command prompt on your planned Exchange server. Then execute the following command:

setup /p /on:OrganizationName

The second component that needs to be cross-checked is whether DNS aliases have been configured. Citrix CloudPortal Services Manager uses DNS aliases to discover the servers where the platform modules will be positioned. For this, create CNAME records on AD. There should be one record for each of your servers, as shown in the following table:

Server: CNAME
Database server: CORTEXSQL
Provisioning server: CORTEXPROVISIONING
Web server: CORTEXWEB
Reporting Services: CORTEXREPORTS

Use the Citrix CloudPortal Services Manager Setup utility to verify the preceding items. The utility probes our settings and, if the result is positive, displays a green check mark next to each confirmed item. If it is negative, the Setup utility shows a Validate button so you can execute the checks over again. Perform the following steps:

From your file cluster or from the installation media, execute Setup.exe.
On the CloudPortal Services Manager splash screen, click on Get Started.
On the Choose Deployment Task screen, choose Install CloudPortal Services Manager.
On the CloudPortal Services Manager screen, choose Check Environment Prerequisites. The Prepare Environment screen displays the status of the verified items.

As the next step, we will now create the system database. The heart of the deployment is the Config.xml file, which will be used throughout the wizard run-through.

How to deploy SQL Server and Reporting Services

For cloud IT providers, it is recommended to deploy SQL Server and Reporting Services in a dedicated cluster for high availability, especially when providing for multiple consumers. With regard to installation, configuration, and performance tuning of SQL Server and Reporting Services, please refer to http://technet.microsoft.com/en-us/library/ms143219(v=sql.105).aspx.

The next step is to create the databases. We have to perform this activity after deploying SQL Server and SQL Server Reporting Services. The system databases are created using the Services Manager Configuration Tool, which is installed as part of this process. Perform the following steps:

From the source location where the installation media is located, execute the Setup.exe file.
On the CloudPortal Services Manager splash screen, click on Get Started.
On the Choose Deployment Task screen, choose Install CloudPortal Services Manager.
On the Install CloudPortal Services Manager screen, choose Deploy Server Roles & Primary Location.
On the Deploy Server Roles & Primary Location screen, choose Create System Databases.

Now let us install the Citrix CloudPortal Services Manager Configuration Tool:

When prompted, click on Install to deploy the Configuration utility.
On the License Agreement screen, read and accept the license agreement and then click on Next.
On the Ready to install screen, click on Install. The Setup utility installs the Configuration Tool and the prerequisites that are required.
Click on Finish to continue creating the system databases.
The next step is the Create a Configuration File screen. Browse to the directory where you want to store the Config.xml file, provide a filename, and then click on Next.

Now, on the Create Primary Databases screen, configure the following information about the SQL Server that will store the system configuration:

Server address: This is used to specify the database server using its DNS alias, IP address, or FQDN.
Server port: This is used to declare the port number used by SQL Server. The port for a default instance of SQL Server is 1433.
Authentication mode: This is used to choose whether to use Integrated Windows authentication or SQL authentication. By default, Integrated is chosen (Mixed Mode is recommended).
Connect as: This is used to declare the user name and password of the SQL administrator account (the SA account). These fields are enabled only when the SQL authentication mode is chosen.
Auto-create SQL logins: Select this checkbox if you want the required SQL Server user accounts to be created automatically. If you do not select it, you can provide the login details manually later on the Configure Database Logins screen.

Click on Test Connection to make sure the Configuration utility can contact the SQL Server, and then click on Next.

On the Configure Database Logins screen, leave Generate IDs selected if you want passwords created automatically for the CortexProp, OLMReports, and OLM DB accounts. Clear this option if you want to provide the passwords for these accounts yourself. The CortexProp, OLM DB, and OLMReports accounts are created to ensure cross-domain access to the system databases.

On the Summary screen, review the database configuration information. If you want to change anything, click on Back to return to the appropriate configuration screen. When the configuration is complete, click on Commit.
The Applying Configuration screen displays the progress. After the system databases are successfully created, click on Finish. Once the system databases have been created, you can install the Provisioning Directory Web Service and the web platform server roles on the other servers.

Installation of the CPSM roles using the GUI

By now you should have a clear understanding of the system requirements for a CPSM installation. To start the installation using the GUI, perform the following steps on each server that will host one of the planned server roles:

Deploy and configure the Reporting server role after the primary location has been configured. If you deploy Reporting Services before the primary location has been configured, the configuration of Reporting Services fails.

From the source location where the installation media is located, execute the Setup.exe file.
On the Setup Tool splash screen, click on Get Started.
On the Choose Deployment Task screen, choose Install CloudPortal Services Manager and click on Next.
On the Install CloudPortal Services Manager screen, choose Deploy Server Roles & Primary Location and click on Next.
On the Deploy Server Roles & Primary Location screen, choose Install Server Roles and click on Next.
On the License Agreement screen, accept the license agreement and then click on Next.
On the Choose Server Roles screen, choose the roles to install and then click on Next.
On the Review Prerequisites screen, review the prerequisite items that will be deployed and then click on Next.
On the Ready to install screen, review the chosen roles and prerequisites that will be deployed. Click on Install. The Deploying Server Roles screen shows the installation of the prerequisites and the chosen roles, and the outcome.
On the Deployment Complete screen, click on Finish.
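Stepping back to the Create Primary Databases screen for a moment: the fields gathered there (server address, port, authentication mode, and credentials) are the same ingredients as an ordinary SQL Server connection string. The following Python sketch shows how they combine; the ODBC driver name and keyword names are generic conventions assumed here, not something the CPSM Configuration Tool emits:

```python
def build_connection_string(server, port=1433, auth="integrated",
                            user=None, password=None):
    """Combine the Create Primary Databases fields into an
    ODBC-style SQL Server connection string."""
    parts = ["Driver={ODBC Driver 17 for SQL Server}",
             f"Server={server},{port}"]
    if auth == "integrated":
        # Integrated Windows authentication: no credentials in the string.
        parts.append("Trusted_Connection=yes")
    elif auth == "sql":
        if not (user and password):
            raise ValueError("SQL authentication requires a user and password")
        parts += [f"UID={user}", f"PWD={password}"]
    else:
        raise ValueError(f"unknown authentication mode: {auth}")
    return ";".join(parts)

print(build_connection_string("CORTEXSQL"))
# Driver={ODBC Driver 17 for SQL Server};Server=CORTEXSQL,1433;Trusted_Connection=yes
```

The same defaulting behavior applies in the wizard: omit the port and the default instance port 1433 is assumed, and the credential fields only matter in SQL authentication mode.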
Summary

This article serves as a brief reference for readers to understand the system requirements, verify the essentials, and install and configure CPSM using the GUI and CLI.

Resources for Article:

Further resources on this subject:

Content Switching using Citrix Security [Article]
Managing Citrix Policies [Article]
Getting Started with the Citrix Access Gateway Product Family [Article]

Packt
21 Oct 2013
12 min read

Network Virtualization and vSphere

(For more resources related to this topic, see here.)

Network virtualization is what makes vCloud Director such an awesome tool. When we talk about isolated networks, we are talking about vCloud Director making use of different methods of layer 3 encapsulation (OSI/ISO model). Basically, it's the same concept that was introduced with VLANs: VLANs split up the communication in a network into different, totally isolated streams. vCloud makes use of these isolated networks to create networks in Organizations and vApps. vCloud Director has three different network items, listed as follows:

External Network: This is a network that exists outside vCloud, for example, a production network. It is basically a port group in vSphere that is used in vCloud to connect to the outside world. An External Network can be connected to multiple Organization Networks. External Networks are not virtualized and are based on existing port groups on a vSwitch or a Distributed Switch (also called a vNetwork Distributed Switch or vNDS).

Organization Network: This is a network that exists only inside one organization. You can have multiple Organization Networks in an organization. Organization Networks come in three different types:

Isolated: An isolated Organization Network exists only in this organization and is not connected to an External Network; however, it can be connected to vApp Networks or VMs. This network type uses network virtualization and its own network settings.

Routed Network (Edge Gateway): This Organization Network connects to an existing Edge device. An Edge Gateway allows defining firewall and NAT rules, DHCP services, static routes, as well as VPN connections and the load balancing functionality. Routed gateways connect External Networks to vApp Networks and/or VMs. This network type uses network virtualization and its own network settings.

Directly connected: This Organization Network is an extension of an External Network into the organization.
They directly connect External Networks to vApp Networks or VMs. These networks do NOT use network virtualization and they make use of the network settings of an External Network.

vApp Network: This is a virtualized network that only exists inside a vApp. You can have multiple vApp Networks inside one vApp. A vApp Network can connect to VMs and to Organization Networks. It has its own network settings. When connecting a vApp Network to an Organization Network, you can create a router between the vApp and the Organization Network, which lets you define DHCP, firewall, NAT rules, and Static Routing.

To create isolated networks, vCloud Director uses Network Pools. Network Pools are collections of VLANs, port groups, and networks that use layer 2 in layer 3 encapsulation. The content of these pools can be used by Organization and vApp Networks for network virtualization.

Network Pools

There are four kinds of Network Pools that can be created:

Virtual eXtensible LANs (VXLAN): VXLAN networks are layer 2 networks that are encapsulated in layer 3 packets. VMware calls this Software Defined Networking (SDN). VXLANs are automatically created by vCloud Director (vCD); however, they don't work out of the box and require some extra configuration in vCloud Network and Security (refer to the Making VXLANs work recipe).

Network isolation-backed: These have basically the same concept as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, whereas Network isolation-backed networks can't (refer to the Creating isolated networks without 1,000 VXLANs recipe).

vSphere port groups-backed: vCD uses pre-created port groups to build the vApp or Organization Networks. You need to pre-provision one port group for every vApp/Organization Network you would like to use.
VLAN-backed: vCD uses a pool of VLAN numbers to automatically provision port groups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Organization Network you would like to use. VXLANs and Network isolation-backed networks solve the problems of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using a port group or VLAN Network Pools can have additional benefits that we will explore later. So let's get started! Now let's have a closer look at what one can do with networks in vCloud, but before we dive into the recipes, let's make sure we are all on the same page. Usage of different Network types vCloud Director has three different network items. An External Network is basically a port group in vSphere that is imported into vCloud. An Organization Network is an isolated network that exists only in an organization. The same is true for vApp Networks, which exists only in vApps. In each example you will also see a diagram of the specific network: Isolated vApp Network Isolated vApp Networks exist only inside vApps. They are useful if one needs to test how VMs behave in a network or to test using an IP range that is already in use (for example, production). The downside of them is that they are isolated, meaning that it is hard to get information or software in and out. Have a look at the Forwarding an RDP (or SSH) session into an isolated vApp and accessing a fully isolated vApp or Organization Network recipes in this article to find some answers to this problem. VMs directly connected to an External Network VMs inside a vApp are connected to a Direct Organization Network that is again directly connected to an External Network, meaning that they will use the IPs from the External Network Pool. Typically, these VMs are used for production, making it possible for customers to choose vCloud for fast provisioning of preconfigured templates. 
As vCloud manages the IPs for a given IP range (Static Pool), it can be quite easy to fast provision multiple VMs this way. vApp Network connected via vApp router to an External Network VMs are connected to a vApp Network that has a vApp router defined as its gateway. The gateway connects to a Direct Organization Network. The gateway will automatically be given an IP from the External Network Pool. The IPs of the VMs inside the vApp will be managed by the vApp Static Pool. These configurations come in handy to reduce the amount of physical networking that has to be provisioned. The vApp router can act as a router with defined firewall rules, it can do S-NAT and D-NAT as well as define static routing and DHCP services. So instead of using a physical VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment. VMs directly connected to an isolated Organization Network VMs are connected directly to an isolated Organization Network. Connecting VMs directly to an isolated Organization Network normally only makes sense if there's more than one vApp/VM connected to the same Organization Network. These network constructs come in handy when we want to repeatedly test complex applications that require certain infrastructure services such as Active Directory, DHCP, DNS, database, and Exchange Servers. Instead of deploying the needed infrastructure inside the testing vApp, we create a new vApp that contains only the infrastructure. By connecting the test vApp to the infrastructure vApp via an isolated Organization Network, the test vApp can now use the infrastructure. This makes it possible to re-use these infrastructure services not only for one vApp but also for many vApps, reducing the amount of resources needed for testing. By using vApp sharing options, you can even hide away the infrastructure vApp from your users. 
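vCloud's handling of a Static Pool, mentioned above — handing out addresses from a managed range and reclaiming them when a VM is removed — can be sketched as a tiny allocator. This is purely an illustrative model, not vCloud Director's actual implementation:

```python
import ipaddress

class StaticIpPool:
    """Minimal model of a static IP pool: hands out the lowest free
    address in the range and reclaims addresses that are released."""

    def __init__(self, first, last):
        lo, hi = ipaddress.IPv4Address(first), ipaddress.IPv4Address(last)
        self.free = [ipaddress.IPv4Address(i) for i in range(int(lo), int(hi) + 1)]
        self.allocated = {}  # VM name -> address

    def allocate(self, vm_name):
        if not self.free:
            raise RuntimeError("static IP pool exhausted")
        addr = self.free.pop(0)
        self.allocated[vm_name] = addr
        return str(addr)

    def release(self, vm_name):
        # Return the VM's address to the pool so it can be reused.
        self.free.append(self.allocated.pop(vm_name))
        self.free.sort()

pool = StaticIpPool("192.168.10.100", "192.168.10.102")
print(pool.allocate("vm-01"))  # 192.168.10.100
print(pool.allocate("vm-02"))  # 192.168.10.101
pool.release("vm-01")
print(pool.allocate("vm-03"))  # 192.168.10.100 (reclaimed address is reused)
```

This is exactly why fast provisioning from templates works so smoothly: each new VM simply takes the next free address from the range, and deleting a VM puts its address back.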
vApp connected via a vApp router to an isolated Organization Network VMs are connected to a vApp Network that has a vApp router as its gateway. The vApp router gets its IP automatically from the Organization Network pool. The VMs will get their IPs from the vApp Network pool. Basically, it is a combination of the network examples—VMs directly connected to an isolated Organization Network and a vApp Network connected via a vApp router to an External Network. A test vApp or an infrastructure vApp can be packaged this way and be made ready for fast deployment. VMs connected directly to an Edge device. VMs are directly connected to the Edge Organization Network and get their IPs from the Organization Network pool. Their gateway is the Edge device that connects them to the External Networks through the Edge firewall. A typical example for this is the usage of the Edge load balancing feature in order to load balance VMs inside the vApp. Another example is that organizations that are using the same External Network are secured against each other using the Edge firewall. This is mostly the case if the External Network is the Internet and each organization is an external customer. A vApp connected to an Edge via a vApp router. VMs are connected to a vApp Network that has the vApp router as its gateway. The vApp router will automatically get an IP from the Organization Network, which again has its gateway as the Edge. The VMs will get their IPs from the vApp Network pool. This is a more complicated variant of the previous example, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks. IP management Let's have a look at IP management with vCloud. vCloud has the following three different settings for IP management of VMs: DHCP: You will need to provide a DHCP as vCloud doesn't automatically create one. However, a vApp router or an Edge can create one. 
Static-IP Pool: The IP for the VM comes from the Static IP Pool of the network it is connected to. In addition to the IP, the subnet mask, DNS, gateway, and domain suffix will be configured on the VM according to the IP settings.

Static-Manual: The IP can be defined manually; it doesn't come from the pool. The IP you define must be part of the network segment that is defined by the gateway and the subnet mask. In addition to the IP, the subnet mask, DNS, gateway, and domain suffix will be configured on the VM according to the IP settings.

All these settings require Guest Customization to be effective. If no Guest Customization is selected, or if the VM doesn't have VMware tools installed, it doesn't work, and whatever the VM was configured with as a template will be used.

Instead of wasting space and retyping what you need for each recipe every time, the following are some of the basic ingredients you will have to have ready for this article:

An organization in which at least one OvDC is present. The OvDC needs to be configured with at least three free isolated networks that have a network pool defined.
Some VM templates of an OS type you find easy to use (Linux or Windows).
An External Network that connects you to the outside world (as in outside vCloud), for example, your desktop, and has at least five IPs in the Static IP Pool.

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vSphere vApp and the vCloud vApp.

vSphere and vCloud vApps

The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as the starting and stopping order and (if you configured it) network IP allocation methods. The idea is for the vApp to be an entity of VMs that build one unit. Such vApps can then be exported or imported using OVF (Open Virtualization Format). A very good example of a vApp is VMware Operations Manager.
It comes as a vApp in an OVF and contains not only the VMs but also the startup sequence as well as setup scripts. When the vApp is deployed for the first time, additional information such as network settings are asked and then implemented. A vSphere vApp is a resource pool; it can be configured so that it will only demand resources that it is using; on the other hand, resource pool configuration is something that most people struggle with. A vSphere vApp is only a resource pool; it is not automatically represented as a folder within the VMs and Template view of vSphere, but is viewed there as a vApp, as shown in the following screenshot: The vCloud vApp is a very different concept. First of all, it is not a resource pool. The VMs of the vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the VMs and Template view of vSphere. It is a construct that is created by vCloud, and consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool). In vSphere, only basic network information (IP's assignment, gateway, and DNS) is stored in the vApp. A vCloud vApp actually encapsulates the networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network including network settings and IP pools. This information is kept while importing and exporting vCloud vApps, as shown in the following screenshot: While I'm referring to vApps in this article, I will always mean vCloud vApps. If vCenter vApps feature anywhere in this article, they will be written as vCenter vApp. Summary In this article we learned different VMware concepts that will help in improving productivity. We also went through recipes that deal with the daily tasks and also present new ideas and concepts that you may not have thought of before. 
Resources for Article: Further resources on this subject: Windows 8 with VMware View [Article] Cloning and Snapshots in VMware Workstation [Article] vCloud Networks [Article]
Packt
14 Oct 2013
8 min read

What is Oracle Public Cloud?

(For more resources related to this topic, see here.)

Brief review of cloud computing concepts

The National Institute of Standards and Technology (NIST), USA has defined cloud computing, and this definition is the best starting point to understand the fundamentals of cloud computing. The definition is as follows:

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can rapidly be provisioned and released with minimal management effort or service provider interaction."

The cloud model proposed by NIST is composed of five essential characteristics, three service models, and four deployment models.

Essential characteristics

The five essential characteristics defined by NIST are as follows:

On-demand self-service: This enables users to provision and manage computing resources, such as server time, amount of storage, and network bandwidth, without the need for human administrators.

Broad network access: Cloud services and capabilities must be available and accessible from heterogeneous platforms, such as personal computers and mobile devices.

Resource pooling: Provider resources are shared among multiple consumers, and these physical and virtual resources are automatically managed according to consumer demand.

Rapid elasticity: Resources can be elastically provisioned and released to rapidly scale out and scale in as per user need.

Measured service: Cloud provider resources should be tracked for usage by their consumers for the purpose of billing, generally on a pay-per-use basis.

Service models

Cloud services are available in the following models:

Software as a Service (SaaS): When application software is deployed over a cloud, it is called Software as a Service; for example, Oracle Planning and Budgeting and Salesforce CRM.
Consumers can access this application software from heterogeneous devices without having to worry about the management of hardware, network, and operating system. The user only needs to manage some application-specific settings, if required.

Platform as a Service (PaaS): Platform as a Service provides an application development platform in the form of a cloud service; for example, Oracle Java Cloud Service and the Google App Engine. Consumers need not bother about the management of hardware, network, and operating system, and are only supposed to manage application deployment and configuration settings.

Infrastructure as a Service (IaaS): Similarly, hardware infrastructure is made available to consumers using Infrastructure as a Service clouds, for example, Oracle IaaS and Amazon's Elastic Compute Cloud. Consumers need not manage the hardware infrastructure but have full control over the operating system, development platform, and application.

Deployment models

Cloud services can be deployed using one of the following four cloud deployment models:

Private cloud: In a private cloud, the cloud facilities are operated only for a specific organization. It can be owned or managed by the organization, a third party, or a combined entity, and can be deployed within the organization or at some other location.

Community cloud: A community cloud is shared by multiple organizations with similar concerns, such as goals, mission, and data. It can be owned or managed by several organizations from the community, a third party, or a combined entity, and can be deployed within the organizations or at some other location.

Public cloud: A public cloud is open for a large user group or can provide open access for general users. The public cloud is generally managed by business, academic, or government organizations.

Hybrid cloud: A hybrid cloud is a blend of private, community, or public clouds.
Oracle Cloud Services

Oracle offers a range of services for all the cloud service models. These services are as follows:

Oracle SaaS Services include customer relationship management, human capital management, and enterprise resource planning. These applications cover almost all of the requirements of any commercial application.

Oracle PaaS Services offer Java Service, Database Service, and Developer Service.

At the IaaS level, Oracle offers Oracle servers, Oracle Linux, Oracle Enterprise Manager, and so on.

Oracle also offers some common services such as Oracle Social Services, Oracle Management Services, Oracle Storage Services, and Oracle Messaging Services. Oracle Cloud Services have been organized by Oracle into various categories, such as Oracle Application Services, Oracle Platform Services, Oracle Social Services, Oracle Common Infrastructure Services, and Oracle Managed Cloud Services.

Oracle Application Services

Oracle Application Services provide a broad range of industry-strength, commercial Oracle applications on the cloud. These cloud services enable their consumers to easily use, enhance, and administer the applications. The Oracle Application Services are part of a full suite of business applications. It includes:

Enterprise Resource Planning (ERP) Service: Oracle's ERP Cloud Service offers a full set of financial and operational capabilities. It contains almost all the applications required by an organization of any size.

Planning and budgeting: This application supports planning and budgeting of workflow in the organization.

Financial reporting: This application provides timely, accurate financial and management reports required for decision making.

Human capital management: This application supports workforce management and simplifies the human resource management process.

Talent management: This application empowers businesses to recruit and retain the best resources using its effective functionalities.
Sales and marketing: This application can be used to capture sales and customer data and presents analytic reports to improve sales. Customer service and support: This application supports various functionalities related to customer satisfaction and support, such as contact center, feedback management, incidence management, and other functions related to customer services. Oracle Platform Services Oracle platform services enable developers to develop rich applications for their organizations. These services support a wide range of technologies for application development, including: Java Service: This service provides an enterprise-level Java application development platform. The applications can be deployed on an Oracle WebLogic server. It provides flexibility to consumers without a vender lock-in problem. This service is accessible through various consoles and interfaces. Database Service: Using this service, consumers can access an Oracle database on a cloud by Oracle Application Services, RESTful web services, Java Services, and so on. It provides complete SQL and PL/SQL support over the cloud. It supports various application development tools, including SQL developer, an application builder named as Oracle Application Express bundled with the Oracle Cloud, and the RESTful web service wizard. Developer Service: This service provides software development Platforms as a Service to its consumers. The facilities provided by this service include project configuration, user management, source control repositories, defect tracking systems, and documentation systems through wiki. Oracle Social Services Oracle social services offer facilities and tools for social presence, social marketing, and social data research. Social Services include: Social networks: This service is a private network that provides tools to capture and store the information flow within the organization, among people, enterprise applications, and business processes. 
Social marketing: This service provides applications for team management, content management, information publication, and so on. The information can be managed from various social networking sites, including Facebook, Twitter, and Google+. Social engagement and monitoring: This service supports a new type of customer relationship by automatically identifying the consumer opportunity and threats to your organization based on the data available on social networks. Oracle Common Infrastructure Services This group of Oracle Services includes two common services that can be used by and integrated with the other services. These services are: Oracle Storage Service: This service provides online storage facilities to store and manage the data/contents on the cloud. This makes the application deployment cost effective and efficient. This service offers a single point of control, secure, scalable, and high performance access to consumer data. Oracle Messaging Service: This service enables communication between various software components using the common messaging API. It also provides an infrastructure for software components to communicate with each other by sending and receiving the messages to establish a dynamic and automated workflow environment. Oracle Managed Cloud Services Oracle Managed Cloud Services offer a variety of services to customers for smooth transition to the Oracle Cloud and its successful maintenance. These services provide Oracle's expertise in the form of management services to its user. They include the facility for transition, recovery, security, and testing of the services to be migrated. Transition Service: This service is a set of services to facilitate the smooth transition of normal non-cloud applications to the cloud. It includes a Transition Advisory, migration to proven configuration, CEMLI (Customizations, Extensions, Modifications, Localizations, and Integrations) migrations, upgrade assistance, and DBA support services. 
Disaster Recovery Service: This service provides robust solutions for preventing, detecting, and recovering from sudden outages to keep the functionality running.

Security Service: This service assures its customers about the security and compliance of their data. It provides federal security services, Payment Card Industry (PCI) compliance, Health Insurance Portability and Accountability Act (HIPAA) compliance, identity management, strong authentication, Single Sign-On, identity analytic services, and so on.

Testing Service: This service helps customers to ensure that the provider's infrastructure is capable of fulfilling their needs at peak load time and assures customers that the system will meet their expectations.

Summary

In this article we have discussed the various services offered by the Oracle Public Cloud. We have described the categorization of these services.

Resources for Article:

Further resources on this subject:

Remote Job Agent in Oracle 11g Database with Oracle Scheduler [Article]
Introduction to Oracle Service Bus & Oracle Service Registry [Article]
Configuration, Release and Change Management with Oracle [Article]

Packt
08 Oct 2013
14 min read

Custom Components in Visualforce

(For more resources related to this topic, see here.) Custom components allow custom Visualforce functionality to be encapsulated as discrete modules, which provides two main benefits: Functional decomposition, where a lengthy page is broken down into custom components to make it easier to develop and maintain Code re-use, where a custom component provides common functionality that can be re-used across a number of pages A custom component may have a controller, but unlike Visualforce pages, only custom controllers may be used. A custom component can also take attributes, which can influence the generated markup or set property values in the component's controller. Custom components do not have any associated security settings; a user with access to a Visualforce page has access to all custom components referenced by the page. Passing attributes to components Visualforce pages can pass parameters to components via attributes. A component declares the attributes that it is able to accept, including information about the type and whether the attribute is mandatory or optional. Attributes can be used directly in the component or assigned to properties in the component's controller. In this recipe we will create a Visualforce page that provides contact edit capability. The page utilizes a custom component that allows the name fields of the contact, Salutation, First Name , and Last Name, to be edited in a three-column page block section. The contact record is passed from the page to the component as an attribute, allowing the component to be re-used in any page that allows editing of contacts. How to do it… This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter ContactNameEdit in the Label field. Accept the default ContactNameEdit that is automatically generated for the Name field. 
Paste the contents of the ContactNameEdit.component file from the code download into the Visualforce Markup area and click on the Save button. Once a custom component is saved, it is available in your organization's component library, which can be accessed from the development footer of any Visualforce page. For more information visit http://www.salesforce.com/us/developer/docs/pages/Content/pages_quick_start_component_library.htm. Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactEdit in the Label field. Accept the default Contact Edit that is automatically generated for the Name field. Paste the contents of the ContactEdit.page file from the code download into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the Contact Edit page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the ContactEdit page: https://<instance>/apex/ContactEdit. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The custom component that renders the input fields in the Name section defines a single, required attribute of type Contact. <apex:attribute name="Contact" type="Contact" description="The contact to edit" required="true" /> The description of the attribute must always be provided, as this is included in the component reference. The type of the attribute must be a primitive, sObject, one-dimensional list, map, or custom Apex class. The Contact attribute can then be used in merge syntax inside the component. 
<apex:inputField value="{!Contact.Salutation}"/>
<apex:inputField value="{!Contact.FirstName}"/>
<apex:inputField value="{!Contact.LastName}"/>

The page passes the contact record being managed by the standard controller to the component via the Contact attribute:

<c:ContactNameEdit contact="{!Contact}"/>

See also

The Updating attributes in component controllers recipe in this article shows how a custom component can update an attribute that is a property of the enclosing page controller.

Updating attributes in component controllers

Updating fields of sObjects passed as attributes to custom components is straightforward, and can be achieved through simple merge syntax statements. This is not so simple when the attribute is a primitive and will be updated by the component controller, as parameters are passed by value, and thus, any changes are made to a copy of the primitive. For example, passing the name field of a contact sObject, rather than the contact sObject itself, would mean that any changes made in the component would not be visible to the containing page. In this situation, the primitive must be encapsulated inside a containing class. The class instance attribute is still passed by value, so it cannot be updated to point to a different instance, but the properties of the instance can be updated.

In this recipe, we will create a containing class that encapsulates a Date primitive and a Visualforce component that allows the user to enter the date via day/month/year picklists. A simple Visualforce page and controller will also be created to demonstrate how this component can be used to enter a contact's date of birth.

Getting ready

This recipe requires a custom Apex class to encapsulate the Date primitive. To do so, perform the following steps: First, create the class that encapsulates the Date primitive by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button.
Paste the contents of the DateContainer.cls Apex class from the code download into the Apex Class area. Click on the Save button.

How to do it…

First, create the custom component controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the DateEditController.cls Apex class from the code download into the Apex Class area. Click on the Save button.

Next, create the custom component by navigating to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter DateEdit in the Label field. Accept the default DateEdit that is automatically generated for the Name field. Paste the contents of the DateEdit.component file from the code download into the Visualforce Markup area and click on the Save button.

Next, create the Visualforce page controller extension by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the ContactDateEditExt.cls Apex class from the code download into the Apex Class area. Click on the Save button.

Finally, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactDateEdit in the Label field. Accept the default ContactDateEdit that is automatically generated for the Name field. Paste the contents of the ContactDateEdit.page file from the code download into the Visualforce Markup area and click on the Save button.

Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the ContactDateEdit page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button.
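Before walking through how it works, the pass-by-value problem that motivates the DateContainer class can be illustrated outside of Apex. The following Python sketch (an illustration only; the names are invented here) shows why rebinding a primitive parameter is invisible to the caller, while mutating a shared container object is not — the same reasoning that applies to Apex primitives and class instances:

```python
# Illustration only (not part of the recipe): why a primitive must be
# wrapped in a container before a callee can update it for its caller.

class DateContainer:
    """Minimal container holding a single value, like the Apex DateContainer."""
    def __init__(self, value=None):
        self.value = value

def set_primitive(date):
    # Rebinding the parameter only changes the local name;
    # the caller's variable is untouched.
    date = "2001-01-01"

def set_container(container):
    # Mutating the shared container object IS visible to the caller.
    container.value = "2001-01-01"

birth_date = None
set_primitive(birth_date)
# birth_date is still None here

dob = DateContainer()
set_container(dob)
# dob.value is now "2001-01-01"
```

The container instance itself cannot be swapped for a different one by the callee, but its properties can be updated, which is exactly the behavior the recipe relies on.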
How it works…

Opening the following URL in your browser displays the ContactDateEdit page: https://<instance>/apex/ContactDateEdit?id=<contact_id>. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com, and <contact_id> is the ID of any contact in your Salesforce instance.

The Visualforce page controller declares a DateContainer property that will be used to capture the contact's date of birth:

public DateContainer dob {get; set;}
private Contact cont;
private ApexPages.StandardController stdCtrl {get; set;}

public ContactDateEditExt(ApexPages.StandardController std)
{
    stdCtrl=std;
    cont=(Contact) std.getRecord();
    dob=new DateContainer(cont.BirthDate);
}

Note that as DateContainer is a class, it must be instantiated when the controller is constructed. The custom component that manages the Date of Birth section defines the following two attributes:

- A required attribute of type DateContainer, which is assigned to the dateContainer property of the controller
- The title for the page block section that will house the picklists; as this is a reusable component, the page supplies an appropriate title

Note that this component is not tightly coupled with a contact date of birth field; it may be used to manage a date field for any sObject.

<apex:attribute type="DateContainer" name="dateContainerAtt"
    description="The date" assignTo="{!dateContainer}" required="true" />
<apex:attribute type="String"
    description="Page block section title" name="title" />

The component controller defines properties for each of the day, month, and year elements of the date. Each setter for these properties attempts to construct the date if all of the other elements are present. This is required as there is no guarantee of the order in which the setters will be called when the Save button is clicked and the postback takes place.
public Integer year
{
    get;
    set
    {
        year=value;
        updateContainer();
    }
}

private void updateContainer()
{
    if ( (null!=year) && (null!=month) && (null!=day) )
    {
        Date theDate=Date.newInstance(year, month, day);
        dateContainer.value=theDate;
    }
}

When the contained date primitive is changed in the updateContainer method, this is reflected in the page controller property, which can then be used to update a field in the contact record:

public PageReference save()
{
    cont.BirthDate=dob.value;
    return stdCtrl.save();
}

See also

The Passing attributes to components recipe in this article shows how an sObject may be passed as an attribute to a custom component.

Passing action methods to components

A controller action method is usually invoked from the Visualforce page that it is providing the logic for. However, there are times when it is useful to be able to execute a page controller action method directly from a custom component contained within the page. One example is for styling reasons, in order to locate the command button that executes the action method inside the markup generated by the component.

In this recipe, we will create a custom component that provides contact edit functionality, including command buttons to save or cancel the edit, and a Visualforce page to contain the component and supply the action methods that are executed when the buttons are clicked.

How to do it…

This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter ContactEdit in the Label field. Accept the default ContactEdit that is automatically generated for the Name field. Paste the contents of the ContactEdit.component file from the code download into the Visualforce Markup area and click on the Save button.
Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter ContactEditActions in the Label field. Accept the default ContactEditActions that is automatically generated for the Name field. Paste the contents of the ContactEditActions.page file from the code download into the Visualforce Markup area and click on the Save button.

Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the ContactEditActions page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the ContactEditActions page: https://<instance>/apex/ContactEditActions?id=<contact_id>. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com, and <contact_id> is the ID of any contact in your Salesforce instance.

The Visualforce page simply includes the custom component, and passes the Save and Cancel methods from the standard controller as attributes:

<apex:page standardController="Contact">
    <apex:pageMessages />
    <apex:form >
        <c:ContactEdit contact="{!contact}"
            saveAction="{!save}"
            cancelAction="{!cancel}" />
    </apex:form>
</apex:page>

The ContactEdit custom component declares attributes for the action methods of type ApexPages.Action:

<apex:attribute name="SaveAction"
    description="The save action method from the page controller"
    type="ApexPages.Action" required="true"/>
<apex:attribute name="CancelAction"
    description="The cancel action method from the page controller"
    type="ApexPages.Action" required="true"/>

These attributes can then be bound to the command buttons in the component in the same way as if they were supplied by the component's controller.
<apex:commandButton value="Save" action="{!SaveAction}" />
<apex:commandButton value="Cancel" action="{!CancelAction}" immediate="true" />

There's more…

While this example has used action methods from a standard controller, any action method can be passed to a component using this mechanism, including methods from a custom controller or controller extension.

See also

The Updating attributes in component controllers recipe in this article shows how a custom component can update an attribute that is a property of the enclosing page controller.

Data-driven decimal places

Attributes passed to custom components from Visualforce pages can be used wherever the merge syntax is legal. The <apex:outputText /> standard component can be used to format numeric and date values, but the formatting is limited to literal values rather than merge fields. In this scenario, an attribute indicating the number of decimal places to display for a numeric value cannot be used directly in the <apex:outputText /> component.

In this recipe, we will create a custom component that accepts attributes for a numeric value and the number of decimal places to display for the value. The decimal places attribute determines which optional component is rendered to ensure that the correct number of decimal places is displayed, and the component will also bracket negative values. A Visualforce page will also be created to demonstrate how the component can be used.

How to do it…

This recipe does not require any Apex controllers, so we can start with the custom component. Navigate to the Visualforce Components setup page by clicking on Your Name | Setup | Develop | Components. Click on the New button. Enter DecimalPlaces in the Label field. Accept the default DecimalPlaces that is automatically generated for the Name field. Paste the contents of the DecimalPlaces.component file from the code download into the Visualforce Markup area and click on the Save button.
Next, create the Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Click on the New button. Enter DecimalPlacesDemo in the Label field. Accept the default DecimalPlacesDemo that is automatically generated for the Name field. Paste the contents of the DecimalPlacesDemo.page file from the code download into the Visualforce Markup area and click on the Save button.

Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Pages. Locate the entry for the DecimalPlacesDemo page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button.

How it works…

Opening the following URL in your browser displays the DecimalPlacesDemo page: https://<instance>/apex/DecimalPlacesDemo. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.

The Visualforce page iterates over a number of opportunity records and delegates to the component to output the opportunity amount, deriving the number of decimal places from the amount:

<c:DecimalPlaces dp="{!TEXT(MOD(opp.Amount/10000, 5))}" value="{!opp.Amount}" />

The component conditionally renders the appropriate output panel, which contains two conditionally rendered <apex:outputText /> components, one to display a positive value to the correct number of decimal places and another to display a bracketed negative value:

<apex:outputPanel rendered="{!dp=='1'}">
    <apex:outputText rendered="{!AND(NOT(ISNULL(value)), value>=0)}"
        value="{0, number, #,##0.0}">
        <apex:param value="{!value}"/>
    </apex:outputText>
    <apex:outputText rendered="{!AND(NOT(ISNULL(value)), value<0)}"
        value="({0, number, #,##0.0})">
        <apex:param value="{!ABS(value)}"/>
    </apex:outputText>
</apex:outputPanel>
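Outside of Visualforce, the formatting rule that the component implements is compact enough to express in a few lines. The following Python function is an illustration only, not part of the recipe: it formats a value to a data-driven number of decimal places with thousands separators, and brackets negative values just as the two conditional <apex:outputText /> components do:

```python
def format_amount(value, dp):
    """Format `value` to `dp` decimal places with thousands separators,
    bracketing negative values, mirroring the DecimalPlaces component."""
    if value is None:
        return ""
    formatted = f"{abs(value):,.{dp}f}"
    return f"({formatted})" if value < 0 else formatted

print(format_amount(1234.5, 1))      # 1,234.5
print(format_amount(-1234.5, 1))     # (1,234.5)
print(format_amount(9876543.21, 2))  # 9,876,543.21
```

The Visualforce version needs one conditionally rendered panel per decimal-place value because the format pattern in <apex:outputText /> must be a literal, which is exactly the limitation the recipe works around.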
Load Balancing and HA for ownCloud

Packt
20 Aug 2013
13 min read
The key strategy

If we look closely for the purpose of load balancing, we will see three components in an ownCloud instance, which are as follows:

- User data storage (until now we were using the system hard disk)
- A web server, for example Apache or IIS
- A database; MySQL would be a good choice for demonstration

The user data storage

Whenever a user creates any file or directory in ownCloud or uploads something, the data gets stored in the data directory. If we have to ensure that our ownCloud instance remains capable of storing data, then we have to make this component redundant. Luckily for us, ownCloud supports a lot of other options out of the box besides local disk storage. We can use a Samba backend, an FTP backend, an OpenStack Swift backend, Amazon S3, WebDAV, and a lot more.

Configuring WebDAV

Web Distributed Authoring and Versioning (WebDAV) is an extension of HTTP. It is described by the IETF in RFC 4918 at http://tools.ietf.org/html/rfc4918. It provides the functionality of editing and managing documents over the web; it essentially makes the web readable and writable. To enable custom backend support, we first have to go to the familiar Apps section and enable the External Storage Support app. After this app is enabled, when we open the ownCloud admin panel, we will see an external storage section on the page. Just choose WebDAV from the drop-down menu and fill in the credentials. Choose mount point as 0 and put the root as $user/. We are doing this so that for each user, a directory will be created on the WebDAV server with their username, and whenever users log in, they will be sent to this directory. Just to verify, check out the config/mount.php file for ownCloud.

The web server

Assuming that we have taken care of backend storage, let's now handle the frontend web server. A very obvious way is to do DNS level load balancing by round robin or geographical distribution.
In the round-robin DNS scheme, the resolution of a name returns a list of IP addresses instead of a single IP. These IP addresses may be returned in round-robin fashion, which means that the order of the addresses in the list is permuted on every query. This helps in distributing the traffic, since usually the first IP is used. Another way to give out the list is to match the IP address of the client to the closest IP in the list, and then make that the first IP in the response to the DNS query.

The biggest advantage of DNS-based load distribution is that it is application agnostic. It does not care if the request is for an Apache server running PHP or an IIS server running ASP. It just rotates the IPs, and each server is responsible for handling the requests it receives appropriately.

So far, it sounds all good, but then why don't we use it all the time? Is it sufficient to balance the entire load? Well, this strategy is great for load distribution, but what will happen in case one of the servers fails? We will run into a major problem then, because DNS servers usually do not do health checks. So in case one of our servers fails, we have to either fix it very fast, which is not always easy, or remove that IP from the DNS. But DNS answers are cached by several intermediate caching-only DNS servers, which will continue to serve the stale IPs, and our clients will continue visiting the bad server. Another way is to move the IP from the bad server to the good server. So now this good server will have two IP addresses, which means that it has to handle twice the load, since DNS will keep on sending traffic after permuting the IPs in round-robin fashion. Due to these and several other problems with DNS level load balancing, we generally either avoid using it or use it along with other load-balancing mechanisms.
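The rotation behaviour described above can be sketched in a few lines. This toy Python resolver (an illustration only, with made-up IPs) returns the address list rotated by one on each query, so clients that pick the first answer get spread across the servers; note that, like real round-robin DNS, it performs no health checks:

```python
from collections import deque

class RoundRobinDNS:
    """Toy round-robin resolver: permutes the answer list on every query."""
    def __init__(self, addresses):
        self._addresses = deque(addresses)

    def resolve(self):
        answer = list(self._addresses)
        self._addresses.rotate(-1)  # next query leads with the next IP
        return answer

dns = RoundRobinDNS(["192.168.10.10", "192.168.10.11", "192.168.10.12"])
first = dns.resolve()[0]   # 192.168.10.10
second = dns.resolve()[0]  # 192.168.10.11
third = dns.resolve()[0]   # 192.168.10.12
```

A failed server's IP keeps appearing in the answers until someone removes it, and cached answers linger even after that, which is exactly the weakness that pushes us toward the mechanisms described next.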
Load balancing Apache

For the sake of this example, let's assume that we have ownCloud served by two Apache web servers at 192.168.10.10 and 192.168.10.11. Starting with Apache 2.1, a module known as mod_proxy_balancer was introduced. For CentOS, the default Apache package ships this module with itself, so installing is not a problem. If we have Apache running from the yum repo, then we already have this module with us. Now, mod_proxy_balancer supports three algorithms for load distribution, which are as follows:

Request Counting

With this algorithm, incoming requests are distributed among backend workers in such a way that each backend gets a proportional number of requests defined in the configuration by the loadfactor variable. For example, consider this Apache config snippet:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=byrequests
</Proxy>

In this example, one request out of every four will be sent to 192.168.10.11, and three will be sent to 192.168.10.10. This might be an appropriate configuration for a site with two servers, one of which is more powerful than the other.

Weighted Traffic Counting

The Weighted Traffic Counting algorithm is similar to the Request Counting algorithm, with a minor difference: Weighted Traffic Counting considers the number of bytes instead of the number of requests. In the following configuration example, the number of bytes processed by 192.168.10.10 will be three times that of 192.168.10.11:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=bytraffic
</Proxy>

Pending Request Counting

The Pending Request Counting algorithm is the latest and the most sophisticated algorithm provided by Apache for load balancing.
It is available from Apache 2.2.10 onward. In this algorithm, the scheduler keeps track of the number of requests that are assigned to each backend worker at any given time. Each new incoming request will be sent to the backend that has the least number of pending requests; in other words, to the backend worker that is relatively least loaded. This helps in keeping the request queues even among the backend workers, and each request generally goes to the worker that can process it the fastest. If two workers are equally lightly loaded, the scheduler uses the Request Counting algorithm to break the tie. The configuration is as follows:

<Proxy balancer://ownCloud>
BalancerMember http://192.168.10.11/ # Balancer member 1
BalancerMember http://192.168.10.10/ # Balancer member 2
ProxySet lbmethod=bybusyness
</Proxy>

Enable the Balancer Manager

Sometimes, we may need to change our load balancing configuration, but that may not be easy to do without affecting the running servers. For such situations, the Balancer Manager module provides a web interface to change the status of backend workers on the fly. We can use Balancer Manager to put a worker in offline mode or change its loadfactor, but we must have mod_status installed in order to use Balancer Manager. A sample config, which should be defined in /etc/httpd/httpd.conf, might look similar to the following code:

<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from .owncloudbook.com
</Location>

Once we add directives similar to the preceding ones to httpd.conf and restart Apache, we can open the Balancer Manager by pointing a browser at http://owncloudbook.com/balancer-manager.

Load balancing IIS

Load balancing IIS is quite easy using the Windows GUI. Windows Server editions come with a nifty tool for this known as Network Load Balancing (NLB). It balances the load by distributing incoming requests among a cluster of servers.
Each server in a cluster emits a heartbeat, a kind of "I am operational" message. NLB ensures that no request goes to a server which is not sending this heartbeat, thereby ensuring that all the requests are processed by operational servers. Let's now configure NLB by performing the following steps:

We need to turn it on first. We can do so by following the given steps: Go to Server Manager. Click on the Features section in the left-side bar. Then click on Add Features. Select Network Load Balancing from the list. Once we have chosen Network Load Balancing, we will click on Next >, and then click on Install to get this feature on the servers.

Once we are done here, we will open Network Load Balancing Manager from the Administrative Tools section in the Start menu. In the manager window, we need to right-click on the Network Load Balancing Clusters option to create a new cluster, as shown in the following screenshot:

Now we need to give the address of the server which is actually running the web server, and then connect to it, as shown in the following screenshot:

Choose the appropriate interface (in this example, we have only one), and then click on the Next > button. On the next window, we will be shown host parameters, where we have to assign a priority to this host, as shown in the following screenshot:

Now click on the Add button, and a dialogue will open where we have to assign an IP, which will be shared by all the hosts, as shown in the following screenshot. (Network Load Balancing Manager will configure this IP on all the machines.)

On the next dialogue, choose a cluster IP, as shown in the following screenshot. This will be the IP which will be used by the users to log in to ownCloud. Now that we have given it an IP, we will define the cluster parameters to use unicast. Multicasts and broadcasts can be used, but they are not supported by all vendors and require more effort. Now everything is done.
We are ready to use the Network Load Balancing feature. These steps are to be repeated on all the machines which are going to be a part of this cluster. So there! We have also load balanced IIS.

The MySQL database

MySQL Cluster is a separate component of MySQL, which is not shipped with the standard MySQL server but can be downloaded freely from http://dev.mysql.com/downloads/cluster/. MySQL Cluster helps in better scalability and ensuring high uptime. It is write scalable and ACID compliant, and doesn't have a single point of failure because of the way it is designed, with multiple masters and high distribution of data. This is perfect for our requirements, so let's start with its installation.

Basic terminologies

- Management node: This node performs the basic management functions. It starts and stops other nodes and performs backup. It is always a good idea to start this node before starting anything else in the cluster.
- Data node: This node will store the cluster data. There should always be more than one Data node to provide redundancy.
- SQL node: This node accesses the cluster data. It uses the NDBCLUSTER storage engine.

The default MySQL server does not ship with the NDBCLUSTER storage engine and other required features, so it is mandatory to download a server binary which supports the MySQL Cluster feature. We have to download the appropriate source for MySQL Cluster from http://dev.mysql.com/downloads/cluster/ if Linux is the host OS, or the binary if Windows is in consideration. For the purpose of this demonstration, we will assume one Management node, one SQL node, and two Data nodes. We will also make a note that node is a logical word here; it need not be a physical machine. In fact, the nodes can reside on the same machine as separate processes, but then the whole purpose of high availability will be defeated. Let's start by installing the MySQL Cluster nodes.

Data node

Setting up a Data node is fairly simple.
Just copy the ndbd and ndbmtd binaries from the bin directory of the archive to /usr/local/bin/ and make them executable as follows:

cp bin/ndbd /usr/local/bin/ndbd
cp bin/ndbmtd /usr/local/bin/ndbmtd
chmod +x /usr/local/bin/ndbd
chmod +x /usr/local/bin/ndbmtd

Management node

The Management node needs only two binaries, ndb_mgmd and ndb_mgm:

cp bin/ndb_mgm* /usr/local/bin
chmod +x /usr/local/bin/ndb_mgm*

SQL node

First of all, we need to create a user for MySQL as follows:

useradd mysql

Now extract the tar.gz archive file downloaded before. Conventionally, MySQL documentation uses the /usr/local/ directory to unpack the archive, but it can be done anywhere. We'll follow the MySQL conventions here and also create a symbolic link for easier access and better manageability, as follows:

tar -C /usr/local -xzvf mysql-cluster-gpl-7.2.12-linux2.6.tar.gz
ln -s /usr/local/mysql-cluster-gpl-7.2.12-linux2.6-i686 /usr/local/mysql

We need to set write permissions for the MySQL user, which we created before, as follows:

chown -R root /usr/local/mysql
chown -R mysql /usr/local/mysql/data
chgrp -R mysql /usr/local/mysql

The preceding commands will ensure that the permission to start and stop the MySQL instance remains with the root user, but the mysql user can write data to the data directory.
Now, change to the scripts directory and create the system databases as follows:

scripts/mysql_install_db --user=mysql

Configuring the Data node and SQL node

We can configure the Data node and SQL node as follows:

vim /etc/my.cnf

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for MySQL Cluster processes:
ndb-connectstring=192.168.20.10 # location of management server

Configuring the Management node

We can configure the Management node as follows:

vim /var/lib/mysql-cluster/config.ini

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2                  # Number of replicas
DataMemory=200M                 # How much memory to allocate for data storage
IndexMemory=50M                 # How much memory to allocate for index storage
                                # For DataMemory and IndexMemory, we have used the
                                # default values. Since the "world" database takes up
                                # only about 500KB, this should be more than enough for
                                # this example Cluster setup.

[tcp default]
# TCP/IP options:
portnumber=2202

[ndb_mgmd]
# Management process options:
hostname=192.168.20.10          # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node log files

[ndbd]
# Options for data node "A":
# (one [ndbd] section per data node)
hostname=192.168.20.12          # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=192.168.0.40           # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=192.168.20.11          # Hostname or IP address

Summary

Now we have gained an idea of how to ensure high availability of ownCloud server components. We have seen load balancing for the backend data store as well as the frontend web server, and the database. We have seen some common ways, and we can now provide a reliable ownCloud service to our users.
Apache CloudStack Architecture

Packt
17 Jun 2013
17 min read
Introducing cloud

Before embarking on a journey to understand and appreciate CloudStack, let's revisit the basic concepts of cloud computing and how CloudStack can help us in achieving our private, public, or hybrid cloud objectives. Let's start this article with a plain and simple definition of cloud. Cloud is a shared multi-tenant environment built on a highly efficient, highly automated, and preferably virtualized IT infrastructure where IT resources can be provisioned on demand from anywhere over a broad network, and can be metered. Virtualization is the technology that has made the enablement of these features simpler and more convenient.

A cloud can be deployed in various models, including private, public, community, or hybrid clouds. These deployment models can be explained as follows:

- Private cloud: In this deployment model, the cloud infrastructure is operated solely for an organization and may exist on premise or off premise. It can be managed by the organization or a third-party cloud provider.
- Public cloud: In this deployment model, the cloud service is provided to the general public or a large industry group, and is owned and managed by the organization providing cloud services.
- Community cloud: In this deployment model, the cloud is shared by multiple organizations and is supported by a specific community that has shared concerns. It can be managed by the organization or a third-party provider, and can exist on premise or off premise.
- Hybrid cloud: This deployment model comprises two or more types of cloud (public, private, or community) and enables data and application portability between the clouds.
A cloud (be it private, public, or hybrid) has the following essential characteristics:

- On-demand self service
- Broad network access
- Resource pooling
- Rapid elasticity or expansion
- Measured service
- Shared by multiple tenants

Cloud has three possible service models, which means there are three types of cloud services that can be provided. They are:

- Infrastructure as a service (IaaS): This type of cloud service model provides IT infrastructure resources as a service to the end users. This model provides the end users with the capability to provision processing, storage, networks, and other fundamental computing resources that the customer can use to run arbitrary software, including operating systems and applications. The provider manages and controls the underlying cloud infrastructure, and the user has control over the operating systems, storage, and deployed applications. The user may also have some control over the networking services.
- Platform as a service (PaaS): In this service model, the end user is provided with a platform that is provisioned over the cloud infrastructure. The provider manages the network, operating system, or storage, and the end user has control over the applications and may have control over the hosting environment of the applications.
- Software as a service (SaaS): This layer provides software as a service to the end users, such as providing an online calculation engine for the end users. The end users can access this software using a thin client interface such as a web browser. The end users do not manage the underlying cloud infrastructure such as network, servers, OS, storage, or even individual application capabilities, but may have some control over the application configuration settings.

As depicted in the preceding diagram, the top layers of cloud computing are built upon the layers below them. In this book, we will be mainly dealing with the bottom layer: Infrastructure as a service.
Providing Infrastructure as a Service essentially means that the cloud provider assembles the building blocks for providing these services, including the computing hardware, networking hardware, and storage hardware. These resources are exposed to the consumers through a request management system, which in turn is integrated with an automated provisioning layer. The cloud system also needs to meter and bill the customer on various chargeback models. The concept of virtualization enables the provider to leverage and pool resources in a multi-tenant model. Thus, the resource pooling provided by virtualization, combined with modern clustering infrastructure, enables efficient use of IT resources to provide high availability and scalability, increase agility, optimize utilization, and provide a multi-tenancy model.

One can easily get confused about the differences between a cloud and a virtualized datacenter; there are many, such as:

- The cloud is the next stage after the virtualization of datacenters. It is characterized by a service layer over the virtualization layer. Instead of bare computing resources, services are built over the virtualization platforms and provided to the users.
- Cloud computing provides the request management layer, provisioning layer, and metering and billing layers, along with security controls and multi-tenancy.
- Cloud resources are available to consumers on an on-demand model, wherein the resources can be provisioned and de-provisioned on an as-needed basis.
- Cloud providers typically have huge capacities to serve variable workloads and manage variable demand from customers. Customers can leverage the scaling capabilities provided by cloud providers to scale up or scale down the IT infrastructure needed by the application and the workload. This rapid scaling helps the customer save money by using the capacity only when it is needed.
The resource provisioning in the cloud is governed by policies and rules, and the process of provisioning is automated. Metering, chargeback, and billing are essential governance characteristics of any cloud environment, as they govern and control the usage of precious IT resources.

Thus, setting up a cloud is basically building capabilities to provide IT resources as a service in a well-defined manner. Services can be provided to end users in various offerings, depending upon the amount of resources each service offering provides. The amount of resources can be broken down into multiple resources such as computing capacity, memory, storage, network bandwidth, storage IOPS, and so on. A cloud provider can provide and meter multiple service offerings for the end users to choose from.

Though the cloud provider makes upfront investments in creating the cloud capacity, from a consumer's point of view the resources are available on demand on a pay-per-use model. Thus the customer gets billed for consumption, just as with the electricity or telecom services that individuals use. The billing may be based on hours of compute usage, the amount of storage used, bandwidth consumed, and so on.

Having understood the cloud computing model, let's look at the architecture of a typical Infrastructure as a Service cloud environment.

Infrastructure layer

The Infrastructure layer is the base layer and comprises all the hardware resources upon which IT is built. These include computing resources, storage resources, network resources, and so on.

Computing resources

Virtualization is provided using a hypervisor, which has various functions such as enabling the virtual machines of the hosts to interact with the hardware. The physical servers host the hypervisor layer, and the physical server resources are accessed through the hypervisor. The hypervisor layer also enables access to the network and storage.
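As an illustration of the metering and pay-per-use billing just described, here is a minimal Python sketch; the rate card, resource names, and prices are hypothetical, not taken from any real provider:

```python
# Sketch of usage-based billing against a hypothetical rate card.
RATES = {
    "compute_hours": 0.05,      # per vCPU-hour
    "storage_gb_month": 0.10,   # per GB-month
    "bandwidth_gb": 0.08,       # per GB transferred
}

def bill(usage):
    """Sum metered usage records against the rate card."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

# One month of usage for a small VM: 720 compute hours, 50 GB stored,
# 100 GB of traffic.
monthly_usage = {"compute_hours": 720, "storage_gb_month": 50, "bandwidth_gb": 100}
total = bill(monthly_usage)
```

The point is simply that the consumer is charged only for metered consumption, with each resource class priced separately.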
There are various hypervisors in the market, such as VMware, Hyper-V, XenServer, and so on. These hypervisors are responsible for making it possible for one physical server to host multiple machines, and for enabling resource pooling and multi-tenancy.

Storage

Like the compute capacity, we need storage that is accessible to the compute layer. The storage in cloud environments is pooled just like the compute and accessed through the virtualization layer. Certain types of services just offer storage as a service, where the storage can be programmatically accessed to store and retrieve objects.

Pooled, virtualized storage is enabled through technologies such as Network Attached Storage (NAS) and Storage Area Network (SAN), which allow the infrastructure to allocate storage on demand, and this allocation can be policy based, that is, automated. Storage provisioning using such technologies helps in providing storage capacity on demand to users and also enables the addition or removal of capacity as per the demand. The cost of storage can be differentiated according to the different levels of performance and classes of storage.

Typically, SAN is used for storage capacity in the cloud where statefulness is required. Direct-attached Storage (DAS) can be used for stateless workloads, which can drive down the cost of the service. The storage involved in cloud architecture can be redundant to prevent a single point of failure. There can be multiple paths for the access of disk arrays to provide redundancy in case of connectivity failures. The storage arrays can also be configured in a way that there is incremental backup of the allocated storage. The storage should be configured such that health information of the storage units is updated in the system monitoring service, which ensures that an outage and its impact are quickly identified and appropriate action can be taken to restore it to its normal state.
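The tiered storage pricing mentioned above, where cost is differentiated by performance and class, can be sketched as follows; the tier names, media types, and prices are invented for illustration:

```python
# Hypothetical class-based storage catalog: faster media costs more.
STORAGE_TIERS = {
    "gold":   {"media": "SSD",  "price_gb_month": 0.30},
    "silver": {"media": "SAS",  "price_gb_month": 0.15},
    "bronze": {"media": "SATA", "price_gb_month": 0.05},
}

def storage_cost(tier, gb):
    """Monthly cost of the requested capacity in the given storage class."""
    return round(STORAGE_TIERS[tier]["price_gb_month"] * gb, 2)
```

A stateless workload could then be placed on the cheap bronze (DAS/SATA-style) tier while a stateful database sits on gold, exactly the cost differentiation the text describes.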
Networks and security

Network configuration includes defining the subnets, on-demand allocation of IP addresses, and defining the network routing tables to enable the flow of data in the network. It also includes enabling high availability services such as load balancing. The security configuration, in turn, aims to secure the data flowing in the network, which includes isolating the data of different tenants from each other and from the management data of the cloud, using techniques such as network isolation and security groups.

Networking in the cloud is supposed to deal with the isolation of resources between multiple tenants as well as provide tenants with the ability to create isolated components. Network isolation in the cloud can be done using various techniques such as VLAN, VXLAN, VCDNI, STT, and so on. Applications are deployed in a multi-tenant environment and consist of components that are to be kept private, such as a database server that is to be accessed only from selected web servers, with traffic from any other source not permitted to access it. This is enabled using network isolation, port filtering, and security groups. These services help with segmenting and protecting various layers of the application deployment architecture and also allow isolation of tenants from each other.

The provider can use security domains and layer 3 isolation techniques to group various virtual machines. The access to these domains can be controlled using the provider's port filtering capabilities or by the use of more stateful packet filtering, implemented with context switches or firewall appliances. Network isolation techniques such as VLAN tagging and security groups allow such configuration. Various levels of virtual switches can be configured in the cloud to provide isolation to the different networks in the cloud environment.
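A rough sketch of the security-group idea described above, where only traffic matching an allow rule reaches the guest, might look like this (the class name and rule shape are hypothetical, not any vendor's API):

```python
# Toy security group: allow traffic only when protocol, port, and source match a rule.
from ipaddress import ip_address, ip_network

class SecurityGroup:
    def __init__(self, rules):
        # Each rule: (protocol, port, allowed source CIDR)
        self.rules = [(proto, port, ip_network(cidr)) for proto, port, cidr in rules]

    def allows(self, proto, port, source_ip):
        return any(
            proto == r_proto and port == r_port and ip_address(source_ip) in r_net
            for r_proto, r_port, r_net in self.rules
        )

# Database tier from the example above: only the web subnet may reach port 3306.
db_sg = SecurityGroup([("tcp", 3306, "10.0.1.0/24")])
```

With this rule set, a web server at 10.0.1.15 can reach the database, while a host in another tenant's subnet cannot.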
Networking services such as NAT, gateways, VPN, port forwarding, IPAM systems, and access control management are used in the cloud to provide various networking services and accessibility. Some of these services are explained as follows:

- NAT: Network address translation can be configured in the environment to allow a virtual machine in a private network to communicate with some other machine on another network or on the public Internet. A NAT device allows the modification of IP address information in the headers of IP packets while they pass through a routing device. A machine in a private network cannot have direct access to the public network, so in order for it to communicate with the Internet, the packets are sent to a routing device, or a virtual machine with NAT configured, which has direct access to the Internet. NAT modifies the IP packet header so that the private IP address of the machine is not visible to the external networks.
- IPAM system/DHCP: An IP address management system or DHCP server helps with the automatic configuration of IP addresses for the virtual machines, according to the configuration of the network and the IP range allocated to it. A virtual machine provisioned in a network can be assigned an IP address specified by the user or an IP address from the IPAM. The IPAM stores all the available IP addresses in the network; when a new IP address is to be allocated to a device, it is taken from the available IP pool, and when a device is terminated or releases its IP address, the address is given back to the IPAM system.
- Identity and access management: An access control list describes the permissions of various users on different resources in the cloud. It is important to define an access control list for users in a multi-tenant environment. It helps in restricting the actions that a user can perform on any resource in the cloud.
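The IPAM behaviour described above, allocating from an available pool and returning addresses on release, can be sketched in a few lines (a toy model, not a real IPAM system):

```python
# Toy IPAM pool: hand out addresses from a range and reclaim them on release.
from ipaddress import ip_network

class IpamPool:
    def __init__(self, cidr, reserved=2):
        hosts = list(ip_network(cidr).hosts())
        # Skip the first addresses in the range (e.g. gateway, DNS).
        self.available = [str(h) for h in hosts[reserved:]]
        self.allocated = set()

    def allocate(self):
        ip = self.available.pop(0)     # take from the available IP pool
        self.allocated.add(ip)
        return ip

    def release(self, ip):
        self.allocated.discard(ip)
        self.available.append(ip)      # give the address back to the pool

pool = IpamPool("10.0.0.0/28")  # usable hosts .1-.14, first two reserved
first = pool.allocate()
```

When a VM is terminated, `release` returns its address to the pool, mirroring the lifecycle described in the text.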
A role-based access mechanism is used to assign roles to users' profiles, which describe the roles and permissions of users on different resources.

Use of switches in cloud

A switch is a LAN device that works at the data link layer (layer 2) of the OSI model and provides a multiport bridge. Switches store a table of MAC addresses and ports. Let us see the various types of switches and their usage in the cloud environment:

- Layer 3 switches: A layer-3 switch is a special type of switch that operates at layer 3—the network layer of the OSI model. It is a high-performance device that is used for network routing. A layer-3 switch has an IP routing table for lookups and it also forms a broadcast domain. Basically, a layer-3 switch is a switch with a router's IP routing functionality built in. It is used for routing and offers better performance than a router. Layer-3 switches are used in large networks, such as corporate networks, instead of routers. The performance of a layer-3 switch is better than that of a router because of some hardware-level differences. It supports the same routing protocols as network routers do. The layer-3 switch is used above the layer-2 switches and can be used to configure the routing and the communication between two different VLANs or different subnets.
- Layer 4-7 switches: These switches use the packet information up to OSI layer 7 and are also known as content switches, web switches, or application switches. These types of switches are typically used for load balancing among a group of servers, which can be performed on HTTP, HTTPS, VPN, or any TCP/IP traffic using a specific port. These switches are used in the cloud to allow policy-based switching—to limit the different amounts of traffic on specific end-user switch ports. They can also be used for prioritizing the traffic of specific applications.
These switches also provide forwarding decisions such as NAT services, and also manage the state of individual sessions from beginning to end, thus acting like firewalls. In addition, these switches are used for balancing traffic across a cluster of servers as per the configuration of the individual session information and status. Hence, these types of switches are used above layer-3 switches or above a cluster of servers in the environment. They can be used to forward packets as per the configuration, such as transferring packets to a server that is supposed to handle the requests; this packet forwarding configuration is generally based on the current server loads or on sticky bits that bind a session to a particular server.

Layer-3 traffic isolation provides traffic isolation across layer-3 devices. This is referred to as Virtual Routing and Forwarding (VRF). It virtualizes the routing table in a layer-3 switch, keeping a set of virtualized tables for routing. Each table has a unique set of forwarding entries. Whenever traffic enters, it is forwarded using the routing table associated with the same VRF. This enables logical isolation of traffic as it crosses a common physical network infrastructure. VRFs provide access control, path isolation, and shared services. Security groups are also an example of layer-3 isolation capabilities; they restrict the traffic to the guests based on the rules defined. The rules are defined based on the port, protocol, and source/destination of the traffic.

- Virtual switches: Virtual switches are software programs that allow one guest VM to communicate with another, and are similar to the Ethernet switch explained earlier. Virtual switches provide a bridge between the virtual NICs of the guest VMs and the physical NIC of the host. Virtual switches have port groups on one side, which may or may not be connected to different subnets.
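The sticky-session load balancing described above can be sketched as follows: new sessions are spread round-robin across the cluster, while existing sessions stay bound to their server (a simplified software model of what a layer 4-7 switch does; server names are made up):

```python
# Sketch of session-sticky load balancing across a server cluster.
import itertools

class StickyBalancer:
    def __init__(self, servers):
        self._ring = itertools.cycle(servers)
        self._sessions = {}  # session id -> server (the "sticky bit")

    def route(self, session_id):
        # Existing sessions stay bound to their server; new ones round-robin.
        if session_id not in self._sessions:
            self._sessions[session_id] = next(self._ring)
        return self._sessions[session_id]

lb = StickyBalancer(["web-1", "web-2"])
```

A real switch would also weigh current server load; here only the stickiness is modeled.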
There are various types of virtual switches used with various virtualization technologies, such as VMware vSwitch, Xen, or Open vSwitch. VMware also provides a distributed virtual switch that spans multiple hosts. A virtual switch consists of port groups at one end and an uplink at the other. The port groups are connected to the virtual machines, and the uplink is mapped to the physical NIC of the host. The virtual switch functions as a switch over the hypervisor layer on the host.

Management layer

The Management layer in a cloud computing space provides management capabilities to manage the cloud setup. It provides features and functions such as reporting, configuration for the automation of tasks, configuration of parameters for the cloud setup, patching, and monitoring of the cloud components.

Automation

The cloud is a highly automated environment, and all tasks, such as provisioning virtual machines, allocation of resources, networking, and security, are done in a self-service mode through automated systems. The automation layer in cloud management software is typically exposed through APIs. The APIs allow the creation of SDKs, scripts, and user interfaces.

Orchestration

The Orchestration layer is the most critical interface between the IT organization and its infrastructure, and helps in the integration of the various pieces of software in the cloud computing platform. Orchestration is used to join together various individual tasks, which are executed in a specified sequence with exception handling features. Thus a provisioning task for a virtual machine may involve various commands or scripts to be executed. The orchestration engine binds these individual tasks together and creates a provisioning workflow, which may involve provisioning a virtual machine, adding it to your DNS, assigning IP addresses, adding entries in your firewall and load balancer, and so on.
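The provisioning workflow just described, individual tasks joined in a specified sequence with exception handling, can be sketched like this (the task names are illustrative stubs, not real provisioning calls):

```python
# Sketch of an orchestration workflow: ordered tasks with failure handling.
def provision_vm(ctx):
    ctx["vm"] = "vm-001"                       # stub: create the VM

def register_dns(ctx):
    ctx["dns"] = f'{ctx["vm"]}.cloud.local'    # stub: add it to DNS

def configure_firewall(ctx):
    ctx["fw_rule"] = "allow tcp/22"            # stub: open firewall entries

WORKFLOW = [provision_vm, register_dns, configure_firewall]

def run_workflow(tasks, ctx):
    done = []
    try:
        for task in tasks:
            task(ctx)
            done.append(task.__name__)
    except Exception:
        # A real engine would roll back the completed steps here.
        return {"status": "failed", "completed": done}
    return {"status": "ok", "completed": done}

result = run_workflow(WORKFLOW, {})
```

The engine's value is exactly this binding: each step runs in order, shares context with the others, and a failure stops the sequence at a known point.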
The orchestration engine acts as an integration engine and also provides the capabilities to run an automated workflow through various subsystems. As an example, a service request to provision cloud resources may be sent to the orchestration engine, which then talks to the cloud capacity layer to determine the best host or cluster where the workload can be provisioned. As a next step, the orchestration engine chooses the component to call to provision the resources. The orchestration platform helps in the easy creation of complex workflows and also provides ease of management, since all integrations are handled by a specialized orchestration engine that provides loose coupling. The orchestration engine is executed in the cloud system as an asynchronous job scheduler that orchestrates the service APIs to fulfill and execute a process.

Task execution

The Task execution layer is the lowest level of the management operations, which are performed using the command line or any other interface. The implementation of this layer can vary as per the platform on which the execution takes place. The activity of this layer is triggered by the layers above it in the management layer.

Service management

The Service management layer helps in compliance and provides the means to implement automation and adopt IT service management best practices as per the policies of the organization, such as the IT Infrastructure Library (ITIL). It is used to build processes to implement different types of incident resolution and also to provide change management. The self-service capability in a cloud environment provides users with a self-service catalog, which consists of various service options that the user can request and use to provision resources from the cloud.
The service layer can comprise various levels of service, such as basic provisioning of virtual machines with predefined templates/configurations, or more advanced offerings with a range of configuration options for provisioning servers.
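Such a self-service catalog with basic and advanced options can be sketched as follows; the offering names and sizes are made up for illustration:

```python
# Hypothetical self-service catalog: predefined offerings a user can request.
CATALOG = {
    "small":  {"vcpu": 1, "ram_gb": 2, "disk_gb": 20},
    "medium": {"vcpu": 2, "ram_gb": 4, "disk_gb": 40},
    "large":  {"vcpu": 4, "ram_gb": 8, "disk_gb": 80},
}

def request_vm(offering, overrides=None):
    """Build a provisioning spec from a catalog entry; advanced users may override fields."""
    spec = dict(CATALOG[offering])
    spec.update(overrides or {})
    return spec

# Basic request from a template, with one advanced override for a larger disk:
spec = request_vm("small", {"disk_gb": 50})
```

The basic path is just picking a template; the advanced path layers user-chosen options on top, as the text describes.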
Packt
22 Apr 2013
14 min read

Creating your first VM using vCloud technology

(For more resources related to this topic, see here.)

Step 1 – Understanding vCloud resources

This step will introduce you to how resources work in vCloud Director. The following diagram shows how resources are managed in vCloud and how they work together. The diagram is simplified and doesn't show all the vCloud properties; however, it is sufficient to explain the resource design.

PvDC

A Provider Virtual Data Center (PvDC) represents a portion of all the virtual resources of a vSphere environment. It will take all the CPU and memory resources from a given resource pool or cluster and present them to the vCloud as consumable resources. A typical cluster or resource pool contains multiple datastores, storage profiles, and networks, as well as multiple hosts. A PvDC will automatically gain access to these resources. It is basically the link between vSphere and the vCloud world.

Org

An organization (Org) is a container that holds users and groups and regulates their access to the vCloud resources. Users can be either created locally or imported from Lightweight Directory Access Protocol (LDAP) or Active Directory (AD); however, groups can only be imported. It is possible to assign different LDAP, e-mail, and notification settings to each organization. This is one of the most powerful features of vCloud Director. Its usage becomes clear if you think about a public cloud model. You could link different organizations into the different customers' LDAP/AD and e-mail systems (assuming a VPN tunnel between vCloud and the customer network), extending the customer's sphere of influence into the cloud. If a customer doesn't have or doesn't want to use his/her own LDAP/AD, he/she could make use of the local user function.

OvDC

An Organizational Virtual Data Center (OvDC) is a mixture of an Org with a PvDC. The Org defines who can do what, and the PvDC defines where it is happening. Each OvDC is assigned one of the three allocation models as well as storage profiles.
The three allocation models are designed to provide different methods of resource allocation. Let's first look at the difference between the models:

- Reservation pool: This allocates a fixed amount of resources (in GHz and GB) from the PvDC to the OvDC. This model is good if the users want to define a per-VM resource allocation. Only this model enables the Resource Allocation tab in VMs.
- Allocation pool: This is similar to the reservation pool; however, you can also assign how many resources are guaranteed (reserved) for this OvDC. This model is good for overcommitting resources.
- Pay-as-you-go (PAYG): This is similar to the allocation pool; however, resources are only consumed if vApps/VMs are running. All other models reserve resources even if the OvDC doesn't contain any running VMs. This model is useful if the number of resources is unknown or fluctuating.

There are different settings that one can choose from for each model:

  Setting                                     | Allocation Pool | PAYG              | Reservation Pool
  CPU allocation (GHz)                        | Yes             | Yes and unlimited | Yes
  CPU resources guaranteed (percentage)       | Yes             | Yes               | NA
  vCPU max speed (GHz)                        | Yes             | Yes               | NA
  Memory allocation (GB)                      | Yes             | Yes and unlimited | Yes
  Memory resources guaranteed (percentage)    | Yes             | Yes               | NA
  Maximum number of VMs (number or unlimited) | Yes             | Yes               | Yes

vApp

You might have encountered the name before in vCenter; however, the vApp of vCD and the vApp of vCenter are totally different beasts. vApps in vCenter are essentially resource pools with extras, such as a startup sequence. A vApp in vCD is a container that exists only in vCD. However, it can also contain isolated networks and allows the configuration of a start-and-stop sequence for its member VMs. In addition to all this, you can allow this vApp to be shared with other members of your organization.

VM

The most atomic part of vCD is the VM. VMs live in vApps. Here you can configure all the settings you are familiar with from vSphere, and some more.
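The practical difference between the three allocation models described earlier can be summarized in a small sketch; the formulas are inferred from the model descriptions above, not from vCD internals:

```python
# Sketch of how much CPU capacity each OvDC allocation model ties up.
def reserved_ghz(model, allocation_ghz, guarantee_pct=100, running_vm_ghz=0):
    if model == "reservation":
        # Fixed reservation, held regardless of usage.
        return allocation_ghz
    if model == "allocation":
        # Only the guaranteed share of the allocation is reserved.
        return allocation_ghz * guarantee_pct / 100
    if model == "payg":
        # Resources are reserved only for VMs that are actually running.
        return running_vm_ghz * guarantee_pct / 100
    raise ValueError(model)
```

This is why PAYG suits fluctuating workloads: an idle OvDC reserves nothing, whereas the other two models hold capacity even with no running VMs.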
You are able to add/update/delete vHardware as well as define guest customization.

Step 2 – Connecting vCenter to vCD

Let's start with the process of assigning resources to the vCloud. The first step is to assign a vCenter to this vCD installation. For future reference, one vCD installation can use multiple vCenters. As a starting point for steps 2 to 5, we will use the home screen, as shown in the following screenshot.

On the Home screen (or if you like, the welcome screen), click on the first link, Attach a vCenter. A pop up will ask you for the following details for your vCenter:

- Host name or IP address: Enter the fully qualified domain name (FQDN) or IP address of your vCenter.
- Port number: Port 443 is the correct default port.
- User name and Password: Enter the username and password of an account that has administrator rights in vCenter.
- vCenter name: Give your vCenter a name with which you would like to identify it in vCloud.
- Description: A description isn't required; however, it doesn't hurt either.
- vSphere Web Client URL: Enter the URL for the web vCenter client: https://(FQDN or IP)/vsphere-client

After vCD has accepted the information and contacted vCenter, we need to enter all the details for the vShield installation (from the Step 2 – downloading vCloud Director subsection in the Installation section). Enter the FQDN or IP address of the vShield VM, and if you didn't change the default password, you can log in with admin as the ID and default as the password. vCD contacts vShield and that's that. You have now connected vCD to vCenter and are able to use the resources presented by this vCenter in your vCloud.

Step 3 – Creating a PvDC

Now we will create our first PvDC and assign resources to our vCloud. To create a new PvDC, you click on the second link, Create a Provider VDC (refer to the first image in the Step 2 – connecting vCenter to vCD subsection of the Quick start – creating your first VM section). Enter a name for the new PvDC.
A good idea is to develop a naming standard for every item in vCenter and vCD. My PvDC will be called PvDC_myLab. Choose the highest supported virtual hardware version that your vCenter/ESXi supports; if you are running VMware 5.1, it is version 9.

In the next window, we choose the cluster or the resource pool that vCloud should use to create the PvDC. Please note that you need to create a resource pool before starting this wizard, or else it won't show up. For this example, I choose the cluster myCluster. In the next window, we are prompted to choose a storage profile. For the time being, just choose any and continue.

Now vCD shows us all the ESXi hosts that belong to the cluster or the resource pool we selected. vCD will need to install some extra software on them and will need to connect directly to the ESXi hosts; that's why it is asking for the credentials of the ESXi hosts. Finish the wizard.

At the end of this wizard, vCD will put the ESXi hosts into maintenance mode to install the extra software package. If you only have one ESXi host and it is also running vCD and vCenter, you will have to manually install the vCD software package (not in the scope of this book). You have now successfully carved off a slice of resources to be used inside your vCloud.

Storage profiles

vSphere storage profiles are defined in vCenter. The idea is to group datastores together by their capabilities or by a user-defined label. For example, you might group datastores by their type (NFS, Fiber, SATA, or SSD), by different RAID types, or by features that are provided, such as backup or replication. Enterprises use storage profiles such as gold, silver, and bronze, depending on the speed of the disks (SATA or SSD) and on whether a datastore is backed up or replicated for DR purposes. vCloud Director can assign different storage profiles to PvDCs and OvDCs. If an OvDC has multiple storage profiles assigned to it, you can choose a specific storage profile to be the default for this OvDC.
Also, when you create a vApp in this OvDC, you can choose the storage profile with which you want to store the vApp.

Step 4 – Creating an Org

Now we will create an organization (Org). On the Home panel, click on the fifth link, Create a new organization (refer to the first image in the Step 2 – connecting vCenter to vCD subsection of the Quick start – creating your first VM section). Give the Org a name, for example, MyOrg, and the organization's full name. In the next window, choose the first option, Do not use LDAP. Next, we could add a local user, but we won't, so let's just click on Next. Our first Org should be able to share, so click on Allow publishing…, and then click on Next. We keep clicking on Next; the first Org will use the e-mail and notification settings of vCD. Now we need to configure the leases. You can just click on Next, or if you like, set all leases to unlimited. The last window shows us all the settings we have selected, and by clicking on Finish, our first organization will be created.

System Org

You have actually created a second Org, as the first Org is called system and was created when we installed vCD. If you look at your home screen, you will see that there is a small tab that says System. The system Org is the mother of all Orgs; it's where other Orgs, PvDCs, OvDCs, and basically all settings are defined in vCloud Director. The system organization can only be accessed by vCloud system administrators.

Step 5 – Creating an OvDC

Now that we have our first Org, we can proceed with assigning resources to it for consumption. To do that, we need to create an Organization Virtual Data Center (OvDC). On the Home screen, we click on the sixth link, Allocate resources to an organization. First we have to select the Org to which we want to assign the resources. As we only have one Org, the choice is easy. Next, we are asked which PvDC we want to take the resources from. Again, we only have one PvDC, so we choose that one.
Note that the screen shows you what percentage of the various resources of this PvDC are already committed and which networks are associated with this PvDC. Don't be alarmed that no networks are showing; we haven't configured any yet.

Next we choose the allocation model. We have discussed the details of all three models earlier: allocation pool, pay-as-you-go, and reservation pool. Choose Pay-as-you-go and click on Next. Have a look at the settings and click on Next. The next window lets you define which storage profile you would like to use for this OvDC. If you don't have a storage profile configured (as I do in my lab), just select any and click on the Add button. Enable Thin Provisioning to save on storage; this setting is the same as normal thin provisioning in vSphere. Enable Fast Provisioning; this setting will use vCloud linked clones (explained later).

The next window lets us configure the network resources for the organization. As we haven't configured any networking yet, just click on Next. We will discuss the network options in the next section about networks. We don't want to create an edge gateway, so we leave the setting as it is and click Next. Again, more information about this is to follow in the next section. Finally, we give this OvDC a name and finish the creation. I normally add a little descriptor in the name to say which allocation model I used, for example, res, payg, or allo. We have now successfully assigned memory, CPU, and storage to be consumed by our organization.

Linked clones

Linked clones save an enormous amount of storage space. When a VM is created from a template, a full clone of the template is created. When linked clones are used, only the changes to the VM are written to disk. As an example, we have a VM with 40 GB storage capacity (ignore thin provisioning for this example). A full clone would need another 40 GB of disk space. If linked clones are used, only a few MB will be used.
As more changes are made to the cloned VM, it will demand more storage (up to the maximum of 40 GB). If this reminds you of the way snapshots work in vSphere, that's because that is what is actually used in the background. vCloud linked clones are not the same technology as VMware View linked clones; they are a more advanced version of the VMware Lab Manager linked clones.

Step 6 – Creating a vApp

Now that we have resources within our organization, we can create a vApp and the VM inside it. vApps are created inside organizations, so we first need to access the organization that was created in the Step 4 – creating an Org subsection of the Quick start – creating your first VM section:

1. Click on the Manage & Monitor tab and double-click on the Organizations menu item.
2. Double-click on the organization we created earlier. You will see that a new tab is opened with the name of the new Org. You are now on the home screen of this Org.
3. We will take the easy road here. Click on Build New vApp.
4. Give your first vApp a name (for example, MyFirstVapp), a description, and if you like, explore the settings of the leases.
5. After you click on Next, we are asked to choose a template. As we currently don't have one, we click on New Virtual Machine in the left-hand side menu of the screen. We will learn about templates in the Top features you need to know about section.
6. A pop up will appear, and we will then select all the settings we would expect when creating a new VM, such as name and hostname, CPU, memory, OS type and version, hard disk, and network. Note that if you are using virtual ESXi servers in your lab, you may be limited to 32-bit VMs only.
7. After clicking on OK, we find ourselves back at the previous screen; however, our VM should now show up in the lower table. Click on Next.
8. We can now choose in which OvDC and in what storage profile we will deploy the vApp.
The choices should be very limited at the moment, so just click on Next. Next, we are asked to choose a network. As we don't have one, we just click on Next. Another window will open; click on Next (normally, we could define fencing here). At last, we see a summary of all the settings, and clicking on Finish will create our first vApp.

After the vApp is created, you can power it on and have a closer look. Click on the play button to power the vApp on. Wait for a few seconds, and then click on the black screen of the VM. A console pop up should come up and show you the BIOS of the booting VM. If that's not happening, check your browser security settings.

That's it! You have installed vCD, configured your resources, and created your first vApp.

Summary

This article explained how to create a VM using vCloud technology.

Resources for Article:

Further resources on this subject:
- VMware View 5 Desktop Virtualization [Article]
- Supporting hypervisors by OpenNebula [Article]
- Tips and Tricks on BackTrack 4 [Article]