
How-To Tutorials - Cloud Computing

121 Articles

Disaster Recovery for Hyper-V

Packt
29 Jan 2013
9 min read
Hyper-V and Windows Server 2012 come with tools and solutions to keep your virtual machines up, running, and highly available. Components such as Failover Clustering can keep your servers accessible even when individual hosts fail. Disasters, however, can take every server and service offline at once: natural disasters, viruses, data corruption, human error, and many other factors can make an entire system unavailable.

High availability (HA) is often mistaken for a substitute for Disaster Recovery (DR). In reality, HA is only one component of a DR plan, which also covers processes, policies, procedures, a backup and recovery plan, documentation, tests, Service Level Agreements (SLAs), best practices, and so on. The objective of a DR plan is simply business continuity in the face of any disaster. In a Hyper-V environment, core components such as Hyper-V Replica can be part of that plan, replicating your virtual machines to another host or cluster and making them available if the first host goes offline, while backup and restore can bring VMs back if you lose everything. This module walks you through the most important processes for setting up disaster recovery for virtual machines running on Hyper-V.

Backing up Hyper-V and virtual machines using Windows Server Backup

Previous versions of Hyper-V had complications and incompatibilities with the built-in backup tool, forcing administrators to acquire other solutions for backup and restore. Windows Server 2012 comes with a tool known as Windows Server Backup (WSB), which has full Hyper-V integration and lets you back up and restore your server, applications, Hyper-V, and virtual machines. WSB is easy to use and provides a low-cost option for small and medium companies. This recipe guides you through backing up your virtual machines with the Windows Server Backup tool.

Getting ready

Windows Server Backup does not support tapes. Make sure you have a disk, external storage, or a network share with enough free space to back up your virtual machines before you start.

How to do it...

The following steps show how to install the Windows Server Backup feature and how to schedule a task to back up your Hyper-V settings and virtual machines:

1. To install the Windows Server Backup feature, open Server Manager from the taskbar.
2. In the Server Manager Dashboard, click on Manage and select Add Roles and Features.
3. On the Before you begin page, click on Next four times.
4. In the Add Roles and Features Wizard, select Windows Server Backup from the Features section.
5. Click on Next and then click on Install, and wait for the installation to complete (a PowerShell alternative is shown after this list).
6. After the installation, open the Start menu and type wbadmin.msc to open the Windows Server Backup tool.
7. To change the backup performance options, click on Configure Performance in the right-hand pane of the Windows Server Backup console.
8. In the Optimize Backup Performance window, choose one of the three options: Normal backup performance, Faster backup performance, or Custom.
9. Back in the right-hand pane of the console, select the backup you want to perform. The two options are Backup Schedule, for an automatic backup, and Backup Once, for a single backup.
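If you prefer the command line, the backup feature can also be installed from an elevated PowerShell prompt. This is a hedged alternative to steps 1 to 5 above, using the standard Windows Server 2012 cmdlet and feature name:

    # Install the Windows Server Backup feature (run elevated)
    Install-WindowsFeature Windows-Server-Backup

    # Verify the install state afterwards
    Get-WindowsFeature Windows-Server-Backup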
The next steps show how to schedule an automatic backup:

1. In the Backup Schedule Wizard, on the Getting Started page, click on Next.
2. On the Select Backup Configuration page, select Full Server to back up all the server data, or Custom to select specific items. To back up only Hyper-V and its virtual machines, click on Custom and then on Next.
3. In Select Items for Backup, click on Add Items.
4. In the Select Items window, select Hyper-V to back up all the virtual machines and the host component. You can also expand Hyper-V and select only the virtual machines you want to back up. When finished, click on OK.
5. Back in Select Items for Backup, click on Advanced Settings to change Exclusions and VSS Settings.
6. In the Advanced Settings window, on the Exclusions tab, click on Add Exclusion to add any necessary exclusions.
7. Click on the VSS Settings tab to select either VSS full Backup or VSS copy Backup, then click on OK.
8. In the Select Items for Backup window, confirm the items that will be backed up and click on Next.
9. On the Specify Backup Time page, select Once a day and a time for a daily backup, or More than once a day and the desired times, and click on Next.
10. On the Specify Destination Type page, select Back up to a hard disk that is dedicated for backups (recommended), Back up to a volume, or Back up to a shared network folder, and click on Next. If you select the first option, the disk you choose will be formatted and dedicated to storing backup data only.
11. In Select Destination Disk, click on Show All Available Disks to list the disks, select the one you want to use to store your backup, and click on OK. Click on Next twice.
12. If you selected the Back up to a hard disk that is dedicated for backups (recommended) option, a warning tells you that the disk will be formatted. Click on Yes to confirm.
13. In the Confirmation window, double-check the options you selected and click on Finish.

After that, the schedule is created. Wait for the scheduled time and then check whether the backup finished successfully.

How it works...

Many Windows administrators missed the old NTBackup tool from the Windows Server 2003 days because of its capabilities and features. The Windows Server Backup tool introduced in Windows Server 2008 has many limitations, such as no tape support, no advanced schedule options, and fewer backup options. For Hyper-V the situation was even worse: Windows Server 2008 offered only minimal support and features for it. In Windows Server 2012 the same tool still has some limitations, but it at least provides the core components to back up, schedule, and restore Hyper-V and your virtual machines.

By default, WSB is not installed; the feature is added through Server Manager. After installation, the tool can be accessed through its console or from the command line. Before you start backing up your servers, it is worth configuring the backup performance options. By default, every backup is created as a normal (full) backup of all the selected data, which is a reasonable choice when only small amounts of data are backed up. You can instead select the Faster backup performance option, which backs up only the changes between the last and the current backup, reducing both the backup time and the amount of stored data.
This is a good option to save storage space and backup time for large amounts of data.

A backup schedule can be created to automate your backup operations. In the Backup Schedule Wizard, you can back up the entire server or a custom selection of volumes, applications, or files. For Hyper-V and its virtual machines, the customized backup is usually the best option, so that you don't have to back up the whole physical server. When Hyper-V is present on the host, the wizard shows a Hyper-V item, and you can select all the virtual machines and the host component configuration to be backed up.

During the wizard, you can also change advanced options such as exclusions and Volume Shadow Copy Service (VSS) settings. WSB has two VSS backup options: VSS full backup and VSS copy backup. With VSS full backup, everything is backed up, and afterwards the application may truncate its log files. If you use other backup solutions that integrate with WSB, those logs are essential for future backups such as incremental ones; to preserve the log files, use VSS copy backup so that other applications' incremental backups are not affected.

After selecting the items for backup, you select the backup time. This is another limitation carried over from the previous version: there are only two schedule options, Once a day or More than once a day. If you prefer a different schedule, such as weekly backups, you can use the WSB cmdlets in PowerShell. Finally, for the backup destination type, you can choose between a dedicated hard disk, a volume, or a network folder. Once you confirm all the items, the schedule is ready to back up your system. You can also use the Backup Once option to create a single backup.

There's more...

To check whether previous backups were successful, use the details options in the WSB console. These details serve as logs with more information about the last, the next, and all backups. To access them, open Windows Server Backup and, under Status, select View details. To see which files were backed up, click on the View list of all backed up files link.

Checking the Windows Server Backup cmdlets

Some options, such as advanced schedules, policies, jobs, and other configurations, can only be created through PowerShell cmdlets. To see all the available Windows Server Backup cmdlets, run the following command:

    Get-Command -Module WindowsServerBackup
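For example, a backup policy covering all Hyper-V virtual machines can be assembled entirely in PowerShell. The following is a rough sketch rather than a tested recipe: it assumes the WindowsServerBackup module on Windows Server 2012, an elevated prompt, and a placeholder target volume (E:). For weekly runs, trigger Start-WBBackup from a Task Scheduler job, since the built-in schedule only supports daily times.

    # Sketch only: adjust the volume, times, and VM selection for your environment.
    Import-Module WindowsServerBackup

    $policy = New-WBPolicy

    # Add every VM known to the local Hyper-V host (filter this list to pick specific VMs).
    $vms = Get-WBVirtualMachine
    Add-WBVirtualMachine -Policy $policy -VirtualMachine $vms

    # Store the backup on a volume; New-WBBackupTarget -NetworkPath would target a share instead.
    $target = New-WBBackupTarget -VolumePath "E:"
    Add-WBBackupTarget -Policy $policy -Target $target

    # Daily run time(s); the once-a-day / more-than-once-a-day limitation applies here too.
    Set-WBSchedule -Policy $policy -Schedule 21:00

    # Register the scheduled policy...
    Set-WBPolicy -Policy $policy

    # ...or run the policy once right away (this is what a weekly scheduled task would call).
    Start-WBBackup -Policy $policy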
See also

The Restoring Hyper-V and virtual machines using Windows Server Backup recipe in this article.


Key Features Explained

Packt
26 Dec 2012
21 min read
Service Bus

The Windows Azure Service Bus provides a hosted, secure, and widely available infrastructure for widespread communication, large-scale event distribution, naming, and service publishing. Service Bus provides connectivity options for Windows Communication Foundation (WCF) and other service endpoints, including REST endpoints, that would otherwise be difficult or impossible to reach. Endpoints can be located behind Network Address Translation (NAT) boundaries, bound to frequently changing, dynamically assigned IP addresses, or both.

Getting started

To get started with the features of Service Bus, make sure you have the Windows Azure SDK installed.

Queues

Queues in the AppFabric Service Bus (not to be confused with Windows Azure Storage queues) offer first-in, first-out (FIFO) message delivery, which matters for applications that expect messages in a certain order. Just like ordinary Azure Storage queues, Service Bus queues decouple your application components, so the application can keep functioning even if some parts of it are offline. Among the differences between the two queue types, Service Bus queues can hold larger messages and can be used in conjunction with the Access Control Service.

Working with queues

To create a queue, go to the Windows Azure portal and select the Service Bus, Access Control & Caching tab. Next, select Service Bus, select the namespace, and click on New Queue. If you did not set up a namespace earlier, you need to create one before you can create a queue.

Several properties can be configured while setting up a queue:

- The name uniquely identifies the queue in the namespace.
- Default Message Time To Live gives messages this default TTL; it can also be set in code and is a TimeSpan value.
- Duplicate Detection History Time Window determines how long the (unique) message IDs of received messages are retained to check for duplicate messages. This property is ignored unless the Requires Duplicate Detection option is set. Keep in mind that a long detection history means message IDs are persisted for that whole period; if you process many messages, the queue size grows and so does your bill.
- When a message expires, or when the queue size limit is reached, the message is dead-lettered, meaning it ends up in a separate sub-queue named $DeadLetterQueue. Heavy traffic on your queue can easily push messages into the dead letter queue, so your application should be robust and process those messages as well.
- The lock duration property defines how long a message stays locked when the PeekLock() method is called. PeekLock() hides a specific message from other consumers/processors until the lock duration expires, so this value needs to be long enough to process and delete the message.
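These settings can also be applied from code when the queue is created programmatically. The following is a hedged sketch, not taken from the original article, assuming the Microsoft.ServiceBus assembly from the Windows Azure SDK and placeholder namespace and credentials:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class CreateQueueSample
    {
        static void Main()
        {
            // Placeholder credentials -- replace with your own service namespace and key.
            TokenProvider tokenProvider =
                TokenProvider.CreateSharedSecretTokenProvider("owner", "<yourkey>");
            Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "<yournamespace>", string.Empty);
            NamespaceManager namespaceManager = new NamespaceManager(uri, tokenProvider);

            // Mirror the portal settings described above.
            QueueDescription description = new QueueDescription("geotopiaqueue")
            {
                DefaultMessageTimeToLive = TimeSpan.FromDays(7),
                RequiresDuplicateDetection = true,
                DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10),
                LockDuration = TimeSpan.FromSeconds(60)
            };

            if (!namespaceManager.QueueExists("geotopiaqueue"))
            {
                namespaceManager.CreateQueue(description);
            }
        }
    }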
A sample scenario

Remember the differences between the two queue types Windows Azure offers: Service Bus queues can guarantee first-in, first-out delivery and support transactions. In our scenario, a user posts a geotopic on the canvas containing text and also uploads a video by using the parallel upload functionality. The WCF service CreateGeotopic() then posts a message to the queue to record the geotopic, and when the file finishes uploading, a second message is sent to the queue. These two messages should be handled as a single transaction.

Geotopia.Processor processes the first message, but only once the media file has finished uploading. This example shows how a transaction is handled and how a message can be abandoned and made available on the queue again. If the geotopic is validated as a whole (the file was uploaded properly), the worker role reroutes the message to a designated audit trail queue, to keep track of actions made by the system, and also sends it to a topic (see the next section) dedicated to messages that need to be pushed to mobile devices. The messages in this topic are in turn processed by a worker role. A separate worker role keeps the solution loosely coupled and allows fine-grained scaling of just the back-end worker role.

In the previous section we already created a queue named geotopiaqueue. In order to work with queues, you need a service identity (in this case a service identity with symmetric issuer and key credentials) for the service namespace.

Preparing the project

To use the Service Bus capabilities, add a reference to Microsoft.ServiceBus.dll, located in <drive>:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-06\ref. Next, add the following using statements to your file:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

Your project is now ready to use Service Bus queues. In the configuration settings of the web role project hosting the WCF services, add a new setting named ServiceBusQueue with the following value:

    "Endpoint=sb://<servicenamespace>.servicebus.windows.net/;SharedSecretIssuer=<issuerName>;SharedSecretValue=<yoursecret>"

The properties of the queue you configured in the Windows Azure portal can also be set programmatically, as shown earlier.

Sending messages

Messages sent to a Service Bus queue are instances of BrokeredMessage. This class contains standard properties such as TimeToLive and MessageId. An important property is Properties, of type IDictionary<string, object>, where you can add additional data. The body of the message is set in the constructor of BrokeredMessage, and the parameter must be of a type decorated with the [Serializable] attribute. The following code snippet shows how to send a BrokeredMessage:

    MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
    MessageSender sender = factory.CreateMessageSender("geotopiaqueue");
    sender.Send(new BrokeredMessage(
        new Geotopic
        {
            id = id,
            subject = subject,
            text = text,
            PostToFacebook = PostToFacebook,
            accessToken = accessToken,
            MediaFile = MediaFile // Uri of the uploaded media file
        }));

As the scenario expects two messages to be sent in a certain order and treated as a single transaction, we need to add some more logic to this snippet. Right before this message is sent, the media file is uploaded using the BlobUtil class (consider sending the media file inside the BrokeredMessage if it is small enough). This can be a long-running operation, depending on the size of the file. The asynchronous upload returns a Uri, which is passed to the BrokeredMessage. The flow is:

- A multimedia file is uploaded from the client to Windows Azure Blob storage using a parallel upload (or passed along in the message). A parallel upload breaks the media file into several chunks and uploads them separately using multithreading.
- A message is sent to geotopiaqueue, and Geotopia.Processor processes the messages in the queue in a single transaction.
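The transactional wrapper itself is not shown above. One hedged way to group the two sends, assuming both messages target the same queue (Service Bus transactions in this SDK are scoped to a single messaging entity) and that System.Transactions is referenced:

    using System;
    using System.Transactions;
    using Microsoft.ServiceBus.Messaging;

    class TransactionalSendSample
    {
        static void SendAsOneTransaction(string connectionString,
            BrokeredMessage geotopicMessage, BrokeredMessage mediaReadyMessage)
        {
            MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
            MessageSender sender = factory.CreateMessageSender("geotopiaqueue");

            using (TransactionScope scope = new TransactionScope())
            {
                sender.Send(geotopicMessage);    // the geotopic text
                sender.Send(mediaReadyMessage);  // the "media file uploaded" notification
                scope.Complete();                // both sends commit together
            }
            // If Complete() is never called, neither message appears on the queue.
        }
    }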
Receiving messages

On the other side of the Service Bus queue resides our worker role, Geotopia.Processor, which performs the following tasks:

- It grabs the messages from the queue
- It sends each message straight to a table in Windows Azure Storage for auditing purposes
- It creates a geotopic that can be subscribed to (see the next section)

The following code snippet shows how to perform these three tasks:

    MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
    MessageReceiver receiver = factory.CreateMessageReceiver("geotopiaqueue");
    BrokeredMessage receivedMessage = receiver.Receive();
    try
    {
        ProcessMessage(receivedMessage);
        receivedMessage.Complete();
    }
    catch (Exception e)
    {
        receivedMessage.Abandon();
    }

Cross-domain communication

We created a new web role in our Geotopia solution to host the WCF services we want to expose. As the client is a Silverlight application running in the browser, we face cross-domain communication. To protect against security vulnerabilities and to prevent cross-site requests from a Silverlight client to services without the user's knowledge, Silverlight by default allows only site-of-origin communication. A typical exploit enabled by unrestricted cross-domain communication is cross-site request forgery, for example a Silverlight application sending commands to some service running somewhere on the Internet. As we want the Geotopia Silverlight client to access a WCF service running in another domain, we must explicitly allow cross-domain operations. This is done by placing a file named clientaccesspolicy.xml at the root of the domain where the WCF service is hosted, allowing the cross-domain access. Another option is to add a crossdomain.xml file at the root where the service is hosted. See http://msdn.microsoft.com/en-us/library/cc197955(v=vs.95).aspx for more details on cross-domain communication.

Comparison

The following table shows the similarities and differences between Windows Azure queues and Service Bus queues:

Criteria | Windows Azure queue | Service Bus queue
Ordering guarantee | No, but best-effort first-in, first-out | First-in, first-out
Delivery guarantee | At least once | At most once; use PeekLock() to ensure that no messages are missed; PeekLock() together with Complete() enables a two-stage receive operation
Transaction support | No | Yes, by using TransactionScope
Receive mode | Peek & Lease | Peek & Lock; Receive & Delete
Lease/lock duration | Between 30 seconds and 7 days | Between 60 seconds and 5 minutes
Lease/lock granularity | Message level | Queue level
Batched receive | Yes, by using GetMessages(count) | Yes, by using the prefetch property or transactions
Scheduled delivery | Yes | Yes
Automatic dead lettering | No | Yes
In-place update | Yes | No
Duplicate detection | No | Yes
WCF integration | No | Yes, through WCF bindings
WF integration | Not standard; needs a custom activity | Yes, out-of-the-box activities
Message size | Maximum 64 KB | Maximum 256 KB
Maximum queue size | 100 TB (the limit of a storage account) | 1, 2, 3, 4, or 5 GB; configurable
Message TTL | Maximum 7 days | Unlimited
Number of queues | Unlimited | 10,000 per service namespace
Management protocol | REST over HTTP(S) | REST over HTTPS
Runtime protocol | REST over HTTP(S) | REST over HTTPS; TCP with TLS
Queue naming rules | Maximum of 63 characters | Maximum of 260 characters
Queue length function | Yes, approximate value | Yes, exact value
Throughput | Maximum of 2,000 messages/second | Maximum of 2,000 messages/second
Authentication | Symmetric key | ACS claims
Role-based access control | No | Yes, through ACS roles
Identity provider federation | No | Yes
Costs | $0.01 per 10,000 transactions | $0.01 per 10,000 transactions
Billable operations | Every call that touches "storage" | Only Send and Receive operations
Storage costs | $0.14 per GB per month | None
ACS transaction costs | None, since ACS is not supported | $1.99 per 100,000 token requests

Background information

Some additional characteristics of Service Bus queues deserve attention:

- To guarantee the FIFO mechanism, you need to use messaging sessions.
- Using Receive & Delete on Service Bus queues reduces transaction costs, since it is counted as a single operation.
- The maximum size of a Base64-encoded message on a Windows Azure queue is 48 KB; with standard encoding it is 64 KB.
- Sending messages to a Service Bus queue that has reached its size limit throws an exception that needs to be caught.
- When the throughput limit is reached, the Windows Azure queue service returns an HTTP 503 response; implement retry logic to handle this. Throttled (rejected) requests are not billable.
- ACS tokens are requested per instance of the messaging factory class. A received token expires after 20 minutes, so you only need three tokens per hour of execution.
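As the table notes, Service Bus queues dead-letter messages automatically: expired or undeliverable messages land in the $DeadLetterQueue sub-queue mentioned earlier, and the application should drain it. A hedged sketch of how that could look (the handling logic is illustrative and not from the article):

    using System;
    using Microsoft.ServiceBus.Messaging;

    class DeadLetterDrainSample
    {
        static void DrainDeadLetters(string connectionString)
        {
            MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);

            // $DeadLetterQueue is addressed as a sub-queue of the original queue.
            string deadLetterPath = QueueClient.FormatDeadLetterPath("geotopiaqueue");
            MessageReceiver receiver = factory.CreateMessageReceiver(deadLetterPath);

            BrokeredMessage message;
            while ((message = receiver.Receive(TimeSpan.FromSeconds(5))) != null)
            {
                // The broker records why the message ended up here.
                object reason;
                message.Properties.TryGetValue("DeadLetterReason", out reason);
                Console.WriteLine("Dead-lettered message {0}: {1}", message.MessageId, reason);

                message.Complete(); // remove it from the dead letter queue after handling
            }
        }
    }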
Topics and subscriptions

Topics and subscriptions are useful when, instead of the single consumer of a queue, multiple consumers take part in the pattern. Imagine that in our scenario users want to subscribe to topics posted by friends. A subscription is created on a topic and the worker role processes it; mobile clients, for example, can then be push-notified by the worker role. Sending messages to a topic works in a similar way to sending messages to a Service Bus queue.

Preparing the project

In the Windows Azure portal, go to the Service Bus, Access Control & Caching tab. Select Topics and create a new topic, then click on OK and the new topic is created for you. The next thing you need to do is create a subscription on this topic: select New Subscription and fill in the details of the new subscription.

Using filters

By default, topics and subscriptions form a publish/subscribe mechanism in which messages are made available to all registered subscriptions. To influence a subscription actively, and receive only the messages you are interested in, you can create subscription filters. A SqlFilter can be passed as a parameter to the CreateSubscription method of the NamespaceManager class. SqlFilter operates on the properties of the messages, so we need to extend the message. In our scenario, we are only interested in messages concerning a certain subject, which is achieved as follows:

    BrokeredMessage message = new BrokeredMessage(new Geotopic
    {
        id = id,
        subject = subject,
        text = text,
        PostToFacebook = PostToFacebook,
        accessToken = accessToken,
        mediaFile = fileContent
    });
    // used for topics & subscriptions
    message.Properties["subject"] = subject;

The preceding code extends the BrokeredMessage with a subject property that can be used in a SqlFilter. A filter can only be applied in code on the subscription, not in the Windows Azure portal. That is fine, because in Geotopia users must be able to subscribe to any interesting topic, and for every topic that does not yet exist a new subscription is created and processed by the worker role, the processor. The worker role contains the following code snippet in one of its threads:

    Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", "<yournamespace>", string.Empty);
    string name = "owner";
    string key = "<yourkey>";

    // Get some credentials
    TokenProvider tokenProvider = TokenProvider.CreateSharedSecretTokenProvider(name, key);

    // Create the namespace client
    NamespaceManager namespaceClient = new NamespaceManager(
        ServiceBusEnvironment.CreateServiceUri("sb", "geotopiaservicebus", string.Empty),
        tokenProvider);
    MessagingFactory factory = MessagingFactory.Create(uri, tokenProvider);

    BrokeredMessage message = new BrokeredMessage();
    message.Properties["subject"] = "interestingsubject";

    MessageSender sender = factory.CreateMessageSender("geotopiatopic");
    sender.Send(message); // the message is sent to the topic

    SubscriptionDescription subDesc = namespaceClient.CreateSubscription(
        "geotopiatopic", "SubscriptionOnMe", new SqlFilter("subject='interestingsubject'"));

    // The processing loop
    while (true)
    {
        MessageReceiver receiver = factory.CreateMessageReceiver(
            "geotopiatopic/subscriptions/SubscriptionOnMe");
        // It now only receives messages that carry the property 'subject'
        // with the value 'interestingsubject'
        BrokeredMessage receivedMessage = receiver.Receive();
        try
        {
            ProcessMessage(receivedMessage);
            receivedMessage.Complete();
        }
        catch (Exception e)
        {
            receivedMessage.Abandon();
        }
    }
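The topic and subscription above are created in the portal, but they can also be provisioned from code so that deployments do not depend on manual portal steps. A hedged sketch that reuses the NamespaceManager from the previous snippet; the names mirror the ones used above:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class TopicProvisioningSample
    {
        static void EnsureTopicAndSubscription(NamespaceManager namespaceClient)
        {
            // Create the topic only if it does not exist yet.
            if (!namespaceClient.TopicExists("geotopiatopic"))
            {
                namespaceClient.CreateTopic("geotopiatopic");
            }

            // Attach the filtered subscription, mirroring the SqlFilter used by the worker role.
            if (!namespaceClient.SubscriptionExists("geotopiatopic", "SubscriptionOnMe"))
            {
                namespaceClient.CreateSubscription("geotopiatopic", "SubscriptionOnMe",
                    new SqlFilter("subject='interestingsubject'"));
            }
        }
    }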
Windows Azure Caching

Windows Azure offers caching capabilities out of the box. Caching is fast because it is built as an in-memory, distributed technology running across different servers. Windows Azure Caching offers two types of cache:

- Caching deployed on a role
- Shared caching

When you decide to host caching on your Windows Azure roles, you pick one of two deployment alternatives. The first is dedicated caching, where a worker role is fully dedicated to running as a caching store and its memory is used for caching. The second is a co-located topology, in which a certain percentage of the available memory in your roles is assigned and reserved for in-memory caching. Keep in mind that the co-located option is usually the most cost-effective, as you don't run a role just for its memory.

Shared caching is a central caching repository managed by the platform and accessible to your hosted services. You register shared caching in the Service Bus, Access Control & Caching section of the portal, where you configure a namespace and the size of the cache (remember, there is money involved). This caching facility is shared and runs inside a multitenant environment.

Caching capabilities

Both shared and role-based caching offer a rich feature set:

ASP.NET 4.0 caching providers: When you build ASP.NET 4.0 applications and deploy them on Windows Azure, the platform installs caching providers for them, so your ASP.NET 4.0 applications can use caching easily.

Programming model: You use the Microsoft.ApplicationServer.Caching namespace to perform CRUD operations on your cache. The application using the cache is responsible for populating and reloading it, because the programming model follows the cache-aside pattern: the cache starts empty and is populated during the lifetime of the application. The application checks whether the desired data is present; if not, it reads the data from (for example) a database and inserts it into the cache. Caching deployed on one of your roles, whether dedicated or not, lives up to the high availability of Windows Azure by saving copies of your cached items in case a role instance goes down.

Configuration model: Server-side configuration is not relevant for shared caching, which is standard, out-of-the-box functionality that only varies in size, namespace, and location. For role-based caching it is possible to create named caches, each with its own configuration settings, so you can fine-tune your caching requirements. All settings are stored in the service definition and service configuration files; because named-cache settings are stored in JSON format, they are not easy to read. A role that wants to access Windows Azure Caching needs some configuration as well: a DataCacheFactory object returns the DataCache objects that represent the named caches, and client cache settings are stored in the designated app.config or web.config files. A configuration sample is shown later in this section, together with some code snippets.

Security model: The two types of caching (shared and role-based) handle security in two different ways. Role-based caching is secured by its endpoints: only roles that are allowed to use those endpoints may touch the cache. Shared caching is secured by an authentication token.

Concurrency model: Because multiple clients can access and modify cache items simultaneously, there are concurrency issues to take care of; both optimistic and pessimistic concurrency models are available. In the optimistic model, updating an object in the cache does not take a lock; the update only succeeds if the version being written matches the version currently in the cache. In the pessimistic model, items are locked explicitly by the cache client, and while an item is locked other lock requests are rejected by the platform. Locks need to be released by the client, or after a configurable time-out, to prevent eternal locking.
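The pessimistic model just described maps to the lock-aware methods of DataCache. A hedged sketch, assuming role-based caching and the Microsoft.ApplicationServer.Caching client assemblies; the key and time-out are placeholders:

    using System;
    using Microsoft.ApplicationServer.Caching;

    class PessimisticLockSample
    {
        static void UpdateWithLock()
        {
            DataCache cache = new DataCacheFactory().GetDefaultCache();

            // Take an exclusive lock for up to 30 seconds; other GetAndLock calls
            // on the same key fail until the lock is released.
            DataCacheLockHandle lockHandle;
            object item = cache.GetAndLock("geotopic-42", TimeSpan.FromSeconds(30), out lockHandle);

            try
            {
                // ... modify the item ...
                cache.PutAndUnlock("geotopic-42", item, lockHandle); // write back and release
            }
            catch
            {
                cache.Unlock("geotopic-42", lockHandle); // release without updating
                throw;
            }
        }
    }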
Regions and tagging: Cached items can be grouped together in a so-called region. Combined with tagging of cached items, this makes it possible to search for tagged items within a certain region. Creating a region causes its cache items to be stored on a single server (analogous to partitioning); if backup copies are enabled, the region with all its items is also saved on a different server to maintain availability.

Notifications: Your application can be notified by Windows Azure when cache operations occur. Cache notifications exist for operations on both regions and items: a notification is sent when CreateRegion, ClearRegion, or RemoveRegion is executed, and the AddItem, ReplaceItem, and RemoveItem operations on cached items also cause notifications to be sent. Notifications can be scoped at the cache, region, or item level, so you can narrow them down to only those relevant to your application. Notifications are polled by your application at a configurable interval.

Availability: To keep the high availability you are used to on Windows Azure, configure your caching role(s) to maintain backup copies; the platform then replicates copies of your cache within your deployment across different fault domains.

Local caching: To minimize the number of round trips between cache clients and the Windows Azure cache, enable local caching. With local caching, every cache client keeps an in-memory reference to the item itself, so requesting the same item again returns the object from the local cache instead of the role-based cache. Make sure you choose the right lifetime for your objects, otherwise you may end up working with outdated cached items.

Expiration and eviction: Cache items can be removed explicitly, or implicitly by expiration or eviction. Expiration means the caching facility removes items from the cache automatically: items are removed after their time-out value expires, but locked items are not removed even when they pass their expiration date, and calling the Unlock method can extend the expiration date of a cached item. To ensure that sufficient memory remains available for caching, least recently used (LRU) eviction is supported: when certain memory thresholds are exceeded, memory is cleared and cached items are evicted. By default, Shared Cache items expire after 48 hours; this behavior can be overridden by the overloads of the Add and Put methods.
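The region and tag support described above is exposed through the same DataCache API. The following hedged sketch assumes role-based caching (shared caching may not expose these features); the region name, key, and tags are placeholders:

    using System;
    using System.Collections.Generic;
    using Microsoft.ApplicationServer.Caching;

    class RegionAndTagSample
    {
        static void TagAndSearch()
        {
            DataCache cache = new DataCacheFactory().GetDefaultCache();

            // A region keeps related items together on one cache server.
            cache.CreateRegion("geotopics-eu");

            var tags = new List<DataCacheTag> { new DataCacheTag("subject"), new DataCacheTag("travel") };
            cache.Add("geotopic-42", "Hello from Amsterdam", tags, "geotopics-eu");

            // Retrieve every item in the region carrying at least one of the given tags.
            foreach (KeyValuePair<string, object> hit in
                     cache.GetObjectsByAnyTag(new[] { new DataCacheTag("travel") }, "geotopics-eu"))
            {
                Console.WriteLine(hit.Key);
            }
        }
    }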
Setting it up

To enable role-based caching, you configure it in Visual Studio. Open the Caching tab in the properties of the web or worker role that will host the cache (you decide which role is the caching one) and fill out the settings. In this example, the configuration does the following:

- Enables role-based caching, with this specific role acting as a dedicated caching role.
- Defines, besides the default cache, two additional named caches for different purposes. The first is a highly available cache for recently added geotopics with a sliding window: every time an item is accessed, its expiration time is reset to the configured 10 minutes. For our geotopics this is a good approach, since access to recently posted geotopics is heavy at first and slows down as time passes (so they are eventually removed from the cache). The second named cache is specifically for profile pictures, with a long time-to-live, as these pictures do not change very often.

Caching examples

In this section, several code snippets explain the use of Windows Azure Caching and clarify different features. Make sure you get the right assemblies for Windows Azure Caching by running the following command in the Package Manager Console:

    Install-Package Microsoft.WindowsAzure.Caching

Running this command updates the designated config file of your project. Replace the [cache cluster role name] tag in the configuration file with the name of the role that hosts the cache.

Adding items to the cache

The following code snippet demonstrates how to access a named cache and how to add and retrieve items from it (note the use of tags and the sliding window):

    DataCacheFactory cacheFactory = new DataCacheFactory();
    DataCache geotopicsCache = cacheFactory.GetCache("RecentGeotopics"); // reference to this named cache

    geotopicsCache.Clear(); // clear the whole cache

    DataCacheTag[] tags = new DataCacheTag[]
    {
        new DataCacheTag("subject"),
        new DataCacheTag("test")
    };

    // Add a short time-to-live item (overrides the default 10 minutes)
    DataCacheItemVersion version =
        geotopicsCache.Add(geotopicID, new Geotopic(), TimeSpan.FromMinutes(1), tags);

    // Add an item with the default TTL of 10 minutes
    geotopicsCache.Add("defaultTTL", new Geotopic());

    // Let a few minutes pass...
    DataCacheItem item = geotopicsCache.GetCacheItem(geotopicID);          // returns null!
    DataCacheItem defaultItem = geotopicsCache.GetCacheItem("defaultTTL"); // sliding window shows up

    // Versioning, optimistic locking
    geotopicsCache.Put("defaultTTL", new Geotopic(), defaultItem.Version); // fails if the versions are not equal!

Session state and output caching

Two interesting areas where Windows Azure Caching can be applied are caching the session state of ASP.NET applications and caching HTTP responses, for example complete pages. To use Windows Azure Caching (that is, the role-based version) to maintain session state, add the following snippet to the web.config file of your web application:

    <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
      <providers>
        <add name="AppFabricCacheSessionStoreProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="default"
             useBlobMode="true"
             dataCacheClientName="default" />
      </providers>
    </sessionState>

The preceding XML snippet makes your web application use the default cache that you configured on one of your roles. To enable output caching, add the following section to your web.config file:

    <caching>
      <outputCache defaultProvider="DistributedCache">
        <providers>
          <add name="DistributedCache"
               type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
               cacheName="default"
               dataCacheClientName="default" />
        </providers>
      </outputCache>
    </caching>

This enables output caching for your web application, using the default cache. Specify a different cache name if you have set up a specific cache for output caching purposes.
Each page to be cached declares how long it remains in the cache and which versions of the page are kept, depending on the parameter combinations, for example:

    <%@ OutputCache Duration="60" VaryByParam="*" %>


Windows Azure Service Bus: Key Features

Packt
06 Dec 2012
13 min read


Troubleshooting in OpenStack Cloud Computing

Packt
01 Oct 2012
5 min read
Introduction

OpenStack is a complex suite of software, and tracking down issues and faults can be daunting to beginners and experienced system administrators alike. While there is no single approach to troubleshooting systems, understanding where OpenStack logs vital information and which tools are available to help track down bugs will help you resolve the issues you encounter.

Checking OpenStack Compute services

OpenStack provides tools to check various parts of the Compute services, and we'll use common system commands to check whether our environment is running as expected.

Getting ready

To check our OpenStack Compute host we must log in to that server, so do this now before following the given steps.

How to do it...

To check that Nova is running the required services, we invoke the nova-manage tool and ask it various questions about the environment:

- To check that the OpenStack Compute hosts are running OK:

    sudo nova-manage service list

  The :-) icons indicate that everything is fine. If you see XXX where the :-) icon should be, then you have a problem. Troubleshooting is covered at the end of the book, but if you do see XXX, the answer will be in the logs under /var/log/nova/. If you get intermittent XXX and :-) icons for a service, first check whether the clocks are in sync.

- Checking Glance: Glance doesn't have a dedicated check tool, so we can use some system commands instead:

    ps -ef | grep glance
    netstat -ant | grep 9292.*LISTEN

  These should return process information for Glance to show that it is running, and 9292 is the default port that should be open in the LISTEN state on your server, ready for use.

- Other services that you should check:

  rabbitmq:

    sudo rabbitmqctl status

  ntp (Network Time Protocol, for keeping nodes in sync):

    ntpq -p

  This should return output about the NTP servers being contacted.

  MySQL database server:

    MYSQL_PASS=openstack
    mysqladmin -uroot -p$MYSQL_PASS status

  This returns some statistics about MySQL, if it is running.

How it works...

We have used some basic commands that communicate with OpenStack Compute and other services to show that they are running. This elementary level of troubleshooting ensures you have the system running as expected.

Understanding logging

Logging is important in all computer systems, but the more complex the system, the more you rely on logs to spot problems quickly and cut down on troubleshooting time. Understanding logging in OpenStack is important to keep your environment healthy and to be able to submit relevant log entries back to the community to help fix bugs.

Getting ready

Log in as the root user to the servers where the OpenStack services are installed.

How to do it...

OpenStack produces a large number of logs that help troubleshoot our OpenStack installations. The following sections outline where these services write their logs.

OpenStack Compute services logs

Logs for the OpenStack Compute services are written to /var/log/nova/, which by default is owned by the nova user. To read these, log in as the root user.
The following is a list of services and their corresponding logs:

nova-compute: /var/log/nova/nova-compute.log - log entries regarding the spinning up and running of the instances
nova-network: /var/log/nova/nova-network.log - log entries regarding network state, assignment, routing, and security groups
nova-manage: /var/log/nova/nova-manage.log - log entries produced when running the nova-manage command
nova-scheduler: /var/log/nova/nova-scheduler.log - log entries pertaining to the scheduler, its assignment of tasks to nodes, and messages from the queue
nova-objectstore: /var/log/nova/nova-objectstore.log - log entries regarding the images
nova-api: /var/log/nova/nova-api.log - log entries regarding user interaction with OpenStack as well as messages regarding interaction with other components of OpenStack
nova-cert: /var/log/nova/nova-cert.log - entries regarding the nova-cert process
nova-console: /var/log/nova/nova-console.log - details about the nova-console VNC service
nova-consoleauth: /var/log/nova/nova-consoleauth.log - authentication details related to the nova-console service
nova-dhcpbridge: /var/log/nova/nova-dhcpbridge.log - network information regarding the dhcpbridge service

OpenStack Dashboard logs
OpenStack Dashboard (Horizon) is a web application that runs through Apache by default, so any errors and access details will be in the Apache logs. These can be found in /var/log/apache2/*.log, which will help you understand who is accessing the service as well as report on any errors seen with the service.

OpenStack Storage logs
OpenStack Storage (Swift) writes logs to syslog by default. On an Ubuntu system, these can be viewed in /var/log/syslog. On other systems, these might be available at /var/log/messages. Logging can be adjusted to allow these messages to be filtered in syslog using the log_level, log_facility, and log_message options, which each service allows you to set. If you change any of these options, you will need to restart that service to pick up the change.

Log-level settings in OpenStack Compute services
Many OpenStack services allow you to control the chatter in the logs by setting different log output settings. Some services, though, tend to produce a lot of DEBUG noise by default. This is controlled within the configuration files for that service; for example, the Glance Registry service controls this through settings (such as debug) in its configuration files, and many other services are adopting this facility. In production, you would set debug to False and optionally keep a fairly high level of INFO requests being produced, which may help with the general health reports of your OpenStack environment.

How it works...
Logging is an important activity in any software, and OpenStack is no different. It allows an administrator to track down problematic activity that can be used in conjunction with the community to help provide a solution. Understanding where the services log, and managing those logs so that someone can identify problems quickly and easily, are both important.

article-image-introduction-web-experience-factory
Packt
24 Sep 2012
20 min read
Save for later

Introduction to Web Experience Factory

Packt
24 Sep 2012
20 min read
What is Web Experience Factory? Web Experience Factory is a rapid application development tool, which applies software automation technology to construct applications. By using WEF, developers can quickly create single applications that can be deployed to a variety of platforms, such as IBM WebSphere Application Server and IBM WebSphere Portal Server , which in turn can serve your application to standard browsers, mobile phones, tablets, and so on. Web Experience Factory is the new product derived from the former WebSphere Portlet Factory (WPF) product. In addition to creating portal applications, WEF always had the capability of creating exceptional web applications. In fact, the initial product developed by Bowstreet, the company which originally created WPF, was meant to create web applications, way before the dawn of portal technologies. As the software automation technology developed by Bowstreet could easily be adapted to produce portal applications, it was then tailored for the portal market. This same adaptability is now expanded to enable WEF to target different platforms and multiple devices. Key benefits of using Web Experience Factory for portlet development While WEF has the capability of targeting several platforms, we will be focusing on IBM WebSphere Portal applications. The following are a few benefits of WEF for the portal space: Significantly improves productivity Makes portal application development easier Contains numerous components (builders) to facilitate portal application development Insulates the developer from the complexity of the low-level development tasks Automatically handles the deployment and redeployment of the portal project (WAR file ) to the portal Reduces portal development costs The development environment Before we discuss key components of WEF, let's take a look at the development environment. From a development environment perspective, WEF is a plugin that is installed into either Eclipse or IBM Rational Application Developer for WebSphere. As a plugin, it uses all the standard features from these development environments at the same time that it provides its own perspective and views to enable the development of portlets with WEF. Let's explore the WEF development perspective in Eclipse. The WEF development environment is commonly referred to as the designer. While we explore this perspective, you will read about new WEF-specific terms. In this section, we will neither define nor discuss them, but don't worry. Later on in this article, you will learn all about these new WEF terms. The following screenshot shows the WEF perspective with its various views and panes: The top-left pane, identified by number 1, shows the Project Explorer tab. In this pane you can navigate to the WEF project, which has a structure similar to a JEE project. WEF adds a few extra folders to host the WEF-specific files. Box 1 also contains a tab to access the Package Explorer view. The Package Explorer view enables you to navigate the several directories containing the .jar files. These views can be arranged in different ways within this Eclipse perspective. The area identified by number 2 shows the Outline view. This view holds the builder call list. This view also holds two important icons. The first one is the "Regeneration" button. This is the first icon from left to right, immediately above the builder call table header. Honestly, we do not know what the graphical image of this icon is supposed to convey. 
Some people say it looks like a candlelight, others say it looks like a chess pawn. We even heard people referring to this icon as the "Fisher-Price" icon, because it looks like the Fisher-Price children's toy. The button right next to the Regeneration button is the button to access the Builder palette. From the Builder palette, you can select all builders available in WEF. Box number 3 presents the panes available to work on several areas of the designer. The screenshot inside this box shows the Builder Call Editor. This is the area where you will be working with the builders you add to your model. Lastly, box number 4 displays the Applied Profiles view. This view displays content only when the open model contains profile-enabled inputs, which is not the case in this screenshot. The following screenshot shows the right-hand side pane, which contains four tabs—Source, Design, Model XML, and Builder Call Editor. The preceding screenshot shows the content displayed when you select the first tab from the right-hand side pane, the Source tab. The Source tab exposes two panes. The left-hand side pane contains the WebApp tree, and the right-hand side pane contains the source code for elements selected from the WebApp tree. Although it is not our intention to define the WEF elements in this section, it is important to make an exception to explain to you what the WebApp tree is. The WebApp tree is a graphical representation of your application. This tree represents an abstract object identified as WebApp object. As you add builders to your models or modify them, these builders add or modify elements in this WebApp object. You cannot modify this object directly except through builders. The preceding screenshot shows the source code for the selected element in the WebApp tree. The code shows what WEF has written and the code to be compiled. The following screenshot shows the Design pane. The Design pane displays the user interface elements placed on a page either directly or as they are created by builders. It enables you to have a good sense of what you are building from a UI perspective. The following screenshot shows the content of a model represented as an XML structure in the Model XML tab. The highlighted area in the right-hand side pane shows the XML representation of the sample_PG builder, which has been selected in the WebApp tree. We will discuss the next tab, Builder Call Editor, when we address builders in the next section. Key components of WEF—builders, models, and profiles Builders, models, and profiles comprise the key components of WEF. These three components work together to enable software automation through WEF. Here, we will explain and discuss in details what they are and what they do. Builders Builders are at the core of WEF technology. There have been many definitions for builders. Our favorite is the one that defines builders as "software components, which encapsulate design patterns". Let's look at the paradigm of software development as it maps to software patterns. Ultimately, everything a developer does in terms of software development can be defined as patterns. There are well-known patterns, simple and complex patterns, well-documented patterns, and patterns that have never been documented. Even simple, tiny code snippets can be mapped to patterns. Builders are the components that capture these countless patterns in a standard way, and present them to developers in an easy, common, and user-friendly interface. 
This way, developers can use and reuse these patterns to accomplish their tasks. Builders enable developers to put together these encapsulated patterns in a meaningful fashion, in such a way that they become full-fledged applications which address business needs. In this sense, developers can focus more on quickly and efficiently building the business solutions instead of focusing on low-level, complex, and time-consuming development activities. Through the builder technology, senior and experienced developers at the IBM labs can identify, capture, and code these countless patterns into reusable components. When you are using builders, you are using code that has not only been developed by a group which has already put a lot of thought and effort into the development task, but also a component which has been extensively tested by IBM. Here, we refer to the IBM example because they are the makers of WEF, but overall, any developer can create builders. Simple and complex builders In the same way that development activities can range from very simple to very complex tasks, builders can also range from very simple to very complex. Simple builders can perform tasks such as placing an attribute on a tag, highlighting a row of a table, or creating a simple link. Equally, there are complex builders, which perform complex and extensive tasks. These builders can save WEF developers days' worth of work, troubleshooting, and aggravation. For instance, there are builders for accessing, retrieving, and transforming data from backend systems, builders to create tables, forms, and hundreds of others. The face of builders The following screenshot shows a Button builder in the Builder Editor pane: All builders have a common interface, which enables developers to provide builder input values. The builder input values define several aspects of how the application code will be generated by this builder. Through the Builder Editor pane, developers define how a builder will contribute to the process of creating your application, be it a portlet, a web application, or a widget. Any builder contains required and optional builder inputs. The required inputs are identified with an asterisk symbol (*) in front of their names. For instance, the preceding screenshot representing the Button builder shows two required inputs: Page and Tag. As you can see in the preceding screenshot, builder input values can be provided in several ways. The following list describes the items identified by the numbered labels:

1. Free form inputs: enable the developer to type in any appropriate value.
2. Drop-down controls: enable the developer to select values from a predefined list, which is populated based on the context of the input. This type of input is dynamically populated with possible influence from other builder inputs, other builders in the same model, or even other aspects of the current WEF project.
3. Picker controls: enable users to make a selection from multiple source types such as variables, action list builders, methods defined in the current model, public methods defined in Java classes and exposed through the Linked Java Class builder, and so on. The values selected through the picker controls can be evaluated at runtime.
4. Profiling assignment button: enables developers to profile-enable the value for this input.
In other words, through this button, developers indicate that the value for this input will come from a profile to be evaluated at regeneration time. Through these controls, builders help make the modeling process faster while reducing errors, because only valid options, within the proper context, are presented. Builders are also adaptive. Inputs, controls, and builder sections are either presented, hidden, or modified depending upon the resulting context that is being automatically built by the builder. This capability not only guides developers to make the right choices, but it also helps them become more productive. Builder artifacts We have already mentioned that builders either add artifacts to or modify existing artifacts in the WebApp abstract object. In this section, we will show you an instance of these actions. In order to demonstrate this, we will not walk you through a sample. Rather, we will show you this process through a few screenshots from a model. Here, we will simulate the action of adding a button to a portlet page. In WEF, it is common to start portlet development with a plain HTML page, which contains mostly placeholder tags. These placeholders, usually represented by the names of span or div tags, indicate locations where code will be added by the properly selected builders. The expression "code will be added" can be quite encompassing. Builders can create simple HTML code, JavaScript code, stylesheet values, XML schemas, Java code, and so on. In this case, we mean that builders have the capability of creating any code required to carry out the task or tasks for which they have been designed. In our example, we will start with a plain and simple HTML page, which is added to a model either through a Page builder or an Imported Page builder. Our sample page contains the following HTML content: Now, let's use a Button builder to add a button artifact to this sample_PG page, more specifically to the sampleButton span tag. Assume that this button performs some action through a Method builder (Java Method), which in turn returns the same page. The following screenshot shows what the builder will look like after we provide all the inputs we will describe ahead: Let's discuss the builder inputs we have provided in the preceding screenshot. The first input we provide to this builder is the builder name. Although this input is not required, you should always name your builders, and some naming convention should be used when doing so. If you do not name your builders, WEF will name them for you. The following are some sample names, in which an underscore followed by two or three letters identifies the builder type:

Button: search_BTN
Link: details_LNK
Page: search_PG
Data Page: searchCriteria_DP
Variable: searchInputs_VAR
Imported Model: results_IM
Model Container: customer_MC

There are several schools of thought regarding naming conventions, and some practitioners like to debate in favor of one or another. Regardless of the naming convention you adopt, you need to make sure that the same convention is followed by the entire development team. The next inputs relate to the location where the content created by this builder will be placed. For User Interface builders, you need to specify which page will be targeted. You also need to specify, within that page, the tag with which this builder will be associated.
Besides specifying a tag based on the name, you can also use the other location techniques to define this location. In our simple example, we will be selecting the sample_PG page. If you were working on a sample, and if you would click on the drop-down control, you would see that only the available pages would be displayed as options from which you could choose. When a page is not selected, the tag input does not display any value. That is because the builders know how to present only valid options based on the inputs you have previously provided. For this example, we will select sample_PG for page input. After doing so, the Tag input is populated with all the HTML tags available on this page. We selected the sampleButton tag. This means that the content to be created on this page will be placed at the same location where this tag currently exists. It replaces the span tag type, but it preserves the other attributes, which make sense for the builder being currently added. Another input is the label value to be displayed. Once again, here you can type in a value, you can select a value from the picker, or you can specify a value to be provided by a profile. In this sample, we have typed in Sample Button. For the Button builder, you need to define the action to be performed when the button is clicked. Here also, the builder presents only the valid actions from which we can select one. We have selected, Link to an action. For the Action input, we select sample_MTD. This is the mentioned method, which performs some action and returns the same page. Now that the input values to this Button builder have been provided, we will inspect the content created by this builder. Inspecting content created by builders The builder call list has a small gray arrow icon in front of each builder type. By clicking on this icon, you cause the designer to show the content and artifacts created by the selected builder: By clicking on the highlighted link, the designer displays the WebApp tree in its right-hand side pane. By expanding the Pages node, you can see that one of the nodes is sample_BTN, which is our button. By clicking on this element, the Source pane displays the sample page with which we started. If necessary, click on the Source tab at the bottom of the page to expose the source pane. Once the WebApp tree is shown, by clicking on the sample_BTN element, the right-hand side pane highlights the content created by the Button builder we have added. Let's compare the code shown in the preceding screenshot against the original code shown by the screenshot depicturing the Sample Page builder. Please refer to the screenshot that shows a Sample Page builder named sample_PG. This screenshot shows that the sample_PG builder contains simple HTML tags defined in the Page Contents (HTML) input. By comparing these two screenshots, the first difference we notice is that after adding the Button builder, our initial simple HTML page became a JSP page, as denoted by the numerous JSP notations on this page. We can also notice that the initial sampleButton span tag has been replaced by an input tag of the button type. This tag includes an onClick JavaScript event. The code for this JavaScript event is provided by JSP scriptlet created by the Button builder. As we learned in this section, builders add diverse content to the WebApp abstract object. They can add artifacts such as JSP pages, JavaScript code, Java classes, and so on, or they can modify content already created by other builders. 
In summary, builders add or modify any content or artifacts in order to carry on their purpose according to the design pattern they represent. Models Another important element of WEF is the Model component. Model is a container for builder calls. The builder call list is maintained in an XML file with a .model extension. The builder call list represents the list of builders added to a model. The Outline view of the WEF perspective displays the list of builders that have been added to a model. The following screenshot displays the list of builder calls contained in a sample model: To see what a builder call looks like inside the model, you can click on the gray arrow icon in front of the builder type and inspect it in the Model XML tab. For instance, let's look at the Button builder call inside the sample model we described in the previous section. The preceding image represents a builder call the way it is stored in the model file. This builder call is one of the XML elements found in the BuilderCallList node, which in turn is child of the Model node. Extra information is also added at the end of this file. This XML model file contains the input names and the values for each builder you have added to this model. WEF operates on this information and the set of instructions contained in these XML elements, to build your application by invoking a process known as generation or regeneration to actually build the executable version of your application, be it a portlet, a web application, or a widget. We will discuss more on regeneration at the end of this article. It is important to notice that models contain only the builder call list, not the builders themselves. Although the terms—builder call and builder are used interchangeably most of the times, technically they are different. Builder call can be defined as an entry in your model, which identifies the builder by the Builder call ID, and then provides inputs to that builder. Builders are the elements or components that actually perform the tasks of interacting with the WebApp object. These elements are the builder definition file (an XML file) and a Java Class. A builder can optionally have a coordinator class. This class coordinates the behavior of the builder interface you interact with through the Builder Editor. Modeling Unlike traditional development process utilizing pure Java, JSP, JavaScript coding, WEF enables developers to model their application. By modeling, WEF users actually define the instructions of how the tool will build the final intended application. The time-consuming, complex, and tedious coding and testing tasks have already been done by the creators of the builders. It is now left to the WEF developer to select the right builders and provide the right inputs to these builders in order to build the application. In this sense, WEF developers are actually modelers. A modeler works with a certain level of abstraction by not writing or interacting directly with the executable code. This is not to say that WEF developers do not have to understand or write some Java or eventually JavaScript code. It means that, when some code writing is necessary, the amount and complexity of this code is reduced as WEF does the bulk of the coding for you. There are many advantages to the modeling approach. Besides the fact that it significantly speeds the development process, it also manages changes to the underlying code, without requiring you to deal with low-level coding. You only change the instructions that generate your application. 
WEF handles all the intricacies and dependencies for you. In the software development lifecycle, requests to change requirements and functionality after implementation are very common; it is a given that your application will change after you have coded it. So, be proactive by utilizing a tool which efficiently and expeditiously handles these changes. WEF has been built with the right mechanism to gracefully handle change request scenarios, because changing the instructions that build the code is much faster and easier than changing the code itself. Code generation versus software automation While software has been vastly utilized to automate an infinite number of processes in countless domains, very little has been done to facilitate and improve software automation itself. More than being a tool for building portlets, WEF exploits the quite dormant paradigm of software automation. It is beyond the scope of this book to discuss software automation in detail, but suffice it to say that builders, profiles, and the regeneration engine enable the automation of the process of creating software. In the particular case of WEF, the automation process targets web applications and portlets, but it keeps expanding to other domains, such as widgets and mobile phones. WEF is not a code generation tool. While code generation tools utilize a static process mostly based on templates, WEF implements software automation to achieve not only high productivity but also variability. Profiles In the development world, the word profile can signify many things. From the WEF perspective, a profile represents a means to provide variability to an application. WEF also enables profiles or profile entries to be exposed to external users. In this way, external users can modify predefined aspects of the application without assistance from development or redeployment of the application. The externalized elements are the builder input values. By externalizing the builder input values, lines of business, administrators, and even users can change these values, causing WEF to serve a new flavor of their application. Profile names, profile entry names (which map to the builder inputs), and their respective values are initially stored in an XML file with a .pset extension. This file is part of your project and is deployed with it. Once it is deployed, it can be stored in other persistence mechanisms, for example, a database. WEF provides an interface to enable developers to create profiles, define entries and their initial values, as well as define the mechanism that will select which profile to use at runtime. By selecting a profile, all the entry values associated with that profile will be applied to your application, providing an unlimited level of variability. Variability can be driven by personalization, configuration, LDAP attributes, roles, or it can even be explicitly set through Java methods. The following screenshot shows the Manage Profile tab of Profile Manager. The Profile Manager enables you to manage every aspect related to profile sets. The top portion of this screenshot lists the three profiles available in this profile set. The bottom part of this screenshot shows the profile entries and their respective values for the selected profile:

article-image-digging-windows-azure-diagnostics
Packt
11 Aug 2011
14 min read
Save for later

Digging into Windows Azure Diagnostics

Packt
11 Aug 2011
14 min read
Diagnostic data can be used to identify problems with a hosted service. The ability to view the data from several sources and across different instances eases the task of identifying a problem. Diagnostic data can also be used to identify when service capacity is either too high or too low for the expected workload. This can guide capacity decisions, such as whether to scale up or down the number of instances. The configuration of Windows Azure Diagnostics is performed at the instance level. The code to do that configuration is at the role level, but the diagnostics configuration for each instance is stored in individual blobs in a container named wad-control-container, located in the storage service account configured for Windows Azure Diagnostics. Read more: Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File There is no need for application data and diagnostics data to be located in the same storage service account. Indeed, a best practice from both security and performance perspectives would be to host application data and diagnostic data in separate storage service accounts. The configuration of Windows Azure Diagnostics is centered on the concept of data buffers, with each data buffer representing a specific type of diagnostic information. Some of the data buffers have associated data sources, which represent a further refining of the data captured and persisted. For example, the performance counter data buffer has individual data sources for each configured performance counter. Windows Azure Diagnostics supports record-based data buffers that are persisted to Windows Azure tables and file-based data buffers that are persisted to Windows Azure blobs. In the Accessing data persisted to Windows Azure Storage recipe, we see that we can access the diagnostic data in the same way we access other data in Windows Azure storage. Windows Azure Diagnostics supports the following record-based data buffers:

Windows Azure basic logs
Performance counters
Windows Event Logs
Windows Azure Diagnostic infrastructure logs

The Windows Azure basic logs data buffer captures information written to a Windows Azure trace listener. In the Using the Windows Azure Diagnostics trace listener recipe, we see how to configure and use the basic logs data buffer. The performance counters data buffer captures the data of any configured performance counters. The Windows Event Logs data buffer captures the events from any configured Windows Event Log. The Windows Azure Diagnostic infrastructure logs data buffer captures diagnostic data produced by the Windows Azure Diagnostics process. Windows Azure Diagnostics supports the following file-based data sources for the Directories data buffer:

IIS logs
IIS Failed Request Logs
Crash dumps
Custom directories

The Directories data buffer copies new files in a specified directory to blobs in a specified container in the Windows Azure Blob Service. The data captured by IIS logs, IIS Failed Request Logs, and crash dumps is self-evident. With the custom directories data source, Windows Azure Diagnostics supports the association of any directory on the instance with a specified container in Windows Azure storage. This allows for the coherent integration of third-party logs into Windows Azure Diagnostics. We see how to do this in the Implementing custom logging recipe.
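As a rough illustration of the custom directories data source (not the Implementing custom logging recipe itself), the following sketch shows how a role might map a third-party log directory to a blob container when building its diagnostics configuration; the directory path, container name, quota, and transfer period are illustrative assumptions.

using System;
using Microsoft.WindowsAzure.Diagnostics;

public static class CustomLogConfiguration
{
    // Call this from RoleEntryPoint.OnStart() before the diagnostics
    // configuration is saved, so the directory is included in it.
    public static void AddCustomLogDirectory(DiagnosticMonitorConfiguration dmc)
    {
        // Map a local directory containing third-party logs (the path is an
        // illustrative assumption) to a blob container; the Diagnostics Agent
        // copies new files from the directory to the container on each transfer.
        DirectoryConfiguration customLogs = new DirectoryConfiguration
        {
            Path = @"C:\ThirdPartyLogs",      // assumed log location on the instance
            Container = "wad-custom-logs",    // illustrative container name
            DirectoryQuotaInMB = 128          // local quota for this data source
        };

        dmc.Directories.DataSources.Add(customLogs);
        dmc.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(30);
    }
}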
The implementation of Windows Azure Diagnostics was changed in Windows Azure SDK v1.3 and it is now one of the pluggable modules that have to be explicitly imported into a role in the service definition file. As Windows Azure Diagnostics persists both its configuration and data to Windows Azure storage, it is necessary to specify a storage service account for diagnostics in the service configuration file. The default configuration for Windows Azure Diagnostics captures some data but does not persist it. Consequently, the diagnostics configuration should be modified at role startup. In the Initializing the configuration of Windows Azure Diagnostics recipe, we see how to do this programmatically, which is the normal way to do it. In the Using a configuration file with Windows Azure Diagnostics recipe, we see how to use a configuration file to do this, which is necessary in a VM role. In normal use, diagnostics data is captured all the time and is then persisted to the storage service according to some schedule. In the event of a problem, it may be necessary to persist diagnostics data before the next scheduled transfer time. We see how to do this in the Performing an on-demand transfer recipe. Both Microsoft and Cerebrata have released PowerShell cmdlets that facilitate the remote administration of Windows Azure Diagnostics. We see how to do this in the Using the Windows Azure Platform PowerShell cmdlets to configure Windows Azure Diagnostics recipe. There are times, especially early in the development process, when non-intrusive diagnostics monitoring is not sufficient. In the Using IntelliTrace to Diagnose Problems with a Hosted Service recipe, we see the benefits of intrusive monitoring of a Windows Azure role instance. Using the Windows Azure Diagnostics trace listener Windows Azure Diagnostics supports the use of Trace to log messages. The Windows Azure SDK provides the DiagnosticMonitorTraceListener trace listener to capture the messages. The Windows Azure Diagnostics basic logs data buffer is used to configure their persistence to the Windows Azure Table Service. The trace listener must be added to the Listeners collection for the Windows Azure hosted service. This is typically done through configuration in the appropriate app.config or web.config file, but it can also be done in code. When it creates a worker or web role, the Windows Azure tooling for Visual Studio adds the DiagnosticMonitorTraceListener to the list of trace listeners specified in the Configuration section of the relevant configuration file. Methods of the System.Diagnostics.Trace class can be used to write error, warning and informational messages. When persisting the messages to the storage service, the Diagnostics Agent can filter the messages if a LogLevel filter is configured for the BasicLogsBufferConfiguration. The Compute Emulator in the development environment adds an additional trace listener, so that trace messages can be displayed in the Compute Emulator UI. In this recipe, we will learn how to trace messages using the Windows Azure trace listener. How to do it... We are going to see how to use the trace listener provided in the Windows Azure SDK to trace messages and persist them to the storage service. We do this as follows: Ensure that the DiagnosticMonitorTraceListener has been added to the appropriate configuration file: app.config for a worker role and web.config for a web role. 
If necessary, add the following to the Configuration section of the app.config or web.config file:

<system.diagnostics>
  <trace>
    <listeners>
      <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
        <filter type="" />
      </add>
    </listeners>
  </trace>
</system.diagnostics>

Use the following to write an informational message:

System.Diagnostics.Trace.TraceInformation("Information");

Use the following to write a warning message:

System.Diagnostics.Trace.TraceWarning("Warning");

Use the following to write an error message:

System.Diagnostics.Trace.TraceError("Error");

Ensure that the DiagnosticMonitorConfiguration.Logs property is configured with an appropriate ScheduledTransferPeriod and ScheduledTransferLogLevelFilter when DiagnosticMonitor.Start() is invoked. How it works... In steps 1 and 2, we ensure that the DiagnosticMonitorTraceListener is added to the collection of trace listeners for the web role or worker role. In steps 3 through 5, we see how to write messages to the trace listener. In step 6, we ensure that the Diagnostic Agent has been configured to persist the messages to the storage service. Note that they can also be persisted through an on-demand transfer. This configuration is described in the recipe Initializing the configuration of Windows Azure Diagnostics. There's more... The Windows Azure SDK v1.3 introduced full IIS in place of the hosted web core used previously for web roles. With full IIS, the web role entry point and IIS are hosted in separate processes. Consequently, the trace listener must be configured separately for each process. The configuration using web.config configures the trace listener for IIS, not the web role entry point. Note that Windows Azure Diagnostics needs to be configured only once in each role, even though the trace listener is configured separately in both the web role entry point and in IIS. The web role entry point runs under a process named WaIISHost.exe. Consequently, one solution is to create a special configuration file for this process, named WaIISHost.exe.config, and add the trace listener configuration to it. A more convenient solution is to add the DiagnosticMonitorTraceListener trace listener programmatically to the list of trace listeners for the web role entry point. The following demonstrates an overridden OnStart() method in a web role entry point, modified to add the trace listener and write an informational message:

public override bool OnStart()
{
    System.Diagnostics.Trace.Listeners.Add(new Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener());
    System.Diagnostics.Trace.AutoFlush = true;
    System.Diagnostics.Trace.TraceInformation("Information");
    return base.OnStart();
}

The AutoFlush property is set to true to indicate that messages should be flushed through the trace listener as soon as they are written. Performing an on-demand transfer The Windows Azure Diagnostics configuration file specifies a schedule on which the various data buffers are persisted to the Windows Azure Storage Service. The on-demand transfer capability in Windows Azure Diagnostics allows a transfer to be requested outside this schedule. This is useful if a problem occurs with an instance and it becomes necessary to look at the captured logs before the next scheduled transfer. An on-demand transfer is requested for a specific data buffer in a specific instance.
This request is inserted into the diagnostics configuration for the instance stored in a blob in wad-control-container. This is an asynchronous operation whose completion is indicated by the insertion of a message in a specified notification queue. The on-demand transfer is configured using an OnDemandTransferOptions instance that specifies the DateTime range for the transfer, a LogLevelFilter that filters the data to be transferred, and the name of the notification queue. The RoleInstanceDiagnosticManager.BeginOnDemandTransfer() method is used to request the on-demand transfer with the configured options for the specified data buffer. Following the completion of an on-demand transfer, the request must be removed from the diagnostics configuration for the instance by using the RoleInstanceDiagnosticManager.EndOnDemandTransfer() method. The completion message in the notification queue should also be removed. The GetActiveTransfers() and CancelOnDemandTransfers() methods of the RoleInstanceDiagnosticManager class can be used to enumerate and cancel active on-demand transfers. Note that it is not possible to modify the diagnostics configuration for the instance if there is a current request for an on-demand transfer, even if the transfer has completed. Note also that requesting an on-demand transfer does not require a direct connection with the hosted service. The request merely modifies the diagnostic configuration for the instance. This change is then picked up when the Diagnostic Agent on the instance next polls the diagnostic configuration for the instance. The default value for this polling interval is 1 minute. This means that a request for an on-demand transfer needs to be authenticated only against the storage service account containing the diagnostic configuration for the hosted service. In this recipe, we will learn how to request an on-demand transfer and clean up after it completes. How to do it... We are going to see how to request an on-demand transfer and clean up after it completes. We do this as follows: Use Visual Studio to create a WPF project. Add the following assembly references to the project: Microsoft.WindowsAzure.Diagnostics.dll, Microsoft.WindowsAzure.ServiceRuntime.dll, Microsoft.WindowsAzure.StorageClient.dll, and System.Configuration.dll. Add a class named OnDemandTransferExample to the project.
Add the following using statements to the class: using Microsoft.WindowsAzure; using Microsoft.WindowsAzure.Diagnostics; using Microsoft.WindowsAzure.Diagnostics.Management; using Microsoft.WindowsAzure.ServiceRuntime; using Microsoft.WindowsAzure.StorageClient; using System.Configuration; Add the following private member to the class: String wadNotificationQueueName = "wad-transfer-queue"; Add the following method, requesting an on-demand transfer, to the class: public void RequestOnDemandTransfer( String deploymentId, String roleName, String roleInstanceId) { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse( ConfigurationManager.AppSettings[ "DiagnosticsConnectionString"]); OnDemandTransferOptions onDemandTransferOptions = new OnDemandTransferOptions() { From = DateTime.UtcNow.AddHours(-1), To = DateTime.UtcNow, LogLevelFilter = Microsoft.WindowsAzure.Diagnostics.LogLevel.Verbose, NotificationQueueName = wadNotificationQueueName }; RoleInstanceDiagnosticManager ridm = cloudStorageAccount.CreateRoleInstanceDiagnosticManager( deploymentId, roleName, roleInstanceId); IDictionary<DataBufferName, OnDemandTransferInfo> activeTransfers = ridm.GetActiveTransfers(); if (activeTransfers.Count == 0) { Guid onDemandTransferId = ridm.BeginOnDemandTransfer( DataBufferName.PerformanceCounters, onDemandTransferOptions); } } Add the following method, cleaning up after an on-demand transfer, to the class: public void CleanupOnDemandTransfers() { CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse( ConfigurationManager.AppSettings[ "DiagnosticsConnectionString"]); CloudQueueClient cloudQueueClient = cloudStorageAccount.CreateCloudQueueClient(); CloudQueue cloudQueue = cloudQueueClient.GetQueueReference( wadNotificationQueueName); CloudQueueMessage cloudQueueMessage; while ((cloudQueueMessage = cloudQueue.GetMessage()) != null) { OnDemandTransferInfo onDemandTransferInfo = OnDemandTransferInfo.FromQueueMessage( cloudQueueMessage); String deploymentId = onDemandTransferInfo.DeploymentId; String roleName = onDemandTransferInfo.RoleName; String roleInstanceId = onDemandTransferInfo.RoleInstanceId; Guid requestId = onDemandTransferInfo.RequestId; RoleInstanceDiagnosticManager ridm = cloudStorageAccount.CreateRoleInstanceDiagnosticManager( deploymentId, roleName, roleInstanceId); Boolean result = ridm.EndOnDemandTransfer(requestId); cloudQueue.DeleteMessage(cloudQueueMessage); } } Add the following Grid declaration to the Window element of MainWindow.xaml: <Grid> <Label Content="DeploymentId:" Height="28" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="30,60,0,0" Name="label1" /> <Label Content="Role name:" Height="28" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="30,110,0,0" Name="label2" /> <Label Content="Instance Id:" Height="28" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="30,160,0,0" Name="label3" /> <TextBox HorizontalAlignment="Left" VerticalAlignment="Top" Margin="120,60,0,0" Name="DeploymentId" Height="23" Width="120" Text="24447326eed3475ca58d01c223efb778" /> <TextBox HorizontalAlignment="Left" VerticalAlignment="Top" Margin="120,110,0,0" Width="120" Name="RoleName" Text="WebRole1" /> <TextBox Height="23" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="120,160,0,0" Width="120" Name="InstanceId" Text="WebRole1_IN_0" /> <Button Content="Request On-Demand Transfer" Height="23" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="60,220,0,0" Width="175" Name="RequestTransfer" Click="RequestTransfer_Click" /> <Button 
Content="Cleanup On-Demand Transfers" Height="23" HorizontalAlignment="Left" VerticalAlignment="Top" Margin="300,220,0,0" Width="175" Name="CleanupTransfers" Click="CleanupTransfers_Click" /> </Grid> Add the following event handler to MainWindow.xaml.cs: private void RequestTransfer_Click( object sender, RoutedEventArgs e) { String deploymentId = DeploymentId.Text; String roleName = RoleName.Text; String roleInstanceId = InstanceId.Text; OnDemandTransferExample example = new OnDemandTransferExample(); example.RequestOnDemandTransfer( deploymentId, roleName, roleInstanceId); } Add the following event handler to MainWindow.xaml.cs: private void CleanupTransfers_Click( object sender, RoutedEventArgs e) { OnDemandTransferExample example = new OnDemandTransferExample(); example.CleanupOnDemandTransfers(); } Add the following to the configuration element of app.config: <appSettings> <add key="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ ACCOUNT_NAME};AccountKey={ACCESS_KEY}"/> </appSettings> How it works... We create a WPF project in step 1 and add the required assembly references in step 2. We set up the OnDemandTransferExample class in steps 3 and 4. We add a private member to hold the name of the Windows Azure Diagnostics notification queue in step 5. In step 6, we add a method requesting an on-demand transfer. We create an OnDemandTransferOptions object configuring an on-demand transfer for data captured in the last hour. We provide the name of the notification queue Windows Azure Diagnostics inserts a message indicating the completion of the transfer. We use the deployment information captured in the UI to create a RoleInstanceDiagnosticManager instance. If there are no active on-demand transfers, then we request an on-demand transfer for the performance counters data buffer. In step 7, we add a method cleaning up after an on-demand transfer. We create a CloudStorageAccount object that we use to create the CloudQueueClient object with which we access to the notification queue. We then retrieve the transfer-completion messages in the notification queue. For each transfer-completion message found, we create an OnDemandTransferInfo object describing the deploymentID, roleName, instanceId, and requestId of a completed on-demand transfer. We use the requestId to end the transfer and remove it from the diagnostics configuration for the instance allowing on-demand transfers to be requested. Finally, we remove the notification message from the notification queue. In step 8, we add the UI used to capture the deployment ID, role name, and instance ID used to request the on-demand transfer. We can get this information from the Windows Azure Portal or the Compute Emulator UI. This information is not needed for cleaning up on-demand transfers, which uses the transfer-completion messages in the notification queue. In steps 9 and 10, we add the event handlers for the Request On-Demand Transfer and Cleanup On-Demand Transfers buttons in the UI. These methods forward the requests to the methods we added in steps 6 and 7. In step 11, we add the DiagnosticsConnectionString to the app.config file. This contains the connection string used to interact with the Windows Azure Diagnostics configuration. We must replace {ACCOUNT_NAME} and {ACCESS_KEY} with the storage service account name and access key for the storage account in which the Windows Azure Diagnostics configuration is located.
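The recipe text also mentions that the GetActiveTransfers() and CancelOnDemandTransfers() methods can be used to enumerate and cancel active transfers. The following is a minimal sketch of that idea, reusing the same RoleInstanceDiagnosticManager setup as the recipe; it assumes CancelOnDemandTransfers() takes the data buffer name, and the method and parameter names around it are illustrative.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

public static class OnDemandTransferCleanup
{
    // Cancels any active on-demand transfers for a single instance so that
    // its diagnostics configuration can be modified again.
    public static void CancelActiveTransfers(
        CloudStorageAccount cloudStorageAccount,
        String deploymentId, String roleName, String roleInstanceId)
    {
        RoleInstanceDiagnosticManager ridm =
            cloudStorageAccount.CreateRoleInstanceDiagnosticManager(
                deploymentId, roleName, roleInstanceId);

        // Each entry maps a data buffer to information about its active transfer.
        foreach (var activeTransfer in ridm.GetActiveTransfers())
        {
            ridm.CancelOnDemandTransfers(activeTransfer.Key);
        }
    }
}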
article-image-using-intellitrace-diagnose-problems-hosted-service
Packt
11 Aug 2011
4 min read
Save for later

Using IntelliTrace to Diagnose Problems with a Hosted Service

Packt
11 Aug 2011
4 min read
  Microsoft Windows Azure Development Cookbook Over 80 advanced recipes for developing scalable services with the Windows Azure platform         Read more about this book       (For more resources on this subject, see here.) Windows Azure Diagnostics provides non-intrusive support for diagnosing problems with a Windows Azure hosted service. This non-intrusion is vital in a production service. However, when developing a hosted service, it may be worthwhile to get access to additional diagnostics information even at the cost of intruding on the service. The Visual Studio 2010 Ultimate Edition supports the use of IntelliTrace with an application deployed to the cloud. This can be particularly helpful when dealing with problems, such as missing assemblies. It also allows for the easy identification and diagnosis of exceptions. Note that IntelliTrace has a significant impact on the performance of a hosted service. Consequently, it should never be used in a production environment and, in practice, should only be used when needed during development. IntelliTrace is configured when the application package is published. This configuration includes specifying the events to trace and identifying the modules and processes for which IntelliTrace should not capture data. For example, the Storage Client module is removed by default from IntelliTrace since otherwise, storage exceptions could occur due to timeouts. Once the application package has been deployed, the Windows Azure Compute node in the Visual Studio Server Explorer indicates the Windows Azure hosted service, roles, and instances which are capturing IntelliTrace data. From the instance level in this node, a request can be made to download the current IntelliTrace log. This lists: Threads Exceptions System info Modules The threads section provides information about when particular threads were running. The exceptions list specifies the exceptions that occurred, and provides the call stack when they occurred. The system info section provides general information about the instance, such as number of processors and total memory. The modules section lists the loaded assemblies. The IntelliTrace logs will probably survive an instance crash, but they will not survive if the virtual machine is moved due to excessive failure. The instance must be running for Visual Studio to be able to download the IntelliTrace logs. In this recipe, we will learn how to use IntelliTrace to identify problems with an application deployed to a hosted service in the cloud. Getting ready Only Visual Studio Ultimate Edition supports the use of IntelliTrace with an application deployed to a hosted service in the cloud. How to do it... We are going to use IntelliTrace to investigate an application deployed to a hosted service in the cloud. We do this as follows: The first few steps occur before the application package is deployed to the cloud: Use Visual Studio 2010 Ultimate Edition to build a Windows Azure project. Right click on the Solution and select Publish.... Select Enable IntelliTrace for .Net 4 roles. Click on Settings... and make any changes desired to the IntelliTrace settings for modules excluded, and so on. Click on OK to continue the deployment of the application package. The remaining steps occur after the package has been deployed and the hosted service is in the Ready (that is, running) state: Open the Server Explorer in Visual Studio. On the Windows Azure Compute node, right click on an instance node and select View IntelliTrace logs. 
Investigate the downloaded logs, looking at exceptions and their call stacks, and so on. Right click on individual lines of code in a code file and select Search For This Line In IntelliTrace. Select one of the located uses and step through the code from the line. How it works... Steps 1 through 5 are a normal application package deployment except for the IntelliTrace configuration. In steps 6 and 7, we use Server Explorer to access and download the IntelliTrace logs. Note that we can refresh the logs through additional requests to View IntelliTrace logs. In steps 8 through 10, we look at various aspects of the downloaded IntelliTrace logs. Further resources on this subject: Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [Article] Digging into Windows Azure Diagnostics [Article] Managing Azure Hosted Services with the Service Management API [Article] Autoscaling with the Windows Azure Service Management REST API [Article] Using the Windows Azure Platform PowerShell Cmdlets [Article]

article-image-windows-azure-diagnostics-initializing-configuration-and-using-configuration-file
Packt
11 Aug 2011
8 min read
Save for later

Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File

Packt
11 Aug 2011
8 min read
Microsoft Windows Azure Development Cookbook: Over 80 advanced recipes for developing scalable services with the Windows Azure platform. Read more about this book. (For more resources on this subject, see here.) The implementation of Windows Azure Diagnostics was changed in Windows Azure SDK v1.3, and it is now one of the pluggable modules that have to be explicitly imported into a role in the service definition file. As Windows Azure Diagnostics persists both its configuration and data to Windows Azure storage, it is necessary to specify a storage service account for diagnostics in the service configuration file. The configuration of Windows Azure Diagnostics is performed at the instance level. The code to do that configuration is at the role level, but the diagnostics configuration for each instance is stored in individual blobs in a container named wad-control-container, located in the storage service account configured for Windows Azure Diagnostics. Initializing the configuration of Windows Azure Diagnostics The Windows Azure Diagnostics module is imported into a role by the specification of an Import element with a moduleName attribute of Diagnostics in the Imports section of the service definition file (ServiceDefinition.csdef). This further requires the specification, in the service configuration file (ServiceConfiguration.cscfg), of a Windows Azure Storage Service account that can be used to access the instance configuration for diagnostics. This configuration is stored as an XML file in a blob, named for the instance, in a container named wad-control-container in the storage service account configured for diagnostics. The Diagnostics Agent service is started automatically when a role instance starts, provided the diagnostics module has been imported into the role. Note that this is not true in Windows Azure SDK versions prior to v1.3, where the Diagnostics Agent must be explicitly started through the invocation of DiagnosticMonitor.Start(). On instance startup, the diagnostics configuration for the instance can be set as desired in the overridden RoleEntryPoint.OnStart() method. The general idea is to retrieve the default initial configuration using DiagnosticMonitor.GetDefaultInitialConfiguration() and modify it as necessary before saving it using DiagnosticMonitor.Start(). This method name is something of a relic, since in Windows Azure SDK v1.3 and later the Diagnostics Agent service is started automatically. Another way to modify the diagnostics configuration for the instance is to use RoleInstanceDiagnosticManager.GetCurrentConfiguration() to retrieve the existing instance configuration from wad-control-container. This can be modified and then saved using RoleInstanceDiagnosticManager.SetCurrentConfiguration(). This technique can be used both inside and outside the role instance; for example, it can be used remotely to request that an on-demand transfer be performed. An issue is that using this technique during instance startup violates the principle that the environment on startup is always the same, as the existing instance configuration may already have been modified. Note that it is not possible to modify the diagnostics configuration for an instance if there is a currently active on-demand transfer. In this recipe, we will learn how to initialize programmatically the configuration of Windows Azure Diagnostics. How to do it... We are going to see how to initialize the configuration for Windows Azure Diagnostics using code.
We do this as follows: Use Visual Studio to create an empty cloud project. Add a web role to the project (accept the default name of WebRole1). Add the following assembly reference to the project: System.Data.Services.Client In the WebRole class, replace OnStart() with the following: public override bool OnStart(){ WadManagement wadManagement = new WadManagement(); wadManagement.InitializeConfiguration(); return base.OnStart();} In the Default.aspx file, replace the asp:Content element named BodyContent with the following: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <div id="xmlInner"> <pre> <asp:label id="xmlLabel" runat="server"/> </pre> </div></asp:Content> Add the following using statements to the Default.aspx.cs file: using Microsoft.WindowsAzure.ServiceRuntime; In the Default.aspx.cs file, add the following private members to the _Default class: private String deploymentId = RoleEnvironment.DeploymentId;private String roleName = RoleEnvironment.CurrentRoleInstance.Role.Name;private String instanceId = RoleEnvironment.CurrentRoleInstance.Id; In the Default.aspx.cs file, replace Page_Load() with the following: protected void Page_Load(object sender, EventArgs e){ WadManagement wad = new WadManagement(); String wadConfigurationForInstance = wad.GetConfigurationBlob( deploymentId, roleName, instanceId); xmlLabel.Text = Server.HtmlEncode(wadConfigurationForInstance);} Add a class named WadManagement to the project. Add the following using statements to the WadManagement class: using Microsoft.WindowsAzure;using Microsoft.WindowsAzure.Diagnostics;using Microsoft.WindowsAzure.Diagnostics.Management;using Microsoft.WindowsAzure.ServiceRuntime;using Microsoft.WindowsAzure.StorageClient; Add the following private members to the WadManagement class: private String wadConnectionString ="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";private String wadControlContainerName = "wad-control-container";private CloudStorageAccount cloudStorageAccount; Add the following constructor to the WadManagement class: public WadManagement(){ cloudStorageAccount = CloudStorageAccount.Parse( RoleEnvironment.GetConfigurationSettingValue( wadConnectionString));} Add the following methods, retrieving the instance configuration blob from Windows Azure Storage, to the WadManagement class: public String GetConfigurationBlob( String deploymentId, String roleName, String instanceId){ DeploymentDiagnosticManager deploymentDiagnosticManager = new DeploymentDiagnosticManager( cloudStorageAccount, deploymentId); String wadConfigurationBlobNameForInstance = String.Format("{0}/{1}/{2}", deploymentId, roleName, instanceId); String wadConfigurationForInstance = GetWadConfigurationForInstance( wadConfigurationBlobNameForInstance); return wadConfigurationForInstance;}private String GetWadConfigurationForInstance( String wadConfigurationInstanceBlobName){ CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient(); CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference( wadControlContainerName); CloudBlob cloudBlob = cloudBlobContainer.GetBlobReference( wadConfigurationInstanceBlobName); String wadConfigurationForInstance = cloudBlob.DownloadText(); return wadConfigurationForInstance;} Add the following method, initializing the configuration of Windows Azure Diagnostics, to the WadManagement class: public void InitializeConfiguration(){ String eventLog = "Application!*"; String performanceCounter = @"Processor(_Total)% Processor Time"; 
DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration(); dmc.DiagnosticInfrastructureLogs.BufferQuotaInMB = 100; dmc.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromHours(1); dmc.DiagnosticInfrastructureLogs. ScheduledTransferLogLevelFilter = LogLevel.Verbose; dmc.WindowsEventLog.BufferQuotaInMB = 100; dmc.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromHours(1); dmc.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Verbose; dmc.WindowsEventLog.DataSources.Add(eventLog); dmc.Logs.BufferQuotaInMB = 100; dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromHours(1); dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose; dmc.Directories.ScheduledTransferPeriod = TimeSpan.FromHours(1); PerformanceCounterConfiguration perfCounterConfiguration = new PerformanceCounterConfiguration(); perfCounterConfiguration.CounterSpecifier = performanceCounter; perfCounterConfiguration.SampleRate = System.TimeSpan.FromSeconds(10); dmc.PerformanceCounters.DataSources.Add( perfCounterConfiguration); dmc.PerformanceCounters.BufferQuotaInMB = 100; dmc.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromHours(1); DiagnosticMonitor.Start(cloudStorageAccount, dmc);} How it works... In steps 1 and 2, we create a cloud project with a web role. We add the required assembly reference in step 3. In step 4, we modify OnStart(), so that it initializes the configuration of Windows Azure Diagnostics. In step 5, we modify the default web page, so that it displays the content of the blob storing the instance configuration for Windows Azure Diagnostics. In step 6, we add the required using statement to Default.aspx.cs. In step 7, we add some private members to store the deployment ID, the role name, and the instance ID of the current instance. In step 8, we modify the Page_Load() event handler to retrieve the blob content and display it on the default web page. In step 9, we add the WadManagement class that interacts with the Windows Azure Blob Service. In step 10, we add the required using statements. In step 11, we add some private members to contain the name of the connection string in the service configuration file, and the name of the blob container containing the instance configuration for Windows Azure Diagnostics. We also add a CloudStorageAccount instance, which we initialize in the constructor we add in step 12. We then add, in step 13, the two methods we use to retrieve the content of the blob containing the instance configuration for Windows Azure Diagnostics. In GetConfigurationBlob(), we first create the name of the blob. We then pass this into the GetWadConfigurationForInstance() method, which invokes various Windows Azure Storage Client Library methods to retrieve the content of the blob. In step 14, we add the method to initialize the configuration of Windows Azure Diagnostics for the instance. We first specify the names of the event log and performance counter we want to capture and persist. We then retrieve the default initial configuration and configure capture of the Windows Azure infrastructure logs, Windows Event Logs, basic logs, directories, and performance counters. For each of them, we specify a data buffer size of 100 MB and schedule an hourly transfer of logged data. For Windows Event Logs, we specify that the Application!* event log should be captured locally and persisted to the storage service. The event log is specified using an XPath expression allowing the events to be filtered, if desired. 
We can add other event logs if desired. We configure the capture and persistence of only one performance counter—the Processor(_Total)% Processor Time. We can add other performance counters if desired. Two sections at the end of this recipe provide additional details on the configuration of event logs and performance counters. We specify a transfer schedule for the directories data buffer. The Diagnostics Agent automatically inserts special directories into the configuration: crash dumps for all roles, and IIS logs and IIS failed request logs for web roles. The Diagnostics Agent does this because the actual location of the directories is not known until the instance is deployed. Note that even though we have configured a persistence schedule for crash dumps, they are not captured by default. We would need to invoke the CrashDumps.EnableCollection() method to enable the capture of crash dumps.
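As discussed in the introduction to this recipe, the instance configuration can also be modified from outside the role instance by going through wad-control-container with the RoleInstanceDiagnosticManager class. The following is a minimal sketch of that approach, assuming the Microsoft.WindowsAzure.Diagnostics.Management API from Windows Azure SDK v1.3 or later; the storage connection string, deployment ID, role name, and instance ID values are placeholders.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

public class RemoteWadConfiguration
{
    // Changes the performance counter transfer period for a single instance
    // by rewriting its configuration blob in wad-control-container.
    public void SetTransferPeriod(
        String deploymentId, String roleName, String instanceId)
    {
        // Placeholder storage account credentials.
        CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY");

        DeploymentDiagnosticManager deploymentDiagnosticManager =
            new DeploymentDiagnosticManager(cloudStorageAccount, deploymentId);
        RoleInstanceDiagnosticManager instanceDiagnosticManager =
            deploymentDiagnosticManager.GetRoleInstanceDiagnosticManager(
                roleName, instanceId);

        // Retrieve the current instance configuration, modify it, and save it back.
        DiagnosticMonitorConfiguration configuration =
            instanceDiagnosticManager.GetCurrentConfiguration();
        configuration.PerformanceCounters.ScheduledTransferPeriod =
            TimeSpan.FromMinutes(5);
        instanceDiagnosticManager.SetCurrentConfiguration(configuration);
    }
}

Because the Diagnostics Agent polls this configuration blob, the change takes effect without redeploying the role.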

Managing Azure Hosted Services with the Service Management API

Packt
08 Aug 2011
11 min read
  Microsoft Windows Azure Development Cookbook Over 80 advanced recipes for developing scalable services with the Windows Azure platform         Read more about this book       (For more resources on this subject, see here.) Introduction The Windows Azure Portal provides a convenient and easy-to-use way of managing the hosted services and storage account in a Windows Azure subscription, as well as any deployments into these hosted services. The Windows Azure Service Management REST API provides a programmatic way of managing the hosted services and storage accounts in a Windows Azure subscription, as well as any deployments into these hosted services. These techniques are complementary and, indeed, it is possible to use the Service Management API to develop an application that provides nearly all the features of the Windows Azure Portal. The Service Management API provides almost complete control over the hosted services and storage accounts contained in a Windows Azure subscription. All operations using this API must be authenticated using an X.509 management certificate. We see how to do this in the Authenticating against the Windows Azure Service Management REST API recipe in Controlling Access in the Windows Azure Platform. In Windows Azure, a hosted service is an administrative and security boundary for an application. A hosted service specifies a name for the application, as well as specifying a Windows Azure datacenter or affinity group into which the application is deployed. In the Creating a Windows Azure hosted service recipe, we see how to use the Service Management API to create a hosted service. A hosted service has no features or functions until an application is deployed into it. An application is deployed by specifying a deployment slot, either production or staging, and by providing the application package containing the code, as well as the service configuration file used to configure the application. We see how to do this using the Service Management API in the Deploying an application into a hosted service recipe. Once an application has been deployed, it probably has to be upgraded occasionally. This requires the provision of a new application package and service configuration file. We see how to do this using the Service Management API in the Upgrading an application deployed to a hosted service recipe. A hosted service has various properties defining it as do the applications deployed into it. There could, after all, be separate applications deployed into each of the production and staging slots. In the Retrieving the properties of a hosted service recipe, we see how to use the Service Management API to get these properties. An application deployed as a hosted service in Windows Azure can use the Service Management API to modify itself while running. Specifically, an application can autoscale by varying the number of role instances to match anticipated demand. We see how to do this in the Autoscaling with the Windows Azure Service Management REST API recipe. We can use the Service Management API to develop our own management applications. Alternatively, we can use one of the PowerShell cmdlets libraries that have already been developed using the API. Both the Windows Azure team and Cerebrata have developed such libraries. We see how to use them in the Using the Windows Azure Platform PowerShell Cmdlets recipe. Creating a Windows Azure hosted service A hosted service is the administrative and security boundary for an application deployed to Windows Azure. 
The hosted service specifies the service name, a label, and either the Windows Azure datacenter location or the affinity group into which the application is to be deployed. These cannot be changed once the hosted service is created. The service name is the subdomain under cloudapp.net used by the application, and the label is a humanreadable name used to identify the hosted service on the Windows Azure Portal. The Windows Azure Service Management REST API exposes a create hosted service operation. The REST endpoint for the create hosted service operation specifies the subscription ID under which the hosted service is to be created. The request requires a payload comprising an XML document containing the properties needed to define the hosted service, as well as various optional properties. The service name provided must be unique across all hosted services in Windows Azure, so there is a possibility that a valid create hosted service operation will fail with a 409 Conflict error if the provided service name is already in use. As the create hosted service operation is asynchronous, the response contains a request ID that can be passed into a get operation status operation to check the current status of the operation. In this recipe, we will learn how to use the Service Management API to create a Windows Azure hosted service. Getting ready The recipes in this article use the ServiceManagementOperation utility class to invoke operations against the Windows Azure Service Management REST API. We implement this class as follows: Add a class named ServiceManagementOperation to the project. Add the following assembly reference to the project: System.Xml.Linq.dll Add the following using statements to the top of the class file: using System.Security.Cryptography.X509Certificates;using System.Net;using System.Xml.Linq;using System.IO; Add the following private members to the class: String thumbprint;String versionId = "2011-02-25"; Add the following constructor to the class: public ServiceManagementOperation(String thumbprint){ this.thumbprint = thumbprint;} Add the following method, retrieving an X.509 certificate from the certificate store, to the class: private X509Certificate2 GetX509Certificate2( String thumbprint){ X509Certificate2 x509Certificate2 = null; X509Store store = new X509Store("My", StoreLocation.LocalMachine); try { store.Open(OpenFlags.ReadOnly); X509Certificate2Collection x509Certificate2Collection = store.Certificates.Find( X509FindType.FindByThumbprint, thumbprint, false); x509Certificate2 = x509Certificate2Collection[0]; } finally { store.Close(); } return x509Certificate2;} Add the following method, creating an HttpWebRequest, to the class: private HttpWebRequest CreateHttpWebRequest( Uri uri, String httpWebRequestMethod){ X509Certificate2 x509Certificate2 = GetX509Certificate2(thumbprint); HttpWebRequest httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(uri); httpWebRequest.Method = httpWebRequestMethod; httpWebRequest.Headers.Add("x-ms-version", versionId); httpWebRequest.ClientCertificates.Add(x509Certificate2); httpWebRequest.ContentType = "application/xml"; return httpWebRequest;} Add the following method, invoking a GET operation on the Service Management API, to the class: public XDocument Invoke(String uri){ XDocument responsePayload; Uri operationUri = new Uri(uri); HttpWebRequest httpWebRequest = CreateHttpWebRequest(operationUri, "GET"); using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse()) { Stream responseStream = 
response.GetResponseStream(); responsePayload = XDocument.Load(responseStream); } return responsePayload;} Add the following method, invoking a POST operation on the Service Management API, to the class: public String Invoke(String uri, XDocument payload){ Uri operationUri = new Uri(uri); HttpWebRequest httpWebRequest = CreateHttpWebRequest(operationUri, "POST"); using (Stream requestStream = httpWebRequest.GetRequestStream()) { using (StreamWriter streamWriter = new StreamWriter(requestStream, System.Text.UTF8Encoding.UTF8)) { payload.Save(streamWriter, SaveOptions.DisableFormatting); } } String requestId; using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse()) { requestId = response.Headers["x-ms-request-id"]; } return requestId;} How it works... In steps 1 through 3, we set up the class. In step 4, we add a version ID for service management operations. Note that Microsoft periodically releases new operations for which it provides a new version ID, which is usually applicable for operations added earlier. In step 4, we also add a private member for the X.509 certificate thumbprint that we initialize in the constructor we add in step 5. In step 6, we open the Personal (My) certificate store on the local machine level and retrieve an X.509 certificate identified by thumbprint. If necessary, we can specify the current user level, instead of the local machine level, by using StoreLocation.CurrentUser instead of StoreLocation.LocalMachine. In step 7, we create an HttpWebRequest with the desired HTTP method type, and add the X.509 certificate to it. We also add various headers including the required x-ms-version. In step 8, we invoke a GET request against the Service Management API and load the response into an XML document which we then return. In step 9, we write an XML document, containing the payload, into the request stream for an HttpWebRequest and then invoke a POST request against the Service Management API. We extract the request ID from the response and return it. How to do it... We are now going to construct the payload required for the create hosted service operation, and then use it when we invoke the operation against the Windows Azure Service Management REST API. We do this as follows: Add a new class named CreateHostedServiceExample to the WPF project. 
If necessary, add the following assembly reference to the project: System.Xml.Linq.dll Add the following using statement to the top of the class file: using System.Xml.Linq; Add the following private members to the class: XNamespace wa = "http://schemas.microsoft.com/windowsazure";String createHostedServiceFormat ="https://management.core.windows.net/{0}/services/hostedservices"; Add the following method, creating a base-64 encoded string, to the class: private String ConvertToBase64String(String value){ Byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value); String base64String = Convert.ToBase64String(bytes); return base64String;} Add the following method, creating the payload, to the class: private XDocument CreatePayload( String serviceName, String label, String description, String location, String affinityGroup){ String base64LabelName = ConvertToBase64String(label); XElement xServiceName = new XElement(wa + "ServiceName", serviceName); XElement xLabel = new XElement(wa + "Label", base64LabelName); XElement xDescription = new XElement(wa + "Description", description); XElement xLocation = new XElement(wa + "Location", location); XElement xAffinityGroup = new XElement(wa + "AffinityGroup", affinityGroup); XElement createHostedService = new XElement(wa +"CreateHostedService"); createHostedService.Add(xServiceName); createHostedService.Add(xLabel); createHostedService.Add(xDescription); createHostedService.Add(xLocation); //createHostedService.Add(xAffinityGroup); XDocument payload = new XDocument(); payload.Add(createHostedService); payload.Declaration = new XDeclaration("1.0", "UTF-8", "no"); return payload;} Add the following method, invoking the create hosted service operation, to the class: private String CreateHostedService(String subscriptionId, String thumbprint, String serviceName, String label, String description, String location, String affinityGroup){ String uri = String.Format(createHostedServiceFormat, subscriptionId); XDocument payload = CreatePayload(serviceName, label, description, location, affinityGroup); ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint); String requestId = operation.Invoke(uri, payload); return requestId;} Add the following method, invoking the methods added earlier, to the class: public static void UseCreateHostedServiceExample(){ String subscriptionId = "{SUBSCRIPTION_ID}"; String thumbprint = "{THUMBPRINT}"; String serviceName = "{SERVICE_NAME}"; String label = "{LABEL}"; String description = "Newly created service"; String location = "{LOCATION}"; String affinityGroup = "{AFFINITY_GROUP}"; CreateHostedServiceExample example = new CreateHostedServiceExample(); String requestId = example.CreateHostedService( subscriptionId, thumbprint, serviceName, label, description, location, affinityGroup);} How it works... In steps 1 through 3, we set up the class. In step 4, we add private members to define the XML namespace used in creating the payload and the String format used in generating the endpoint for the create hosted service operation. In step 5, we add a helper method to create a base-64 encoded copy of a String. We create the payload in step 6 by creating an XElement instance for each of the required and optional properties, as well as the root element. We add each of these elements to the root element and then add this to an XML document. Note that we do not add an AffinityGroup element because we provide a Location element and only one of them should be provided. 
In step 7, we use the ServiceManagementOperation utility class , described in the Getting ready section, to invoke the create hosted service operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate and the payload, and then sends the request to the create hosted services endpoint. It then parses the response to retrieve the request ID which can be used to check the status of the asynchronous create hosted services operation. In step 8, we add a method that invokes the methods added earlier. We need to provide the subscription ID for the Windows Azure subscription, a globally unique service name for the hosted service, and a label used to identify the hosted service in the Windows Azure Portal. The location must be one of the official location names for a Windows Azure datacenter, such as North Central US. Alternatively, we can provide the GUID identifier of an existing affinity group and swap the commenting out in the code, adding the Location and AffinityGroup elements in step 6. We see how to retrieve the list of locations and affinity groups in the Locations and affinity groups section of this recipe. There's more... Each Windows Azure subscription can create six hosted services. This is a soft limit that can be raised by requesting a quota increase from Windows Azure Support at the following URL: http://www.microsoft.com/windowsazure/support/ There are also soft limits on the number of cores per subscription (20) and the number of Windows Azure storage accounts per subscription (5). These limits can also be increased by request to Windows Azure Support. Locations and affinity groups The list of locations and affinity groups can be retrieved using the list locations and list affinity groups operations respectively in the Service Management API. We see how to do this in the Using the Windows Azure Platform PowerShell Cmdlets recipe. As of this writing, the locations are: Anywhere US South Central US North Central US Anywhere Europe North Europe West Europe Anywhere Asia Southeast Asia East Asia The affinity groups are specific to a subscription.
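Because the create hosted service operation is asynchronous, the request ID returned in step 7 can be used to poll for completion. The following sketch reuses the ServiceManagementOperation utility class from the Getting ready section; the operations endpoint format and the response element names are assumptions based on the get operation status operation of the Service Management API, not code taken from the recipe. The method can be added to the CreateHostedServiceExample class and called with the request ID returned by CreateHostedService().

private String operationStatusFormat =
    "https://management.core.windows.net/{0}/operations/{1}";

private String GetOperationStatus(
    String subscriptionId, String thumbprint, String requestId)
{
    // Invoke the get operation status operation and return the Status element,
    // which is expected to be InProgress, Succeeded, or Failed.
    String uri = String.Format(
        operationStatusFormat, subscriptionId, requestId);
    ServiceManagementOperation operation =
        new ServiceManagementOperation(thumbprint);
    XDocument response = operation.Invoke(uri);
    XNamespace wa = "http://schemas.microsoft.com/windowsazure";
    String status = response.Element(wa + "Operation")
        .Element(wa + "Status").Value;
    return status;
}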

Using the Windows Azure Platform PowerShell Cmdlets

Packt
08 Aug 2011
4 min read
Getting ready

If necessary, we can download PowerShell 2 from the Microsoft download center at the following URL:

http://www.microsoft.com/download/en/details.aspx?id=11829

We need to download and install the Windows Azure Platform PowerShell cmdlets. The package with the cmdlets can be downloaded from the following URL:

http://wappowershell.codeplex.com/

Once the package has been downloaded, the cmdlets need to be built and installed. The installed package contains a StartHere file explaining the process.

How to do it...

We are going to use the Windows Azure Platform cmdlets to retrieve various properties of a Windows Azure subscription and a hosted service in it. Create a PowerShell script named Get-Properties.ps1 and insert the following text:

$subscriptionId = 'SUBSCRIPTION_ID'
$serviceName = 'SERVICE_NAME'
$thumbprint = 'THUMBPRINT'
$getCertificate = Get-Item cert:\LocalMachine\My\$thumbprint
Add-PSSnapin AzureManagementToolsSnapIn
Get-HostedServices -SubscriptionId $subscriptionId -Certificate $getCertificate
Get-AffinityGroups -SubscriptionId $subscriptionId -Certificate $getCertificate
Get-HostedProperties -SubscriptionId $subscriptionId -Certificate $getCertificate -ServiceName $serviceName

Launch PowerShell. Navigate to the directory containing Get-Properties.ps1. Invoke the cmdlets to retrieve the properties:

.\Get-Properties.ps1

How it works...

In step 1, we create the PowerShell script to invoke the list hosted services, list affinity groups, and get hosted service properties operations in the Windows Azure Service Management REST API. We need to provide the subscription ID for the Windows Azure subscription, the name of the hosted service, and the thumbprint for a management certificate uploaded to the Windows Azure subscription. In the script, we retrieve the X.509 certificate from the Personal (My) certificate store on the local machine level. If necessary, we can specify the current user level, instead of the local machine level, by using CurrentUser in place of LocalMachine when we define $getCertificate.

In steps 2 and 3, we set up PowerShell. In step 4, we invoke the script using the .\ syntax to demonstrate that we really want to invoke an unsigned script in the current directory.

There's more...

PowerShell supports an execution policy to restrict the PowerShell scripts that can be run on a system. If the current execution policy does not permit the Windows Azure Service Management cmdlets to run, the execution policy can be changed to remote signed by invoking the following at the command prompt:

C:\Users\Administrator>PowerShell -command "Set-ExecutionPolicy RemoteSigned"

This sets the execution policy globally. PowerShell 2 introduced a command-line switch allowing it to be set only for the current invocation:

C:\Users\Administrator>PowerShell -ExecutionPolicy RemoteSigned

Azure Management cmdlets

Cerebrata has released a commercial set of Azure Management cmdlets that are more extensive than the Windows Azure Service Management cmdlets.
The following PowerShell script retrieves the list of affinity groups for a Windows Azure subscription, including the GUID identifier not available on the Windows Azure Portal:

$subscriptionId = 'SUBSCRIPTION_ID'
$thumbprint = 'THUMBPRINT'
$getCertificate = Get-ChildItem -path cert:\LocalMachine\My\$thumbprint
Add-PSSnapin AzureManagementCmdletsSnapIn
Get-AffinityGroup -SubscriptionId $subscriptionId -Certificate $getCertificate

We need to provide the subscription ID for the Windows Azure subscription and the thumbprint for a management certificate uploaded to the Windows Azure subscription. In the script, we retrieve the X.509 certificate from the Personal (My) certificate store on the local machine level. If necessary, we can specify the current user level, instead of the local machine level, by using CurrentUser in place of LocalMachine when we define $getCertificate.

We can use the following command to retrieve the list of Windows Azure locations:

Get-AzureDataCenterLocation -SubscriptionId $subscriptionId -Certificate $getCertificate

Summary

In this article, we saw how to use the Windows Azure Platform PowerShell cmdlets to invoke various service operations in the Windows Azure Service Management REST API.

Further resources on this subject:

Managing Azure Hosted Services with the Service Management API [Article]
Autoscaling with the Windows Azure Service Management REST API [Article]
Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [Article]
Digging into Windows Azure Diagnostics [Article]
Using IntelliTrace to Diagnose Problems with a Hosted Service [Article]

Autoscaling with the Windows Azure Service Management REST API

Packt
08 Aug 2011
9 min read
  Microsoft Windows Azure Development Cookbook A hosted service may have a predictable pattern such as heavy use during the week and limited use at the weekend. Alternatively, it may have an unpredictable pattern identifiable through various performance characteristics. Windows Azure charges by the hour for each compute instance, so the appropriate number of instances should be deployed at all times. The basic idea is that the number of instances for the various roles in the hosted service is modified to a value appropriate to a schedule or to the performance characteristics of the hosted service. We use the Service Management API to retrieve the service configuration for the hosted service, modify the instance count as appropriate, and then upload the service configuration. In this recipe, we will learn how to use the Windows Azure Service Management REST API to autoscale a hosted service depending on the day of the week. Getting ready We need to create a hosted service. We must create an X.509 certificate and upload it to the Windows Azure Portal twice: once as a management certificate and once as a service certificate to the hosted service. How to do it... We are going to vary the instance count of a web role deployed to the hosted service by using the Windows Azure Service Management REST API to modify the instance count in the service configuration. We are going to use two instances of the web role from Monday through Friday and one instance on Saturday and Sunday, where all days are calculated in UTC. We do this as follows: Create a Windows Azure Project and add an ASP.Net Web Role to it. Add the following using statements to the top of WebRole.cs: using System.Threading; using System.Xml.Linq; using System.Security.Cryptography.X509Certificates; Add the following members to the WebRole class in WebRole.cs: XNamespace wa = "http://schemas.microsoft.com/windowsazure"; XNamespace sc = http://schemas.microsoft.com/ ServiceHosting/2008/10/ServiceConfiguration"; String changeConfigurationFormat = https://management.core. windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}/ ?comp=config"; String getConfigurationFormat = https://management.core.windows. 
net/{0}/services/hostedservices/{1}/deploymentslots/{2}"; String subscriptionId = RoleEnvironment.GetConfigurationSettingVal ue("SubscriptionId"); String serviceName = RoleEnvironment.GetConfigurationSettingValue ("ServiceName"); String deploymentSlot = RoleEnvironment.GetConfigurationSettingVal ue("DeploymentSlot"); String thumbprint = RoleEnvironment.GetConfigurationSettingValue ("Thumbprint"); String roleName = "WebRole1"; String instanceId = "WebRole1_IN_0"; Add the following method, implementing RoleEntryPoint.Run(), to the WebRole class: WebRole class: public override void Run() { Int32 countMinutes = 0; while (true) { Thread.Sleep(60000); if (++countMinutes == 20) { countMinutes = 0; if ( RoleEnvironment.CurrentRoleInstance.Id == instanceId) { ChangeInstanceCount(); } } } } Add the following method, controlling the instance count change, to the WebRole class: private void ChangeInstanceCount() { XElement configuration = LoadConfiguration(); Int32 requiredInstanceCount = CalculateRequiredInstanceCount(); if (GetInstanceCount(configuration) != requiredInstanceCount) { SetInstanceCount(configuration, requiredInstanceCount); String requestId = SaveConfiguration(configuration); } } Add the following method, calculating the required instance count, to the WebRole class: private Int32 CalculateRequiredInstanceCount() { Int32 instanceCount = 2; DayOfWeek dayOfWeek = DateTime.UtcNow.DayOfWeek; if (dayOfWeek == DayOfWeek.Saturday || dayOfWeek == DayOfWeek.Sunday) { instanceCount = 1; } return instanceCount; } Add the following method, retrieving the instance count from the service configuration, to the WebRole class: private Int32 GetInstanceCount(XElement configuration) { XElement instanceElement = (from s in configuration.Elements(sc + "Role") where s.Attribute("name").Value == roleName select s.Element(sc + "Instances")).First(); Int32 instanceCount = (Int32)Convert.ToInt32( instanceElement.Attribute("count").Value); return instanceCount; } Add the following method, setting the instance count in the service configuration, to the WebRole class: private void SetInstanceCount( XElement configuration, Int32 value) { XElement instanceElement = (from s in configuration.Elements(sc + "Role") where s.Attribute("name").Value == roleName select s.Element(sc + "Instances")).First(); instanceElement.SetAttributeValue("count", value); } Add the following method, creating the payload for the change deployment configuration operation, to the WebRole class: private XDocument CreatePayload(XElement configuration) { String configurationString = configuration.ToString(); String base64Configuration = ConvertToBase64String(configurationString); XElement xConfiguration = new XElement(wa + "Configuration", base64Configuration); XElement xChangeConfiguration = new XElement(wa + "ChangeConfiguration", xConfiguration); XDocument payload = new XDocument(); payload.Add(xChangeConfiguration); payload.Declaration = new XDeclaration("1.0", "UTF-8", "no"); return payload; } Add the following method, loading the service configuration, to the WebRole class: private XElement LoadConfiguration() { String uri = String.Format(getConfigurationFormat, subscriptionId, serviceName, deploymentSlot); ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint); XDocument deployment = operation.Invoke(uri); String base64Configuration = deployment.Element( wa + "Deployment").Element(wa + "Configuration").Value; String stringConfiguration = ConvertFromBase64String(base64Configuration); XElement configuration = 
XElement.Parse(stringConfiguration); return configuration; } Add the following method, saving the service configuration, to the WebRole class: private String SaveConfiguration(XElement configuration) { String uri = String.Format(changeConfigurationFormat, subscriptionId, serviceName, deploymentSlot); XDocument payload = CreatePayload(configuration); ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint); String requestId = operation.Invoke(uri, payload); return requestId; } Add the following utility methods, converting a String to and from its base-64 encoded version, to the WebRole class: private String ConvertToBase64String(String value) { Byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value); String base64String = Convert.ToBase64String(bytes); return base64String; } private String ConvertFromBase64String(String base64Value) { Byte[] bytes = Convert.FromBase64String(base64Value); String value = System.Text.Encoding.UTF8.GetString(bytes); return value; } Add the ServiceManagementOperation class described in the Getting ready section of the Creating a Windows Azure hosted service recipe to the WebRole1 project. Set the ConfigurationSettings element in the ServiceDefinition.csdef file to: <ConfigurationSettings> <Setting name="DeploymentSlot" /> <Setting name="ServiceName" /> <Setting name="SubscriptionId" /> <Setting name="Thumbprint" /> </ConfigurationSettings> Set the ConfigurationSettings element in the ServiceDefinition.cscfg file to the following: <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics. ConnectionString" alue="DefaultEndpointsProtocol=https;AccountNam e=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY" /> <Setting name="DeploymentSlot" value="production" /> <Setting name="ServiceName" value="SERVICE_NAME" /> <Setting name="SubscriptionId" value="SUBSCRIPTION_ID" /> <Setting name="Thumbprint" value="THUMBPRINT" /> </ConfigurationSettings> How it works... In steps 1 and 2, we set up the WebRole class. In step 3, we add private members to define the XML namespace used in processing the response and the String format used in generating the endpoint for the change deployment configuration and get deployment operations. We then initialize several values from configuration settings in the service configuration file deployed to each instance. In step 4, we implement the Run() class . Every 20 minutes, the thread this method runs in wakes up and, only in the instance named WebRole1_IN_0, invokes the method controlling the instance count for the web role. This code runs in a single instance to ensure that there is no race condition with multiple instances trying to change the instance count simultaneously. In step 5, we load the service configuration. If we detect that the instance count should change we modify the service configuration to have the desired instance count and then save the service configuration. Note that the service configuration used here is downloaded and uploaded using the Service Management API. Step 6 contains the code where we calculate the needed instance count. In this example, we choose an instance count of 2 from Monday through Friday and 1 on Saturday and Sunday. All days are specified in UTC. This is the step where we should insert the desired scaling algorithm. In step 7, we retrieve the instance count for the web role from the service configuration. In step 8, we set the instance count to the desired value in the service configuration. 
In step 9, we create the payload for the change deployment configuration operation. We create a Configuration element and add a base-64 encoded copy of the service configuration to it. We add the Configuration element to the root ChangeConfiguration element which we then add to an XML document. In step 10, we use the ServiceManagementOperation utility class , described in the Creating a Windows Azure hosted service recipe, to invoke the get deployment operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate, and sends the request to the get deployment endpoint. We load the response into an XML document from which we extract the base-64 encoded service configuration. We then convert this into its XML format and load this into an XElement which we return. In step 11, we use the ServiceManagementOperation utility class to invoke the change deployment configuration operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate and the payload, and then sends the request to the change deployment configuration endpoint. It then parses the response to retrieve the request ID. In step 12, we add two utility methods to convert to and from a base-64 encoded String. In step 13, we add the ServiceManagementOperation utility class that we use to invoke operations against the Service Management API. In steps 14 and 15, we define some configuration settings in the service definition file and specify them in the service configuration file. We provide values for the Windows Azure Storage Service account name and access key. We also provide the subscription ID for the Windows Azure subscription, as well as the service name for current hosted service. We also need to add the thumbprint for the X.509 certificate we uploaded as a management certificate to the Windows Azure subscription and a service certificate to the hosted service we are deploying this application into. Note that this thumbprint is the same as that configured in the Certificate section of the ServiceConfiguration.cscfg file. This duplication is necessary because the Certificate section of this file is not accessible to the application code. Summary Windows Azure charges by the hour for each compute instance, so the appropriate number of instances should be deployed at all times. Autoscaling with the Windows Azure Service Management REST API as shown in this article is a boon in terms of keeping track of number of deployments at any time. Further resources on this subject: Managing Azure Hosted Services with the Service Management API [Article] Using the Windows Azure Platform PowerShell Cmdlets [Article] Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [Article] Digging into Windows Azure Diagnostics [Article] Using IntelliTrace to Diagnose Problems with a Hosted Service [Article]
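A final note on step 6 of the recipe above: the day-of-week rule is only a placeholder for a real scaling algorithm. As a variation, the decision could be driven by a measured load figure, such as an average CPU percentage taken from the diagnostics data. The sketch below is hypothetical: the cpuPercent parameter and the 40 and 70 percent thresholds are illustrative only, and gathering the metric itself is not shown.

private Int32 CalculateRequiredInstanceCount(Double cpuPercent)
{
    // Keep at least one instance and add instances as average CPU load rises.
    // The thresholds are illustrative values, not recommendations.
    Int32 instanceCount = 1;
    if (cpuPercent > 40d)
    {
        instanceCount = 2;
    }
    if (cpuPercent > 70d)
    {
        instanceCount = 3;
    }
    return instanceCount;
}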

How Storage Works on Amazon

Packt
22 Jul 2011
9 min read
Amazon Web Services: Migrating your .NET Enterprise Application Evaluate your Cloud requirements and successfully migrate your .NET Enterprise Application to the Amazon Web Services Platform Creating a S3 bucket with logging Logging provides detailed information on who accessed what data in your bucket and when. However, to turn on logging for a bucket, an existing bucket must have already been created to hold the logging information, as this is where AWS stores it. To create a bucket with logging, click on the Create Bucket button in the Buckets sidebar: This time, however, click on the Set Up Logging button . You will be presented with a dialog that allows you to choose the location for the logging information, as well as the prefix for your logging data: You will note that we have pointed the logging information back at the original bucket migrate_to_aws_01 Logging information will not appear immediately; however, a file will be created every few minutes depending on activity. The following screenshot shows an example of the files that are created: Before jumping right into the command-line tools, it should be noted that the AWS Console includes a Java-based multi-file upload utility that allows a maximum size of 300 MB for each file Using the S3 command-line tools Unfortunately, Amazon does not provide official command-line tools for S3 similar to the tools they have provided for EC2. However, there is an excellent simple free utility provided at o http://s3.codeplex.com, called S3.exe, that requires no installation and runs without the requirement of third-party packages. To install the program, just download it from the website and copy it to your C:AWS folder. Setting up your credentials with S3.exe Before we can run S3.exe, we first need to set up our credentials. To do that you will need to get your S3 Access Key and your S3 Secret Access Key from the credentials page of your AWS account. Browse to the following location in your browser, https://aws-portal.amazon.com/gp/aws/developer/account/ index.html?ie=UTF8&action=access-key and scroll down to the Access Credentials section: The Access Key is displayed in this screen; however, to get your Secret Access Key you will need to click on the Show link under the Secret Access Key heading. Run the following command to set up S3.exe: C:AWS>s3 auth AKIAIIJXIP5XC6NW3KTQ 9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R To check that the tool has been installed correctly, run the s3 list command: C:AWS>s3 list You should get the following result: Copying files to S3 using S3.exe First, create a file called myfile.txt in the C:AWS directory. To copy this file to an S3 bucket that you own, use the following command: c:AWS>s3 put migrate_to_aws_02 myfile.txt This command copies the file to the migrate_to_aws_02 bucket with the default permissions of full control for the owner. You will need to refresh the AWS Console to see the file listed. (Move the mouse over the image to enlarge it.) Uploading larger files to AWS can be problematic, as any network connectivity issues during the upload will terminate the upload. To upload larger files, use the following syntax: C:AWS>s3 put migrate_to_aws_02/mybigfile/ mybigfile.txt /big This breaks the upload into small chunks, which can be reversed when getting the file back again. If you run the same command again, you will note that no chunks are uploaded. This is because S3.exe does not upload a chunk again if the checksum matches. 
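If you prefer to upload from .NET code rather than from the command line, the AWS SDK for .NET can perform the same put operation as S3.exe. The following is a minimal sketch and assumes the SDK assembly (AWSSDK) is referenced; the bucket name, key, file path, and credentials are placeholders.

using System;
using Amazon.S3;
using Amazon.S3.Model;

class S3UploadExample
{
    static void Main()
    {
        // Placeholder credentials; use your own access key and secret key.
        String accessKey = "ACCESS_KEY";
        String secretKey = "SECRET_KEY";

        using (AmazonS3Client s3Client = new AmazonS3Client(accessKey, secretKey))
        {
            // Upload C:\AWS\myfile.txt to the migrate_to_aws_02 bucket.
            PutObjectRequest request = new PutObjectRequest
            {
                BucketName = "migrate_to_aws_02",
                Key = "myfile.txt",
                FilePath = @"C:\AWS\myfile.txt"
            };
            s3Client.PutObject(request);
        }
    }
}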
Retrieving files from S3 using S3.exe Retrieving files from S3 is the reverse of copying files up to S3. To get a single file back use: C:AWS>s3 get migrate_to_aws_02/myfile.txt To get our big file back again use: C:AWS>s3 get migrate_to_aws_02/mybigfile/mybigfile.txt /big The S3.exe command automatically recombines our large file chunks back into a single file. Importing and exporting large amounts of data in and out of S3 Because S3 lives in the cloud within Amazon's data centers, it may be costly and time consuming to transfer large amounts of data to and from Amazon's data center to your own data center. An example of a large file transfer may be a large database backup file that you may wish to migrate from your own data center to AWS. Luckily for us, Amazon provides the AWS Import/Export Service for the US Standard and EU (Ireland) regions. However, this service is not supported for the other two regions at this time. The AWS Import service allows you to place your data on a portable hard drive and physically mail your hard disk to Amazon for uploading/downloading of your data from within Amazon's data center. Amazon provides the following recommendations for when to use this service. If your connection is 1.55Mbps and your data is 100GB or more If your connection is 10Mbps and your data is 600GB or more If your connection is 44.736Mbps and your data is 2TB or more If your connection is 100Mbps and your data is 5TB or more Make sure if you choose either the US West (California) or Asia Pacific (Singapore) regions that you do not need access to the AWS Import/ Export service, as it is not available in these regions. Setting up the Import/Export service To begin using this service once again, you will need to sign up for this service separately from your other services. Click on the Sign Up for AWS Import/Export button located on the product page http://aws.amazon.com/importexport, confirm the pricing and click on the Complete Sign Up button . Once again, you will need to wait for the service to become active: Current costs are:     Cost Type US East US West EU APAC Device handling $80.00 $80.00 $80.00 $99.00 Data loading time $2.49 per data loading hour $2.49 per data loading hour $2.49 per data loading hour $2.99 per data loading hour Using the Import/Export service To use the Import/Export service, first make sure that your external disk device conforms to Amazon's specifications. Confirming your device specifications The details are specified at http://aws.amazon.com/importexport/#supported_ devices, but essentially as long as it is a standard external USB 2.0 hard drive or a rack mountable device less than 8Us supporting eSATA then you will have no problems. Remember to supply a US power plug adapter if you are not located in the United States. Downloading and installing the command-line service tool Once you have confirmed that your device meets Amazon's specifications, download the command-line tools for the Import/Export service. At this time, it is not possible to use this service from the AWS Console. The tools are located at http:// awsimportexport.s3.amazonaws.com/importexport-webservice-tool.zip. Copy the .zip file to the C:AWS directory and unzip them, they will most likely end up in the following directory, C:AWSimportexport-webservice-tool. 
Creating a job To create a job, change directory to the C:AWSimportexport-webservice- tool directory, open notepad, and paste the following text into a new file: manifestVersion: 2.0 bucket: migrate_to_aws_01 accessKeyId: AKIAIIJXIP5XC6NW3KTQ deviceId: 12345678 eraseDevice: no returnAddress: name: Rob Linton street1: Level 1, Migrate St city: Amazon City stateOrProvince: Amazon postalCode: 1000 phoneNumber: 12345678 country: Amazonia customs: dataDescription: Test Data encryptedData: yes encryptionClassification: 5D992 exportCertifierName: Rob Linton requiresExportLicense: no deviceValue: 250.00 deviceCountryOfOrigin: China deviceType: externalStorageDevice Edit the text to reflect your own postal address, accessKeyId, bucket name, and save the file as MyManifest.txt. For more information on the customs configuration items refer to http://docs.amazonwebservices. com/AWSImportExport/latest/DG/index.html?ManifestFileRef_ international.html. If you are located outside of the United States a customs section in the manifest is a requirement. In the same folder open the AWSCredentials.properties file in notepad, and copy and paste in both your AWS Access Key ID and your AWS Secret Access Key. The file should look like this: # Fill in your AWS Access Key ID and Secret Access Key # http://aws.amazon.com/security-credentials accessKeyId:AKIAIIJXIP5XC6NW3KTQ secretKey:9UpktBlqDroY5C4Q7OnlF1pNXtK332TslYFsWy9R Now that you have created the required files, run the following command in the same directory. C:AWSimportexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CreateJob Import MyManifest.txt . (Move the mouse over the image to enlarge it.) Your job will be created along with a .SIGNATURE file in the same directory. Copying the data to your disk device Now you are ready to copy your data to your external disk device. However, before you start, it is mandatory to copy the .SIGNATURE file created in the previous step into the root directory of your disk device. Sending your disk device Once your data and the .SIGNATURE file have been copied to your disk device, print out the packing slip and fill out the details. The JOBID can be obtained in the output from your earlier create job request, in our example the JOBID is XHNHC. The DEVICE IDENTIFIER is the device serial number, which was entered into the manifest file, in our example it was 12345678. The packing slip must be enclosed in the package used to send your disk device.   Each package can have only one storage device and one packing slip, multiple storage devices must be sent separately. Address the package with the address output in the create job request: AWS Import/Export JOBID TTVRP 2646 Rainier Ave South Suite 1060 Seattle, WA 98144 Please note that this address may change depending on what region you are sending your data to. The correct address will always be returned from the Create Job command in the AWS Import/Export Tool. Managing your Import/Export jobs Once your job has been submitted, the only way to get the current status of your job or to modify your job is to run the AWS Import/Export command-line tool. Here is an example of how to list your jobs and how to cancel a job. 
To get a list of your current jobs, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar ListJobs

To cancel a job, you can run the following command:

C:\AWS\importexport-webservice-tool>java -jar lib/AWSImportExportWebServiceTool-1.0.jar CancelJob XHNHC

Getting Started with AWS and Amazon EC2

Packt
20 Jul 2011
4 min read
  Amazon Web Services: Migrate your .NET Enterprise Application to the Amazon Cloud Evaluate your Cloud requirements and successfully migrate your Enterprise .NET application to the Amazon Web Services Platform with this book and eBook         Read more about this book       (For more resources on this subject, see here.) Creating your first AWS account Well, here you are, ready to log in; create your first AWS account and get started! AWS lives at http://aws.amazon.com, so browse to this location and you will be greeted with the Amazon Web Services home page. From November 1st, 2010, Amazon has provided a free usage tier, which is currently displayed prominently on the front page. So, to get started click on the Sign Up Now button. You will be prompted with the Web Services Sign In screen. Enter the e-mail address that you would like to be associated with your AWS account and select I am a new user. When you have entered your e-mail address, click on the Sign in using our secure server button. Multi-factor authentication One of the things worth noting about this sign in screen is the Learn more comment at the bottom of the page, which mentions multi-factor authentication. Multi-factor authentication can be useful where organizations are required to use a more secure form of remote access. If you would like to secure your AWS account using multi-factor authentication this is now an option with AWS. To enable this, you will need to continue and create your AWS account. After your account has been created, go to the following address http://aws.amazon.com/ mfa/#get_device and follow the instructions for purchasing a device: Once you have the device in hand, you'll need to log in again to enable it: You will then be prompted with the extra dialog when signing in: Registration and privacy details Once you have clicked on the Sign in using our secure server button, you will be presented with the registration screen. Enter your full name and password that you would like to use: Note the link to the Privacy Notice at the bottom of the screen. You should be aware that the privacy notice is the same privacy notice used for the Amazon.com bookstore and website, which essentially means that any information you provide to Amazon through AWS may be correlated to purchases made on the Amazon bookstore and website. (Move the mouse over the image to enlarge it.) Fill out your contact details, agree to the AWS Customer Agreement, and complete the Security Check at the bottom of the form: If you are successful, you will be presented with the following result: AWS customer agreement Please note that the AWS Customer agreement is worth reading, with the full version located at http://aws.amazon.com/agreement. The agreement covers a lot of ground, but a couple of sections that are worth noting are: Section 10.2 – Your Applications, Data, and Content This section specifically states that you are the intellectual property and proprietary rights owner of all data and applications running under this account. However, the same section specifically gives the right to Amazon to hand over your data to a regulatory body, or to provide your data at the request of a court order or subpoena. Section 14.2 – Governing Law This section states that by agreeing to this agreement, you are bound by the laws of the State of Washington, USA, which—read in conjunction with section 10.2— suggests that any actions that fall out of section 10.2 will be initiated from within the State of Washington. 
Section 11.2 – Applications and Content This section may concern some users, as it states that you (as the AWS user) are solely responsible for the content and security of any data and applications running under your account. I recommend that you consult your company's legal department before creating an account that will be used for your enterprise.

Funambol E-mail: Part 2

Packt
06 Jan 2010
3 min read
Mobile e-mail at work One of the most widely used phones for mobile e-mail are phones running Windows Mobile; therefore, this is a platform Maria will have to support. Funambol fully supports this platform, extending the Windows Mobile native e-mail client to support SyncML and Funambol mobile e-mail. As Windows Mobile does not natively support SyncML, Maria needs to download the Funambol Windows Mobile Sync Client from the following URLs: http://www.funambol.com/opensource/download.php?file_id=funambol-smartphone-sync-client-7.2.2.exe&_=d (for Windows Mobile smartphone) http://www.funambol.com/opensource/download.php?file_id=funambol-pocketpc-sync-client-7.2.2.exe&_=d (for Windows Mobile Pocket PC) Like any other Windows Mobile applications, these are executable files that need to be run on a desktop PC and the installation will be performed by Microsoft ActiveSync. Once installed on the mobile phone, Maria can run Funambol by clicking the Funambol icon. The first time the application is launched, it asks for the Funambol credentials, as shown in the following image: Maria fills in her Funambol Server location and credentials (not her e-mail account credentials) and presses Save. After a short while, the device will start downloading the messages that she can access from the Funambol account created by the Funambol installation program in Pocket Outlook. The inbox will look similar to the following image: To see mobile e-mail at work, Maria just needs to send an e-mail to the e-mail account she set up earlier. In less than a minute, her mobile device will be notified that there are new messages and the synchronization will automatically start (unless the client is configured differently). Mobile e-mail client configuration There are a number of settings that Maria can set on her mobile phone to change how mobile e-mail works. These settings are accessible from the Funambol application by clicking on Menu | Settings. There are two groups of settings that are important for mobile e-mail: E-mail options... and Sync Method. From the Email options panel, Maria can choose which e-mails to download (all e-mails, today's e-mails, or e-mails received from the last X days), the size of the e-mail to download first (then the remaining part can be downloaded on demand), and if she also wants to download attachments. In the advanced options, she can also choose to use a different "From" display name and e-mail address. From the push method panel, Maria can choose how to download e-mail automatically using the push service on a regular basis, with either a scheduled sync or only manually upon request (from the Funambol Windows Mobile Sync Client interface or the PocketOutlook send and receive command). Funambol supports many mobile phones for mobile e-mail. The previous description is only for Windows Mobile phones. The manner in which Funambol supports other devices depends on the phone. In some cases, Funambol uses the phone's native e-mail client, such as with Windows Mobile. In other cases, Funambol provides its own mobile e-mail client that is downloaded onto the device.

Funambol E-mail: Part 1

Packt
06 Jan 2010
6 min read
In this article, Maria will set up Funambol to connect to the company e-mail server, in order to enable her users to receive e-mail on their mobile phones. E-mail Connector The E-mail Connector allows Funambol to connect to any IMAP and POP e-mail server to enable mobile devices to receive corporate or personal e-mail. The part of the Funambol architecture involved with this functionality is illustrated in the following figure: The E-mail Connector is a container of many things, the most important ones being: The e-mail server extension (represented in the figure by the E-mail Connector block): This is the extension of the Funambol Data Synchronization Service that allows e-mail synchronization through the connection to the e-mail provider. The Inbox Listener Service: This is the service that detects new e-mail in the user inbox and notifies the user's devices. When the Funambol DS Service receives sync requests for e-mail, the request calls the E-mail Connector, which downloads new messages from the e-mail server and makes them available to the DS Service, which in turn delivers them to the device. When a user receives a new e-mail, the new message is detected by the Inbox Listener Service that notifies the user's device to start a new sync. When the E-mail Connector is set up and activated, e-mail can be synced with an e-mail provider if it supports one of the two popular e-mail protocols—POP3 or IMAP v4 for incoming e-mail and the SMTP protocol for outgoing e-mail delivery. Please note that the Funambol server does not store user e-mail locally. For privacy and security reasons, e-mail is stored in the e-mail store of the E-mail Provider. The server constructs a snapshot of each user's inbox in the local database to speed up the process of discovering new e-mails without connecting to the e-mail server. Basically, this e-mail cache contains the ID of the messages and their received date and time. The Funambol service responsible for populating and updating the user inbox cache is the Inbox Listener Service. This service checks each user inbox on a regular basis (that is, every 15 minutes) and updates the inbox cache, adding new messages and deleting the messages that are removed from the inbox (for example, when a desktop client downloaded them or the user moved the messages to a different folder). Another important aspect to consider with mobile e-mail is that many devices have limited capabilities and resources. Further, the latency of retrieving a large inbox can be unacceptable for mobile users, who need the device to be always functional when they are away from their computer. For this reason, Funambol limits the maximum number of e-mails that Maria can download on her mobile so that she is never inconvenienced by having too many e-mails in her mobile e-mail inbox. This value can be customized in the server settings (see section E-mail account setup). In the following sections, Maria will learn how to set up Funambol to work with the corporate e-mail provider and how she can provide Funambol mobile e-mail for her users. Setting up Funambol mobile e-mail The Funambol E-mail Connector is part of a default installation of Funambol so Maria does not need to install any additional packages to use it. The following sections describe what Maria needs to do to set up Funambol to connect to her corporate e-mail server. 
E-mail Provider

The only thing Maria needs to verify about the corporate E-mail Provider is that it allows POP/IMAP and SMTP access from the network where Funambol is installed. The firewall does not need to be open to the mobile devices themselves: devices keep using SyncML as the transport protocol, while the Funambol server connects to the e-mail server when required. Also, the same e-mail server does not need to provide both POP (or IMAP) and SMTP; Funambol can be configured to use two different servers for incoming and outgoing messages.

Funambol authentication with e-mail

One of Maria's security concerns is the distribution and provisioning of e-mail account information on the mobile phones. She does not like the fact that e-mail account information is sent over a channel she can only partially control. This is a common concern of IT administrators. Funambol addresses the issue by not storing e-mail credentials on the device: the device (or any other SyncML client) is provisioned with Funambol credentials only.

In the previous sections, Maria created new accounts so that users could use the PIM synchronization service, and in doing so she provided new usernames and passwords. This is still valid for e-mail users. What Maria needs to do now is configure the E-mail Connector and add the e-mail accounts of the users she wants to enable for mobile e-mail. These topics are covered in detail in the following sections.

E-mail account setup

To add a user's e-mail account to the synchronization service, Maria can use the Funambol Administration Tool, expanding the Modules | email | FunambolEmailConnector node and double-clicking the connector. This opens the connector administration panel, shown in the following screenshot:

There are two sections: Public Mail Servers and Accounts. Maria needs to add new accounts; let's start with her own. Clicking the Add button in the Accounts section opens a search window where she can look up the Funambol user to attach the e-mail account to. Typing maria in the Username field and clicking Search shows the result in the following screenshot:

Double-clicking the desired account displays a form for Maria's account details, as shown in the following screenshot:

Each field is explained as follows:

Login, Password, Confirm password, and E-mail address
As the labels describe, these are the e-mail account credentials and e-mail address. They are the credentials used to access the e-mail service, not the ones used to access the Funambol synchronization service.

Enable Polling
This setting enables or disables the Inbox Listener Service's checks for updates on this account's inbox. When disabled, the account inbox is not scanned for new, updated, or deleted e-mail, which disables e-mail synchronization completely.

Enable Push
This setting enables or disables the push functionality. When disabled, the user is not notified of new e-mails. If the Enable Polling checkbox is active, the Inbox Listener Service still keeps this account's e-mail cache up to date, so Maria can still download e-mail by manually starting the synchronization from the client.

Refresh time (min)
This setting specifies how frequently the Inbox Listener Service checks for updates on this account's inbox, expressed in minutes. The shorter this period, the more often new e-mail is detected and therefore the closer the user experience is to real time.
However, the smaller this number, the heavier the load on the Inbox Listener Service and the e-mail provider. When you have only a few users, this is not too relevant, but it is something to consider when planning a major deployment.
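Before entering an account in the Administration Tool, Maria may want to confirm that the e-mail provider really accepts POP/IMAP and SMTP connections from the host where Funambol runs, using the same Login and Password she is about to configure. The following Python sketch is one possible way to run that check from the Funambol server; the host names, ports, and credentials are placeholder assumptions and may differ in her environment.

```python
import imaplib
import poplib
import smtplib

# Placeholder servers and credentials -- substitute the corporate values.
IMAP_HOST, POP_HOST, SMTP_HOST = "imap.example.com", "pop.example.com", "smtp.example.com"
LOGIN, PASSWORD = "maria", "secret"

def check_imap():
    imap = imaplib.IMAP4_SSL(IMAP_HOST)          # port 993 by default
    imap.login(LOGIN, PASSWORD)
    imap.logout()
    print("IMAP login OK")

def check_pop():
    pop = poplib.POP3_SSL(POP_HOST)              # port 995 by default
    pop.user(LOGIN)
    pop.pass_(PASSWORD)
    pop.quit()
    print("POP3 login OK")

def check_smtp():
    smtp = smtplib.SMTP(SMTP_HOST, 587)          # submission port; 25 is also common
    smtp.starttls()
    smtp.login(LOGIN, PASSWORD)
    smtp.quit()
    print("SMTP login OK")

if __name__ == "__main__":
    # Run each check and report failures without stopping.
    for check in (check_imap, check_pop, check_smtp):
        try:
            check()
        except Exception as exc:
            print(f"{check.__name__} failed: {exc}")
```

Only one of the IMAP or POP checks needs to succeed, together with SMTP, since Funambol requires just one incoming protocol plus outgoing delivery.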