
Microsoft Windows Azure Development Cookbook

Chapter 1. Controlling Access in the Windows Azure Platform

In this chapter, we will cover:

  • Managing Windows Azure Storage Service access keys

  • Connecting to the Windows Azure Storage Service

  • Using SetConfigurationSettingPublisher()

  • Connecting to the storage emulator

  • Managing access control for containers and blobs

  • Creating a Shared Access Signature for a container or blob

  • Using a container-level access policy

  • Authenticating against the Windows Azure Service Management REST API

  • Authenticating with the Windows Azure AppFabric Caching Service

Introduction


The various components of the Windows Azure Platform are exposed using Internet protocols. Consequently, they need to support authentication so that access to them can be controlled.

The Windows Azure Storage Service manages the storage of blobs, queues, and tables. It is essential that this data be kept secure, so that there is no unauthorized access to it. Each storage account has an account name and an access key which are used to authenticate access to the storage service. The management of these access keys is important. The storage service provides two access keys for each storage account, so that the access key not being used can be regenerated. We see how to do this in the Managing Windows Azure Storage Service access keys recipe.

The storage service supports hash-based message authentication codes (HMAC), in which a storage operation request is hashed with the access key. On receiving the request, the storage service validates it and either accepts or denies it. The Windows Azure Storage Client library provides several classes that support various ways of creating an HMAC, and that hide the complexity of creating and using one. We see how to use them in the Connecting to the Windows Azure Storage Service recipe. The SetConfigurationSettingPublisher() method has caused some programmer grief, so we look at it in the Using SetConfigurationSettingPublisher() recipe.

The Windows Azure SDK provides a compute emulator and a storage emulator. The latter uses a hard-coded account name and access key. We see the support provided for this in the Connecting to the storage emulator recipe.

Blobs are ideal for storing static content for web roles, so the storage service provides several authentication methods for access to containers and blobs. Indeed, a container can be configured to allow anonymous access to the blobs in it. Blobs in such a container can be downloaded without any authentication. We see how to configure this in the Managing access control for containers and blobs recipe.

There is a need for an intermediate level of authentication for containers and blobs, a level that lies between full authentication and anonymous access. The storage service supports this through the concept of a shared access signature: a pre-calculated authentication token that can be shared in a controlled manner, allowing the bearer to access a specific container or blob for up to one hour. We see how to do this in the Creating a Shared Access Signature for a container or blob recipe.

A shared access policy combines access rights with the time for which they are valid. A container-level access policy is a shared access policy that is associated by name with a container. A best practice is to derive a shared access signature from a container-level access policy. Doing this provides greater control over the shared access signature, as it becomes possible to revoke it. We see how to do this in the Using a container-level access policy recipe.

There is more to the Windows Azure Platform than storage. The Windows Azure Service Management REST API is a RESTful API that provides programmatic access to most of the functionality available on the Windows Azure Portal. This API uses X.509 certificates for authentication. Prior to use, the certificate must be uploaded, as a management certificate, to the Windows Azure Portal. The certificate must then be added as a certificate to each request made against the Service Management API. We see how to do this in the Authenticating against the Windows Azure Service Management REST API recipe.

The Windows Azure AppFabric services use a different authentication scheme, based on a service namespace and authentication token. In practice, these are similar to the account name and access key used to authenticate against the storage service, although the implementation is different. The Windows Azure AppFabric services use the Windows Azure Access Control Service (ACS) to perform authentication. However, this is abstracted away in the various SDKs provided for the services. We see how to authenticate to one of these services in the Authenticating with the Windows Azure AppFabric Caching Service recipe.

Managing Windows Azure Storage Service access keys


The data stored by the Windows Azure Storage Service must be secured against unauthorized access. To ensure that security, all storage operations against the table service and the queue service must be authenticated. Similarly, other than inquiry requests against public containers and blobs, all operations against the blob service must also be authenticated. The blob service supports public containers so that, for example, blobs containing images can be downloaded directly into a web page.

Each storage account has a primary access key and a secondary access key that can be used to authenticate operations against the storage service. When creating a request against the storage service, one of the keys is used along with various request headers to generate a 256-bit, hash-based message authentication code (HMAC). This HMAC is added as an Authorization request header to the request. On receiving the request, the storage service recalculates the HMAC and rejects the request if the received and calculated HMAC values differ. The Windows Azure Storage Client library provides methods that manage the creation of the HMAC and attaching it to the storage operation request.
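In outline, the signing looks like the following sketch. This is illustrative only: the string the storage service actually signs is canonicalized from the request method, headers, and resource according to precise rules, and the Storage Client library performs all of this automatically.

// Requires System, System.Security.Cryptography, and System.Text.
// The string to sign is greatly simplified here; {ACCOUNT_KEY} is a
// placeholder for the Base64-encoded access key.
String stringToSign = "GET\n/myaccount/mycontainer/myblob";
Byte[] keyBytes = Convert.FromBase64String("{ACCOUNT_KEY}");
using (HMACSHA256 hmacSha256 = new HMACSHA256(keyBytes))
{
   String signature = Convert.ToBase64String(
      hmacSha256.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
   // Sent with the request as: Authorization: SharedKey myaccount:<signature>
}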

There is no distinction between the primary and secondary access keys. The purpose of the secondary access key is to enable continued use of the storage service while the other access key is being regenerated. While the primary access key is used for authentication against the storage service, the secondary access key can be regenerated without affecting the service—and vice versa. This can be extremely useful in situations where storage access credentials must be rotated regularly.

As possession of the storage account name and access key is sufficient to provide full control over the data managed by the storage account, it is essential that the access keys be kept secure. In particular, access keys should never be downloaded to a client, such as a smartphone, as that exposes them to potential abuse.

In this recipe, we will learn how to use the primary and secondary access keys.

Getting ready

This recipe requires a deployed Windows Azure hosted service that uses a Windows Azure storage account.

How to do it...

We are going to regenerate the secondary access key for a storage account and configure a hosted service to use it. We do this as follows:

  1. Go to the Windows Azure Portal.

  2. In the Storage Accounts section, regenerate the secondary access key for the desired storage account.

  3. In the Hosted Services section, configure the desired hosted service and replace the value of AccountKey in the DataConnectionString setting with the newly generated secondary access key.

How it works...

In step 2, we can choose which access key to regenerate. It is important that we never regenerate the access key currently in use, since doing so immediately invalidates it and renders the storage account inaccessible to any service still authenticating with that key. Consequently, we regenerate only the secondary access key if the primary access key is currently in use—and vice versa.

In step 3, we update the service configuration to use the access key we just generated. This change can be trapped and handled by the hosted service, and it should not require the hosted service to be recycled. We see how to handle configuration changes in the Handling changes to the configuration and topology of a hosted service recipe in Chapter 5.
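A minimal sketch of such a handler, wired up in OnStart(), might look like the following; it assumes the role is willing to apply all configuration setting changes, such as a rotated access key, in place:

// Requires System.Linq and Microsoft.WindowsAzure.ServiceRuntime.
RoleEnvironment.Changing += (sender, e) =>
{
   if (e.Changes.All(change =>
      change is RoleEnvironmentConfigurationSettingChange))
   {
      // Leave e.Cancel false to apply the change without recycling
      // the instance; set it to true to force a restart instead.
      e.Cancel = false;
   }
};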

Connecting to the Windows Azure Storage Service


In a Windows Azure hosted service, the storage account name and access key are stored in the service configuration file. By convention, the account name and access key for data access are provided in a setting named DataConnectionString. The account name and access key needed for Windows Azure diagnostics must be provided in a setting named Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString.

Note

The DataConnectionString setting must be declared in the ConfigurationSettings section of the service definition file. However, unlike other settings, the connection string setting for Windows Azure diagnostics is implicitly defined when the diagnostics module is specified in the Imports section of the service definition file. Consequently, it must not be specified in the ConfigurationSettings section.

A best practice is to use different storage accounts for application data and diagnostic data. This reduces the possibility of application data access being throttled by competition for concurrent writes from the diagnostics monitor. It also provides a security boundary between application data and diagnostics data, as diagnostics data may be accessed by individuals who should have no access to application data.
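For example, with the two setting names described above, the two accounts can be materialized separately (a sketch):

// Requires Microsoft.WindowsAzure and Microsoft.WindowsAzure.ServiceRuntime.
CloudStorageAccount dataAccount = CloudStorageAccount.Parse(
   RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudStorageAccount diagnosticsAccount = CloudStorageAccount.Parse(
   RoleEnvironment.GetConfigurationSettingValue(
      "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));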

In the Windows Azure Storage Client library, access to the storage service is through one of the client classes. There is one client class for each of the Blob service, Queue service, and Table service—CloudBlobClient, CloudQueueClient, and CloudTableClient, respectively. Instances of these classes store the pertinent endpoint, as well as the account name and access key.

The CloudBlobClient class provides methods to access containers, list their contents, and get references to containers and blobs. The CloudQueueClient class provides methods to list queues and to get a reference to the CloudQueue instance used as an entry point to the Queue service functionality. The CloudTableClient class provides methods to manage tables and to get the TableServiceContext instance used to access the WCF Data Services functionality used in accessing the Table service. Note that CloudBlobClient, CloudQueueClient, and CloudTableClient instances are not thread safe, so distinct instances should be used when accessing these services concurrently, as sketched in the following example.
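In this sketch, concurrent work creates a client instance per task from a shared CloudStorageAccount rather than sharing one client (connection string name as in the recipes below):

// Requires System.Linq, System.Threading.Tasks, System.Configuration,
// Microsoft.WindowsAzure, and Microsoft.WindowsAzure.StorageClient.
CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
   ConfigurationManager.AppSettings["DataConnectionString"]);

// Each task creates and uses its own client instance.
Parallel.Invoke(
   () => cloudStorageAccount.CreateCloudBlobClient().ListContainers().Count(),
   () => cloudStorageAccount.CreateCloudQueueClient().ListQueues().Count());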

The client classes must be initialized with the account name and access key, as well as the appropriate storage service endpoint. The Microsoft.WindowsAzure namespace has several helper classes. The StorageCredentialsAccountAndKey class initializes a StorageCredentials instance from an account name and access key, while the StorageCredentialsSharedAccessSignature class initializes a StorageCredentials instance from a shared access signature. The CloudStorageAccount class provides methods to initialize an encapsulated StorageCredentials instance directly from the service configuration file.

In this recipe, we will learn how to use CloudBlobClient, CloudQueueClient, and CloudTableClient instances to connect to the storage service.

Getting ready

This recipe assumes the application configuration file contains the following:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   <add key="AccountName" value="{ACCOUNT_NAME}"/>
   <add key="AccountKey" value="{ACCOUNT_KEY}"/>
</appSettings>

Note

Downloading the example code for this book

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the storage account name and access key.

How to do it...

We are going to connect to the Table service, the Blob service, and the Queue service to perform a simple operation on each. We do this as follows:

  1. Add a new class named ConnectingToStorageExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following method, connecting to the blob service, to the class:

    private static void UseCloudStorageAccountExtensions()
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference("{CONTAINER_NAME}");

       cloudBlobContainer.Create();
    }
  4. Add the following method, connecting to the Table service, to the class:

    private static void UseCredentials()
    {
       String accountName = ConfigurationManager.AppSettings["AccountName"];
       String accountKey = ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       CloudStorageAccount cloudStorageAccount =
          new CloudStorageAccount(storageCredentials, true);
       CloudTableClient tableClient = new CloudTableClient(
          cloudStorageAccount.TableEndpoint.AbsoluteUri, storageCredentials);

       Boolean tableExists = tableClient.DoesTableExist("{TABLE_NAME}");
    }
  5. Add the following method, connecting to the Queue service, to the class:

    private static void UseCredentialsWithUri()
    {
       String accountName = ConfigurationManager.AppSettings["AccountName"];
       String accountKey = ConfigurationManager.AppSettings["AccountKey"];
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       String baseUri = String.Format(
          "https://{0}.queue.core.windows.net/", accountName);
       CloudQueueClient cloudQueueClient =
          new CloudQueueClient(baseUri, storageCredentials);
       CloudQueue cloudQueue =
          cloudQueueClient.GetQueueReference("{QUEUE_NAME}");

       Boolean queueExists = cloudQueue.Exists();
    }
  6. Add the following method, using the other methods, to the class:

    public static void UseConnectionToStorageExample()
    {
       UseCloudStorageAccountExtensions();
       UseCredentials();
       UseCredentialsWithUri();
    }

How it works...

In steps 1 and 2, we set up the class.

In step 3, we implement the standard way to access the storage service using the Storage Client library. We use the static CloudStorageAccount.Parse() method to create a CloudStorageAccount instance from the value of the connection string stored in the configuration file. We then use this instance with the CreateCloudBlobClient() extension method for the CloudStorageAccount class to get the CloudBlobClient instance we use to connect to the Blob service. We can also use this technique with the Table service and the Queue service through the relevant extension methods: CreateCloudTableClient() and CreateCloudQueueClient() respectively. We complete this example by using the CloudBlobClient instance to get a CloudBlobContainer reference to a container and then create the container. We need to replace {CONTAINER_NAME} with the name for a container.

In step 4, we create a StorageCredentialsAccountAndKey instance directly from the account name and access key. We then use this to construct a CloudStorageAccount instance, specifying that any connection should use HTTPS. Using this technique, we need to provide the Table service endpoint explicitly when creating the CloudTableClient instance. We then use this to verify the existence of a table. We need to replace {TABLE_NAME} with the name for a table. We can use the same technique with the Blob service and Queue service by using the relevant CloudBlobClient or CloudQueueClient constructor.

In step 5, we use a similar technique except that we avoid the intermediate step of using a CloudStorageAccount instance and explicitly provide the endpoint for the Queue service. We use the CloudQueueClient instance created in this step to verify the existence of a queue. We need to replace {QUEUE_NAME} with the name of a queue. Note that we have hard-coded the endpoint for the Queue service.

In step 6, we add a method that invokes the methods added in the earlier steps.

Using SetConfigurationSettingPublisher()


The CloudStorageAccount class in the Windows Azure Storage Client library encapsulates a StorageCredentials instance that can be used to authenticate against the Windows Azure Storage Service. It also exposes a FromConfigurationSetting() factory method that creates a CloudStorageAccount instance from a configuration setting.

This method has caused much confusion since, without additional configuration, it throws an InvalidOperationException with a message of "SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting() can be used." Consequently, before using FromConfigurationSetting(), it is necessary to invoke SetConfigurationSettingPublisher() once. The intent of this method is that it can be used to specify alternate ways of retrieving the data connection string that FromConfigurationSetting() uses to initialize the CloudStorageAccount instance. This setting is process-wide, so is typically done in the OnStart() method of the RoleEntryPoint class for the role.

The following is a simple implementation for SetConfigurationSettingPublisher():

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
   configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

There are several levels of indirection here, but the central feature is the use of a method that takes a String parameter specifying the name of the configuration setting and returns the value of that setting. In the example here, the method used is RoleEnvironment.GetConfigurationSettingValue(). The configuration-setting publisher can be set to retrieve configuration settings from any location including app.config or web.config.
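For example, the following sketch of an alternate publisher (assuming code that must also run outside the role environment, such as under test) falls back to the appSettings section of app.config or web.config:

// Requires System.Configuration, Microsoft.WindowsAzure,
// and Microsoft.WindowsAzure.ServiceRuntime.
CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
   String value = RoleEnvironment.IsAvailable
      ? RoleEnvironment.GetConfigurationSettingValue(configName)
      : ConfigurationManager.AppSettings[configName];
   configSetter(value);
});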

The use of SetConfigurationSettingPublisher() is no longer encouraged. Instead, it is better to use CloudStorageAccount.Parse(), which takes a data connection string in canonical form and creates a CloudStorageAccount instance from it. We see how to do this in the Connecting to the Windows Azure Storage Service recipe.

In this recipe, we will learn how to set and use a configuration-setting publisher to retrieve the data connection string from a configuration file.

How to do it...

We are going to add an implementation for SetConfigurationSettingPublisher() to a worker role. We do this as follows:

  1. Create a new cloud project.

  2. Add a worker role to the project.

  3. Add the following to the WorkerRole section of the ServiceDefinition.csdef file:

    <ConfigurationSettings>
       <Setting name="DataConnectionString" />
    </ConfigurationSettings>
  4. Add the following to the ConfigurationSettings section of the ServiceConfiguration.cscfg file:

    <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
  5. Replace WorkerRole.Run() with the following:

    public override void Run()
    {
       UseFromConfigurationSetting("{CONTAINER_NAME}");
    
       while (true)
       {
          Thread.Sleep(10000);
          Trace.WriteLine("Working", "Information");
       }
    }
  6. Replace WorkerRole.OnStart() with the following:

    public override bool OnStart()
    {
       ServicePointManager.DefaultConnectionLimit = 12;
    
       CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
       {
          configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
       });
    
       return base.OnStart();
    }
  7. Add the following method, implicitly using the configuration setting publisher, to the WorkerRole class:

    private void UseFromConfigurationSetting(String containerName)
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);

       cloudBlobContainer.Create();
    }

How it works...

In steps 1 and 2, we set up the project.

In steps 3 and 4, we define and provide a value for the DataConnectionString setting in the service definition and service configuration files. We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

In step 5, we modify the Run() method to invoke a method that accesses the storage service. We must provide an appropriate value for {CONTAINER_NAME}.

In step 6, we modify the OnStart() method to set a configuration setting publisher for the role instance. We set it to retrieve configuration settings from the service configuration file.

In step 7, we invoke CloudStorageAccount.FromConfigurationSetting(), which uses the configuration setting publisher we added in step 6. We then use the CloudStorageAccount instance to create CloudBlobClient and CloudBlobContainer instances that we use to create a new container in blob storage.

Connecting to the storage emulator


The Windows Azure SDK provides a compute emulator and a storage emulator that work in a development environment to provide a local emulation of Windows Azure hosted services and storage services. There are some differences in functionality between storage services and the storage emulator. Prior to Windows Azure SDK v1.3, the storage emulator was named development storage.

Note

By default, the storage emulator uses SQL Server Express, but it can be configured to use SQL Server.

An immediate difference is that the storage emulator supports only one account name and access key. The account name is hard-coded to be devstoreaccount1. The access key is hard-coded to be:

Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Another difference is that the storage endpoints are constructed differently for the storage emulator. The storage service uses URL subdomains to distinguish the endpoints for the various types of storage. For example, the endpoint for the Blob service for a storage account named myaccount is:

myaccount.blob.core.windows.net

The endpoints for the other storage types are constructed similarly by replacing the word blob with either table or queue.

This differentiation by subdomain name is not used in the storage emulator, which is hosted locally at 127.0.0.1. Instead, the storage emulator distinguishes the endpoints for the various types of storage through the use of different ports. Furthermore, the account name, rather than being part of the subdomain, is provided as part of the URL. Consequently, the endpoints used by the storage emulator are as follows:

  • Blob service: 127.0.0.1:10000/devstoreaccount1

  • Queue service: 127.0.0.1:10001/devstoreaccount1

  • Table service: 127.0.0.1:10002/devstoreaccount1

The Windows Azure Storage Client library hides much of this complexity but an understanding of it remains important in case something goes wrong. The account name and access key are hard-coded into the Storage Client library, which also provides simple access to an appropriately constructed CloudStorageAccount object.

The Storage Client library also supports a special value for the DataConnectionString in the service configuration file. Instead of specifying the account name and access key, it is sufficient to specify the following:

UseDevelopmentStorage=true

For example, this is specified as follows in the service configuration file:

<Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />

This value can also be used for the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString data connection string required for Windows Azure Diagnostics.

The CloudStorageAccount.Parse() and CloudStorageAccount.FromConnectionString() methods handle this value in a special way to create a CloudStorageAccount object that can be used to authenticate against the storage emulator.
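The following sketch shows this equivalence:

// Requires Microsoft.WindowsAzure. Both accounts below target the
// storage emulator endpoints listed earlier.
CloudStorageAccount parsed =
   CloudStorageAccount.Parse("UseDevelopmentStorage=true");
CloudStorageAccount builtIn = CloudStorageAccount.DevelopmentStorageAccount;
// parsed.BlobEndpoint and builtIn.BlobEndpoint are both
// http://127.0.0.1:10000/devstoreaccount1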

In this recipe, we will learn how to connect to the storage emulator.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="UseDevelopmentStorage=true"/>
</appSettings>

How to do it...

We are going to connect to the storage emulator in various ways and perform some operations on blobs. We do this as follows:

  1. Add a class named StorageEmulatorExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following private members to the class:

    private String containerName;
    private String blobName;
  4. Add the following constructor to the class:

    StorageEmulatorExample(String containerName, String blobName)
    {
       this.containerName = containerName;
       this.blobName = blobName;
    }
  5. Add the following method, using the configuration file, to the class:

    private void UseConfigurationFile()
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       cloudBlobContainer.Create();
    }
  6. Add the following method, using an explicit storage account, to the class:

    private void CreateStorageCredential()
    {
       String baseAddress = "http://127.0.0.1:10000/devstoreaccount1";
       String accountName = "devstoreaccount1";
       String accountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";
       StorageCredentialsAccountAndKey storageCredentials =
          new StorageCredentialsAccountAndKey(accountName, accountKey);

       CloudBlobClient cloudBlobClient =
          new CloudBlobClient(baseAddress, storageCredentials);
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       cloudBlockBlob.UploadText("If we shadows have offended.");
       cloudBlockBlob.Metadata["{METADATA_KEY}"] = "{METADATA_VALUE}";
       cloudBlockBlob.SetMetadata();
    }
    
  7. Add the following method, using the CloudStorageAccount.DevelopmentStorageAccount property, to the class:

    private void UseDevelopmentStorageAccount()
    {
       CloudStorageAccount cloudStorageAccount =
          CloudStorageAccount.DevelopmentStorageAccount;

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          cloudBlobClient.GetContainerReference(containerName);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       cloudBlockBlob.FetchAttributes();
       BlobAttributes blobAttributes = cloudBlockBlob.Attributes;
       String metadata = blobAttributes.Metadata["{METADATA_KEY}"];
    }
  8. Add the following method, using the methods added earlier, to the class:

    public static void UseStorageEmulatorExample()
    {
       String containerName = "{CONTAINER_NAME}";
       String blobName = "{BLOB_NAME}";

       StorageEmulatorExample example =
          new StorageEmulatorExample(containerName, blobName);

       example.UseConfigurationFile();
       example.CreateStorageCredential();
       example.UseDevelopmentStorageAccount();
    }
    

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members for the container name and blob name, which we initialize in the constructor we add in step 4.

In step 5, we retrieve a data connection string from a configuration file and pass it into CloudStorageAccount.Parse() to create a CloudStorageAccount instance. We use this to get a reference to a CloudBlobContainer instance for the specified container, which we then use to create the container.

In step 6, we create a StorageCredentialsAccountAndKey instance from the explicitly provided storage emulator values for account name and access key. We use the resulting StorageCredentials instance to initialize a CloudBlobClient object, explicitly providing the storage emulator endpoint for blob storage. We then create a reference to a blob, upload some text to the blob, define some metadata for the blob, and finally update the blob with it. We must replace {METADATA_KEY} and {METADATA_VALUE} with actual values.

In step 7, we initialize a CloudStorageAccount instance from the hard-coded DevelopmentStorageAccount property exposed by the CloudStorageAccount class. We then get a CloudBlockBlob object, which we use to fetch the attributes stored on the blob and retrieve the metadata we added in step 6. We should replace {METADATA_KEY} with the same value that we used in step 6.

In step 8, we add a helper method that invokes each of the methods we added earlier. We must replace {CONTAINER_NAME} and {BLOB_NAME} with appropriate names for the container and blob.

There's more...

Fiddler is a program that captures HTTP traffic, which makes it very useful for diagnosing problems when using the Windows Azure Storage Service. Its use is completely transparent when cloud storage is being used. However, the data connection string must be modified if you want Fiddler to be able to monitor the traffic. The data connection string must be changed to the following:

UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://ipv4.fiddler

Fiddler can be downloaded from the following URL:

http://www.fiddler2.com/fiddler2/

Managing access control for containers and blobs


The Windows Azure Storage Service authenticates all requests against the Table service and Queue service. However, the storage service allows the possibility of unauthenticated access against the Blob service. The reason is that blobs provide an ideal location for storing large static content for a website. For example, the images in a photo-sharing site could be stored as blobs and downloaded directly from the Blob service without being transferred through a web role.

Public access control for the Blob service is managed at the container level. The Blob service supports the following three types of access control:

  • No public read access in which all access must be authenticated

  • Public read access, which allows blobs in a container to be read without authentication

  • Full public read access in which authentication is not required to read the container data and the blobs contained in it

No public read access is the same access control as for the Queue service and Table service. The other two access control types both allow anonymous access to a blob, so that, for example, the blob can be downloaded into a browser by providing its full URL.

In the Windows Azure Storage Client library, the BlobContainerPublicAccessType enumeration specifies the three types of public access control for a container. The BlobContainerPermissions class exposes two properties: PublicAccess specifying a member of the BlobContainerPublicAccessType enumeration and SharedAccessPolicies specifying a set of shared access policies. The SetPermissions() method of the CloudBlobContainer class is used to associate a BlobContainerPermissions instance with the container. The GetPermissions() method retrieves the access permissions for a container.

In this recipe, we will learn how to specify the level of public access control for containers and blobs managed by the Blob service.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to specify various levels of public access control for a container. We do this as follows:

  1. Add a new class named BlobContainerPublicAccessExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
    using System.Net;
  3. Add the following method, setting public access control for a container, to the class:

    public static void CreateContainerAndSetPermission(
       String containerName, String blobName,
       BlobContainerPublicAccessType publicAccessType)
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();

       CloudBlobContainer blobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       blobContainer.Create();

       BlobContainerPermissions blobContainerPermissions =
          new BlobContainerPermissions()
          {
             PublicAccess = publicAccessType
          };
       blobContainer.SetPermissions(blobContainerPermissions);

       CloudBlockBlob blockBlob =
          blobContainer.GetBlockBlobReference(blobName);
       blockBlob.UploadText("Has been changed glorious summer");
    }
  4. Add the following method, retrieving a blob, to the class:

    public static void GetBlob(String containerName, String blobName)
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       Uri blobUri = new Uri(cloudStorageAccount.BlobEndpoint +
          containerName + "/" + blobName);

       HttpWebRequest httpWebRequest =
          (HttpWebRequest)HttpWebRequest.Create(blobUri);
       httpWebRequest.Method = "GET";
       using (HttpWebResponse response =
          (HttpWebResponse)httpWebRequest.GetResponse())
       {
          String status = response.StatusDescription;
       }
    }
  5. Add the following method, using the methods we just added, to the class:

    public static void UseBlobContainerPublicAccessExample()
    {
       CreateContainerAndSetPermission("container1", "blob1",
          BlobContainerPublicAccessType.Blob);
       CreateContainerAndSetPermission("container2", "blob2",
          BlobContainerPublicAccessType.Container);
       CreateContainerAndSetPermission("container3", "blob3",
          BlobContainerPublicAccessType.Off);

       GetBlob("container1", "blob1");
       GetBlob("container3", "blob3");
    }

How it works...

In steps 1 and 2, we set up the class.

In step 3, we add a method that creates a container and blob, and applies a public access policy to the container. We create a CloudBlobClient instance from the data connection string in the configuration file. We then create a new container using the name passed in to the method. Then, we create a BlobContainerPermissions instance with the BlobContainerPublicAccessType passed into the method and set the permissions on the container. Note that we must create the container before we set the permissions because SetPermissions() sets the permissions directly on the container. Finally, we create a blob in the container.

In step 4, we use an HttpWebRequest instance to retrieve the blob without providing any authentication. This request causes a 404 (Not Found) error when the request attempts to retrieve a blob that has not been configured for public access. Note that when constructing blobUri for the storage emulator, we must add a / into the path after cloudStorageAccount.BlobEndpoint because of a difference in the way the Storage Client library constructs the endpoint for the storage emulator and the storage service. For example, we need to use the following for the storage emulator:

Uri blobUri = new Uri(cloudStorageAccount.BlobEndpoint + "/" +containerName + "/" + blobName);

In step 5, we add a method that invokes the CreateContainerAndSetPermission() method once for each value of the BlobContainerPublicAccessType enumeration. We then invoke the GetBlob() method twice, to retrieve blobs from different containers. The second invocation leads to a 404 error, since container3 has not been configured for unauthenticated public access.

See also

  • The recipe named Creating a Shared Access Signature for a container or blob in this chapter

Creating a Shared Access Signature for a container or blob


The Windows Azure Blob Service supports fully authenticated requests, anonymous requests, and requests authenticated by a temporary access key referred to as a Shared Access Signature. The latter allows access to containers or blobs to be limited to only those in possession of the Shared Access Signature.

A Shared Access Signature is constructed from a combination of:

  • Resource (container or blob)

  • Access rights (read, write, delete, list)

  • Start time

  • Expiration time

These are combined into a string from which a 256-bit, hash-based message authentication code (HMAC) is generated. An access key for the storage account is used to seed the HMAC generation. This HMAC is referred to as a shared access signature. The process of generating a Shared Access Signature requires no interaction with the Blob service. A shared access signature is valid for up to one hour, which limits the allowable values for the start time and expiration time.

When using a Shared Access Signature to authenticate a request, it is submitted as one of the query string parameters. The other query parameters comprise the information from which the shared access signature was created. This allows the Blob service to create a Shared Access Signature, using the access key for the storage account, and compare it with the Shared Access Signature submitted with the request. A request is denied if it has an invalid Shared Access Signature.

An example of a storage request for a blob named theBlob in a container named chapter1 is:

GET /chapter1/theBlob

An example of the query string parameters is:

st=2011-03-22T05%3A49%3A09Z
&se=2011-03-22T06%3A39%3A09Z
&sr=b
&sp=r
&sig=GLqbiDwYweXW4y2NczDxmWDzrJCc89oFfgBMTieGPww%3D

The st parameter is the start time for the validity of the Shared Access Signature. The se parameter is the expiration time for the validity of the Shared Access Signature. The sr parameter specifies that the Shared Access Signature is for a blob. The sp parameter specifies that the Shared Access Signature authenticates for read access only. The sig parameter is the Shared Access Signature. A complete description of these parameters is available on MSDN at the following URL:

http://msdn.microsoft.com/en-us/library/ee395415.aspx

Once a Shared Access Signature has been created and transferred to a client, no further verification of the client is required. It is important, therefore, that the Shared Access Signature be created with the minimum period of validity and that its distribution be restricted as much as possible. It is not possible to revoke a Shared Access Signature created in this manner.

In this recipe, we will learn how to create and use a Shared Access Signature.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
   <add key="AccountName" value="{ACCOUNT_NAME}"/>
   <add key="AccountKey" value="{ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to create and use Shared Access Signatures for a blob. We do this as follows:

  1. Add a class named SharedAccessSignaturesExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
    using System.Net;
    using System.IO;
    using System.Security.Cryptography;
  3. Add the following private members to the class:

    private String blobEndpoint;
    private String accountName;
    private String accountKey;
  4. Add the following constructor to the class:

    SharedAccessSignaturesExample()
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       blobEndpoint = cloudStorageAccount.BlobEndpoint.AbsoluteUri;
       accountName = cloudStorageAccount.Credentials.AccountName;

       StorageCredentialsAccountAndKey accountAndKey =
          cloudStorageAccount.Credentials as StorageCredentialsAccountAndKey;
       accountKey = accountAndKey.Credentials.ExportBase64EncodedKey();
    }
  5. Add the following method, creating a container and blob, to the class:

    private void CreateContainerAndBlob(String containerName, String blobName)
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       cloudBlobContainer.Create();
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);
       cloudBlockBlob.UploadText("This weak and idle theme.");
    }
  6. Add the following method, getting a Shared Access Signature, to the class:

    private String GetSharedAccessSignature(String containerName, String blobName)
    {
       SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy()
       {
          Permissions = SharedAccessPermissions.Read,
          SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-10d),
          SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(40d)
       };

       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);

       String sharedAccessSignature =
          cloudBlockBlob.GetSharedAccessSignature(sharedAccessPolicy);

       return sharedAccessSignature;
    }
  7. Add the following method, creating a Shared Access Signature, to the class:

    private String CreateSharedAccessSignature(
       String containerName, String blobName, String permissions)
    {
       String iso8061Format = "{0:yyyy-MM-ddTHH:mm:ssZ}";
       DateTime startTime = DateTime.UtcNow;
       DateTime expiryTime = startTime.AddHours(1d);
       String start = String.Format(iso8061Format, startTime);
       String expiry = String.Format(iso8061Format, expiryTime);
       String stringToSign = String.Format("{0}\n{1}\n{2}\n/{3}/{4}\n",
          permissions, start, expiry, accountName, containerName);

       String rawSignature = String.Empty;
       Byte[] keyBytes = Convert.FromBase64String(accountKey);
       using (HMACSHA256 hmacSha256 = new HMACSHA256(keyBytes))
       {
          Byte[] utf8EncodedStringToSign =
             System.Text.Encoding.UTF8.GetBytes(stringToSign);
          Byte[] signatureBytes =
             hmacSha256.ComputeHash(utf8EncodedStringToSign);
          rawSignature = Convert.ToBase64String(signatureBytes);
       }

       String sharedAccessSignature = String.Format(
          "?st={0}&se={1}&sr=c&sp={2}&sig={3}",
          Uri.EscapeDataString(start), Uri.EscapeDataString(expiry),
          permissions, Uri.EscapeDataString(rawSignature));

       return sharedAccessSignature;
    }
  8. Add the following method, authenticating with a Shared Access Signature, to the class:

    private void AuthenticateWithSharedAccessSignature(
       String containerName, String blobName, String sharedAccessSignature)
    {
       StorageCredentialsSharedAccessSignature storageCredentials =
          new StorageCredentialsSharedAccessSignature(sharedAccessSignature);
       CloudBlobClient cloudBlobClient =
          new CloudBlobClient(blobEndpoint, storageCredentials);

       CloudBlobContainer cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       CloudBlockBlob cloudBlockBlob =
          cloudBlobContainer.GetBlockBlobReference(blobName);
       String blobText = cloudBlockBlob.DownloadText();
    }
  9. Add the following method, using a Shared Access Signature, to the class:

    private void UseSharedAccessSignature(
       String containerName, String blobName, String sharedAccessSignature)
    {
       String requestMethod = "GET";
       String urlPath = String.Format("{0}{1}/{2}{3}",
          blobEndpoint, containerName, blobName, sharedAccessSignature);
       Uri uri = new Uri(urlPath);
       HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
       request.Method = requestMethod;
       using (HttpWebResponse response =
          (HttpWebResponse)request.GetResponse())
       {
          Stream dataStream = response.GetResponseStream();
          using (StreamReader reader = new StreamReader(dataStream))
          {
             String responseFromServer = reader.ReadToEnd();
          }
       }
    }
  10. Add the following method, using the methods added earlier, to the class:

    public static void UseSharedAccessSignaturesExample()
    {
       String containerName = "{CONTAINER_NAME}";
       String blobName = "{BLOB_NAME}";

       SharedAccessSignaturesExample example =
          new SharedAccessSignaturesExample();

       example.CreateContainerAndBlob(containerName, blobName);

       String sharedAccessSignature1 =
          example.GetSharedAccessSignature(containerName, blobName);
       example.AuthenticateWithSharedAccessSignature(
          containerName, blobName, sharedAccessSignature1);

       String sharedAccessSignature2 =
          example.CreateSharedAccessSignature(containerName, blobName, "rw");
       example.UseSharedAccessSignature(
          containerName, blobName, sharedAccessSignature2);
    }
    

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members for the blob endpoint, as well as the account name and access key which we initialize in the constructor we add in step 4. In step 5, we create a container and upload a blob to it.

In step 6, we use the GetSharedAccessSignature() method of the CloudBlockBlob class to get a shared access signature based on a SharedAccessPolicy we pass into it. In this SharedAccessPolicy, we specify that we want read access on a blob from 10 minutes earlier to 40 minutes later than the current time. The fuzzing of the start time is to minimize any risk of the time on the local machine being too far out of sync with the time on the storage service. This approach is the easiest way to get a shared access signature.

In step 7, we construct a Shared Access Signature from first principles. This version does not use the Storage Client library. We generate a string to sign from the account name, desired permissions, start, and expiration time. We initialize an HMACSHA256 instance from the access key, and use this to generate an HMAC from the string to sign. We then create the remainder of the query string while ensuring that the data is correctly URL encoded.

In step 8, we use a shared access signature to initialize a StorageCredentialsSharedAccessSignature instance, which we use to create a CloudBlobClient instance. We use this to construct the CloudBlobContainer and CloudBlockBlob instances we use to download the content of a blob.

In step 9, we use HttpWebRequest and HttpWebResponse objects to perform an anonymous download of the content of a blob. We construct the query string for the request using the Shared Access Signature and direct the request to the appropriate blob endpoint. Note that when constructing urlPath for the storage emulator, we must add a / between {0} and {1} because of a difference in the way the Storage Client library constructs the endpoint for the storage emulator and the storage service. For example, we need to use the following for the storage emulator:

String urlPath = String.Format("{0}/{1}/{2}{3}", blobEndpoint, containerName, blobName, sharedAccessSignature);

In step 10, we add a helper method that invokes all the methods we added earlier. We must replace {CONTAINER_NAME} and {BLOB_NAME} with appropriate names for the container and blob.

There's more...

In step 7, we could create a Shared Access Signature based on a container-level access policy by replacing the definition of stringToSign with the following:

String stringToSign = String.Format("\n\n\n/{0}/{1}\n{2}", accountName, containerName, policyId);

policyId specifies the name of a container-level access policy.

See also

  • The recipe named Using a container-level access policy in this chapter

Using a container-level access policy


A shared access policy comprises a set of permissions (read, write, delete, list) combined with start and expiration times for validity of the policy. There are no restrictions on the start and expiration times for a shared access policy. A container-level access policy is a shared access policy associated by name with a container. A maximum of five container-level access policies can be associated simultaneously with a container, but each must have a distinct name.

A container-level access policy improves the management of shared access signatures. There is no way to retract or otherwise disallow a standalone shared access signature once it has been created. However, a shared access signature created from a container-level access policy has validity dependent on the container-level access policy. The deletion of a container-level access policy causes the revocation of all shared access signatures derived from it and they can no longer be used to authenticate a storage request. As they can be revoked at any time, there are no time restrictions on the validity of a shared access signature derived from a container-level access policy.

The container-level access policies for a container are set and retrieved as a SharedAccessPolicies collection of named SharedAccessPolicy objects. The SetPermissions() and GetPermissions() methods of the CloudBlobContainer class set and retrieve container-level access policies. A container-level access policy can be removed by retrieving the current SharedAccessPolicies, removing the specified policy, and then setting the SharedAccessPolicies on the container again.

A shared access signature is derived from a container-level access policy by invoking CloudBlobContainer.GetSharedAccessSignature(), passing in the name of the container-level access policy and an empty SharedAccessPolicy instance. It is not possible to modify the validity of the shared access signature by using a non-empty SharedAccessPolicy.

In this recipe, we will learn how to manage container-level access policies and use them to create shared access signatures.

Getting ready

This recipe assumes the following is in the application configuration file:

<appSettings>
   <add key="DataConnectionString"value="DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}"/>
</appSettings>

We must replace {ACCOUNT_NAME} and {ACCOUNT_KEY} with appropriate values for the account name and access key.

How to do it...

We are going to create, use, modify, revoke, and delete a container-level access policy. We do this as follows:

  1. Add a class named ContainerLevelAccessPolicyExample to the project.

  2. Add the following using statements to the top of the class file:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;
    using System.Configuration;
  3. Add the following private members to the class:

    private Uri blobEndpoint;
    private CloudBlobContainer cloudBlobContainer;
  4. Add the following constructor to the class:

    ContainerLevelAccessPolicyExample(String containerName)
    {
       CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(
          ConfigurationManager.AppSettings["DataConnectionString"]);
       blobEndpoint = cloudStorageAccount.BlobEndpoint;

       CloudBlobClient cloudBlobClient =
          cloudStorageAccount.CreateCloudBlobClient();
       cloudBlobContainer =
          new CloudBlobContainer(containerName, cloudBlobClient);
       cloudBlobContainer.Create();
    }
  5. Add the following method, creating a container-level access policy, to the class:

    private void AddContainerLevelAccessPolicy(String policyId)
    {
       DateTime startTime = DateTime.UtcNow;
       SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy()
       {
          Permissions = SharedAccessPermissions.Read |
             SharedAccessPermissions.Write,
          SharedAccessStartTime = startTime,
          SharedAccessExpiryTime = startTime.AddDays(3d)
       };

       BlobContainerPermissions blobContainerPermissions =
          new BlobContainerPermissions();
       blobContainerPermissions.SharedAccessPolicies.Add(
          policyId, sharedAccessPolicy);

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  6. Add the following method, getting a shared access signature using the container-level access policy, to the class:

    private String GetSharedAccessSignature(String policyId)
    {
       SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy();
       String sharedAccessSignature =
          cloudBlobContainer.GetSharedAccessSignature(
             sharedAccessPolicy, policyId);

       return sharedAccessSignature;
    }
  7. Add the following method, modifying the container-level access policy, to the class:

    private void ModifyContainerLevelAccessPolicy(String policyId)
    {
       BlobContainerPermissions blobContainerPermissions =
          cloudBlobContainer.GetPermissions();

       DateTime sharedAccessExpiryTime = (DateTime)blobContainerPermissions
          .SharedAccessPolicies[policyId].SharedAccessExpiryTime;
       blobContainerPermissions.SharedAccessPolicies[policyId]
          .SharedAccessExpiryTime = sharedAccessExpiryTime.AddDays(1d);

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  8. Add the following method, revoking a container-level access policy, to the class:

    private void RevokeContainerLevelAccessPolicy(String policyId)
    {
       BlobContainerPermissions containerPermissions =
          cloudBlobContainer.GetPermissions();

       SharedAccessPolicy sharedAccessPolicy =
          containerPermissions.SharedAccessPolicies[policyId];
       containerPermissions.SharedAccessPolicies.Remove(policyId);
       containerPermissions.SharedAccessPolicies.Add(
          policyId + "1", sharedAccessPolicy);

       cloudBlobContainer.SetPermissions(containerPermissions);
    }
  9. Add the following method, deleting all container-level access policies, to the class:

    private void DeleteContainerLevelAccessPolicies()
    {
       BlobContainerPermissions blobContainerPermissions =
          new BlobContainerPermissions();

       cloudBlobContainer.SetPermissions(blobContainerPermissions);
    }
  10. Add the following method, using the methods added earlier, to the class:

    public static void UseContainerLevelAccessPolicyExample()
    {
       String containerName = "{CONTAINER_NAME}";String policyId = "{POLICY_NAME}";
    
       ContainerLevelAccessPolicyExample example =new ContainerLevelAccessPolicyExample(containerName);
    
       example.AddContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature1 =example.GetSharedAccessSignature(policyId);
    
       example.ModifyContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature2 =example.GetSharedAccessSignature(policyId);
       example.RevokeContainerLevelAccessPolicy(policyId);
       String sharedAccessSignature3 =example.GetSharedAccessSignature(policyId + "1");
    
       example.DeleteContainerLevelAccessPolicies();
    }

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members, which we initialize in the constructor we add in step 4. The constructor takes the container name as a parameter, and we also create the container in it.

In step 5, we create a SharedAccessPolicy instance and add it, keyed by the policy name, to the SharedAccessPolicies property of a BlobContainerPermissions object. Finally, we pass this object into the SetPermissions() method of the CloudBlobContainer class to create a container-level access policy for the container.

In step 6, we get a shared access signature for the container using a specified container-level access policy. We create an empty SharedAccessPolicy instance and pass it, along with the policy name, into the CloudBlobContainer.GetSharedAccessSignature() method to get a shared access signature for the container.

In step 7, we invoke GetPermissions() on the CloudBlobContainer instance to retrieve the shared access policies for the container. We then add one day to the expiration date of a specified container-level access policy. Finally, we invoke CloudBlobContainer.SetPermissions() to update the container-level access policies for the container.

In step 8, we revoke an existing container-level access policy and create a new container-level policy with the same SharedAccessPolicy and a new policy name. We again use GetPermissions() to retrieve the shared access policies for the container and then invoke the Remove() and Add() methods of the SharedAccessPolicies class to perform the revocation. Finally, we invoke CloudBlobContainer.SetPermissions() to update the container-level access policies for the container.

In step 9, we delete all container-level access policies for a container. We create a default BlobContainerPermissions instance and pass this into CloudBlobContainer.SetPermissions() to remove all the container-level access policies for the container.

In step 10, we add a helper method that invokes all the methods we added earlier. We need to replace {CONTAINER_NAME} and {POLICY_NAME} with the names of a container and a policy for it.
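The shared access signature returned by GetSharedAccessSignature() is a query string that can be appended to a blob URL or used to create storage credentials. The following is a minimal sketch, not part of the recipe, assuming the blobEndpoint and containerName used in the steps above, the signature obtained in step 10, and a hypothetical {BLOB_NAME} placeholder:

    // A minimal sketch: create SAS-based credentials and write to a blob.
    // {BLOB_NAME} is a placeholder for an actual blob name.
    StorageCredentialsSharedAccessSignature sasCredentials =
       new StorageCredentialsSharedAccessSignature(sharedAccessSignature1);
    CloudBlobClient sasBlobClient =
       new CloudBlobClient(blobEndpoint, sasCredentials);
    CloudBlob blob = sasBlobClient.GetBlobReference(
       containerName + "/{BLOB_NAME}");
    blob.UploadText("example content"); // permitted while the policy grants Write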

Authenticating against the Windows Azure Service Management REST API


The Windows Azure Portal provides a user interface for managing Windows Azure hosted services and storage accounts. The Windows Azure Service Management REST API provides a RESTful interface that allows programmatic control of hosted services and storage accounts. It supports most, but not all, of the functionality provided in the Windows Azure Portal.

The Service Management API uses an X.509 certificate for authentication. This certificate must be uploaded as a management certificate to the Windows Azure Portal. Unlike service certificates, management certificates are not deployed to role instances. Consequently, if the Service Management API is to be accessed from an instance of a role, it is necessary to upload the certificate twice: as a management certificate for authentication and as a server certificate that is deployed to the instance. The latter also requires appropriate configuration in the service definition and service configuration files.
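As a sketch of that configuration, the service definition file declares the certificate and the service configuration file identifies it by thumbprint. The element names below follow the standard schema, though the certificate name used here is only an illustration:

    <!-- ServiceDefinition.csdef, inside the WebRole or WorkerRole element -->
    <Certificates>
       <Certificate name="AzureServiceManagement"
          storeLocation="LocalMachine" storeName="My" />
    </Certificates>

    <!-- ServiceConfiguration.cscfg, inside the Role element -->
    <Certificates>
       <Certificate name="AzureServiceManagement"
          thumbprint="{THUMBPRINT}" thumbprintAlgorithm="sha1" />
    </Certificates>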

A management certificate can be self-signed because it is used only for authentication. As Visual Studio uses the Service Management API to upload and deploy packages, it contains tooling supporting the creation of the certificate. This is an option on the Publish dialog. A management certificate can also be created using makecert as follows:

C:\Users\Administrator>makecert -r -pe -sky exchange
-a sha1 -len 2048 -ss my -n "CN=Azure Service Management"
AzureServiceManagement.cer

This creates an X.509 certificate and installs it in the Personal (My) branch of the Current User level of the certificate store. It also creates a file named AzureServiceManagement.cer containing the certificate in a form that can be uploaded as a management certificate to the Windows Azure Portal. The certificate is self-signed (-r), with an exportable private key (-pe), created with the SHA-1 hash algorithm (-a), a 2048-bit key (-len), and with a key type of exchange (-sky).

Once created, the certificate must be uploaded to the Management Certificates section of the Windows Azure Portal. This section is at the subscription level—not the service level—as the Service Management API has visibility across all hosted services and storage accounts under the subscription.

In this recipe, we will learn how to authenticate to the Windows Azure Service Management REST API.

How to do it...

We are going to authenticate against the Windows Azure Service Management REST API and retrieve the list of hosted services for the subscription. We do this as follows:

  1. Add a class named ServiceManagementExample to the project.

  2. Add the following using statements to the top of the class file:

    using System.Net;
    using System.IO;
    using System.Xml.Linq;
    using System.Security.Cryptography.X509Certificates;
  3. Add the following private members to the class:

    XNamespace ns = "http://schemas.microsoft.com/windowsazure";
    String apiVersion = "2011-02-25";
    String subscriptionId;
    String thumbprint;
  4. Add the following constructor to the class:

    ServiceManagementExample(String subscriptionId, String thumbprint)
    {
       this.subscriptionId = subscriptionId;
       this.thumbprint = thumbprint;
    }
  5. Add the following method, retrieving the client authentication certificate from the certificate store, to the class:

    private static X509Certificate2 GetX509Certificate2(String thumbprint)
    {
       X509Certificate2 x509Certificate2 = null;
       X509Store store = new X509Store("My", StoreLocation.CurrentUser);
       try
       {
          store.Open(OpenFlags.ReadOnly);
          X509Certificate2Collection x509Certificate2Collection = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
          x509Certificate2 = x509Certificate2Collection[0];
       }
       finally
       {
          store.Close();
       }
       return x509Certificate2;
    }
  6. Add the following method, creating a web request, to the class:

    private HttpWebRequest CreateHttpWebRequest(Uri uri, String httpWebRequestMethod)
    {
       X509Certificate2 x509Certificate2 = GetX509Certificate2(thumbprint);

       HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(uri);
       httpWebRequest.Method = httpWebRequestMethod;
       httpWebRequest.Headers.Add("x-ms-version", apiVersion);
       httpWebRequest.ClientCertificates.Add(x509Certificate2);
       httpWebRequest.ContentType = "application/xml";
    
       return httpWebRequest;
    }
  7. Add the following method, invoking a Get Services operation on the Service Management API, to the class:

    private IEnumerable<String> GetHostedServices()
    {
       XElement xElement;

       String uriPath = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionId);
       Uri uri = new Uri(uriPath);

       HttpWebRequest httpWebRequest = CreateHttpWebRequest(uri, "GET");
       using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse())
       {
          Stream responseStream = response.GetResponseStream();
          xElement = XElement.Load(responseStream);
       }

       IEnumerable<String> serviceNames =
          from s in xElement.Descendants(ns + "ServiceName")
          select s.Value;

       return serviceNames;
    }
  8. Add the following method, invoking the methods added earlier, to the class:

    public static void UseServiceManagementExample()
    {
       String subscriptionId = "{SUBSCRIPTION_ID}";
       String thumbprint = "{THUMBPRINT}";
    
       ServiceManagementExample example = new ServiceManagementExample(subscriptionId, thumbprint);
       IEnumerable<String> serviceNames = example.GetHostedServices();
       List<String> listServiceNames = serviceNames.ToList<String>();
    }
    

How it works...

In steps 1 and 2, we set up the class. In step 3, we add some private members including the subscriptionId and thumbprint that we initialize in the constructor we add in step 4. The ns member specifies the namespace used in the XML data returned by the Service Management API. In apiVersion, we specify the API version sent with each request to the Service Management API. The supported versions are listed on MSDN at http://msdn.microsoft.com/en-us/library/gg592580.aspx. Note that the API version is updated whenever new operations are added and that an operation can use a newer API version than the one it was released with.

In step 5, we retrieve the X.509 certificate we use to authenticate against the Service Management API. We use the certificate thumbprint to find the certificate in the Personal/Certificates (My) branch of the Current User level of the certificate store.

In step 6, we create and initialize the HttpWebRequest used to submit an operation to the Service Management API. We add the X.509 certificate to the request. We also add an x-ms-version request header specifying the API version of the Service Management API we are using.

In step 7, we submit the request to the Service Management API. We first construct the appropriate URL, which depends on the particular operation to be performed. Then, we make the request and load the response into an XElement. We query this XElement to retrieve the names of the hosted services created for the subscription and return them in an IEnumerable<String>.
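For reference, the returned XML has, in abbreviated form, a shape like the following, with placeholders standing in for the subscription ID and service name:

    <HostedServices xmlns="http://schemas.microsoft.com/windowsazure">
       <HostedService>
          <Url>https://management.core.windows.net/{SUBSCRIPTION_ID}/services/hostedservices/{SERVICE_NAME}</Url>
          <ServiceName>{SERVICE_NAME}</ServiceName>
       </HostedService>
    </HostedServices>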

In step 8, we add a helper method that invokes each of the methods we added earlier. We must replace {SUBSCRIPTION_ID} with the subscription ID for the Windows Azure subscription and replace {THUMBPRINT} with the thumbprint of the management certificate. These can both be found on the Windows Azure Portal.

There's more...

The Microsoft Management Console (MMC) can be used to navigate the certificate store to view and, if desired, export an X.509 certificate. In MMC, the Certificates snap-in provides the choice of navigating through either the Current User or Local Machine level of the certificate store. The makecert command used earlier inserts the certificate in the Personal branch of the Current User level of the certificate store. The certificate export wizard can be found by right-clicking the certificate, selecting All Tasks, and then choosing Export…. This wizard supports export both with and without the private key. Exporting without the private key creates the .CER file needed for a management certificate, while exporting with the private key creates the password-protected .PFX file needed for a service certificate for a hosted service.

The thumbprint is one of the certificate properties that are displayed in the MMC certificate snap-in. However, the snap-in displays the thumbprint as a space-separated, lower-case string. When using the thumbprint in code, it must be converted to an upper-case string with no spaces.
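A hypothetical helper like the following (not part of the recipe) can perform that conversion:

    // Hypothetical helper: remove the spaces from a thumbprint copied from
    // the MMC snap-in and upper-case the hexadecimal characters.
    private static String CleanThumbprint(String mmcThumbprint)
    {
       return mmcThumbprint.Replace(" ", String.Empty).ToUpperInvariant();
    }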

Authenticating with the Windows Azure AppFabric Caching Service


The Windows Azure AppFabric Caching Service provides a hosted data cache along with local caching of that data. It provides a cloud-hosted version of the Windows Server AppFabric Caching Service.

All access to the caching service must be authenticated using a service namespace and an authentication token. These are generated on the Windows Azure Portal. The service namespace is similar to the account name used with the storage services. It forms the base of the service URL used in accessing the caching service.

The Windows Azure Access Control Service (ACS) is used to authenticate requests to the caching service. However, the complexity of this is abstracted by the caching service SDK. A DataCacheSecurity instance is constructed from the authentication token. To reduce the likelihood of the authentication token remaining in memory in an accessible form, the DataCacheSecurity constructor requires that it be passed in as a SecureString rather than a simple String. This DataCacheSecurity instance is then added to a DataCacheFactoryConfiguration instance. This is used to initialize the DataCacheFactory used to create the DataCache object used to interact with the caching service.

In this recipe, we will learn how to authenticate to the Windows Azure AppFabric Caching Service.

Getting ready

This recipe uses the Windows Azure AppFabric SDK. It also requires the creation—on the Windows Azure Portal—of a namespace for the Windows Azure AppFabric Caching Service. We see how to do this in the Creating a namespace for the Windows Azure AppFabric recipe in Chapter 9.

How to do it...

We are going to authenticate against the Windows Azure AppFabric Caching Service and cache an item in the service. We do this as follows:

  1. Add a class named AppFabricCachingExample to the project.

  2. Add the following assembly references to the project:

    Microsoft.ApplicationServer.Caching.Client
    Microsoft.ApplicationServer.Caching.Core
  3. Add the following using statements to the top of the class file:

    using Microsoft.ApplicationServer.Caching;
    using System.Security;
  4. Add the following private members to the class:

    Int32 cachePort = 22233;
    String hostName;
    String authenticationToken;
    DataCache dataCache;
  5. Add the following constructor to the class:

    AppFabricCachingExample(String hostName, String authenticationToken)
    {
       this.hostName = hostName;
       this.authenticationToken = authenticationToken;
    }
  6. Add the following method, creating a SecureString, to the class:

    static private SecureString CreateSecureString(String token)
    {
       SecureString secureString = new SecureString();
       foreach (char c in token)
       {
          secureString.AppendChar(c);
       }
       secureString.MakeReadOnly();
       return secureString;
    }
    
  7. Add the following method, initializing the cache, to the class:

    private void InitializeCache()
    {
       DataCacheSecurity dataCacheSecurity = new DataCacheSecurity(CreateSecureString(authenticationToken), false);

       List<DataCacheServerEndpoint> server = new List<DataCacheServerEndpoint>();
       server.Add(new DataCacheServerEndpoint(hostName, cachePort));

       DataCacheTransportProperties dataCacheTransportProperties = new DataCacheTransportProperties()
       {
          MaxBufferSize = 10000,
          ReceiveTimeout = TimeSpan.FromSeconds(45)
       };

       DataCacheFactoryConfiguration dataCacheFactoryConfiguration = new DataCacheFactoryConfiguration()
       {
          SecurityProperties = dataCacheSecurity,
          Servers = server,
          TransportProperties = dataCacheTransportProperties
       };

       DataCacheFactory myCacheFactory = new DataCacheFactory(dataCacheFactoryConfiguration);
       dataCache = myCacheFactory.GetDefaultCache();
    }
  8. Add the following method, inserting an entry to the cache, to the class:

    private void PutEntry(String key, String value)
    {
       dataCache.Put(key, value);
    }
  9. Add the following method, retrieving an entry from the cache, to the class:

    private String GetEntry(String key)
    {
       String entryValue = dataCache.Get(key) as String;
       return entryValue;
    }
  10. Add the following method, invoking the methods added earlier, to the class:

    public static void UseAppFabricCachingExample()
    {
       String hostName = "{SERVICE_NAMESPACE}.cache.windows.net";
       String authenticationToken = "{AUTHENTICATION_TOKEN}";   
    
       String key = "{KEY}";
       String value = "{VALUE}";
    
       AppFabricCachingExample example = new AppFabricCachingExample(hostName, authenticationToken);
    
       example.InitializeCache();
       example.PutEntry(key, value);
       String theValue = example.GetEntry(key);
    }

How it works...

In steps 1 through 3, we set up the class. In step 4, we add some private members to hold the caching service endpoint information and the authentication token. We initialize these in the constructor we add in step 5.

In step 6, we add a method that creates a SecureString from a normal String. The authentication token used when working with the caching service SDK must be a SecureString. Typically, this would be initialized in a more secure fashion than from a private member.

In step 7, we first initialize the objects used to configure a DataCacheFactory object. We need to provide the authentication token and the caching service endpoint. We specify a ReceiveTimeout of less than 1 minute to reduce the possibility of an error caused by stale connections. We use the DataCacheFactory to get the DataCache for the default cache for the caching service. Note that in this recipe, we did not configure a local cache.
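Had we wanted one, a local cache could be configured through the LocalCacheProperties member of the DataCacheFactoryConfiguration before the factory is created. The following sketch assumes the DataCacheLocalCacheProperties constructor overload taking an object count, a timeout, and an invalidation policy:

    // A sketch only: hold up to 1000 objects in the local cache and
    // invalidate them 5 minutes after retrieval from the cache cluster.
    dataCacheFactoryConfiguration.LocalCacheProperties =
       new DataCacheLocalCacheProperties(1000, TimeSpan.FromMinutes(5),
          DataCacheLocalCacheInvalidationPolicy.TimeoutBased);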

In step 8, we insert an entry into the cache. Note that we use Put() rather than Add() here, as Add() throws an exception if an item with the same key is already cached. We retrieve the entry from the cache in step 9.
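The distinction shows up only when the same key is written more than once, as in this illustrative fragment:

    // Illustrative only: Put() is an upsert, while Add() is insert-only.
    dataCache.Put("playwright", "Shakespeare"); // succeeds
    dataCache.Put("playwright", "Marlowe");     // succeeds, overwrites
    dataCache.Add("playwright", "Marlowe");     // throws a DataCacheException
                                                // as the key is already cached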

In step 10, we add a helper method that invokes each of the methods we added earlier. We must replace {SERVICE_NAMESPACE} and {AUTHENTICATION_TOKEN} with actual values for the caching service namespace and authentication token that we created on the Windows Azure Portal. We can replace {KEY} and {VALUE} with appropriate values.


Key benefits

  • Packed with practical, hands-on recipes for building advanced, scalable cloud-based services on the Windows Azure platform, explained in detail to maximize your learning
  • Extensive code samples showing how to use advanced features of Windows Azure blobs, tables, and queues
  • Understand remote management of Azure services using the Windows Azure Service Management REST API
  • Delve deep into Windows Azure Diagnostics
  • Master the Windows Azure AppFabric Service Bus and Access Control Service

Description

The Windows Azure platform is Microsoft's Platform-as-a-Service environment for hosting services and data in the cloud. It provides developers with on-demand computing, storage, and service connectivity capabilities that facilitate the hosting of highly scalable services in Windows Azure datacenters across the globe. This practical cookbook contains over 80 practical, task-based, and immediately usable recipes covering a wide range of advanced development techniques for solving particular problems and scenarios when building highly scalable services on the Windows Azure platform. Packed with reusable, real-world recipes, the book starts by explaining the various access control mechanisms used in the Windows Azure platform. Next you will see the advanced features of Windows Azure Blob storage, Windows Azure Table storage, and Windows Azure Queues. The book then dives deep into topics such as developing Windows Azure hosted services, using Windows Azure Diagnostics, managing hosted services with the Service Management API, using SQL Azure, and the Windows Azure AppFabric Service Bus. You will also see how to use several of the latest features, such as VM roles, Windows Azure Connect, startup tasks, and the Windows Azure AppFabric Caching Service.

Who is this book for?

If you are an experienced Windows Azure developer or architect who wants to understand advanced development techniques when building highly scalable services using the Windows Azure platform, then this book is for you. You should have some exposure to Windows Azure and need basic understanding of Visual Studio, C#, SQL, .NET development, XML, and Web development concepts (HTTP, Services).

What you will learn

  • Develop highly scalable services for Windows Azure
  • Handle authentication and authorization in the Windows Azure platform
  • Use advanced features of the Windows Azure Storage Services: blobs, tables, and queues
  • Attach Azure Drives to a role instance
  • Diagnose problems using Windows Azure Diagnostics
  • Perform remote management of Azure services with the Windows Azure Service Management REST API
  • Expose services through the Windows Azure AppFabric Service Bus
  • Learn how to autoscale a Windows Azure hosted service
  • Use cloud-based databases with SQL Azure
  • Improve service performance with the Windows Azure AppFabric Caching Service
  • Understand the latest features, including VM roles, Windows Azure Connect, and startup tasks

Product Details

Publication date : Aug 05, 2011
Length : 392 pages
Edition : 1st
Language : English
ISBN-13 : 9781849682220
Vendor : Microsoft



Table of Contents

9 Chapters
  1. Controlling Access in the Windows Azure Platform
  2. Handling Blobs in Windows Azure
  3. Going NoSQL with Windows Azure Tables
  4. Disconnecting with Windows Azure Queues
  5. Developing Hosted Services for Windows Azure
  6. Digging into Windows Azure Diagnostics
  7. Managing Hosted Services with the Service Management API
  8. Using SQL Azure
  9. Looking at the Windows Azure AppFabric

Customer reviews

Rating distribution: 4.0 out of 5 (9 ratings)
5 star: 66.7%
4 star: 11.1%
3 star: 0%
2 star: 0%
1 star: 22.2%
Adwait Ullal, Feb 10, 2012 (5 stars)
This book skips the introductory (and educational) aspects of Azure, so the assumption is that the reader is familiar (or has worked) with Azure. If you're at that stage, you'll find the book very handy for solving specific issues that you may encounter during an Azure project. The topics that the author has covered are: Chapter 1, Controlling Access in the Windows Azure Platform; Chapter 2, Handling Blobs in Windows Azure; Chapter 3, Going NoSQL with Windows Azure Tables; Chapter 4, Disconnecting with Windows Azure Queues; Chapter 5, Developing Hosted Services for Windows Azure; Chapter 6, Digging into Windows Azure Diagnostics; Chapter 7, Managing Hosted Services with the Service Management API; Chapter 8, Using SQL Azure; Chapter 9, Looking at the Windows Azure AppFabric. Each chapter then has recipes for specific tasks that one may need. Each recipe starts with a task, a description of the task, and how to complete that task. If any preparation needs to be done, the author lists it in a "Getting Ready" section. Then, a "How to do it..." section goes into detail explaining how to complete the task with code. Lastly, each recipe ends with a "How it works..." section where the author explains how the code seen in the previous section works. A warning to the reader: some of the recipes are not task oriented but will help you make architectural decisions, which I found was a pleasant surprise. In summary, this book is for an intermediate or advanced Azure developer/architect who is in need of immediate help with a particular issue s/he might be facing in a project.
Amazon Verified review
Pat Tormey, Dec 16, 2011 (5 stars)
Neil's "Cookbook" is a well-organized list of functional tasks, organized around the specific pillars of Azure; each task can be applied independently. Every recipe clearly states "How to do it" and "How it works". Wish I'd read this last week. The samples are clear and concise, without sacrificing important concepts. If I had read his recipe for dealing with the counter-intuitive "Append anti-pattern", I could have saved myself a couple of days of experimentation and head scratching. Thanks Neil
Amazon Verified review
Jeffo, Aug 15, 2011 (5 stars)
Given the complexities of Windows Azure, it's very effective when climbing up the learning curve to have a practical reference that helps you code on a variety of topics that span most of the relevant aspects of the subject. This is such a reference. I like the format of each recipe - an introduction to the topic, including explanations for why Azure is architected the way it is and what the various elements of each topic (e.g. associated classes and methods) do within the Azure architecture, then a "How to do it..." section with specific coding steps to generate the code that's included in the accompanying set of Visual Studio solutions, followed by a "How it works..." section that summarizes the coding steps and includes additional explanations and "gotchas". Different recipes build on one another. For example, there is a recipe for using Azure Drives (virtual hard drives mounted as blobs) in the cloud, and the author concludes that recipe by pointing out the differences between using them in the cloud versus in the development environment (locally, before deployment to the cloud). The following recipe then describes simulating Azure Drives in the development environment. I also appreciate that the author is not at all "chatty". This I've come to expect with "recipe" books, as I refer to them when I need to learn something very specific, usually in the middle of a project. The author holds very true to this format. I was a bit surprised to find that some of the recipes are not coding exercises at all, but rather advice in making certain solution architecture or pattern choices. For example, there is a recipe on how to choose the best Azure storage type for a hosted service. The recipe follows the same format as other recipes but replaces the detailed coding instructions with simple statements about why to choose each storage type. Nevertheless, the format works: the information is provided in a logical and concise manner, and I found myself referring back to the storage choice recipe a number of times as I was trying to decide on the storage layout for one of my simple solutions. One nice feature of the eBook is that pages referenced in the Index are hyperlinked so that you can look up topics or class names and go directly to the page where they are discussed. Overall, this is a very comprehensive reference that is easy to navigate and addresses each topic with an appropriate level of detail.
Amazon Verified review
Evelio Alonso Fuentes, Aug 11, 2011 (5 stars)
This book starts directly by hitting on the major areas of Microsoft Windows Azure development that any developer should understand really well to make good use of this platform. Topics like controlling access, blobs, Azure Tables, Azure Queues, Azure Diagnostics, and more are discussed in detail - not only how to use these things, but in which scenarios you would want to utilize each. One more thing I would like to mention is the inclusion of exercises in this book - a great idea in my mind for folks who learn by sample (like myself). My recommendation: buy it for yourself. It's worth the price!
Amazon Verified review
NithyananthaBabu, Sep 12, 2011 (5 stars)
Awesome coding steps! Every step introduces a different approach to doing the same thing. This will teach you the way of coding. Excellent explanation of Azure Storage and Access Control. I am in the middle of the book. I have been working with the cloud for more than 2 years, now working as an architect in the field of distributed computing. Full marks to the authors. For anyone who wants a guide for developing Windows Azure apps as Web, WCF, or Worker roles, this is a highly recommended book. Excellent.
Amazon Verified review
