Configuring the service model for a Cloud Service
The service model for a Cloud Service in Microsoft Azure is specified in two XML files: the service definition file, ServiceDefinition.csdef, and the service configuration file, ServiceConfiguration.cscfg. These files are part of the Microsoft Azure project.
The service definition file specifies the roles used in the Cloud Service, up to 25 in a single definition. For each role, the service definition file specifies the following:
- The instance size
- The available endpoints
- The public key certificates
- The pluggable modules used in the role
- The startup tasks
- The local resources
- The runtime execution context
- The multisite support
- The file contents of the role
The following code snippet is an example of the skeleton of the service definition document:
<ServiceDefinition name="<service-name>"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
    upgradeDomainCount="<number-of-upgrade-domains>"
    schemaVersion="<version>">
  <LoadBalancerProbes>
  </LoadBalancerProbes>
  <WebRole …>
  </WebRole>
  <WorkerRole …>
  </WorkerRole>
  <NetworkTrafficRules>
  </NetworkTrafficRules>
</ServiceDefinition>
We can mix up to 25 roles from both the web role and worker role types. In the past, there was also a third kind of supported role, the VM Role, which is now deprecated.
All instances of a role have the same size, chosen from standard sizes (A0-A4), memory-intensive sizes (A5-A7), and compute-intensive sizes (A8-A9). Each role may specify a number of input endpoints, internal endpoints, and instance-input endpoints. Input endpoints are accessible over the Internet and are load balanced, using a round-robin algorithm, across all instances of the role:
<InputEndpoint name="PublicWWW" protocol="http" port="80" />
Internal endpoints are accessible only by instances of any role in the Cloud Service. They are not load balanced:
<InternalEndpoint name="InternalService" protocol="tcp" />
Instance-input endpoints define a mapping between a public port and a single instance under the load balancer. An instance-input endpoint is linked to a specific role instance using a port-forwarding technique on the load balancer. For it, we must open a range of public ports through the AllocatePublicPortFrom section:
<InstanceInputEndpoint name="InstanceLevelService" protocol="tcp" localPort="10100">
  <AllocatePublicPortFrom>
    <FixedPortRange max="10105" min="10101" />
  </AllocatePublicPortFrom>
</InstanceInputEndpoint>
An X.509 public key certificate can be uploaded to a Cloud Service either directly on the Microsoft Azure Portal or using the Microsoft Azure Service Management REST API. The service definition file specifies which public key certificates, if any, are to be deployed with the role as well as the certificate store they are put in. A public key certificate can be used to configure an HTTPS endpoint but can also be accessed from code:
<Certificate name="CertificateForSSL" storeLocation="LocalMachine" storeName="My" />
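Because a deployed certificate can also be accessed from code, the following minimal C# sketch (using the standard X509Store API; the thumbprint placeholder stands for the thumbprint of the certificate you uploaded) shows one way to retrieve it from the LocalMachine\My store:

using System.Security.Cryptography.X509Certificates;

// Look up a certificate deployed with the role in the LocalMachine\My store.
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
try
{
    X509Certificate2Collection found = store.Certificates.Find(
        X509FindType.FindByThumbprint, "<certificate-thumbprint>", false);
    if (found.Count > 0)
    {
        X509Certificate2 certificate = found[0];
        // Use the certificate, for example, to decrypt data or authenticate a client.
    }
}
finally
{
    store.Close();
}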
Pluggable modules instruct Azure on how to set up the role. Microsoft Azure tooling for Visual Studio can automatically add/remove modules in order to enable/disable services as follows:
- Diagnostics to inject Microsoft Azure Diagnostics
- Remote access to inject remote desktop capability
- Remote forwarder to inject the forwarding capability used to support remote desktop
- Caching to inject the In-Role caching capability
Tip
Though In-Role caching is not covered in this book, there is a chapter about In-Memory Caching, using the Microsoft Azure Managed Cache service.
The following configuration XML code enables the additional modules:
<Imports>
  <Import moduleName="Diagnostics" />
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
  <Import moduleName="Caching" />
</Imports>
Startup tasks are scripts or executables that run each time an instance starts, and they modify the runtime environment of the instance, up to and including the installation of the required software:
<Startup>
  <Task commandLine="run.cmd" taskType="foreground" executionContext="elevated">
    <Environment>
      <Variable name="A" value="B" />
    </Environment>
  </Task>
</Startup>
The local resources section specifies how to reserve isolated storage on the instance for temporary data, accessible through an API rather than by direct access to the filesystem:
<LocalResources>
  <LocalStorage name="DiagnosticStore" sizeInMB="20000" cleanOnRoleRecycle="false" />
  <LocalStorage name="TempStorage" sizeInMB="10000" />
</LocalResources>
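To illustrate that API access, here is a minimal C# sketch (assuming the role project references the Microsoft.WindowsAzure.ServiceRuntime assembly) that resolves the TempStorage resource declared above and writes a file into it:

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

// Resolve the local resource by the name declared in the .csdef file;
// RootPath points to the folder that Azure reserved on the instance.
LocalResource temp = RoleEnvironment.GetLocalResource("TempStorage");
string filePath = Path.Combine(temp.RootPath, "scratch.dat");
File.WriteAllText(filePath, "temporary data");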
The runtime execution context specifies whether the role runs with limited privileges (the default) or with elevated privileges that provide full administrative capabilities. Note that in a web role running full IIS, the runtime execution context applies only to the role entry point and does not affect IIS, which runs in a separate process with restricted privileges:
<Runtime executionContext="elevated" />
In a web role that is running full IIS, the Sites element in the service definition file contains the IIS configuration for the role. It specifies the endpoint bindings, virtual applications, virtual directories, and host headers for the various websites hosted by the web role. The Hosting multiple websites in a web role recipe contains more information about this configuration. Refer to the following code:
<Sites>
  <Site name="Web">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint1" />
    </Bindings>
  </Site>
</Sites>
The contents section specifies static content that is copied from an application folder to a destination folder on the Azure virtual machine, relative to the %ROLEROOT%\Approot folder:

<Contents>
  <Content destination="MyFolder">
    <SourceDirectory path="FolderA"/>
  </Content>
</Contents>
The service definition file is uploaded to Microsoft Azure as part of the Microsoft Azure package.
The service configuration file specifies the number of instances of each role. It also specifies the values of any custom configuration settings as well as those for any pluggable modules imported in the service definition file.
Applications developed using the .NET Framework typically store application configuration settings in an app.config or web.config file. However, in Cloud Services, we can mix several applications (roles), so a uniform and central point of configuration is needed. Runtime code can still use these files; however, changes to them require the redeployment of the entire service package. Microsoft Azure allows custom configuration settings to be specified in the service configuration file, where they can be modified without redeploying the application. Any configuration setting that could change while the Cloud Service is running should be stored in the service configuration file. These custom configuration settings must be declared in the service definition file:
<ConfigurationSettings>
  <Setting name="MySetting" />
</ConfigurationSettings>
The Microsoft Azure SDK provides the RoleEnvironment.GetConfigurationSetting() method that can be used to access the values of custom configuration settings. There is also CloudConfigurationManager.GetSetting() in the Microsoft.WindowsAzure.Configuration assembly, which checks the service configuration first and, if no Azure environment is found, falls back to the local configuration file.
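As a minimal C# sketch of the two approaches (assuming the project references the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.Configuration assemblies), reading the MySetting value declared above could look like this:

using Microsoft.WindowsAzure;                 // CloudConfigurationManager
using Microsoft.WindowsAzure.ServiceRuntime;  // RoleEnvironment

// Reads only from the service configuration; requires the Azure environment.
string fromCscfg = RoleEnvironment.GetConfigurationSetting("MySetting");

// Checks the service configuration first and falls back to app.config/web.config
// when the code is not running in an Azure environment.
string fromAnywhere = CloudConfigurationManager.GetSetting("MySetting");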
The service configuration file is uploaded separately from the Microsoft Azure package and can be modified independently of it. Changes to the service configuration file can be implemented either directly on the Microsoft Azure Portal or by upgrading the Cloud Service. The service configuration can also be upgraded using the Microsoft Azure Service Management REST API.
The customization of the service configuration file is mostly limited to the role instance count, the actual values of the settings, and the certificate thumbprints:
<Role name="WorkerHelloWorld">
  <Instances count="2" />
  <ConfigurationSettings>
    <Setting name="MySetting" value="Value" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="CertificateForSSL"
        thumbprint="D3E008E45ADCC328CE6BE2AB9AACE2D13F294838"
        thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>
The handling of service upgrades is described in the Managing upgrades and changes to a Cloud Service and Handling changes to the configuration and topology of a Cloud Service recipes.
In this recipe, we'll learn how to configure the service model for a sample application.
Getting ready
To use this recipe, we need to have created a Microsoft Azure Cloud Service and deployed an application to it, as described in the Publishing a Cloud Service with options from Visual Studio recipe.
How to do it...
We are going to see how to implement a real service definition file, based on the following scenario, taken from the WAHelloWorld sample. Suppose we have a Cloud Service with two roles (a web role and a worker role). The web role has a medium instance size; it uses the Diagnostics module, has a local storage of 10 GB, has two public endpoints (one at port 80 and another at port 8080), and has a setting value. The worker role is small, and it has an internal endpoint to let the various instances communicate with each other.
For the web role, we proceed as follows:
1. Open the ServiceDefinition.csdef file in Visual Studio.
2. Inside the <ServiceDefinition> root element, create a <WebRole> item:

   <WebRole name="WebHelloWorld" vmsize="Medium">
   </WebRole>

3. Inside the WebRole tag just created, add an <Endpoints> tag with two InputEndpoint tags, one for each public endpoint:

   <Endpoints>
     <InputEndpoint name="Endpoint1" protocol="http" port="80" />
     <InputEndpoint name="Endpoint2" protocol="http" port="8080" />
   </Endpoints>

4. Inside the WebRole tag, create a Sites element with the correct binding to the web application in the solution:

   <Sites>
     <Site name="Web">
       <Bindings>
         <Binding name="Endpoint1" endpointName="Endpoint1" />
         <Binding name="Endpoint2" endpointName="Endpoint2" />
       </Bindings>
     </Site>
   </Sites>

5. Inside the WebRole tag, declare the usage of the Diagnostics module:

   <Imports>
     <Import moduleName="Diagnostics" />
   </Imports>

6. Inside the WebRole tag, declare a local storage element of 10 GB:

   <LocalResources>
     <LocalStorage name="MyStorage" cleanOnRoleRecycle="true" sizeInMB="10240" />
   </LocalResources>

7. Finally, declare the ConfigurationSettings section and a setting inside the WebRole tag:

   <ConfigurationSettings>
     <Setting name="MySetting" />
   </ConfigurationSettings>
For the worker role, we proceed as follows:
8. Create a WorkerRole section like the following:

   <WorkerRole name="WorkerHelloWorld" vmsize="Small">
   ...

9. Declare an InternalEndpoint inside a new Endpoints section:

   <Endpoints>
     <InternalEndpoint name="Internal" protocol="tcp" />
   </Endpoints>

10. In the corresponding ServiceConfiguration.cscfg file, configure the instance count as follows:

    <Role name="WebHelloWorld">
      <Instances count="1" />
    </Role>
    <Role name="WorkerHelloWorld">
      <Instances count="2" />
    </Role>

11. Provide a value for the MySetting configuration setting:

    <ConfigurationSettings>
      <Setting name="MySetting" value="Test"/>
    </ConfigurationSettings>

12. Save the file, and check the Visual Studio Error List window to resolve any errors.
How it works...
In step 2, we added the XML tag that declares a WebRole. The name of the WebRole tag must match the name of a valid web application project inside the solution that contains the cloud project itself. In the WebRole tag, we also specify the instance size, choosing among the ones in the following table (more sizes are actually available):
| Size | CPU | Memory |
| --- | --- | --- |
| ExtraSmall | Shared | 768 MB |
| Small | 1 | 1.75 GB |
| Medium | 2 | 3.5 GB |
| Large | 4 | 7 GB |
| ExtraLarge | 8 | 14 GB |
| A5 | 2 | 14 GB |
| A6 | 4 | 28 GB |
| A7 | 8 | 56 GB |
In step 3, we declared two HTTP-based endpoints on ports 80 and 8080, respectively. Think of this configuration as a load balancer firewall/forwarding rule: declaring an endpoint does not mean there is a real service under the hood that replies to requests made to it (except for the default one on port 80).
In step 4, we bound the WebHelloWorld web application to both the endpoints declared earlier. It is also possible to specify additional configurations regarding virtual directories and virtual applications.
In step 5, we simply told Azure to inject the Diagnostics module into the VM that runs our service. As said earlier, other modules can be injected here.
In step 6, we told Azure to allocate 10 GB of space in a folder located somewhere on the virtual machine. As this folder will be accessed through an API, it doesn't matter where it is located. What we do need to understand is the meaning of the cleanOnRoleRecycle attribute: if it is true, we accept that the local storage won't be retained across role recycles; if it is false, we ask Azure to preserve the data (if possible) instead.
In step 7, we declared the presence of a setting value but not the setting value itself, which is shown instead in the service configuration in step 11.
In step 8, we repeated the process for the worker role, but as it does not run IIS, we don't declare any sites. Instead, in line with the initial scenario, we declare an internal endpoint. In fact, in step 9, we said that the VM will have an open TCP port; it will be our code's responsibility to actually bind a service to this port.
Tip
In the InternalEndpoint tag, we can specify a fixed port number. In the example given earlier, no port is specified, so Azure decides which port to allocate. As with local storage, we can use the ServiceRuntime API to find out this information at runtime.
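For example, a worker role could bind a TCP listener to the Internal endpoint declared in step 9. The following minimal C# sketch (assuming a reference to the Microsoft.WindowsAzure.ServiceRuntime assembly) shows the idea:

using System.Net;
using System.Net.Sockets;
using Microsoft.WindowsAzure.ServiceRuntime;

// Ask the runtime which IP address and port were assigned to the
// "Internal" endpoint of this instance, then start listening on it.
IPEndPoint endpoint = RoleEnvironment.CurrentRoleInstance
    .InstanceEndpoints["Internal"].IPEndpoint;
var listener = new TcpListener(endpoint);
listener.Start();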
Finally, we populate the service configuration with the actual values for the parameters specified in the service definition: the instance count (for both roles) and the configuration setting value for the web role.
There's more…
Is there more to the service definition document? Yes: for example, the capability to influence the update process of our services/roles/instances. Let's introduce the concepts of fault domain and upgrade (update) domain. Microsoft Azure ensures that if two or more instances are deployed, it places them on isolated hardware to reduce, as much as possible, the chance of downtime due to a failure; this is the fault domain, as Azure creates instances in separate areas to increase availability. An upgrade domain, on the other hand, concerns how Azure manages the update flow across our instances, taking them offline one by one or group by group to reduce, again, the chance of downtime. Think of upgrade domains as groups of instances, with a default count of 5. This means that if five or fewer instances are deployed, they will be updated one by one. If there are more than five instances, the default behavior creates five groups and updates the instances of each group together.
Tip
It is not always necessary to update instances one by one, and it is often not feasible to update the system in parts. Even at the cost of some downtime, systems often need to bring new databases and new logic online that modify the actual data. Bringing new instances online one by one could lead to different versions of data/code coexisting in the running system at the same time. In this case, a simultaneous upgrade, along with the related downtime, should be taken into consideration. During development, it is advisable to keep a single instance deployed to save time during upgrades; during testing, however, it is recommended that you scale out and verify that the application behaves correctly.
We can suggest a different value to Azure for the upgrade domain behavior, up to a value of 20. The higher the value, the lower the impact of the upgrade on the entire infrastructure:
<ServiceDefinition name="WAHelloWorld" upgradeDomainCount="20"…
Tip
Consider letting Azure decide about the upgrade domains: given the nature of PaaS, a breaking change could happen to the platform in the future, and designing a workflow around an Azure constraint is not recommended. Instead, design your update process to be resilient without telling Azure anything.
Finally, instances are notified of changes to the topology and configuration as the upgrade walks the upgrade domains. This means that each instance learns about the change only when it is its turn to be changed. This is the default behavior.
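Code running in an instance can observe these notifications through the RoleEnvironment events. The following C# sketch (one possible pattern, not a required one; it assumes a reference to the Microsoft.WindowsAzure.ServiceRuntime assembly) applies setting changes in place and recycles the instance only for topology changes:

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

// Raised before the change is applied to this instance; setting e.Cancel = true
// asks Azure to recycle the instance instead of applying the change in place.
RoleEnvironment.Changing += (sender, e) =>
{
    bool onlySettings = e.Changes
        .All(change => change is RoleEnvironmentConfigurationSettingChange);
    e.Cancel = !onlySettings; // recycle only for topology (non-setting) changes
};

// Raised after the change has been applied to this instance.
RoleEnvironment.Changed += (sender, e) =>
{
    // Re-read configuration settings here if needed.
};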
See also
- There is more information on topology changes in the Handling changes to the configuration and topology of a Cloud Service recipe
- The complete reference to the Service Definition schema is available at http://msdn.microsoft.com/en-us/library/ee758711.aspx