Introduction
Cloud services are typically classified as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). In the IaaS model, the core service provided is a Virtual Machine (VM) with a guest OS. The customer is responsible for everything about the guest OS, including hardening it and adding any required software. In the PaaS model, the core service provided is a VM with a hardened guest OS and an application-hosting environment. The customer is responsible only for the service injected into this environment. In the SaaS model, a service is exposed over the Internet, and the customer merely has to access it.
Microsoft Azure Cloud Services embodies the Platform-as-a-Service (PaaS) model of cloud computing. A Cloud Service can be developed and then deployed to any of the Microsoft Azure datacenters (regions) located across the world. A service hosted in Microsoft Azure can leverage the high scalability and reduced administration benefits of the PaaS model.
In later chapters, we will see how Azure also offers an IaaS alternative, Microsoft Azure Virtual Machines, which gives customers the ability to deploy customized solutions in fully customized environments. Even so, the benefits of using PaaS rather than IaaS are strongly evident. With PaaS, we can reduce the governance of the whole system, focusing on technology and processes instead of managing the IT infrastructure, as we did in the past. PaaS also enforces the use of best practices throughout the development process, pushing us to make the right decisions in terms of design patterns and architectural choices.
From an IT architect's perspective, using PaaS is similar to trusting a black-box model. We know that the input is our code, which might be written with some environmental constraints or specific features in mind, and the output is the application running on top of instances, virtual machines, or, more generically, something that, in the case of Azure, is managed by Microsoft.
A Cloud Service provides the management and security boundaries for a set of roles. It is a management boundary because a Cloud Service is deployed, started, stopped, and deleted as a unit. A Cloud Service represents a security boundary because roles can expose input endpoints to the public internet, and they can also expose internal endpoints that are visible only to other roles in the service. We will see how roles work in the Configuring the service model for a Cloud Service recipe.
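As a rough sketch of how these boundaries are declared (the service name, role names, and ports here are hypothetical), a ServiceDefinition.csdef file along the following lines lists the endpoints each role exposes; an InputEndpoint is reachable from the public Internet, while an InternalEndpoint is visible only to other roles in the same Cloud Service:

    <!-- ServiceDefinition.csdef sketch; names and ports are illustrative only -->
    <ServiceDefinition name="ExampleService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="ExampleWebRole" vmsize="Small">
        <Endpoints>
          <!-- Reachable from the public Internet -->
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </Endpoints>
      </WebRole>
      <WorkerRole name="ExampleWorkerRole" vmsize="Small">
        <Endpoints>
          <!-- Visible only to other roles in this Cloud Service -->
          <InternalEndpoint name="InternalHttpIn" protocol="http" />
        </Endpoints>
      </WorkerRole>
    </ServiceDefinition>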
Roles are the scalability unit for a Cloud Service, as they provide vertical scaling by increasing the instance size and horizontal scaling by increasing the number of instances. Each role is deployed as one or more instances. The number of deployed instances for a role scales independently of other roles, as we will see in the Handling changes to the configuration and topology of a Cloud Service recipe. For example, one role could have two instances deployed, while another could have 200 instances. Furthermore, the compute capacity (or size) of each deployed instance is specified at the role level so that all instances of a role are the same size, though instances of different roles might have different sizes.
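To illustrate (again with hypothetical names and counts), the instance count of each role is set independently in the ServiceConfiguration.cscfg file, while the instance size is fixed per role in ServiceDefinition.csdef:

    <!-- ServiceConfiguration.cscfg sketch; names and counts are illustrative only -->
    <ServiceConfiguration serviceName="ExampleService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="ExampleWebRole">
        <!-- Two instances of the web role -->
        <Instances count="2" />
      </Role>
      <Role name="ExampleWorkerRole">
        <!-- Two hundred instances of the worker role, scaled independently -->
        <Instances count="200" />
      </Role>
    </ServiceConfiguration>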
The application functionality of a role is deployed to individual instances that provide the compute capability for the Cloud Service. Each instance is hosted on its own VM. An instance should be regarded as stateless because any changes made to it after deployment will not survive an instance failure and will be lost. Note that the word role is used frequently where the word instance should be used.
A central driver of interest in cloud computing has been the realization that horizontal scalability, by adding commodity servers, is significantly more cost effective than vertical scalability achieved through increasing the power of a single server. Just like other cloud platforms, the Microsoft Azure platform emphasizes horizontal scalability rather than vertical scalability. The ability to increase and decrease the number of deployed instances to match the workload is described as elasticity.
Microsoft Azure supports two types of roles: web roles and worker roles. The web and worker roles are central to the PaaS model of Microsoft Azure.
A web role hosts websites using the full Internet Information Services (IIS). It can host multiple websites on a single endpoint, using host headers to distinguish them, as we will see in the Hosting multiple websites in a web role recipe. However, this deployment strategy is gradually falling into disuse due to an emerging, powerful PaaS service called Microsoft Azure Web Sites. An instance of a web role runs two processes: one that runs the role's code and interacts with the Azure fabric, and one that runs IIS.
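As a sketch of this technique (site names, directories, and host headers are hypothetical), the Sites section of ServiceDefinition.csdef binds several websites to the same input endpoint, letting IIS route requests by host header:

    <!-- Fragment of ServiceDefinition.csdef; names and host headers are illustrative only -->
    <WebRole name="ExampleWebRole" vmsize="Small">
      <Sites>
        <Site name="SiteA" physicalDirectory="..\SiteA">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.sitea.example.com" />
          </Bindings>
        </Site>
        <Site name="SiteB" physicalDirectory="..\SiteB">
          <Bindings>
            <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.siteb.example.com" />
          </Bindings>
        </Site>
      </Sites>
      <Endpoints>
        <InputEndpoint name="HttpIn" protocol="http" port="80" />
      </Endpoints>
    </WebRole>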
A worker role hosts a long-running service and essentially replicates the functionality of a Windows service. In practice, the only real difference between a worker role and a web role is that a web role hosts IIS, while a worker role does not. Furthermore, a worker role can also be used to host web servers other than IIS; in fact, Microsoft suggests worker roles when we need to deploy software that is not designed to run on the default Microsoft web stack. For example, to run a Java application, the worker role should start a process for a JEE application server (such as GlassFish or JBoss) and load into it the files the application needs to run. This deployment model is usually supported by components, often called accelerators, which encapsulate the logic to install, deploy, and run the third-party stack (the Java one, for example) in a box, in a stateless fashion.
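As a minimal sketch of the long-running pattern (the namespace and class names are hypothetical, and error handling is omitted), a worker role typically overrides the Run method of RoleEntryPoint, from the Microsoft.WindowsAzure.ServiceRuntime assembly, with an endless loop:

    // Minimal worker role sketch; names are illustrative only
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    namespace ExampleWorkerRole
    {
        public class WorkerRole : RoleEntryPoint
        {
            // Run is called by the Azure runtime once the instance has started;
            // returning from it causes the instance to be recycled.
            public override void Run()
            {
                while (true)
                {
                    // Do the long-running work here: poll a queue,
                    // host a third-party server process, and so on.
                    Thread.Sleep(10000);
                }
            }
        }
    }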
Visual Studio is a central theme in this chapter as well as in the whole book. In the Setting up solutions and projects to work with Cloud Services recipe, we will see the basics of the Visual Studio integration, while in the Debugging a Cloud Service locally with either Emulator or Emulator Express and Debugging a Cloud Service remotely with Visual Studio recipes, we will look at more advanced scenarios.