There is little doubt that the concept of cloud computing is still in its infancy and, as such, the definition of what cloud computing means is still much debated. This book doesn't seek to enter into the debate, instead preferring to focus on the services and capabilities that Azure can provide.
A good (and impartial) definition is provided by the National Institute of Standards and Technology (NIST) at http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf, which defines cloud computing as follows:
"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
NIST goes on to specify five essential characteristics of a cloud computing platform, as follows:
- On-demand self-service: Resources can be provisioned by the consumer at will, in a timely and fully automated fashion. For example, a consumer should be able to spin up a virtual machine as required without raising a service ticket with the provider; the VM is provisioned in a self-serviced, automated way.
- Broad network access: Resources should be available over the Internet and other networks via many different client devices, for example, laptops, mobile phones, and tablets.
- Resource pooling: Consumers are provisioned computing resources from a common resource pool (multitenancy), from which resources can be assigned dynamically to match the highs and lows of consumer demand. Resource pools are geographically dispersed across different countries, states, and data centers.
- Rapid elasticity: Resource utilization can go up or down automatically (or manually) to provide greater or lesser computing power based on current resource requirements.
- Measured service: Feedback on current resource usage and cost is readily available, with metrics such as storage and processing costs, bandwidth usage, and the number of users. Billing is automated, transparent, consistent, and reliable (a simple cost estimate is sketched after this list).
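As an illustration of pay-as-you-go billing as a measured service, here is a minimal back-of-the-envelope sketch in PowerShell. The rates are entirely hypothetical; real Azure prices vary by region, VM size, and over time, so check the Azure pricing pages for actual figures:

```powershell
# Hypothetical pay-as-you-go rates (placeholders, not real Azure prices).
$computePerHour    = 0.10   # USD per hour while the VM is running
$storagePerGbMonth = 0.05   # USD per GB-month for the VM disk

$hoursRunning = 8 * 22      # e.g. an 8-hour working day, 22 days in the month
$diskSizeGb   = 128

# Compute is billed only while the VM runs; storage is billed regardless.
$monthlyEstimate = ($hoursRunning * $computePerHour) +
                   ($diskSizeGb * $storagePerGbMonth)
'Estimated monthly cost: {0:N2} USD' -f $monthlyEstimate   # 24.00 USD
```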
Armed with these essential characteristics, it is apparent that there is a difference between a simpler hosting platform and a more fully featured cloud platform.
With a hosting platform, a company may employ VMware ESX hypervisor software on the private network to deploy and manage VMs. If a developer, for instance, requires a shared development VM on the company domain, it is usually not possible for the developer to start a VM in a self-service model: this would no doubt cause a few eyebrows to be raised by the IT infrastructure team! There is a finite limit on the total amount of resources assigned to the farm (disk, CPU, RAM, and so on), and this needs to be carefully managed. There is also a whole raft of questions and requirements around a regime of installing software patches, to mention just one more concern. In fact, in most organizations, a job ticket would need to be raised with the service provider (or internal IT infrastructure support team) for the VM to be provisioned, which proves a time-consuming and often onerous task.
Compare this description of a hosting platform with a cloud platform. With the cloud, the developer would be able to spin up a VM on a shared public platform, with the required supporting infrastructure and specifications, on demand. In Azure, for instance, this could be achieved via the web-based Azure Portal or using a scripting language such as Windows PowerShell (a sketch follows). Costing would be pay as you go: as long as the VM is running, the developer is charged a finely tuned fee based on the resource consumption of the VM (disk usage, CPU load, network usage, and so on); if the VM is switched off, the fee covers only the storage of the VM image file. The VM can be spun up and shut down on demand, as required, by the developer. If extra resources are required (for indicative software load testing, for instance), these can be assigned to the VM, and the capacity is effectively limitless. However, it would be the responsibility of the developer to install software and OS patches, thereby maintaining and supporting the VM.
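For illustration, a quick-create along these lines is possible with the Azure PowerShell (AzureRM) module. This is a minimal sketch in which the resource group, VM name, and location are placeholders, and the simplified New-AzureRmVM parameter set assumes a reasonably recent AzureRM.Compute version:

```powershell
# Sign in and create a resource group to hold the VM (all names are placeholders).
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'DevSandbox' -Location 'West Europe'

# Quick-create a VM; Get-Credential prompts for the local admin account.
$cred = Get-Credential
New-AzureRmVM -ResourceGroupName 'DevSandbox' -Name 'DevVm01' `
    -Location 'West Europe' -Credential $cred

# Deallocate the VM when finished, so that only its storage is billed.
Stop-AzureRmVM -ResourceGroupName 'DevSandbox' -Name 'DevVm01' -Force
```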
So, we can see here that a hosting platform is missing many of the essential characteristics of a cloud platform. What many may regard as a cloud platform is in fact a hosting platform. Azure is firmly a cloud platform. A key difference is that in a cloud platform, computing power is a commodity and as such needs to be measured and easily provisioned.
A cloud platform can be deployed in one of two modes:
- Private: The platform is accessible by one organization only. Azure Stack is an example of this: a version of the software and services provided by Azure that can be installed and run in a private company data center.
- Public: The platform is available on a public network, shared with multiple different organizations in multitenancy.
Following on from this, an additional mode may be applied: hybrid. This typically describes a private or public cloud that is hooked up to one or more other, separate, cloud platforms (public or private). So this is an aggregation of at least two separate cloud platforms, each hosted on its own dedicated infrastructure, possibly providing extra capability to one or the other cloud service provider in a way that is transparent to the user.
Azure is a public cloud owned, hosted, and operated by Microsoft, available to most organizations (and countries) across the globe. However, it is true that solutions can be built on Azure such that they are a hybrid. Consider, for example, a solution that is hosted primarily in Azure but leverages services, exposed via RESTful APIs, in a company private data center running Azure Stack. In this case, the solution can be considered a hybrid because it utilizes services provided by both a public and a private cloud.
Another example hybrid solution may expose endpoints in Azure that forward requests to an endpoint in the local data center; Service Bus relays, for instance, provide this functionality. This pattern is becoming more prevalent as companies wish to leverage cloud solutions without opening the company's on-premises firewall and proxy wide, relying instead on the security mechanisms offered by Azure.
Cloud providers typically break down their service offerings into three categories, which build on top of each other, as shown in the diagram here:
A diagram showing the relationships between Cloud Platform Services
- Infrastructure as a Service (IaaS): This layer describes the base hardware and software resources required to run application software in the cloud. It provides the ability to create and configure VMs and their hardware allocation (disk, CPU cores, RAM, and so on), and to specify their base OS and configuration as required. With IaaS, the cloud vendor also supports the surrounding infrastructure, such as network configuration (virtual switches, firewall configuration, and so on).
- Platform as a Service (PaaS): This layer is built on top of the IaaS layer; in fact, the cloud provider is responsible for maintaining the IaaS layer, and this is transparent to the end user. What is presented to the software provider (the vendor) is a readily scalable, configurable, and reliable application-hosting environment for its user base. Developers use development tools provided by the cloud vendor and deploy to the hosting platform; deployments include the software developed to the specifications of the platform and the associated configuration. Microsoft Azure App Service, discussed in this book, sits in this layer.
- Software as a Service (SaaS): This is a term made famous by cloud applications such as Salesforce (https://www.salesforce.com/in/?ir=1), a provider of CRM software. In this layer, application software runs transparently in the cloud: via a web browser, for example, a user can work in a word processing program on a tablet, not knowing that the program runs on a hosting platform (PaaS), which is in turn provisioned by various load-balanced servers (IaaS).
Inevitably, it has become a great source of amusement to hijack the phrase ... as a service in humorous ways!
Jokes as a Service (JaaS)
Cloud computing - something old or something new?
It will be apparent to some that the ideas presented here touch on a great many old paradigms (indeed, computing truths, if one may be so bold: concepts that have been proven true time and time again through many implementations and, as such, are proven to be beneficial). Cloud computing is an agglomeration of a great many old ideas: for one, the concept of a shared pool of computing power invokes parallels with mainframes running the advanced time-sharing operating systems developed in the 1960s; likewise, the idea of software services that offer high cohesion provokes memories of service-oriented architecture (SOA).
The base enabler for cloud computing is the virtualization of computing resources, and in many people's minds, this puts Azure on a par with an operating system: essentially an abstraction of computing hardware, for ease of understanding and to ensure optimal use of the underlying hardware. But it is apparent that Azure is much more than an OS, since it provides services in areas that would typically be considered application software running on the OS.
Azure touches on so many aspects of computing that it is fascinating and, at the same time, overwhelming in terms of the effectively unlimited services that can be provided. It is worth taking heart, though, that core principles and characteristics exist that provide a jumpstart to learning about cloud platform services, and we hope to have introduced them in this section of the book. So, all the old learnings are still relevant and provide a pattern for the future; it is a case of something old for something new!
What is integration in the cloud?
Now that we have a good understanding of what cloud computing is and the benefits that it can offer, let's examine the heart of this book: integrating systems and applications using the cloud.
Software integration is the process of connecting disparate systems and applications together that would not normally talk to each other, allowing data and business rules to be shared to drive automated business processes that add value to the business.
Traditional on-premises integration is concerned with linking internal systems and applications together and communicating with other businesses. An enterprise application integration (EAI) product such as BizTalk Server is very good at this and provides useful features out of the box, such as error handling and retry capability. However, it requires specialist knowledge, and it is now apparent that the demands of modern IT have changed the face of integration in several ways, as listed next, requiring new approaches to integration:
- The proliferation of mobile devices means greater demands in throughput requirements and the ability to scale to cope with peak demands. The microservices pattern (discussed in detail later on in this chapter) allows granular tuning and scaling of individual services to meet modern demands.
- Consumers expect to be able to use applications 24x7, and mobile applications are easily accessible and so promote this. It is therefore becoming increasingly unacceptable to have significant system downtime for software releases and patching. This has led to the development of new platform capabilities, where individual services can be brought offline without needing to disable an entire system or large sections of a system. These concepts are not easily adaptable to traditional integration platforms, which tend to be monolithic in nature.
- The rise of SaaS solutions hosted in cloud platforms means that it makes sense to host integration platforms in the cloud too, both for performance reasons and because it brings the required skillset into the realm of the mainstream developer: the platform can handle common integration tasks on behalf of the developer, removing the need for in-depth specialist integration knowledge.
- Society is now much more demanding in terms of new functionality, and this drives rapid evolution of consumer demand. As a result, developers need to release new features rapidly to maintain a competitive edge (and retire old or unsuccessful features equally quickly). Such rapid development is harder to achieve using the more traditional platforms (which require highly trained specialists), and this has led to the rise of Integration Platform as a Service (iPaaS) solutions, which provide a hosting platform for integration solutions and reduce the time to market by handling common integration development tasks automatically. Again, the microservices architectural style is a key enabler of building integration solutions that can support rapid change and versioning.
The cloud, with its elastic scalability and the investment of cloud providers in PaaS solutions that ease and speed up the development process (such as Azure App Service), is strongly positioned to solve these new integration problems of today.
The benefits of integration using the cloud
As touched upon in the previous section, the cloud is well positioned to solve the new integration challenges, as the following list of properties demonstrates:
- Elastic scale: As mentioned, a key benefit of the cloud is that a seemingly endless supply of computing power is available that can react in an automated way to peak demand. This is crucial in today's world, with the proliferation of devices available that can produce sudden spikes in load (network bandwidth, RAM, disk, and so on).
- Granular service hosting: The availability of application hosting platforms (such as the Azure App Service iPaaS) allows applications to be hosted such that they can be scaled and released independently. In this way, application downtime can be reduced, since a complete solution does not need to be brought offline to enable new feature releases, for example. This is very relevant to the demands of today, where customers expect high availability of solutions in a 24x7 fashion, particularly for businesses with subsidiaries and customers in different time zones across the globe, matching the demands of a global market. An example of the benefits of platform hosting is a solution with services running in two data centers: by configuring the load balancer, traffic can be directed to one data center only, allowing the services in the second data center to be updated; the procedure is then reversed to update the first. In this way, consumers of the software experience zero downtime (a sketch of this traffic switch follows this list).
- Simplified development with application platform hosting: Customers demand a fast turnaround of new functionality, and sometimes IT struggles to keep up with consumer demand. The complexities of traditional EAI platforms (such as BizTalk Server) and the skillsets required are sometimes a bottleneck on new initiatives. The rise of PaaS solutions assists with reducing the time to market by reducing the development effort, leading to greater customer satisfaction and loyalty, which in turn affects the bottom line positively. iPaaS solutions do this by offering pre-canned components to developers for common integration tasks; in Azure App Service, for example, Logic Apps can make use of connectors that enable connectivity to popular SaaS solutions, such as Salesforce, SAP, and Twitter (to name a few). In this way, the need for specialist knowledge is much reduced.
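As a minimal sketch of the zero-downtime switch described in the list above, assume Azure Traffic Manager is balancing traffic between two regional deployments; the profile, endpoint, and resource group names here are hypothetical:

```powershell
# Drain traffic away from the first region so its services can be updated.
Disable-AzureRmTrafficManagerEndpoint -Name 'NorthEuropeApp' -Type AzureEndpoints `
    -ProfileName 'GlobalAppProfile' -ResourceGroupName 'IntegrationRg' -Force

# Deploy the new release to the drained region, then bring it back into rotation.
Enable-AzureRmTrafficManagerEndpoint -Name 'NorthEuropeApp' -Type AzureEndpoints `
    -ProfileName 'GlobalAppProfile' -ResourceGroupName 'IntegrationRg'

# Repeat the drain/update/enable cycle for the second region.
```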
Design patterns for cloud integration
The risk associated with the new wave of cloud technologies is that the hype and excitement surrounding them put too much focus on the technologies themselves and not enough on how they can be used as part of the integration toolbox to build robust (hence the name of this book, Robust Cloud Integration with Azure) and supportable solutions that are:
- Maintainable: It is easy to fix errors with existing components without impacting unrelated components.
- Extensible: It is straightforward to implement new features (and also to remove unused features) without affecting existing functionality.
- Supportable: In production, it is easy to locate and troubleshoot application errors. Logging and tracking are readily available and accessible to support staff.
These characteristics can be achieved through good design, which should not be forgotten.
One aim of this book is to show that integration design for the cloud is as important as ever, to prevent a proliferation of hard to maintain and fragile integration platforms that cannot be changed and expanded on in the future.
The evolution of integration design and how this applies to the cloud
The evolution of modern web-based integration could be described as a journey from simplicity (for example, a single point-to-point solution), through the increasing complexity of the service-first approach associated with SOA, leading naturally to a fully decoupled integration layer with an inference engine, using technologies such as the Enterprise Service Bus (ESB) and the simpler hub and spoke/publish and subscribe patterns of integration.
iPaaS solutions such as Azure App Service build on the service-first approach but to a more granular level (the microservice level). If a service represents a discrete function, the microservice idea goes one step further, breaking a service down into even more discrete micro functions.
The timeline here represents an example company's journey from no integration, to a complex mesh of many varied point-to-point solutions, to integration in the cloud over the course of a few years:
- Year 0 – No integration: There is no integration or sharing of data and business rules between the various systems. Data is siloed in each system, and where necessary, keyed manually into each system.
- Disadvantages: Business processes are siloed and cannot be automated end to end, requiring costly manual steps. Since business rules are siloed, there are no common business processes across the business and hence no easy way to have a single view of the business. This leads to errors and lack of visibility of business flows across the enterprise.
- Repercussions: Duplication of effort, where lack of visibility of current processes leads to the retriggering of activities already in flight; bad data leading to incorrect business decisions (affecting the bottom line; for example, let's build a widget for customer x even though they have a poor credit rating); and poor customer satisfaction. In this scenario, IT is commonly seen not as an asset to the business, adding real value and driving business opportunities (as it should), but as an operational overhead (a budget black hole, with no return on investment).
- Years 1-3 – Costly and fragile point-to-point solutions: Recognizing the need to break out from silos of data and business rules, the company exposes data using web services. Each system connects to each other, leading to a complex mesh of connections that prove hard to maintain over time (see the example diagram later). If one endpoint changes, for example, this affects all consumers, who each need to change their programs that consume the endpoint.
- Years 4-6 – Rationalization and manageability through a service-first approach using SOA principles and service decoupling using ESB: In an effort to tackle the issues of point-to-point connectivity, the company decides to build an intermediary integration layer that decouples clients and endpoints. They implement and mandate the principles of SOA where the functionality is exposed as standalone, reusable services. Progressing from this, they implement ESB where a single endpoint is exposed and a Business Rules Engine infers what service should receive the client's request. The ESB routing slip pattern is also used to chain service functionality together, leading to service reuse across different users.
- Advantages: There is endpoint decoupling through an intermediary integration layer, which means that endpoints can be changed without each consumer having to change in step.
- Disadvantages: Services are not easily scalable using traditional EAI platforms. As services are reusable, new projects make increasing use of them, but they prove difficult to scale to match demand. The processing overhead of the ESB is also unsuitable for the real-time requests that serve mobile applications. Typically, an SOA governance team and a dedicated SOA development team determine what services should become SOA services, and these are then built and maintained by the SOA team. However, these teams often become a bottleneck, unable to analyze, design, and build services quickly enough to match demand due to the workload. They also build up specialist knowledge that takes time to learn, meaning that it takes time to bring new team members up to speed to cope with the workload.
- Repercussions: Over time, services become difficult to scale and the SOA team becomes a bottleneck to project delivery, resulting in a breakdown and stagnation of the service-first approach. This damages the foreseen benefits of SOA, and projects seek other ways to deliver services, desiring a simpler approach so that nonintegration developers can assist with the service development backlog. This often leads to a proliferation of services that are nothing more than point-to-point solutions mediated through the integration layer (which prompts discussion about the benefit of an integration layer).
- Years 7-Now - Movement into the cloud: The elastic scale of the cloud and the rise of iPaaS are two key drivers for adopting the cloud while maintaining the benefits of good integration design (service decoupling and reuse). The company decides to progressively adopt cloud solutions to link with SaaS solutions such as Salesforce, using technologies such as Azure Service Bus Relay to permit access beyond the company firewall and using Logic Apps and its connectors to quickly build fine-grained services that talk to LOB SaaS systems. BizTalk Server is a key enabler as an on-premises platform that bridges connectivity between the on-premises and cloud-based LOB systems.
- Advantages: Using the BizTalk Server cloud adapters, it is possible for the company to keep its current investment in on-premises integration while enjoying the benefits of the cloud, expanding its LOB system line-up to include cloud-based solutions such as Salesforce and SAP. The benefit of using an iPaaS solution such as Azure App Service is that specialist integration knowledge is not required, and it is easier to build loosely coupled and maintainable components by virtue of the platform design. The rise of simpler REST-based APIs (compared with the complex WS-* standards) also reduces the complexity of service interfaces, lowering the barriers to integration development.
- Disadvantages: The range of iPaaS connectors is limited, and solutions need to be built to fit the requirements of the platform: creativity is therefore constrained/sacrificed for the benefit of the standard approach offered by the platform. This may make integration with nonmainstream systems more difficult or simply not able to be supported (in the worst case). iPaaS offerings are also in their infancy and subject to intense work by cloud providers; it is therefore possible for frequent software releases to cause system outages and incompatibility issues with existing production solutions.
- Repercussions: It becomes increasingly obvious to the company that there is a dichotomy in their LOB systems: on-premises and cloud based. It may become increasingly the case that more and more functionality moves to the cloud such that eventually, the whole business IT architecture sits in the cloud and the on-premises solutions are disabled, and this proves more cost-effective and secure than hosting on-premises.
Introduction to the microservices architecture
Throughout this chapter, we have talked about Azure, PaaS, and the evolution of integration. The microservices architectural pattern has also been briefly touched upon and this will be fleshed out further in the following sections, because it is a pattern underpinning many of the current PaaS solutions.
In order to maximize the benefits of the cloud, it is essential to understand what architectural principles we should follow to maximize the use of cloud elasticity and also to be aware of the different design patterns that can provide increased granularity and isolation to a solution.
We have seen so far that cloud solutions are innovative: they have changed the way businesses target potential customers today. If you have a product catering to a large customer base, you can leverage the cloud to have infrastructure and services running across multiple geographic regions in almost no time. This was not the case a few years ago, when hardware procurement and provisioning took months. The cloud has eased the process of reaching new customers and expanded the business horizon across multiple demographic boundaries.
As a business grows, the complexities around delivering services also increase. Today, businesses want to work with the SaaS approach to have continuous delivery along with continuous updates. No business wants to shut down for a patching activity or a service feature enhancement.
We have seen business requirements where new features need to be added to a product, requiring multiple updates to the hosted service within a single day. We have also seen use cases where a business needs to scale up or down based on current and future demand. How can this be done, whether the software solution is simple or complex? The answer is to follow the correct design principles while building the software.
The evolution of architectures
In this decade, we have seen software design evolve: it has changed from desktop-based applications to applications running on the Internet and on devices. In the following diagram, we have tried to summarize the stages of this evolution:
The evolution of software from the desktop to microservices architecture
From the preceding diagram, we can easily trace the changes in software design.
We started building software for desktops, and with the emergence of networking and the Internet, we started slicing software design vertically into layers; in other words, we divided the software into tiers. This is where we have all heard terms such as client-server (two-tiered) architecture, three-tiered architecture, and multi-tiered architecture. The main objective of tiered architecture was to divide software responsibility into layers.
A simple diagram for three-tiered architecture
If we look at three-tiered architecture, every layer has to perform a certain set of functions: the UI layer is responsible for user interaction, the business layer takes care of business logic, and the database layer is responsible for storing data.
With the emergence of cloud virtualization, infrastructure automation, continuous delivery, and domain-driven design, businesses started looking for SaaS-based approaches to provide services to end users. Tiered architecture has certain limitations in this regard, and that is where microservices fit into the overall concept of designing distributed systems.
Limitations of monolithic application design
- Autonomy: In the real world, the application tiers or layers are not totally independent or autonomous units. The layers always overlap with each other to some extent, and this is one of the drawbacks of monolithic application design. For example, if you look at the diagram of three-tiered architecture earlier, you cannot change the UI layer without making similar changes to the business or database layers.
- Isolation: As tiers are not autonomous, isolation is very difficult. This makes it harder for a developer to make changes to one layer, test it, and deploy it to production; they need to test each connecting layer and make the necessary modifications to the other tiers if required.
- Fault tolerance: Monolithic application designs are not always fault-tolerant. If any of the layers/tiers starts malfunctioning, it can crash the whole application; this is one of the big drawbacks we see with the monolithic approach. Taking the example of three-tiered architecture, if the business layer or database layer starts throwing exceptions or becomes corrupted, it will halt the overall processing of the application.
- Technology-driven design: With a monolithic approach, an organization is always divided in terms of technology: in every organization, you can see dedicated teams for database administration, networking, integration, UI, and so on. Dividing teams based on technology makes it harder for the business to be agile. No one holds the entire business picture, and changes are very difficult to make; for example, a database administrator does not have full knowledge of what is being implemented in the business layer or integration layer.
- Frequent changes: Technology is always business-driven, and if the business requires frequent change, then the technology should be in a position to accommodate those changes. Monolithic applications are not good candidates for frequent change: you cannot change your code base on an hourly basis and do a whole round of unit testing, regression testing, and deployment on the fly. All these changes take time, which might prevent your business from making frequent changes and updates to the service.
- Security: With a monolithic approach, a single component or framework is responsible for the overall security of the application. For example, the same security mechanism is used for both the inventory and order modules of our application. We need a way to split this: there should be different security requirements for different modules.
- Shared data: In a monolithic or tiered design, the concept of shared data is widely used. Data is stored in a database in a relational format (for example) and shared among the components involved in the service. If we make changes to one table design, it might break other components of the application, as shown in the diagram that follows: if we try to change a single table schema of the payment component, or rename a specific column in a table, it might break the code of the invoice component.
A Shared DATA model for Monolith Architecture
With a shared data approach, we have structured data, but it is not agile. Today, business is changing fast; to accommodate these fast-paced changes, we need to move away from the shared data approach.
The list does not end here: you can find plenty of content on the Internet discussing other limitations of monolithic application design.
Because of the limitations of monolithic architecture in designing distributed systems discussed earlier, James Lewis and Martin Fowler came up with an application architecture model named microservices. It has gained popularity as the basis for building distributed systems.
So what are microservices, and what are their different characteristics?
A microservice can be explained as a small, self-contained unit of functionality that serves a specific business capability.
In simple words, a microservice is an independent unit that follows the principle of single responsibility. Single responsibility means each microservice has a set of well-defined features, has a boundary, and should run in a separate process.
This is the pattern where we divide an application into component parts, each of which is an independent unit of business. The basic principle of microservices is that services should not overlap with each other or share any common data storage. In this way, each microservice provides a layer of abstraction and isolation from the other microservices in a distributed system (a minimal sketch of such a service follows).
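To make this concrete, here is a minimal sketch of a single-responsibility service written in PowerShell on top of the .NET HttpListener class. The endpoint, port, and payload are hypothetical, and a real microservice would use a proper web framework; the point is simply one small unit of functionality behind one well-defined HTTP boundary (on Windows, listening on a URL may require elevation or a URL ACL):

```powershell
# A tiny 'inventory' microservice: one responsibility, one HTTP endpoint.
$listener = [System.Net.HttpListener]::new()
$listener.Prefixes.Add('http://localhost:8080/inventory/')
$listener.Start()
Write-Host 'Inventory service listening on http://localhost:8080/inventory/'

while ($listener.IsListening) {
    $context = $listener.GetContext()   # blocks until a request arrives
    $payload = '{"sku": "W-001", "stock": 42}'
    $bytes   = [System.Text.Encoding]::UTF8.GetBytes($payload)

    $context.Response.ContentType = 'application/json'
    $context.Response.OutputStream.Write($bytes, 0, $bytes.Length)
    $context.Response.Close()
}
# Stop with Ctrl+C; $listener.Stop() releases the port.
```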
While discussing the microservices architecture, it is very important to understand the set of common characteristics that each microservice will have.
Note
A distributed system
A distributed system is a model in which components on a network communicate with each other by sending and receiving messages. The message format can be of multiple types, such as flat files, XML, and JSON. To learn more about distributed computing, refer to https://en.wikipedia.org/wiki/Distributed_computing.
The characteristics of microservices
The following points show the characteristics of the microservices architecture:
- The decentralization of data storage: The decentralization of data storage means each microservice is an independent service that does not share any data storage with the others. Communication among microservices should only happen through a common set of protocols, such as HTTP.
The decentralization of data storage
From the preceding diagram, we can easily see how microservices are independently structured and do not share any common data storage.
Microservices should be free to choose the data source of their choice: some may choose a relational database, some may choose NoSQL, and others might use queues, a filesystem, and so on. This is how we remove dependencies across multiple microservices (the short sketch that follows shows a consumer touching a service through nothing but its HTTP contract).
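Continuing with the hypothetical inventory service sketched earlier, a consumer (or a sibling microservice) interacts with it only through its HTTP contract and has no knowledge of, or access to, the data store behind it:

```powershell
# The only coupling between the services is the HTTP contract, never the database.
$item = Invoke-RestMethod -Uri 'http://localhost:8080/inventory/' -Method Get
"Current stock for $($item.sku): $($item.stock)"
```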
- Independent deployment and versioning: As microservices are self-contained processes, any change to a microservice can be versioned, tested, and deployed independently. This is one of the key features of the microservices design pattern: you just need to concentrate on the business capability of a single microservice instead of thinking about the whole application. This provides the benefit of quick application enhancement, and you can update and add features to a service on the fly.
- A service broken into logical components: When we talk about microservices characteristics, the basic principle is that your service must be broken into multiple components. Each component should be independent and should expose a well-defined capability via an interface. In this way, the component boundary becomes language-agnostic, and we can choose any language to build each microservice: one microservice can be a good candidate for .NET, whereas another can use the benefits of Java, Node.js, and so on.
A component model with microservices
Each microservice has well-defined boundaries, and together they make a complete service offering. When we think of microservices, we always ask how big a microservice should be. We would say divide services into independent chunks small enough that a small team can handle the overall responsibility for each. Another driving factor is how easily you can enhance, replace, or upgrade a component without affecting the functioning of the other services, as the short sketch below illustrates.
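For instance, assuming two independently owned components that each expose a versioned HTTP interface (the URLs and version numbers here are hypothetical), a consumer can adopt a new version of one component while the other is not even redeployed:

```powershell
# Each component versions and ships on its own schedule; consumers opt in per call.
$orderV1 = Invoke-RestMethod -Uri 'http://localhost:8080/api/v1/orders/1001'
$orderV2 = Invoke-RestMethod -Uri 'http://localhost:8080/api/v2/orders/1001'

# The shipment component is untouched by the order component's new release.
$shipment = Invoke-RestMethod -Uri 'http://localhost:8081/api/v1/shipments/1001'
```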
- Organize teams around business capability: The concept has been taken from Melvin Conway's Law:
"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations".
Further information is available at https://en.wikipedia.org/wiki/Conway%27s_law.
Conway's "Two Pizza Theory" of team distribution
Based on Conway's Law, Amazon came up with the Two Pizza Theory: it states that your teams should be small enough that each can be fed with two pizzas!
If we combine Conway's Law and Amazon's Two Pizza Theory and think in terms of the microservices pattern, we can say that, for optimal business output, it is a good choice to have teams organized around business capabilities rather than teams driven by technology. This gives service ownership to a team, and the team has full control over service changes as long as it does not break consuming services.
If we take the example presented in the preceding diagram, a team dedicated to SHIPMENT will function better than a team responsible for the whole business. Make each team the master of a specific business area instead of training it in everything: a team claiming knowledge of everything might not provide the same output as a small, dedicated business team.
- Infrastructure automation: Microservices come with a lot of complexity; with a microservices design, you need to deal with lots of moving parts, so an IT infrastructure that supports automation is most important.
Automated testing, automated deployment, and automated scale up/scale down of systems: all these are key aspects of microservices. When you design microservices, you should keep in mind where you want to run your services: on-premises or on a cloud-based architecture. With cloud-based infrastructure, you have a lot of flexibility toward automation, as the sketch below illustrates.
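As one small illustration of such automation, and assuming an Azure App Service plan managed with the AzureRM PowerShell module (the resource names are placeholders), scaling out becomes a one-line, scriptable operation rather than a hardware request:

```powershell
# Scale the hosting plan out to four instances ahead of an expected peak...
Set-AzureRmAppServicePlan -ResourceGroupName 'IntegrationRg' `
    -Name 'OrdersPlan' -NumberofWorkers 4

# ...and back down again afterwards, as part of a scheduled automation job.
Set-AzureRmAppServicePlan -ResourceGroupName 'IntegrationRg' `
    -Name 'OrdersPlan' -NumberofWorkers 1
```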
- Hide internal details: Microservices design states that you should hide internal service implementation details from the consumer. If you expose internal implementation details, you risk tight coupling between service consumers and your exposed service. You need to provide a layer of abstraction between your service and its consumers.
- Design to cope with failure: We are human, we cannot predict the future, and for that reason, no design is 100% correct.
What you build today will need to be modified in the future as per business requirements.
Keeping this in consideration, you should design your microservices for failure. Failures can be technical, hardware-related, or implementation-related; your application design should handle these exceptions gracefully.
An Example Microservices Architecture with a Faulting Component (the Payment Service)
Consider the preceding diagram: if the PAYMENT module for the enterprise is not working, then with the microservices concept, this should not halt the whole business. Other modules, such as Order, Wish List, and so on, should continue to work. In this way, each component's execution is independent of the others (a retry-and-degrade sketch follows).
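Here is a minimal sketch of this graceful degradation, reusing the hypothetical HTTP services from the earlier sketches: the caller retries the faulting payment service with a simple exponential backoff and, if the service stays down, degrades gracefully instead of halting the whole flow:

```powershell
# Retry a flaky dependency a few times, then degrade gracefully instead of crashing.
$paymentStatus = $null
foreach ($attempt in 1..3) {
    try {
        $paymentStatus = Invoke-RestMethod -Uri 'http://localhost:8082/payments/1001' `
            -ErrorAction Stop
        break   # success; stop retrying
    }
    catch {
        Write-Warning "Payment service unavailable (attempt $attempt): $_"
        Start-Sleep -Seconds ([math]::Pow(2, $attempt))   # exponential backoff
    }
}

if (-not $paymentStatus) {
    # The order and wish-list services keep working; payment is deferred for later.
    Write-Warning 'Marking the order as payment-pending and continuing.'
}
```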
- Decentralize governance and monitoring: With the microservices design pattern, you need to have smart governance in place. With microservices, you won't be dealing with a single point of failure; instead, there will be many small services that communicate with each other and come together to perform a task.
In a real distributed system, you will be dealing with multiple servers, multiple log files, and maybe multiple networks as well. So how will it go if some service starts troubling you? It will be a nightmare to monitor all the moving parts!
This is where the concept of smart monitoring and decentralized governance comes into play. If you are working with a cloud-first approach to designing a distributed system, Azure provides a lot in the smart monitoring space; throughout this book, we will discuss different monitoring techniques. As a first taste, a simple health-check sweep is sketched below.
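Even before reaching for a full monitoring product, a simple health-check sweep across the services' HTTP endpoints illustrates the idea; the endpoint list here is hypothetical:

```powershell
# Sweep a set of service health endpoints and report any stragglers.
$services = @{
    Orders    = 'http://localhost:8080/health'
    Shipments = 'http://localhost:8081/health'
    Payments  = 'http://localhost:8082/health'
}

foreach ($entry in $services.GetEnumerator()) {
    try {
        Invoke-RestMethod -Uri $entry.Value -TimeoutSec 5 -ErrorAction Stop | Out-Null
        Write-Host "$($entry.Key): healthy"
    }
    catch {
        Write-Warning "$($entry.Key): unreachable at $($entry.Value)"
    }
}
```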
In the earlier sections, we have discussed a lot about monolithic design and microservices. The following table summarizes the key differences between the two architectures:
Differences between the Monolithic and microservices Architectural Styles