This topic is central to the microservices ecosystem. The ability to independently deploy, upgrade, replace, and scale services is one of the great advantages that microservices have over monolithic systems.
Independently deploy, upgrade, scale, and replace
Independent deployment
Version control is a standard part of professional software development, because developers are usually working on new features and on maintenance of legacy code in the same application at the same time.
In the end, a landmark (tag) is created in the application and this landmark is sent to production; this process is called deployment. At that point, some problems may arise.
Let's consider a situation: in our news portal, one developer is working on an important feature for recommendations while another developer is working on fixing bugs. Both commit their work to the same codebase, targeting the same release. At deployment time, there is a major problem: the bug in news was not fixed successfully, which prevents the new feature from going into production. Software can be thoroughly tested, and even when great attention is given to each task, the unforeseen can still happen.
With microservices, this kind of problem is reduced drastically. Consider the same scenario again.
On our news portal, one developer has spent a few weeks working on an important feature for the Recommendations microservice, and another developer is working on a bug in the news microservice. Both commit their work for the same release, each in their respective microservice; however, we still encounter a problem during deployment. The bug in news was not fixed successfully, which prevents the new version of the news microservice from going into production, but the Recommendations microservice is fine and the new feature goes into production without any problems.
This is perhaps one of the main benefits of independent deployment. Of course, maintaining multiple machine instances adds operational complexity, but in a world of cloud computing that complexity would exist even if the application were monolithic, as the need for scalability is always real.
Later in the book, we will look at some deployment patterns; for now, we focus on reducing complexity and on the practicalities of deploying continuously.
Upgrade
There is one requirement that is mandatory for microservices to really be microservices: independent upgrades. Some rules must be followed to make upgrades as independent as possible:
- Never share libraries between microservices: This means that each microservice has a stack that is totally independent of any other microservice. Sharing libraries is an error that generates high coupling and problems at deployment time. Microservices can start with the same stack, although it is usually best to analyze the domain and the data structure to see whether the proposed stack is a good fit. However, starting with the same stack does not mean keeping the versions in lockstep. Another aspect that needs attention is to completely avoid tying business components to specific versions of a library or stack. Doing so prevents any technological evolution of the microservice and means that, for example, security patches cannot be applied.
- Strong delimitation of microservice domains: We have already talked about bounded contexts, but it is worth reiterating. The limits of a microservice are essential to determine whether the domain really is compatible with a microservices architecture, or whether what is being designed is only a monolithic part decoupled from the rest. Loose coupling is what allows a microservice to be upgraded and changed at the business level without major conflicts with the ecosystem in which it is inserted.
- Establish a client-server relationship between microservices: This means that each microservice is a separate application and has complete autonomy over itself. When a microservice depends on another microservice's business resolutions, that is a warning sign. Microservices can communicate with each other freely to ask for information, but never to solve business issues. When a microservice sends a message to another and waits for the answer in order to complete a task, there is an error. This error is critical and will result in scalability and transactional issues. When a microservice sends a message to another, there is a very strong idea behind it: asynchrony. One microservice acts as a server, performing tasks and providing information, while another acts as a client, requesting information. When the two roles, server and client, are intrinsically bound together to complete a single task, there is a design error (see the messaging sketch after these rules).
- Deploy in separate containers: This approach not only facilitates the independent structure of a microservice, but also ensures that a fault in one microservice remains isolated, without disrupting an entire microservice pool. When we speak of separate containers, we are not necessarily talking about virtualization. The containers in question can be physical; it is a matter of a company's strategy and resources, but the fact is that it is not healthy to keep more than one microservice in a container. It is important to remember that failures will occur, and when they do, it is important to be prepared to mitigate them. Grouping microservices in a single container means that a burst of load or a failure in one of them can compromise all the others in that container.
Separate containers are also essential for upgrading tools that are part of the stack but are not code we write ourselves, such as databases and caches.
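To make the asynchrony point more concrete, here is a minimal sketch of fire-and-forget messaging between microservices, assuming a local RabbitMQ broker and the pika client library; the queue name and event payload are purely illustrative. The news microservice emits an event and carries on, and the Recommendations microservice consumes it at its own pace, so neither one blocks waiting for the other to resolve a business task.

```python
import json

import pika  # RabbitMQ client; any message broker would serve the same purpose


def publish_news_created(news_id, tags):
    """Fire-and-forget: emit an event and move on without waiting for a reply."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='news_created', durable=True)
    channel.basic_publish(
        exchange='',
        routing_key='news_created',
        body=json.dumps({'news_id': news_id, 'tags': tags}),
    )
    connection.close()
```

If the Recommendations microservice is temporarily down, the messages simply wait in the queue; the news microservice is never blocked by it.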
Scale
When speaking of scalability, a common reference is the Scale Cube, discussed in the book The Art of Scalability by Martin L. Abbott and Michael T. Fisher. The concepts of the Scale Cube are fully applicable to microservices, and to web applications in general, that need to be scalable:
The concept of the Scale Cube shows that there are basically three forms of scalability: the x-axis, y-axis, and z-axis. To better understand each of these three approaches, we will use some diagrams.
The x-axis
The x-axis strategy targets horizontal scalability: the same application server is replicated n times in full, with a load balancer distributing the requests evenly, so each instance receives 1/n of the traffic.
The problem with this strategy is the pressure it puts on shared resources such as databases and caches, since the number of application instances accessing them grows as the system scales. Caches require more memory and databases need larger connection pools, which does not always translate into a benefit:
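As a rough illustration, the routing decision of an x-axis balancer can be reduced to a round-robin choice over identical instances; the addresses below are hypothetical.

```python
import itertools

# Hypothetical identical instances of the same application
INSTANCES = ['10.0.0.1:8000', '10.0.0.2:8000', '10.0.0.3:8000']
_next_instance = itertools.cycle(INSTANCES)


def pick_backend():
    """Round-robin: each instance ends up handling roughly 1/n of the requests."""
    return next(_next_instance)
```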
The y-axis
In this strategy, the balancer uses a verb or route to decide where to send each request. The following image represents the y-axis:
On its own, this principle does not seem very scalable, but it is exactly the combination of the y-axis with the x-axis that is used to scale microservices.
This combination of the y-axis and x-axis allows us to bring scalability to just part of the microservices when needed. In the following diagram, it can be seen that News is the most scaled microservice, followed by Recommendations, while Users has no major changes. This type of scalability technique greatly reduces the drawbacks of shared resource access, as each microservice manages and uses only its own resources, such as caches and databases. Take a look at the following diagram:
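A sketch of this combination, with hypothetical routes and instance names: the y-axis decides which microservice pool serves the route, and the x-axis replicates each pool independently, so News can have more instances than Users.

```python
import itertools

# Hypothetical pools: News is the most replicated, Users the least
POOLS = {
    '/news': itertools.cycle(['news-1:8000', 'news-2:8000', 'news-3:8000', 'news-4:8000']),
    '/recommendations': itertools.cycle(['recs-1:8000', 'recs-2:8000']),
    '/users': itertools.cycle(['users-1:8000']),
}


def pick_backend(path):
    """y-axis: the route selects the pool; x-axis: round-robin inside the pool."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return next(pool)
    raise ValueError('no microservice registered for this route')
```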
The z-axis
The z-axis is very similar to the x-axis in terms of scalability structure, as it distributes exactly the same code to each server. The big difference is that each server responds to a specific subset of the data. In this strategy, the aim is to provide scalability not only for the application, but also for the data it uses.
The following diagram shows a little example:
This strategy is not ruled out when it comes to microservices, but its use is a little different. The partitioning ends up being based not on verbs, but on geolocation. This means that, in a global application, the database of a microservice is distributed by region and data is served preferably from that region; that is, people who access the website from Europe will preferably see European news.
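A minimal sketch of this idea, with hypothetical region names and database endpoints: the same code runs everywhere, but each request is answered by the shard that holds the data of its region.

```python
# Hypothetical regional shards of the news database
REGION_DATABASES = {
    'eu': 'news-db.eu.example.com',
    'us': 'news-db.us.example.com',
    'asia': 'news-db.asia.example.com',
}

DEFAULT_REGION = 'us'


def database_for(region):
    """z-axis: route each request to the shard that owns its subset of the data."""
    return REGION_DATABASES.get(region, REGION_DATABASES[DEFAULT_REGION])
```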
The definition of how microservices are scaled is directly linked to business strategy. From a technical point of view, the focus is on providing a flexible software strategy that allows for the changes that will certainly occur.
Replace
Updates to microservices are normal, but sometimes these updates can compromise the health of a microservice. New features can cause the microservice to absorb responsibilities that go beyond the original idea of its domain.
A common mistake is adding new features and invalidating old ones without removing them completely. Some aspects of the development process become clearer when a new microservice is created with the intention of replacing an old one.
This process may seem more time consuming; however, it is very healthy for the application as a whole. It forces us to rethink whether old features still make sense and to remove any zombie code that is no longer relevant to the business and only consumes resources and adds complexity.
The replace process, when it comes to microservices, is very simple, as shown in the following diagram:
The concept applied to the replacement process is very simple. A balancing layer controls the traffic, directing 90% of the requests to the old microservice and 10% to the new microservice, making it possible to monitor and analyze how mature the new application is and whether any feature has been forgotten or has unwanted side effects.
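The 90/10 split can be sketched as a weighted choice in the balancing layer; the endpoints and the weight below are hypothetical, and the weight is the knob that is turned up as the new microservice proves itself.

```python
import random

# Hypothetical endpoints for the old and the replacement microservice
OLD_SERVICE = 'news-v1.internal:8000'
NEW_SERVICE = 'news-v2.internal:8000'
NEW_SERVICE_SHARE = 0.10  # raise gradually as confidence in the new version grows


def pick_backend():
    """Send a small, controlled share of the traffic to the replacement."""
    if random.random() < NEW_SERVICE_SHARE:
        return NEW_SERVICE
    return OLD_SERVICE
```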
This approach reduces the impact of errors in production and provides real data about the new application. As the new microservice gains maturity and confidence in the availability of its features grows, a higher percentage of requests is released to it. Importantly, microservices, due to their small business scope and low coupling, are easily replaceable. The total replacement of a service is a natural part of evolution, both of the business and of the stack.