Building Microservices with .NET Core 2.0

An Introduction to Microservices

The focus of this chapter is to get you acquainted with microservices. We will start with a brief introduction and then define their predecessors: the monolithic architecture and Service-Oriented Architecture (SOA). After this, we will see how microservices fare against both SOA and the monolithic architecture, and compare the advantages and disadvantages of each of these architectural styles, which will enable us to identify the right scenario for each. We will examine the problems that arise from a layered monolithic architecture and the solutions available to these problems in the monolithic world. By the end, we will be able to break down a monolithic application into a microservice architecture. We will cover the following topics in this chapter:

  • Origin of microservices
  • Discussing microservices
  • Understanding the microservice architecture
  • Advantages of microservices
  • SOA versus microservices
  • Understanding the problems with the monolithic architectural style
  • Challenges in standardizing a .NET stack
  • Overview of Azure Service Fabric

Origin of microservices

The term microservices was used for the first time in mid-2011 at a workshop of software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups within the IT industry had started discussing microservices, and by 2014, they had become popular enough to be considered a serious contender for large enterprises.

There is no official definition available for microservices. The understanding of the term is purely based on the use cases and discussions held in the past. We will discuss this in detail, but before that, let's check out the definition of microservices as per Wikipedia (https://en.wikipedia.org/wiki/Microservices), which sums it up as:

"Microservices is a specialization of and implementation approach for SOA used to build flexible, independently deployable software systems."

In 2014, James Lewis and Martin Fowler came together and provided a few real-world examples and presented microservices (refer to http://martinfowler.com/microservices/) in their own words and further detailed it as follows:

"The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies."

It is very important that you note all the attributes Lewis and Fowler defined here. They defined it as an architectural style that developers can utilize to develop a single application with the business logic spread across a set of small services, each having its own persistent storage. Also, note its attributes: each service is independently deployable, runs in its own process, communicates via lightweight mechanisms, and can be written in a different programming language.

We want to emphasize this specific definition, since it is the crux of the whole concept, and it will all come together by the time we finish this book.

Discussing microservices

We have gone through a few definitions of microservices; now, let's discuss microservices in detail.

In short, a microservice architecture removes most of the drawbacks of SOAs. It is more code-oriented (we will discuss this in detail in the coming sections) than SOA services.

Slicing your application into a number of services is neither SOA nor microservices. However, combining service design and best practices from the SOA world with a few emerging practices, such as isolated deployment, semantic versioning, lightweight services, and service discovery in polyglot programming, is microservices. We implement microservices to satisfy business features, with reduced time to market and greater flexibility.

Before we move on to understanding the architecture, let's discuss the two important architectures that led to its existence:

  • The monolithic architecture style
  • SOA

Most of us would be aware of the scenario where, during the life cycle of an enterprise application development, a suitable architectural style is decided. Then, at various stages, the initial pattern is further improved and adapted with changes that cater to various challenges, such as deployment complexity, large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, further leading up to microservices.

Monolithic architecture

The monolithic architectural style is a traditional architecture type and has been widely used in the industry. The term monolithic is not new and is borrowed from the UNIX world. In UNIX, most of the commands exist as a standalone program whose functionality is not dependent on any other program. As seen in the following image, we can have different components in the application, such as:

  • User interface: This handles all of the user interaction while responding with HTML or JSON or any other preferred data interchange format (in the case of web services).
  • Business logic: All the business rules applied to the input being received in the form of user input, events, and database exist here.
  • Database access: This houses the complete functionality for accessing the database for the purpose of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components.

Software built using this architecture is self-contained. We can imagine a single .NET assembly that contains various components, as described in the following diagram:

As the software is self-contained here, its components are interconnected and interdependent. Even a simple code change in one of the modules may break a major functionality in other modules. This would result in a scenario where we'd need to test the whole application. With the business depending critically on its enterprise application, the amount of time this takes could prove to be very costly.
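To make the coupling concrete, here is a minimal, hypothetical sketch (the class names are illustrative and not taken from the book's sample code) of how the user interface, business logic, and database access can end up in a single assembly, with each layer directly instantiating the one below it:

    namespace MonolithStore
    {
        public class ProductDataAccess
        {
            // Talks to the database directly; every caller is bound to this concrete class.
            public decimal GetPrice(int productId)
            {
                return 9.99m; // stub value for illustration
            }
        }

        public class SalesBusinessLogic
        {
            // The business layer news up its own data access dependency...
            private readonly ProductDataAccess _dataAccess = new ProductDataAccess();

            public decimal CalculateOrderTotal(int productId, int quantity)
            {
                return _dataAccess.GetPrice(productId) * quantity;
            }
        }

        public class OrderPage // stands in for the user interface layer
        {
            // ...and the UI news up the business layer, so all three ship together.
            private readonly SalesBusinessLogic _logic = new SalesBusinessLogic();

            public string Render(int productId, int quantity)
            {
                return "Total: " + _logic.CalculateOrderTotal(productId, quantity);
            }
        }
    }

A change to ProductDataAccess ripples straight through SalesBusinessLogic and OrderPage, which is exactly the interdependency described here.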

Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components should be available or the build will fail; refer to the preceding diagram, which represents a monolithic architecture as a self-contained, single .NET assembly project. However, monolithic architectures might also have multiple assemblies. This means that even though the business layer (business layer assembly), data access layer (data access layer assembly), and so on are separated, at runtime they all come together and run as one process.

Here, the user interface depends on other components, such as direct sales and inventory, in the same manner in which all the other components depend on each other. In this scenario, we will not be able to execute the project in the absence of any one of these components. The process of upgrading any one of these components will also be more complex, as we may have to consider other components that require code changes too. This results in more development time than is required for the actual change.

Deploying such an application will become another challenge. During deployment, we will have to make sure that each and every component is deployed properly; otherwise, we may end up facing a lot of issues in our production environments.

If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges:

  • Large code base: This is a scenario where the code lines outnumber the comments by a great margin. As components are interconnected, we will have to bear with a repetitive code base.
  • Too many business modules: This is in regard to modules within the same system.
  • Codebase complexity: This results in a higher chance of code-breaking due to the fix required in other modules or services.
  • Complex code deployment: You may come across minor changes that would require whole system deployment.
  • One module failure affecting the whole system: This is in regard to modules that depend on each other.
  • Scalability: This is required for the entire system and not just the modules in it.
  • Intermodule dependency: This is due to tight coupling.
  • Spiraling development time: This is due to code complexity and interdependency.
  • Inability to easily adapt to a new technology: In this case, the entire system would need to be upgraded.

As discussed earlier, if we want to reduce development time, ease deployment, and improve the maintainability of software for enterprise applications, we should avoid the traditional, monolithic architecture.

Service-Oriented Architecture

In the previous section, we discussed the monolithic architecture and its limitations. We also discussed why it does not fit our enterprise application requirements. To overcome these issues, we should take a modular approach, where we separate the components so that they come out of the self-contained, single .NET assembly.

The main difference between SOA and a monolithic architecture is not a matter of one assembly versus many. Since each service in SOA runs as a separate process, SOA scales better than a monolith.

Let's discuss the modular architecture, that is, SOA. This is a famous architectural style where enterprise applications are designed as a collection of services. These services may be RESTful or ASMX Web Services. To understand SOA in more detail, let's discuss service first.

What is a service?

Service, in this case, is an essential concept of SOA. It can be a piece of code, program, or software that provides functionality to other system components. This piece of code can interact directly with the database or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may be a website, desktop app, mobile app, or any other device app. Refer to the following diagram:

Service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients/client applications). As mentioned earlier, it can be represented by a piece of code, program, or software. Such services are exposed over the HTTP transport protocol as a general practice. However, the HTTP protocol is not a limiting factor, and a protocol can be picked as deemed fit for the scenario.
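As a hedged illustration of such a service exposed over HTTP, the sketch below shows a REST-style endpoint written with ASP.NET Core MVC (chosen here only because the book targets .NET Core 2.0; ASMX or WCF could act as the transport just as well). The ProductStore class and the route are assumptions for illustration:

    using Microsoft.AspNetCore.Mvc;

    namespace DirectSelling.Service
    {
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        // A hypothetical product store standing in for the database access code.
        public class ProductStore
        {
            public Product Find(int id)
            {
                return new Product { Id = id, Name = "Sample book", Price = 9.99m }; // stub
            }
        }

        [Route("api/[controller]")]
        public class ProductsController : Controller
        {
            private readonly ProductStore _store = new ProductStore();

            // The same endpoint serves web, desktop, and mobile clients alike.
            [HttpGet("{id}")]
            public IActionResult Get(int id)
            {
                var product = _store.Find(id);
                if (product == null)
                {
                    return NotFound();
                }
                return Ok(product);
            }
        }
    }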

In the following image, Service - direct selling is directly interacting with Database, and three different clients, namely Web, Desktop, and Mobile, are consuming the service. On the other hand, we have clients consuming Service - partner selling, which is interacting with Service - channel partners for database access.

A product selling service is a set of services that interacts with client applications and provides database access directly or through another service, in this case, Service – Channel partners. In the case of Service – direct selling, shown in the preceding image, it is providing functionality to a web store, a desktop application, and a mobile application. This service is further interacting with the database for various tasks, namely fetching and persisting data.

Normally, services interact with other systems via some communication channel, generally the HTTP protocol. These services may or may not be deployed on the same server:

In the preceding image, we have projected an SOA example scenario. There are many fine points to note here, so let's get started. First, our services can be spread across different physical machines. Here, Service - direct selling is hosted on two separate machines. It is possible that instead of the entire business functionality, only a part of it will reside on Server 1 and the remaining on Server 2. Similarly, Service - partner selling appears to have the same arrangement on Server 3 and Server 4. However, this doesn't stop Service - channel partners from being hosted as a complete set on both servers: Server 5 and Server 6.

A system that uses a service or multiple services in a fashion mentioned in the preceding diagram is called SOA. We will discuss SOA in detail in the following sections.

Let's recall the monolithic architecture. In this case, we did not use it because it restricts code reusability; it is a self-contained assembly, and all the components are interconnected and interdependent. For deployment, in that case, we would have had to deploy the complete project. Now, after selecting SOA (refer to the preceding diagram and the subsequent discussion), this architectural style gives us the benefit of code reusability and easy deployment. Let's examine this in light of the preceding diagram:

  • Reusability: Multiple clients can consume the service. The service can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. Now, OrderService can also be used by the Reporting Dashboard UI.
  • Stateless: Services do not persist any state between requests from the client; that is, the service doesn't know, nor care, whether the subsequent request has come from a client that has made a previous request.
  • Contract-based: Interfaces make the service technology-agnostic on both sides of implementation and consumption. They also serve to make it immune to code updates in the underlying functionality (a brief sketch of a contract-based, stateless service follows this list).
  • Scalability: A system can be scaled up, and services can be individually clustered with appropriate load balancing.
  • Upgradation: It is very easy to roll out new functionalities or introduce new versions of the existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality.
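Here is a brief, hedged sketch of the stateless and contract-based points above; the type names are hypothetical and not from the book's sample code. Clients, including other services, depend only on the interface, and the implementation keeps no per-client state between calls:

    using System;

    public class OrderDto
    {
        public Guid OrderId { get; set; }
        public string Status { get; set; }
    }

    public interface IOrderRecordStore
    {
        OrderDto Find(Guid orderId);
    }

    public interface IOrderService // the contract the clients depend on
    {
        OrderDto GetOrder(Guid orderId);
    }

    public class OrderService : IOrderService
    {
        private readonly IOrderRecordStore _store;

        public OrderService(IOrderRecordStore store)
        {
            _store = store;
        }

        // Nothing is remembered between calls, so any instance can serve any client.
        public OrderDto GetOrder(Guid orderId)
        {
            return _store.Find(orderId);
        }
    }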

Understanding the microservice architecture

The microservice architecture is a way to develop a single application containing a set of smaller services. These services are independent of each other and run in their own processes. An important advantage of these services is that they can be developed and deployed independently. In other words, we can say that microservices are a way to segregate our services so they can be handled completely independent of each other in the context of design, development, deployment, and upgrades.

In a monolithic application, we have a self-contained assembly of a user interface, direct sales, and inventory. In the microservice architecture, the services part of the application changes to the following depiction:

Here, the business components have been segregated into individual services. These independent services are now the smaller units that earlier existed within the self-contained assembly of the monolithic architecture. Both the direct sales and inventory services are independent of each other, with the dotted lines depicting their existence in the same ecosystem, yet not bound within a single scope. Refer to the following diagram:

From the preceding diagram, it's clear that our user interface can interact with any of the services. There is no need to involve any other service when the UI calls one of them. Both services are independent of each other, without being aware of when the other one will be called by the user interface. Each service is liable for its own operations and not for any other part of the whole system. Although much closer to the microservice architecture, the preceding visualization is not a complete picture of the intended microservice architecture.

In the microservice architecture, services are small, independent units with their own persistent stores.

Now let's bring this final change so that each service will have its own database persisting the necessary data. Refer to the following diagram:

Here, User interface is interacting with those services that have their own independent storage. In this case, when a user interface calls a service for direct sales, the business flow for direct sales is executed independently of any data or logic contained within the inventory service.
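As a minimal, hypothetical sketch of this separation (the class names and the in-memory dictionaries simply stand in for each service's own database), note that neither service references the other's data or logic:

    using System;
    using System.Collections.Generic;

    namespace DirectSales.Service
    {
        public class DirectSalesService
        {
            // Stands in for the direct sales service's own data store.
            private readonly Dictionary<Guid, decimal> _orders = new Dictionary<Guid, decimal>();

            public Guid PlaceOrder(decimal amount)
            {
                var orderId = Guid.NewGuid();
                _orders[orderId] = amount; // persisted only in this service's store
                return orderId;
            }
        }
    }

    namespace Inventory.Service
    {
        public class InventoryService
        {
            // Stands in for the inventory service's own data store; there is no
            // reference to DirectSales.Service anywhere in this namespace.
            private readonly Dictionary<string, int> _stock = new Dictionary<string, int>();

            public int GetStock(string sku)
            {
                return _stock.TryGetValue(sku, out var quantity) ? quantity : 0;
            }
        }
    }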

The solution that the use of microservices provides has a lot of benefits, as discussed next:

  • Smaller codebase: Each service is small, and therefore, easier to develop and deploy as a unit
  • Ease of independent environment: With the separation of services, all developers work independently, deploy independently, and no one is bothered about any module dependency

With the adoption of the microservice architecture, a monolithic application can harness the associated benefits; each service can be scaled easily and deployed independently.

Messaging in microservices

It is very important to carefully consider the choice of the messaging mechanism when dealing with the microservice architecture. If this one aspect is ignored, it can compromise the entire purpose of designing with the microservice architecture. In monolithic applications, this is not a concern, as the business functionality of the components gets invoked through function calls. In SOA, on the other hand, this happens via loosely coupled, web service level messaging, where services are primarily based on SOAP. In the case of microservices, the messaging mechanism should be simple and lightweight.

There are no set rules for making a choice between various frameworks or protocols for a microservice architecture. However, there are a few points worth considering here. First, it should be simple enough to implement, without adding any complexity to your system. Second, it should be lightweight enough, keeping in mind the fact that the microservice architecture could heavily rely on interservice messaging. Let's move ahead and consider our choices for both synchronous and asynchronous messaging along with the different messaging formats.

Synchronous messaging

When a timely response is expected from a service by a system, and the system waits for it until a response is received from the service, that is synchronous messaging. This is the most sought-after choice in the case of microservices. It is simple and supports HTTP request-response, thereby leaving little room to look for an alternative. This is also one of the reasons that most implementations of microservices use HTTP (API-based styles).
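As a hedged sketch (the service address and the response format are assumptions), this is roughly what such a request-response interaction looks like from the calling service. Note that "synchronous" here refers to the messaging pattern, where the caller waits for the reply before continuing, even though the C# code uses async/await:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class InventoryClient
    {
        private static readonly HttpClient Http = new HttpClient
        {
            BaseAddress = new Uri("http://inventory-service/") // hypothetical address
        };

        public async Task<bool> IsInStockAsync(string sku)
        {
            // Request-response: this flow cannot continue until the inventory
            // service answers (or the call fails/times out).
            var response = await Http.GetAsync("api/stock/" + sku);
            response.EnsureSuccessStatusCode();

            var body = await response.Content.ReadAsStringAsync();
            return bool.Parse(body);
        }
    }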

Asynchronous messaging

When a system does not expect a timely response from the service and can continue processing without blocking on that call, that is asynchronous messaging.
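Here is a hedged sketch of the asynchronous pattern, assuming a hypothetical IEventBus abstraction that stands in for whichever broker or queue you end up choosing (RabbitMQ, Azure Service Bus, and so on). The publishing service raises an event and carries on without waiting for a reply:

    using System;

    // OrderPlaced, IEventBus, and CheckoutService are hypothetical, illustrative types.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public DateTime PlacedOn { get; set; }
    }

    public interface IEventBus
    {
        void Publish<TEvent>(TEvent message); // fire-and-forget from the caller's view
    }

    public class CheckoutService
    {
        private readonly IEventBus _bus;

        public CheckoutService(IEventBus bus)
        {
            _bus = bus;
        }

        public Guid PlaceOrder()
        {
            var orderId = Guid.NewGuid();
            // ... persist the order in this service's own store ...

            // No response is awaited; the inventory or shipping service reacts
            // whenever it consumes the event.
            _bus.Publish(new OrderPlaced { OrderId = orderId, PlacedOn = DateTime.UtcNow });
            return orderId;
        }
    }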

Let's incorporate this messaging concept into our application and see how it would change and look:

Message formats

Over the past few years, working with MVC and the like has got me hooked on the JSON format. You could also consider XML. Both formats would be fine on HTTP with the API style resource. Binary message formats are also easily available in case you need to use one. We are not recommending any particular format; you can go ahead with any of the selected message formats.
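As a small example of the JSON format in this role, the sketch below uses Newtonsoft.Json, a common choice in the .NET Core 2.0 timeframe; the message shape is an assumption for illustration, and XML or a binary format could carry the same data:

    using System;
    using Newtonsoft.Json;

    public class OrderPlacedMessage
    {
        public Guid OrderId { get; set; }
        public decimal Amount { get; set; }
    }

    public static class MessageFormatDemo
    {
        public static void Main()
        {
            var message = new OrderPlacedMessage { OrderId = Guid.NewGuid(), Amount = 9.99m };

            // Serialize before putting the message on the wire...
            var json = JsonConvert.SerializeObject(message);
            Console.WriteLine(json); // {"OrderId":"...","Amount":9.99}

            // ...and deserialize on the consuming side.
            var received = JsonConvert.DeserializeObject<OrderPlacedMessage>(json);
            Console.WriteLine(received.Amount);
        }
    }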

Why should we use microservices?

Numerous patterns and architectures have been explored; some have gained popularity, while others are losing the battle for internet traffic. With each solution having its own advantages and disadvantages, it has become increasingly important for companies to respond quickly to fundamental demands, such as scalability, high performance, and easy deployment. Any single aspect failing to be fulfilled in a cost-effective manner could easily impact large businesses negatively, making the difference between a profitable and an unprofitable venture.

This is where we see microservices coming to the rescue of enterprise system architects. With the help of this architectural style, they can safeguard their designs against the problems mentioned previously. It is also important to consider the fact that this objective must be met in a cost-effective manner while respecting the time involved.

How does the microservice architecture work?

Until now, we have discussed various aspects of the microservice architecture, and we can now depict how it works; we can use any combination as per our design approach, or bet on a pattern that fits it. Here are a few points that favor the working of the microservice architecture:

  • It's programming of the modern era, where we are expected to follow all SOLID principles. It's object-oriented programming (OOP).
  • The best way is to expose the functionality to other or external components in such a way that any other programming language can use it without adhering to any specific user interface, that is, through services (web services, APIs, REST services, and so on).
  • The whole system works as per a type of collaboration that is not interconnected or interdependent.
  • Every component is liable for its own responsibilities. In other words, components are responsible for only one functionality.
  • It segregates code with a separation concept, and segregated code is reusable.

Advantages of microservices

Now let's try to quickly understand where microservices take a leap ahead of the SOA and monolithic architectures:

  • Cost effective to scale: You don't need to invest a lot to make the entire application scalable. In terms of a shopping cart, we could simply load balance the product search module and the order-processing module while leaving out less frequently used operation services, such as inventory management, order cancellation, and delivery confirmation.
  • Clear code boundaries: This action should match an organization's departmental hierarchies. With different departments sponsoring product development in large enterprises, this can be a huge advantage.
  • Easier code changes: The code is done in a way that it is not dependent on the code of other modules and is only achieving isolated functionality. If it were done right, then the chances of a change in a microservice affecting another microservice are very minimal.
  • Easy deployment: Since the entire application is more like a group of ecosystems that are isolated from each other, deployment could be done one microservice at a time, if required. Failure in any one of these would not bring the entire system down.
  • Technology adaptation: You could port a single microservice or a whole bunch of them overnight to a different technology without your users even knowing about it. And yes, hopefully you don't need us to tell you that you still have to maintain those service contracts.
  • Distributed system: This comes as implied, but a word of caution is necessary here. Make sure that your asynchronous calls are used well and synchronous ones are not really blocking the whole flow of information. Use data partitioning well. We will come to this a little later, so don't worry for now.
  • Quick market response: In a competitive world, this is a definite advantage, as users tend to quickly lose interest if you are slow to respond to new feature requests or the adoption of a new technology within your system.

SOA versus microservices

You'll get confused between microservices and SOA if you don't have a complete understanding of both. On the surface of it, microservices' features and advantages sound almost like a slender version of SOA, with many experts suggesting that there is, in fact, no need for an additional term, such as microservices, and that SOA can fulfill all the attributes laid out by microservices. However, this is not the case. There is enough difference to isolate them technologically.

The underlying communication system of SOA inherently suffers from the following problems:

  • A system developed in SOA depends upon its components, which interact with each other. So, no matter how hard you try, it will eventually face a bottleneck in the message queue.
  • Another focal point of SOA is imperative programming. With this, we lose the path to making a unit of code reusable with respect to OOP.

We all know that organizations are spending more and more on infrastructure. The bigger the enterprise is, the more complex the question of the ownership of the application being developed. With an increasing number of stakeholders, it becomes impossible to accommodate all of their ever-changing business needs. This is where microservices clearly stand apart. Although cloud development is not in the current scope of our discussion, it won't harm us to say that the scalability, modularity, and adaptability of the microservice architecture can be easily extended further with the use of cloud platforms. It's time for a change.

Prerequisites of the microservice architecture

It is important to understand the ecosystem that results from a microservice architecture implementation. The impact of microservices is not merely operational in nature. The changes in any organization opting for the microservice architecture are so profound that, if the organization is not well prepared to handle them, it won't be long before the advantages turn into disadvantages.

After the adoption of the microservice architecture is agreed upon, it would be wise to have the following prerequisites in place:

  • Deployment and QA: Requirements will become more demanding, with a quicker turnaround expected from development. This requires you to deploy and test as quickly as possible. If there are just a small number of services, it would not be a problem. However, as the number of services goes up, it could very quickly challenge the existing infrastructure and practices. For example, your QA and staging environments may no longer suffice to test the number of builds that come back from the development team.
  • A collaboration platform for the development and operations teams: As the application goes into the public domain, it won't be long before the age-old script of dev versus QA is played out again. The difference this time is that the business will be at stake. So, you need to be prepared to respond quickly, in an automated manner, to identify the root cause when required.
  • A monitoring framework: With the increasing number of microservices, you would quickly need a way to monitor the functioning and health of the entire system for any possible bottlenecks or issues. Without any means of monitoring the status of the deployed microservices and the resultant business function, it would be impossible for any team to take a proactive deployment approach.

Understanding the problems with the monolithic architectural style

In this section, we will discuss all the problems with the monolithic .NET stack-based application. In a monolithic application, the core problem is this: scaling a monolith is difficult. The resultant application ends up having a very large code base and poses challenges with regard to maintainability, deployment, and modifications.

Challenges in standardizing a .NET stack

In a monolithic application, technology stack dependency stops the introduction of the latest technologies from the outside world. The present stack poses challenges because the web service itself suffers from the following problems:

  • Security: There is no way to identify the user via web services (no clear consensus on a strong authentication scheme). Just imagine a banking application sending data containing user credentials over the wire without encryption. All airports, cafes, and public places offering free Wi-Fi could easily become hotbeds of identity theft and other cybercrimes.
  • Response time: Though the web services themselves provide some flexibility in the overall architecture, it quickly diminishes due to the high processing time taken by the service itself. So, there is nothing wrong with the web service in this scenario. It is a fact that a monolithic application involves huge code; complex logic makes the response time of a web service high, and therefore, unacceptable.
  • Throughput rate: This takes a hit, and as a result, hampers subsequent operations. A checkout operation relying on a call to the inventory web service that has to search a few million records is not a bad idea. However, when the same inventory service feeds the main product search/browse for the entire portal, it could result in a loss of business. One service call failure out of ten calls would mean a 10% lower conversion rate for the business.
  • Frequent downtime: As the web services are part of the whole monolithic ecosystem, they are bound to be down and unavailable every time there is an upgrade or an application failure. This means that any B2B dependency from the outside world on the application's web services further complicates decision-making whenever downtime has to be scheduled. This makes even the smaller upgrades of the system look expensive, which further increases the backlog of pending system upgrades.
  • Technology adoption: In order to adopt or upgrade the technology stack, the whole application would need to be upgraded, tested, and deployed, since the modules are interdependent and the entire code base of the project is affected. Consider the payment gateway module using a component that requires a compliance-related framework upgrade. The development team has no option but to upgrade the framework itself and carefully go through the entire code base to identify any code breaks preemptively. Of course, this would still not rule out a production crash, but this situation can easily make even the best architects and managers sweat and lose some sleep.

Availability: A percentage of time during which a service is operating.

Response time: The time a service takes to respond.

Throughput: The rate of processing requests.

Fault tolerance

Monolithic applications have high module interdependency, as they are tightly coupled. The different modules utilize each other's functionality in such an intermodule manner that even a single module failure brings the system down due to the cascading effect, very similar to dominoes falling. We all know that a user not getting results for a product search would be far less severe than the entire system being brought to its knees.

Decoupling using web services has been traditionally attempted at the architecture level. For database-level strategies, ACID has been relied upon for a long time. Let's examine both these points further:

  • Web services: In the current monolithic application, the customer experience is degraded for this very reason. Even as a customer tries to place an order, reasons such as the high response time of the web services, or even a complete failure of the service itself, result in a failure to place the order successfully. Not even a single failure is acceptable, as users tend to remember their last experience and assume a possible repeat. Not only does this result in the loss of a possible sale, but also the loss of future business prospects. Web service failures can cause cascading failures in the systems that rely on them.
  • ACID: ACID is the acronym for atomicity, consistency, isolation, and durability; it's an important concept in databases. It is in place, but whether it's a boon or bane is to be judged by the sum total of the combined performance. It takes care of failures at the database level, and there is no doubt that it does provide some insurance against the database errors that creep in. At the same time, every ACID operation hampers/delays operations by other components/modules. The point at which it causes more harm than benefit needs to be judged very carefully.

Scaling

Factors such as availability of different means of communication, easy access to information, and open world markets are resulting in businesses growing rapidly and diversifying at the same time. With this rapid growth of business, there is an ever-increasing need to accommodate an increasing client base. Scaling is one of the biggest challenges that any business faces while trying to cater to an increased user base.

Scalability is nothing but the capability of a system/program to handle the growth of work better. In other words, scalability is the ability of a system/program to scale.

Before starting the next section, let's discuss and understand scaling in detail, as this will be an integral part of our exercise as we work on transitioning from monolithic to microservices.

Scalability of a system is its capability to handle an increasing or increased load of work. There are two main strategies, or types of scaling, by which we can scale our application.

Vertical scaling or scale up

In vertical scaling, we analyze our existing application to find the parts of the modules that cause the application to slow down due to higher execution time. Making the code more efficient could be one strategy so that less memory is consumed. This exercise of reducing memory consumption could be for a specific module or the whole application. On the other hand, due to obvious challenges involved with this strategy, instead of changing the application, we could add more resources to our existing IT infrastructure, such as upgrading the RAM or adding more disk drives. Both these paths in vertical scaling have a limit for the extent to which they can be beneficial. After a specific point in time, the resulting benefit will plateau. It is important to keep in mind that this kind of scaling requires downtime.

Horizontal scaling or scale out

In horizontal scaling, we dig deep into modules that show a higher impact on the overall performance for factors such as high concurrency; so this will enable our application to serve our increased user base, which is now reaching the million mark. We also implement load balancing to process a greater amount of work. The option of adding more servers to the cluster does not require downtime, which is a definite advantage. Each case is different, so whether the additional costs of power, licenses, and cooling are worthwhile, and up to what point, will be evaluated on a case-by-case basis.

Scaling will be covered in detail in Chapter 8, Scaling Microservices.

Deployment challenges

The current application also has deployment challenges. It is designed as a monolithic application, and any change in the order module would require the entire application to be deployed again. This is time-consuming and the whole cycle will have to be repeated with every change. This means this could be a frequent cycle. Scaling could only be a distant dream in such a scenario.

As discussed, the current application has deployment challenges that require us to deploy the entire assembly; the modules are interdependent, and it is a single-assembly .NET application. Deploying the entire application in one go also makes it mandatory to test the entire functionality of our application. The impact of such an exercise would be huge:

  • High-risk deployment: Deploying an entire solution or application in one go poses a high risk as all modules are going to be deployed even for a single change in one of the modules.
  • Higher testing time: As we have to deploy the complete application, we will have to test the functionality of the entire application. We can't go live without testing. Due to higher interdependency, the change might cause a problem in some other module.
  • Unplanned downtime: Complete production deployment needs the code to be fully tested, and hence we need to schedule our production deployment. This is a time-consuming task that results in significant downtime. Although the downtime is planned, during this time both the business and customers will be affected due to the unavailability of the system; this could cause revenue loss to the business.
  • Production bugs: A bug-free deployment would be the dream of any project manager. However, this is far from reality and every team dreads this possibility. Monolithic applications are no different from this scenario and the resolution of production bugs is easier said than done. The situation can only become more complex with some previous bug remaining unresolved.

Organizational alignment

In a monolithic application, having a large code base is not the only challenge that you'll face. Having a large team to handle such a code base is one more problem that will affect the growth of the business and application.

  • Same goal: In a team, all the team members have the same goal, which is timely, bug-free delivery at the end of each day. However, having a large code base in the current monolithic architectural style will not be a comfortable situation for the team members. With team members being interdependent due to the interdependent code and the associated deliverables, the same effect that is experienced in the code is present in the development team as well. Here, everyone is just scrambling and struggling to get the job done. The question of helping each other out or trying something new does not arise. In short, the team is not a self-organizing team.

Roy Osherove defined three stages of a team in his book, Team Leader:

Survival phase: No time to learn.

Learning phase: Learning to solve your own problems.

Self-organizing phase: Facilitate, experiment.

  • A different perspective: The development team takes too much time on deliverables due to reasons such as feature enhancements, bug fixes, or module interdependency hindering development. The QA team is dependent upon the development team, and the development team has its own problems. The QA team is stuck once developers start working on bugs, fixes, or feature enhancements; there is no separate environment or build available for QA to proceed with their testing. This delay hampers overall delivery, and customers or end users do not get new features or fixes on time.

Modularity

In our monolithic application, where we may have an Order module, a change in the Order module affects the Stock module, and so on. It is the absence of modularity that has resulted in this condition.

This also means that we can't reuse the functionality of a module within another module. The code is not decomposed into structured pieces that could be reused to save time and effort. There is no segregation within the code modules, and hence, no common code is available.

Business is growing and its customers are growing by leaps and bounds. New or existing customers from different regions have different preferences when it comes to the use of the application. Some like to visit the website, but others prefer to use mobile apps. The system is structured in a way that we can't share the components across a website and a mobile app. This makes introducing a mobile/device app for the business a challenging task. Business is affected as in such scenarios the company loses out on customers who prefer mobile apps.

Another difficulty lies in replacing the application's components, such as third-party libraries, external systems such as payment gateways, and an external order-tracking system. It is a tedious job to replace old components in the current monolithic-style application. For example, if we consider upgrading the library of the module that consumes the external order-tracking system, the whole change would prove to be very difficult. Similarly, it would be an intricate task to replace our payment gateway with another one.

In any of the preceding scenarios, whenever we upgrade a component, we upgrade everything within the application, which calls for complete testing of the system and requires a lot of downtime. Apart from this, the upgrade could well result in production bugs, which would require you to repeat the whole cycle of development, testing, and deployment.

Big database

Our current application has a mammoth database containing a single schema with plenty of indexes. This structure poses a challenging job when it comes down to fine-tuning the performance:

  • Single schema: All the entities in the database are clubbed under a single schema, named dbo. This again hampers business due to the confusion a single schema creates between tables that belong to different modules; for example, the customer and supplier tables belong to the same schema, that is, dbo.
  • Numerous stored procedures: Currently, the database has a large number of stored procedures, which also contain a sizeable chunk of the business logic. Some of the calculations are being performed within the stored procedures. As a result, these stored procedures prove to be a baffling task to tend to when the time comes to optimize them or break them down into smaller units.

Whenever deployment is planned, the team will have to look closely at every database change. This, again, is a time-consuming exercise and many times would turn out to be even more complex than the build and deployment exercise itself.

Prerequisites for microservices

To understand better, let's take up an imaginary example of FlixOne Inc. With this example as our base, let's discuss all the concepts in detail and see what it looks like to be ready for microservices.

FlixOne is an e-commerce player (selling books) that is spread all over India. They are growing at a very fast pace and diversifying their business at the same time. They have built their existing system on the .NET Framework, and it is a traditional three-tier architecture. They have a massive database that is central to this system, and there are peripheral applications in their ecosystem. One such application, an Android app, is used by their sales and logistics team. These applications connect to the centralized data center and face performance issues. FlixOne has an in-house development team supported by external consultants. Refer to the following diagram:

The preceding diagram depicts a broader sense of our current application, which is a single .NET assembly application. Here we have the user interfaces we use for search, order, products, tracking order, and checkout. Now look at the following diagram:

The preceding diagram depicts our Shopping cart module only. The application is built with C#, MVC5, and Entity Framework, and it is a single-project application. The image is just a pictorial overview of the architecture of our application. This application is web-based and can be accessed from any browser. Initially, any request that uses the HTTP protocol lands on the user interface, which is developed using MVC5 and jQuery. For cart activities, the UI interacts with the Shopping cart module, which is nothing but a business logic layer that further talks to the database layer (written in C#); data is persisted in the database (SQL Server 2008 R2).

Functional overview of the application

Here we are going to understand the functional overview of the FlixOne bookstore application. This is only for the purpose of visualizing our application. The following is the simplified functional overview of the application until Shopping cart:

In the current application, the customer lands on the home page, where they see featured/highlighted books. They have the option to search for a book item if they do not find their favorite one. After getting the desired result, the customer can choose book items and add them to their shopping cart. Customers can verify the book items before the final checkout. As soon as the customer decides to check out, the existing cart system redirects them to an external payment gateway for the amount they need to pay for the book items in the shopping cart.

As discussed previously, our application is a monolithic application; it is structured to be developed and deployed as a single unit. This application has a large code base that is still growing. Even small updates require the whole application to be deployed at once.

Solutions for current challenges

Business is growing rapidly, so we decide to open our e-commerce website in 20 more cities; however, we are still facing challenges with the existing application and struggling to serve the existing user base properly. In this case, before we start the transition, we should make our monolithic application ready for its transition to microservices.

In the first approach, the Shopping cart module will be segregated into smaller modules; then you'll be able to make these modules interact with each other, as well as with external or third-party software:

This proposed solution is not sufficient for our existing application. Developers would be able to divide the code and reuse it; however, the internal processing of the business logic would remain the same, with no change in the way it interacts with the UI or the database. The new code would interact with the UI and the database layer, with the database still remaining the same old single database. With our database remaining undivided and the layers still tightly coupled, the problems of having to update and deploy the whole code base would remain. So, this solution is not suitable for resolving our problem.

Handling deployment problems

In the preceding section, we discussed the deployment challenges we will face with the current .NET monolithic application. In this section, let's take a look at how we can overcome these challenges by making or adapting a few practices within the same .NET stack.

With our .NET monolithic application, our deployment consists of XCOPY deployments. After dividing our modules into submodules, we can adopt different deployment strategies with their help. We can simply deploy our business logic layer or some common functionality, and we can adopt continuous integration and deployment. An XCOPY deployment is a process where all the files are simply copied to the server; it is mostly used for web projects.

Making much better monolithic applications

We understand all the challenges with our existing monolithic application. We have to serve our customers better as we grow. As we are expanding widely, we can't afford to miss the opportunity to gain new customers, and if we fail to fix any of these challenges, we will lose business opportunities as well. Let's discuss a few points for solving these problems.

Introducing dependency injection

Our modules are interdependent, so we face issues such as poor code reusability and unresolved bugs due to changes in one module, as well as deployment challenges. To tackle these issues, let's segregate our application in such a way that we are able to divide modules into submodules. We can divide our Order module in such a way that it implements an interface, and this interface can be injected through the constructor. Here is a small code snippet that shows how we can apply this in our existing monolithic application.

Here is a code example that shows our Order class, where we use the constructor injection:

    namespace FlixOne.BookStore.Common
    {
        public class Order : IOrder
        {
            private readonly IOrderRepository _orderRepository;

            public Order(IOrderRepository orderRepository)
            {
                _orderRepository = orderRepository;
            }

            public OrderModel GetBy(Guid orderId)
            {
                return _orderRepository.Get(orderId);
            }
        }
    }
Inversion of control, or IoC, is nothing but a way in which objects do not create the other objects on which they rely to do their work.

In the preceding code snippet, we abstracted our Order module in such a way that it uses the IOrder interface. Our Order class implements the IOrder interface, and with the use of inversion of control, the object it depends on is resolved automatically.

Furthermore, the code snippets of IOrderRepository and OrderRepository are as follows:

    namespace FlixOne.BookStore.Common
    {
        public interface IOrderRepository
        {
            OrderModel Get(Guid orderId);
        }
    }

    namespace FlixOne.BookStore.Common
    {
        public class OrderRepository : IOrderRepository
        {
            public OrderModel Get(Guid orderId)
            {
                //call data method here
                return new OrderModel
                {
                    OrderId = Guid.NewGuid(),
                    OrderDate = DateTime.Now,
                    OrderStatus = "In Transit"
                };
            }
        }
    }

Here we are trying to showcase how our Order module gets abstracted. In the preceding code snippet, we return default values for our order just to demonstrate the solution to the actual problem.

Finally, our presentation layer (the MVC controller) will use the available methods, as shown in the following code snippet:

    namespace FlixOne.BookStore.Controllers
    {
        public class OrderController : Controller
        {
            private readonly IOrder _order;

            public OrderController(IOrder order)
            {
                _order = order;
            }

            // GET: Order
            public ActionResult Index()
            {
                return View();
            }

            // GET: Order/Details/5
            public ActionResult Details(string id)
            {
                var orderId = Guid.Parse(id);
                var orderModel = _order.GetBy(orderId);
                return View(orderModel);
            }
        }
    }

The following is a class diagram that depicts how our interfaces and classes are associated with each other and how they expose their methods, properties, and so on:

Here again, we used constructor injection, where an IOrder instance is passed in and the Order class gets initialized; hence, all its methods are available within our controller.
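To make the resolution explicit, here is a small, hedged sketch that wires the types from the preceding snippets together by hand. In the running application, an IoC container (Unity, Autofac, or the built-in .NET Core container, for example; the chapter does not prescribe one) would do this wiring for us:

    using System;
    using FlixOne.BookStore.Common;

    public static class CompositionRootDemo
    {
        public static void Main()
        {
            // The container would normally resolve IOrderRepository and IOrder;
            // doing it by hand here simply shows what gets resolved.
            IOrder order = new Order(new OrderRepository());

            var model = order.GetBy(Guid.NewGuid());
            Console.WriteLine(model.OrderId + ": " + model.OrderStatus);
        }
    }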

By achieving this, we would overcome a few problems, such as:

  • Reduced module dependency: With the introduction of IOrder in our application, we are reducing the interdependency of the Order module. This way, if we are required to add or remove anything to/from this module, then other modules would not be affected, as IOrder is only implemented by the Order module. Let's say we want to make an enhancement to our Order module; it would not affect our Stock module. This way, we reduce module interdependency.
  • Introducing code reusability: If you are required to get the order details of any of the application modules, you can easily do so using the IOrder type.
  • Improvements in code maintainability: We have now divided our modules into submodules, or classes and interfaces. We can now structure our code in such a manner that all the types, that is, all the interfaces, are placed under one folder, and follow suit for the repositories. With this structure, it is easier for us to arrange and maintain code.
  • Unit testing: Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt test-driven development (a brief test sketch follows this list).
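As a brief, hedged illustration of the testing this enables (MSTest attributes are shown, but NUnit or xUnit would look much the same; the test and the fake are illustrative, not from the book's sample code), IOrderRepository can be replaced with a fake so that Order is tested without touching a database:

    using System;
    using FlixOne.BookStore.Common;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderTests
    {
        // A hand-written fake; a mocking library would work just as well.
        private class FakeOrderRepository : IOrderRepository
        {
            public OrderModel Get(Guid orderId)
            {
                return new OrderModel { OrderId = orderId, OrderStatus = "In Transit" };
            }
        }

        [TestMethod]
        public void GetBy_ReturnsOrderFromRepository()
        {
            var order = new Order(new FakeOrderRepository());

            var result = order.GetBy(Guid.NewGuid());

            Assert.AreEqual("In Transit", result.OrderStatus);
        }
    }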

Database refactoring

As discussed in the preceding section, our application database is huge and depends on a single schema. This huge database should be considered while refactoring. We will go about this as follows:

  • Schema correction: In general practice (though it's not required), our schema depicts our modules. As discussed in previous sections, our huge database has a single schema, that is, dbo, yet not every table should be related to dbo. There might be several modules that interact with specific tables. For example, our Order module should use a related schema name, such as Order. So, whenever we need to use a table, we can use it with its own schema instead of the general dbo schema. This does not impact any functionality related to how data is retrieved from the database, but it structures our tables in such a way that we can identify and correlate each and every table with its specific module. This exercise will be very helpful when we reach the stage of transitioning the monolithic application to microservices (a brief Entity Framework sketch follows this list). Refer to the following image:

In the preceding figure, we see how the database schema is separated logically. It is not separated physically—our Order Schema and Stock Schema belong to the same database. So here we separate the database schema logically, not physically.

We can also take the example of our users: not all users are admins or belong to a specific zone, area, or region. But our user table should be structured in such a way that we are able to identify users by the table name or the way the tables are structured. Here, we can structure our user tables on the basis of regions. We should map our user table to a region table in such a way that it does not impact or require any changes to the existing code base.

  • Moving business logic to code from stored procedures: In the current database, we have thousands of lines of stored procedures containing a lot of business logic. We should move this business logic into our code base. In our monolithic application, we are using Entity Framework; here, we can avoid the creation of stored procedures and incorporate all of our business logic into code.
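As a hedged sketch of the schema correction idea using Entity Framework (which the application already uses), the entity and schema names below are assumptions; the point is simply that each module's tables map to their own schema instead of everything living under dbo:

    using System;
    using System.Data.Entity;

    public class OrderEntity
    {
        public Guid OrderId { get; set; }
        public DateTime OrderDate { get; set; }
    }

    public class StockItemEntity
    {
        public Guid StockItemId { get; set; }
        public int Quantity { get; set; }
    }

    public class FlixOneContext : DbContext
    {
        public DbSet<OrderEntity> Orders { get; set; }
        public DbSet<StockItemEntity> StockItems { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Each module gets its own schema: [Order].[Orders] and [Stock].[StockItems]
            // rather than [dbo].[Orders] and [dbo].[StockItems].
            modelBuilder.Entity<OrderEntity>().ToTable("Orders", "Order");
            modelBuilder.Entity<StockItemEntity>().ToTable("StockItems", "Stock");
        }
    }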

Database sharding and partitioning

Between database sharding and partitioning, we can go with database sharding, where we break the database into smaller databases. These smaller databases are deployed on separate servers:

In general, database sharding is simply defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve a new level of high performance and scalability. The word sharding comes from shard: the database is divided into chunks (shards), which are spread across different servers.

The preceding diagram is a pictorial overview of how our database is divided into smaller databases. Take a look at the following diagram:

DevOps culture

In the preceding sections, we discussed the challenges and problems with the team. Here, we propose a solution: a DevOps culture, where the collaboration of the development team with the operations team is emphasized. We should set up a system where the development, QA, and infrastructure teams work in collaboration.

Automation

Infrastructure setup can be a very time-consuming job, and developers would remain idle while the infrastructure is being readied for them, taking some time before they can join the team and contribute. The process of infrastructure setup should not stop a developer from becoming productive, as that would reduce overall productivity. It should be an automated process. With the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, our developers can be ready to start work on day one of joining the team.

Chef is a DevOps tool that provides a framework to automate and manage your infrastructure.

PowerShell can be used to create our Azure machines and to set up TFS.

Testing

We are going to introduce automated testing as a solution to the problems we faced while testing during deployment. In this part of the solution, we have to divide our testing approach as follows:

  • Adopt Test-Driven Development (TDD). With TDD, a developer tests his or her own code. The test is nothing but another piece of code that validates whether the functionality is working as intended. If any functionality is found not to satisfy the test code, the corresponding unit test fails. The functionality can then be easily fixed, as you know where the problem is. In order to achieve this, we can utilize frameworks such as MSTest or other unit testing frameworks.
  • The QA team can use scripts to automate their tasks. They can create scripts by utilizing QTP or the Selenium framework.

Versioning

The current system does not have any kind of version control, so there is no way to revert a change if something goes wrong. To resolve this issue, we need to introduce a version control mechanism, in our case either TFS or Git. With version control, we can now revert our change if it is found to break functionality or introduce any unexpected behavior in the application. We also have the capability of tracking the changes being made by each team member working on the application, at an individual level. In the case of our monolithic application, we did not have this capability.

Deployment

In our application, deployment is a huge challenge. To resolve this, we introduce Continuous Integration (CI). In this process, we need to set up a CI server. With the introduction of CI, the entire process is automated: as soon as code is checked in by any team member, using version control (TFS or Git, in our case), the CI process kicks into action. It ensures that the new code is built and that the unit tests and integration tests are run. Whether the build succeeds or fails, the team is alerted to the outcome, which enables it to respond quickly to any issue.

Next, we move to continuous deployment. Here we introduce various environments, namely a development environment, a staging environment, a QA environment, and so on. Now, as soon as code is checked in by any team member, CI kicks into action: it invokes the unit/integration test suites, builds the system, and pushes it out to the various environments we have set up. This way, the turnaround time for the development team to provide a suitable build to QA is reduced to a minimum.

Identifying decomposition candidates within the monolith

We have now clearly identified the various problems that the current FlixOne application architecture and its resultant code are posing for the development team. Also, we understand which business challenges the development team is not able to take up and why.

It is not that the team is not capable enough; it is just the code. Let's move ahead and work out the best strategy for zeroing in on the various parts of the FlixOne application that we need to move to the microservice-styled architecture. Any part of a monolithic architecture that poses problems in one of the following areas is a candidate:

  • Focused deployment: Although this comes at the final stage of the whole process, it demands more respect, and rightly so; this factor shapes and defines the whole development strategy from the initial stages of identification and design. Here's an example: the business asks you to resolve two problems of equal importance, but one of them requires you to test many more associated modules, while the other can get away with limited testing. Having to choose between them on that basis is wrong; the business should not be forced into such a choice.
  • Code complexity: Having smaller teams is the key here. You should be able to assign a small development team to a change that is associated with a single piece of functionality. Small teams comprise one or two members; anything bigger needs a project manager, which is a sign that the modules are more interdependent than they should be.
  • Technology adoption: You should be able to upgrade a component to a newer version or a different technology without breaking anything. If you have to think about the components that depend on it, you have more than one candidate; even if you only have to worry about the modules that this component depends upon, you still have more than one candidate. I remember one of my clients who had a dedicated team to test whether a newly released technology was a suitable candidate for their needs. I learned later that they would actually port one of the modules and measure the performance impact, effort required, and turnaround time across the whole system. I don't agree with this approach, though.
  • High resource consumption: In my opinion, memory, CPU time, and I/O are what every module in a system competes for. If any one module consumes more of them than the others, or does so more frequently, it should be singled out. Any operation that involves higher-than-normal memory use, blocks processing for long stretches, or keeps the system waiting on I/O is a good candidate in our case.
  • Human dependency: If moving team members across modules seems like too much work, you have more candidates. Developers are smart, but if they struggle with large systems, it is not their fault. Break the system down into smaller units, and the developers will be both more comfortable and more productive.

Important microservices advantages

We have performed the first step of identifying our candidates for moving to microservices. It will be worthwhile going through the corresponding advantages that microservices provide.

Technology independence

With each of the microservices being independent of the others, we now have the power to use a different technology for each one. The Payment gateway could use the latest .NET Framework, whereas the product search could be shifted to any other programming language.

The entire application could be based on SQL Server for data storage, whereas the inventory microservice could be based on NoSQL. The flexibility is limitless.

Interdependency removal

Since we try to achieve isolated functionality within each microservice, it is easy to add new features, fix bugs, or upgrade the technology within any one of them without impacting the other microservices. You now have vertical code isolation that enables you to do all of this and still deploy just as fast.

This doesn't end here. The FlixOne team now has the ability to release a new Payment gateway option alongside the existing one. Both Payment gateways can coexist until the team and the business owners are satisfied with the reports. This is where the immense power of this architecture comes into play.
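
One simple way to let both gateways coexist is to expose them on versioned routes of the Payment microservice, so that traffic can be shifted gradually and the reports compared. The sketch below is illustrative only; the controller and request names are hypothetical and the actual gateway integrations are omitted:

    using Microsoft.AspNetCore.Mvc;

    public class PaymentRequest
    {
        public string OrderId { get; set; }
        public decimal Amount { get; set; }
    }

    // The existing gateway keeps serving traffic on its current route...
    [Route("api/v1/payments")]
    public class LegacyPaymentController : Controller
    {
        [HttpPost]
        public IActionResult Charge([FromBody] PaymentRequest request)
        {
            // Delegates to the existing gateway integration (omitted).
            return Ok(new { gateway = "legacy", request.OrderId });
        }
    }

    // ...while the new gateway is rolled out side by side on a new route.
    [Route("api/v2/payments")]
    public class NewPaymentController : Controller
    {
        [HttpPost]
        public IActionResult Charge([FromBody] PaymentRequest request)
        {
            // Delegates to the new gateway integration (omitted).
            return Ok(new { gateway = "new", request.OrderId });
        }
    }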

Alignment with business goals

It is not necessarily a forte of business owners to understand why a certain feature would be more difficult or time-consuming to implement. Their responsibility is to keep driving the business and keep growing it. The development team should become a pivot to the business goal and not a roadblock.

It is extremely important to understand that the capability to quickly respond to business needs and adapt to marketing trends is not a by-product of microservices, but their goal.

The capability to achieve this with smaller teams only makes it more suitable to business owners.

Cost benefits

Each microservice becomes an investment for the business since it can easily be consumed by other microservices without having to redo the same code again and again. Every time a microservice is reused, time is saved by avoiding the testing and deployment of that part.

The user experience is also enhanced, since downtime is either eliminated or reduced to a minimum.

Easy scalability

With vertical isolation in place and each microservice rendering a specific service to the whole system, it is easy to scale. Not only is it easier to identify scaling candidates, but the cost is also lower, because we only scale a part of the whole microservice ecosystem.

This exercise can be cost-intensive for the business; hence, prioritizing which microservice to scale first can now be a decision for the business team rather than a choice forced on the development team.

Security

Microservices can be secured as easily as a traditional layered architecture. Different configurations can be used to secure different microservices: you can keep one part of the microservice ecosystem behind firewalls and apply user-level encryption to another, and web-facing microservices can be secured differently from the rest. You can tailor this to your needs, technology choices, or budget.

Data management

It is common for the majority of monolithic applications to have a single database, and almost always there is a database architect or a designated owner responsible for its integrity and maintenance. Any application enhancement that requires a database change has to go through this person, which, in my experience, has never been an easy task. This further slows down application enhancement, scalability, and technology adoption.

Because each microservice has its own independent database, the decision-making related to changes required in the database can be easily delegated to the respective team. We don't have to worry about the impact on the rest of the system, as there will not be any.

At the same time, this separation of the database brings forth the possibility for the team to become self-organized. They can now start experimenting.

For example, the team can now consider using Azure Table storage or Azure Redis Cache to store the massive product catalog instead of the database, as is being done currently. Not only can the team experiment, their learnings can easily be replicated across the rest of the system by other teams, on a schedule that is convenient to them.
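
One way to keep such experiments cheap is to hide the storage choice behind an interface owned by the product microservice. The names below are hypothetical; the point is that SQL, Azure Table storage, and Azure Redis Cache become interchangeable implementations rather than architectural decisions:

    using System.Collections.Generic;
    using System.Threading.Tasks;

    public class Product
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // The rest of the product microservice depends only on this abstraction,
    // never on the storage technology behind it.
    public interface IProductCatalogStore
    {
        Task<Product> GetAsync(string productId);
        Task<IEnumerable<Product>> SearchAsync(string term);
        Task UpsertAsync(Product product);
    }

    // Possible implementations the team could experiment with:
    //   SqlProductCatalogStore   - the current relational storage
    //   TableProductCatalogStore - Azure Table storage
    //   RedisProductCatalogStore - Azure Redis Cache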

In fact, nothing is stopping the FlixOne team from being innovative, using several of the available technologies at the same time, comparing their real-world performance, and then making a final decision. Once each microservice has its own database, the FlixOne data layer is split along the same service boundaries.

Integrating monolithic

Whenever a choice is made to move away from the monolithic architecture in favor of the microservice-styled architecture, the time and cost of the initiative will pose some resistance. A business evaluation might rule against moving some parts of the monolithic application that do not make a business case for the transition.

It would have been a different scenario if we were developing the application from the beginning. However, this is also the power of microservices, in my opinion. A correct evaluation of the entire monolithic architecture can safely identify the monolithic parts to be ported later.

However, to ensure that these isolated parts do not cause problems for the other microservices in the future, we must take a safeguard against that risk.

The goal for these parts of the monolithic application is to make them communicate in the same way as the other microservices do. Doing this involves various patterns and makes use of the technology stack in which the monolithic application was developed.

If you use the event-driven pattern, make sure that the monolithic application can publish and consume events; this requires modifying its source code to make those actions possible. Alternatively, you can create an event proxy that publishes and consumes events on the monolith's behalf and translates them into calls the monolithic application already understands, keeping source code changes to a minimum. Ultimately, the database remains the same.
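
The sketch below shows the general shape of such an event proxy. The event types, the bus abstraction, and the ILegacyOrderFacade wrapper over the monolith are all hypothetical and would be replaced by whatever messaging technology and monolith entry points you actually have:

    using System.Threading.Tasks;

    public class OrderPlacedEvent
    {
        public string OrderId { get; set; }
        public decimal Amount { get; set; }
    }

    public class OrderShippedEvent
    {
        public string OrderId { get; set; }
    }

    // Abstraction over whichever message bus the microservices use.
    public interface IEventBus
    {
        Task PublishAsync<TEvent>(TEvent @event);
    }

    // Thin wrapper over existing monolith code, so its source barely changes.
    public interface ILegacyOrderFacade
    {
        Task RecordOrderAsync(string orderId, decimal amount);
    }

    // The proxy translates between events and the calls the monolith already understands.
    public class MonolithEventProxy
    {
        private readonly IEventBus _bus;
        private readonly ILegacyOrderFacade _monolith;

        public MonolithEventProxy(IEventBus bus, ILegacyOrderFacade monolith)
        {
            _bus = bus;
            _monolith = monolith;
        }

        // Incoming event from other microservices -> existing monolith call.
        public Task OnOrderPlacedAsync(OrderPlacedEvent e) =>
            _monolith.RecordOrderAsync(e.OrderId, e.Amount);

        // Monolith operation completed -> event published for other microservices.
        public Task NotifyOrderShippedAsync(string orderId) =>
            _bus.PublishAsync(new OrderShippedEvent { OrderId = orderId });
    }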

If you plan to use the API gateway pattern, make sure that your gateway can communicate with the monolithic application. One option is to modify the application's source code to expose RESTful services that the gateway can consume easily. Another is to create a separate microservice that exposes the monolithic application's procedures as REST services; this avoids big changes in the source code, but it does require maintaining and deploying a new component.
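
A minimal sketch of that separate facade microservice might look like the following ASP.NET Core controller. The route, the ILegacyCustomerFacade wrapper, and the customer shape are hypothetical stand-ins for whichever monolith procedures you need to expose to the gateway:

    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    // Thin wrapper over an existing monolith operation.
    public interface ILegacyCustomerFacade
    {
        Task<object> GetCustomerAsync(string id);
    }

    // A facade microservice that exposes the monolith procedure as REST,
    // so the API gateway can consume it without large changes to the monolith.
    [Route("api/customers")]
    public class CustomerFacadeController : Controller
    {
        private readonly ILegacyCustomerFacade _monolith;

        public CustomerFacadeController(ILegacyCustomerFacade monolith) => _monolith = monolith;

        [HttpGet("{id}")]
        public async Task<IActionResult> GetCustomer(string id)
        {
            var customer = await _monolith.GetCustomerAsync(id);

            if (customer == null)
            {
                return NotFound();
            }

            return Ok(customer);
        }
    }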

Overview of Azure Service Fabric

When we talk about microservices in the .NET Core world, Azure Service Fabric is the name that comes up most often. In this section, we will take a quick look at Service Fabric.

Service Fabric is a platform that makes it easy to package, deploy, and manage scalable and reliable microservices (it also supports containers, such as Docker). Complex infrastructure problems often make it difficult for developers to focus on their main responsibility. With Azure Service Fabric, developers do not need to worry about infrastructure issues.

It powers many Azure core services, such as Azure SQL Database, Cosmos DB, Microsoft Power BI, Azure Event Hubs, and Azure IoT Hub.

As per official documentation (https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview):

  • Service Fabric - any OS, any cloud: You just need to create a Service Fabric cluster, and this cluster can run on Azure (in the cloud) or on premises, on Linux or on Windows Server. Moreover, you can also create clusters on other public clouds.
  • Service Fabric - stateless and stateful microservices: With the help of Service Fabric, you can build applications as a mix of stateless and/or stateful microservices (a minimal stateless sketch follows this list). As per the official documentation (https://docs.microsoft.com/en-us/azure/service-fabric/):
"Stateless microservices (such as protocol gateways and web proxies) do not maintain a mutable state outside a request and its response from the service. Azure Cloud Services worker roles are an example of a stateless service. Stateful microservices (such as user accounts, databases, devices, shopping carts, and queues) maintain a mutable, authoritative state beyond the request and its response."
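
For orientation only, the following is a minimal sketch of a stateless Reliable Service, based on the standard Service Fabric project template. The service and type names are hypothetical, and a real service would register actual listeners (for example, an ASP.NET Core listener) instead of an empty array:

    using System;
    using System.Collections.Generic;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Services.Communication.Runtime;
    using Microsoft.ServiceFabric.Services.Runtime;

    // A stateless Reliable Service keeps no authoritative state of its own
    // beyond a request and its response.
    internal sealed class ProductSearchService : StatelessService
    {
        public ProductSearchService(StatelessServiceContext context)
            : base(context)
        {
        }

        // Communication listeners would be registered here.
        protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
        {
            return new ServiceInstanceListener[0];
        }

        // Optional background work for the service instance.
        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
            }
        }
    }

    internal static class Program
    {
        private static void Main()
        {
            // The type name must match the ServiceManifest.xml of the service project.
            ServiceRuntime.RegisterServiceAsync(
                "ProductSearchServiceType",
                context => new ProductSearchService(context)).GetAwaiter().GetResult();

            Thread.Sleep(Timeout.Infinite);
        }
    }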

There are different service fabric programming models available that are beyond the scope of this chapter. For more information, refer to: https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-choose-framework.

Summary

In this chapter, we discussed the microservice architectural style in detail, its history, and how it differs from its predecessors, the monolithic architecture and SOA. We further defined the various challenges that a monolith faces when dealing with large systems. Scalability and reusability are definite advantages that SOA provides over the monolithic architecture. We also discussed the limitations of the monolithic architecture, including scaling problems, using a real-life monolithic application as an example. The microservice architectural style resolves all of these issues by reducing code interdependency and isolating the dataset that any one microservice works upon; we utilized dependency injection and database refactoring for this. We further explored automation, CI, and deployment, which allow the development team to let the business sponsor choose which industry trends to respond to first. This results in cost benefits, better business response, timely technology adoption, effective scaling, and removal of human dependency. Finally, we discussed Azure Service Fabric and got an idea of its different programming models.

In the next chapter, we will go ahead and transition our existing application to the microservice-style architecture and put our knowledge to the test.


Key benefits

  • Start your microservices journey and get a broader perspective on microservices development using C# 7.0 with .NET Core 2.0
  • Build, deploy, and test microservices using ASP.NET Core, the ASP.NET Core API, and the Microsoft Azure cloud
  • Get the basics of reactive microservices

Description

The microservices architectural style promotes the development of complex applications as a suite of small services based on business capabilities. This book will help you identify the appropriate service boundaries within your business. We'll start by looking at what microservices are and their main characteristics. Moving forward, you will be introduced to real-life application scenarios; after assessing the current issues, we will begin the journey of transforming this application by splitting it into a suite of microservices using C# 7.0 with .NET Core 2.0. You will identify service boundaries, split the application into multiple microservices, and define service contracts. You will find out how to configure, deploy, and monitor microservices, and configure scaling to allow the application to quickly adapt to increased demand in the future. With an introduction to reactive microservices, you’ll strategically gain further value to keep your code base simple, focusing on what is more important rather than on messy asynchronous calls.

Who is this book for?

This book is for .NET Core developers who want to learn and understand the microservices architecture and implement it in their .NET Core applications. It’s ideal for developers who are completely new to microservices or just have a theoretical understanding of this architectural approach and want to gain a practical perspective in order to better manage application complexities.

What you will learn

  • Get acquainted with Microsoft Azure Service Fabric
  • Compare microservices with monolithic applications and SOA
  • Learn Docker and Azure API management
  • Define a service interface and implement APIs using ASP.NET Core 2.0
  • Integrate services using a synchronous approach via RESTful APIs with ASP.NET Core 2.0
  • Implement microservices security using Azure Active Directory, OpenID Connect, and OAuth 2.0
  • Understand the operation and scaling of microservices in .NET Core 2.0
  • Understand the key features of reactive microservices and implement them using reactive extensions
Product Details

Publication date : Dec 22, 2017
Length: 300 pages
Edition : 2nd
Language : English
ISBN-13 : 9781788393331




Table of Contents

10 Chapters
An Introduction to Microservices
Implementing Microservices
Integration Techniques and Microservices
Testing Microservices
Deploying Microservices
Securing Microservices
Monitoring Microservices
Scaling Microservices
Introduction to Reactive Microservices
Creating a Complete Microservice Solution

Customer reviews

Rating distribution: 3.2 out of 5 (15 Ratings)
5 star: 33.3%
4 star: 20%
3 star: 6.7%
2 star: 13.3%
1 star: 26.7%
Nidhi malwe, Jan 22, 2018 (5 stars)
This is an excellent book. I have completed reading half of the book on day 1 itself
Amazon Verified review

Ashutosh Upmanyu, Jan 09, 2018 (5 stars)
A book to look out, for those who want to have knowledge in microservices and SOA. Topics have been covered with real code examples and images/diagrams. A well written book to build your base in microservices.
Amazon Verified review

Dani Mago, Feb 11, 2018 (5 stars)
I'm starting to read it and I found it very simple to understand, if you are looking for a starting guide this is for you.
Amazon Verified review

Amit Seth, Feb 11, 2018 (5 stars)
A great read for someone get started on fundamentals and implementation aspects of microservices. With the world moving to single responsibility principle this is a perfect crawl walk run guide.
Amazon Verified review

Dheeraj M., Feb 11, 2018 (5 stars)
A good book if want to start microwave from scratch. .net core and c# based examples help to understand.
Amazon Verified review

FAQs

What is the delivery time and cost of print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. It is a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

The orders shipped to the countries that are listed under EU27 will not bear custom charges. They are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

A customs duty or localized taxes may be applicable on shipments to recipient countries outside the EU27. These should be paid by the customer and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (that is, where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect). Otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items.(damaged, defective or incorrect)
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team on customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal