Building Microservices with .NET Core

What Are Microservices?

The focus of this chapter is to get you acquainted with microservices. We will start with a brief introduction and then define the predecessors of microservices: the monolithic architecture and service-oriented architecture (SOA). We will see how microservices fare against both, and compare the advantages and disadvantages of each of these architectural styles so that we can identify the right scenario for each. We will examine the problems that arise from a layered monolithic architecture and the solutions available to them in the monolithic world. By the end, we will be able to break a monolithic application down into a microservice architecture. We will cover the following topics in this chapter:

  • Origin of microservices
  • Discussing microservices
  • Understanding the microservice architecture
  • Advantages of microservices
  • SOA versus microservices
  • Understanding problems with the monolithic architectural style
  • Challenges in standardizing the .NET stack

Origin of microservices

The term microservices was used for the first time in mid-2011 at a workshop of software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups from the IT industry started having discussions on microservices, and by 2014, it had become popular enough to be considered a serious contender for large enterprises.

There is no official introduction available for microservices. The understanding of the term is purely based on the use cases and discussions held in the past. We will discuss this in detail, but before that, let's check out the definition of microservices as per Wikipedia (https://en.wikipedia.org/wiki/Microservices), which sums it up as:

Microservices is a specialization of and implementation approach for SOA used to build flexible, independently deployable software systems.

In 2014, James Lewis and Martin Fowler came together and provided a few real-world examples and presented microservices (refer to http://martinfowler.com/microservices/) in their own words and further detailed it as follows:

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

It is very important that you see all the attributes James and Martin defined here. They defined it as an architectural style that developers could utilize to develop a single application with the business logic spread across a bunch of small services, each having its own persistent storage. Also note its attributes: each service is independently deployable, runs in its own process, communicates through lightweight mechanisms, and can be written in a different programming language.

We want to emphasize this specific definition since it is the crux of the whole concept. And as we move along, it will come together by the time we finish this book.

Discussing microservices

Until now, we have gone through a few definitions of microservices; now, let's discuss microservices in detail.

In short, a microservice architecture removes most of the drawbacks of SOA architectures. It is more code-oriented (we will discuss this in detail in the coming sections) than SOA services.

Slicing your application into a number of services is neither SOA nor microservices. Microservices emerge when you combine service design and best practices from the SOA world with a few emerging practices, such as isolated deployment, semantic versioning, lightweight services, service discovery, and polyglot programming. We implement microservices to satisfy business features with a reduced time to market and greater flexibility.

Before we move on to understand the architecture, let's discuss the two important architectures that have led to its existence:

  • The monolithic architecture style
  • SOA

Most of us are aware of the scenario where, during the life cycle of enterprise application development, a suitable architectural style is decided on. Then, at various stages, the initial pattern is improved and adapted to cater to challenges such as deployment complexity, a large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, which in turn led to microservices.

Monolithic architecture

The monolithic architectural style is a traditional architecture type and has been widely used in the industry. The term monolithic is not new and is borrowed from the Unix world. In Unix, most of the commands exist as a standalone program whose functionality is not dependent on any other program. As seen in the succeeding image, we can have different components in the application such as:

  • User interface: This handles all of the user interaction while responding with HTML or JSON or any other preferred data interchange format (in the case of web services).
  • Business logic: All the business rules applied to the input being received in the form of user input, events, and database exist here.
  • Database access: This houses the complete functionality for accessing the database for the purpose of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components.

Software built using this architecture is self-contained. We can imagine a single .NET assembly that contains various components, as described in the following image:

As the software is self-contained here, its components are interconnected and interdependent. Even a simple code change in one of the modules may break major functionality in other modules, which means the whole application has to be retested. With the business depending heavily on its enterprise application, the time this takes can prove very costly.

Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components should be available or the build will fail. Refer to the preceding image, which represents a monolithic architecture as a self-contained, single .NET assembly project. However, a monolithic architecture might also span multiple assemblies. This means that even though the business layer assembly, data access layer assembly, and so on are separate, at run time they all come together and run as one process.

The user interface depends on other components, such as direct sales and inventory, in the same way that all the other components depend upon each other. In this scenario, we will not be able to execute the project in the absence of any one of these components. Upgrading any one of them is also more complex, as we may have to account for other components that require code changes too. This results in more development time than the actual change warrants.

Deploying such an application will become another challenge. During deployment, we will have to make sure that each and every component is deployed properly; otherwise, we may end up facing a lot of issues in our production environments.

If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges:

  • Large code base: This is a scenario where the code lines outnumber the comments by a great margin. As components are interconnected, we will have to bear with a repetitive code base.
  • Too many business modules: This is in regard to modules within the same system.
  • Code base complexity: This results in a higher chance of code breaking due to the fix required in other modules or services.
  • Complex code deployment: You may come across minor changes that would require whole system deployment.
  • One module failure affecting the whole system: This is in regard to modules that depend on each other.
  • Scalability: This is required for the entire system and not just the modules in it.
  • Intermodule dependency: This is due to tight coupling.
  • Spiraling development time: This is due to code complexity and interdependency.
  • Inability to easily adapt to a new technology: In this case, the entire system would need to be upgraded.

As discussed earlier, if we want to reduce development time, ease deployment, and improve the maintainability of software for enterprise applications, we should avoid the traditional or monolithic architecture.

Service-oriented architecture

In the previous section, we discussed the monolithic architecture and its limitations. We also discussed why it does not fit our enterprise application requirements. To overcome these issues, we should go with a modular approach where we can separate the components so that they come out of the self-contained, single .NET assembly.

The main difference between SOA and the monolithic architecture is not one assembly versus many. Because each service in SOA runs as a separate process, SOA scales better than a monolith.

Let's discuss the modular architecture, that is, SOA. This is a well-known architectural style in which enterprise applications are designed as a collection of services. These services may be RESTful or ASMX web services. To understand SOA in more detail, let's discuss services first.

What is a service?

A service, in this case, is an essential concept of SOA. It can be a piece of code, a program, or software that provides functionality to other system components. This piece of code can interact directly with the database or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may be a website, a desktop app, a mobile app, or any other device app. Refer to the following diagram:

A service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients or client applications). As mentioned earlier, it can be a piece of code, a program, or software. Such services are generally exposed over the HTTP transport protocol, although HTTP is not a limiting factor and any protocol can be picked as deemed fit for the scenario.
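
To make this concrete, here is a minimal sketch of how such a service might be exposed over HTTP in the .NET world, in the style of an ASP.NET Web API controller. The ProductController name and the Product type are assumptions made purely for illustration and are not part of the FlixOne code discussed later.

    using System.Collections.Generic;
    using System.Web.Http;

    namespace FlixOne.Services.Sketch
    {
        // Hypothetical data contract returned to clients (web, desktop, or mobile).
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // A service is simply functionality exposed over HTTP for other systems to consume.
        public class ProductController : ApiController
        {
            // GET api/product
            public IEnumerable<Product> Get()
            {
                // In a real service, this data would come from a database or another service.
                return new List<Product>
                {
                    new Product { Id = 1, Name = "Sample book" }
                };
            }
        }
    }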

In the following image, Service – direct selling is directly interacting with Database, and three different clients, namely Web, Desktop, and Mobile, are consuming the service. On the other hand, we have clients consuming Service – partner selling, which is interacting with Service – channel partners for database access.

A product selling service is a set of services that interacts with client applications and provides database access directly or through another service, in this case, Service – Channel partner. In the case of Service – direct selling, shown in the preceding example, it is providing some functionality to a Web Store, a desktop application, and a mobile application. This service is further interacting with the database for various tasks, namely fetching data, persisting data, and so on.

Normally, services interact with other systems via some communication channel, generally the HTTP protocol. These services may or may not be deployed on the same or single servers.

In the preceding image, we have projected an SOA example scenario. There are many fine points to note here, so let's get started. Firstly, our services can be spread across different physical machines. Here, Service – direct selling is hosted on two separate machines. It is possible that instead of the entire business functionality, only a part of it resides on Server 1 and the rest on Server 2. Similarly, Service – partner selling appears to have the same arrangement on Server 3 and Server 4. However, this doesn't stop Service – channel partners from being hosted as a complete set on both servers: Server 5 and Server 6.

A system that uses a service or multiple services in a fashion mentioned in the preceding figure is called an SOA. We will discuss SOA in detail in the following sections.

Let's recall the monolithic architecture. We did not go with it because it restricts code reusability; it is a self-contained assembly, and all the components are interconnected and interdependent. For deployment, we would have to deploy the complete project. Having selected SOA instead (refer to the preceding image and subsequent discussion), we now have the benefit of code reusability and easier deployment. Let's examine this in light of the preceding figure:

  1. Reusability: Multiple clients can consume the service. The service can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. Now, OrderService can also be used by the Reporting Dashboard UI.
  2. Stateless: Services do not persist any state between requests from the client, that is, the service doesn't know, nor care, that the subsequent request has come from the client that has/hasn't made the previous request.
  3. Contract-based: Interfaces make the service technology-agnostic on both sides of implementation and consumption. They also make the consumer immune to code updates in the underlying functionality (see the sketch after this list).
  4. Scalability: A system can be scaled up, and services can be individually clustered with appropriate load balancing.
  5. Upgradability: It is very easy to roll out new functionality or introduce new versions of existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality.
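
As a rough illustration of the contract-based point above, a sketch of a .NET service contract might look as follows; with WCF, for instance, the contract is expressed as an interface decorated with attributes. The IOrderService name and its single operation are assumptions made for illustration only.

    using System;
    using System.ServiceModel;

    namespace FlixOne.Contracts.Sketch
    {
        // The contract is the interface: consumers bind to this, not to the implementation,
        // so the underlying code can change without breaking clients.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string GetOrderStatus(Guid orderId);
        }

        // One possible implementation; it can be versioned or replaced behind the same contract.
        public class OrderService : IOrderService
        {
            public string GetOrderStatus(Guid orderId)
            {
                // A real implementation would look the order up in a data store.
                return "In Transit";
            }
        }
    }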

Understanding the microservice architecture

The microservice architecture is a way to develop a single application containing a set of smaller services. These services are independent of each other and run in their own processes. An important advantage of these services is that they can be developed and deployed independently. In other words, we can say that microservices are a way to segregate our services so they can be handled completely independent of each other in the context of design, development, deployment, and upgrades.

In a monolithic application, we have a self-contained assembly of user interface, direct sale, and inventory. In the microservice architecture, the services part of the application changes to the following depiction:

Here, business components have been segregated into individual services. These independent services now are the smaller units that existed earlier within the self-contained assembly, in the monolithic architecture. Both direct sales and inventory services are independent of each other, with the dotted lines depicting their existence in the same ecosystem yet not bound within a single scope. Refer to the following diagram:

From the preceding image, it's clear that our user interface can interact with any of the services. There is no need to touch any other service when the UI calls one of them. Both services are independent of each other and unaware of when the other is called by the user interface. Each service is responsible for its own operations and not for any other part of the whole system. Although much closer to the microservice architecture, the preceding visualization is not yet a complete picture of the intended microservice architecture.

In the microservice architecture, services are small, independent units with their own persistent stores.

Now let's bring this final change so that each service will have its own database persisting the necessary data:

Here, User interface is interacting with those services that have their own independent storage. In this case, when a user interface calls a service for direct sales, the business flow for direct sales is executed independently of any data or logic contained within the inventory service.

The solution that the use of microservices provides has a lot of likely benefits, as discussed next:

  • Smaller codebase: Each service is small, therefore, easier to develop and deploy as a unit
  • Ease of independent environment: With the separation of services, all developers work independently, deploy independently, and no one is bothered about any module dependency

With the adoption of the microservice architecture, a monolithic application can now harness the associated benefits: it can be scaled easily, and each service can be deployed independently.

Messaging in microservices

It is very important to carefully consider the choice of messaging mechanism when dealing with the microservice architecture. If this aspect is ignored, it can compromise the entire purpose of designing with the microservice architecture. In monolithic applications, this is not a concern, as the business functionality of the components is invoked through function calls. In SOA, on the other hand, this happens via loosely coupled web-service-level messaging, where services are primarily based on SOAP. In the case of microservices, the messaging mechanism should be simple and lightweight.

There are no set rules for making a choice between various frameworks or protocols for a microservice architecture. However, there are a few points worth consideration here. Firstly, it should be simple enough to implement, without adding any complexity to your system. Secondly, it should be lightweight enough, keeping in mind the fact that the microservice architecture could heavily rely on interservice messaging. Let's move ahead and consider our choices for both synchronous and asynchronous messaging along with the different messaging formats.

Synchronous messaging

When a timely response is expected from a service and the system waits on it until the response is received, it is synchronous messaging. REST is the most sought-after choice in the case of microservices. It is simple and supports the HTTP request-response model, thereby leaving little room to look for an alternative. This is also one of the reasons that most implementations of microservices use HTTP (API-based styles).
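
As a small sketch of synchronous messaging between services, the caller below awaits the HTTP response and does not proceed until the called service has answered. The service URL and the use of HttpClient here are illustrative assumptions, not part of the FlixOne code.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class InventoryClientSketch
    {
        private static readonly HttpClient Client = new HttpClient();

        // Synchronous messaging: the caller does not continue its business flow
        // until the inventory service has responded (or the call fails/times out).
        public async Task<string> GetStockAsync(string productId)
        {
            // Hypothetical endpoint of the inventory microservice.
            var response = await Client.GetAsync(
                $"http://inventory-service/api/stock/{productId}");

            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }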

Asynchronous messaging

When a system is not immediately expecting a timely response from the service and the system can continue processing without blocking on that call, it is asynchronous messaging.
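
The following is a minimal sketch of asynchronous messaging, assuming a hypothetical IMessageBus abstraction (in practice this would be backed by a broker such as RabbitMQ or Azure Service Bus). The caller publishes an event and moves on without waiting for any consumer.

    using System;
    using System.Threading.Tasks;

    // Hypothetical abstraction over a message broker; not part of any specific library.
    public interface IMessageBus
    {
        Task PublishAsync<T>(T message);
    }

    // An event describing something that already happened; consumers react to it later.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public DateTime PlacedOn { get; set; }
    }

    public class OrderServiceSketch
    {
        private readonly IMessageBus _bus;

        public OrderServiceSketch(IMessageBus bus)
        {
            _bus = bus;
        }

        public async Task PlaceOrderAsync(Guid orderId)
        {
            // ... persist the order in this service's own store ...

            // Asynchronous messaging: publish and continue; the inventory or
            // shipping service will handle the event whenever it is ready.
            await _bus.PublishAsync(new OrderPlaced
            {
                OrderId = orderId,
                PlacedOn = DateTime.UtcNow
            });
        }
    }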

Let's incorporate this messaging concept into our application and see how it would change and look:

Message formats

Over the past many years, working with MVC and the like has got us hooked on the JSON format. You could also consider XML. Both formats would do fine over HTTP with API-style resources. Binary message formats are also readily available in case you need one. We are not recommending any particular format; you can go ahead with whichever message format suits you.
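
As a quick illustration of a JSON message format, the snippet below serializes a hypothetical order message with Json.NET (Newtonsoft.Json), a common choice in the .NET ecosystem; any JSON serializer would do.

    using System;
    using Newtonsoft.Json;

    public class OrderMessage
    {
        public Guid OrderId { get; set; }
        public string Status { get; set; }
    }

    public static class MessageFormatSketch
    {
        public static string ToJson()
        {
            var message = new OrderMessage
            {
                OrderId = Guid.NewGuid(),
                Status = "In Transit"
            };

            // Produces a compact JSON payload such as:
            // {"OrderId":"...","Status":"In Transit"}
            return JsonConvert.SerializeObject(message);
        }
    }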

Why should we use microservices?

Numerous patterns and architectures have been explored; some have gained popularity, while others are losing the battle for Internet traffic. With each solution having its own advantages and disadvantages, it has become increasingly important for companies to respond quickly to fundamental demands, such as scalability, high performance, and easy deployment. Any single aspect not fulfilled in a cost-effective manner could easily impact a large business negatively, potentially making the difference between a profitable and an unprofitable venture.

This is where we see microservices coming to the rescue of enterprise system architects. They can safeguard their designs against the problems listed previously with the help of this architectural style. It is also important that this objective is met in a cost-effective manner while respecting the time involved.

How does the microservice architecture work?

Until now, we have discussed various aspects of the microservice architecture, and we can now depict how it works; we can use any combination that suits our design approach, or bet on a pattern that fits it. Here are a few points that favor the working of the microservice architecture:

  • It's programming of the modern era, where we are expected to follow all SOLID principles. It's object-oriented programming (OOP).
  • It is best to expose functionality to other or external components as services (web services, APIs, REST services, and so on), so that any other programming language can use the functionality without adhering to any specific user interface.
  • The whole system works as per a type of collaboration that is not interconnected or interdependent.
  • Every component is liable for its own responsibilities. In other words, components are responsible for only one functionality.
  • It segregates code with a separation concept, and segregated code is reusable.

Advantages of microservices

Now let's try to quickly understand where microservices takes a leap ahead of the SOA and monolithic architectures:

  • Cost effective to scale: You don't need to invest a lot to make the entire application scalable. In terms of a Shopping cart, we could simply load balance the product search module and the order-processing module while leaving out less frequently used operation services, such as inventory management, order cancellation, and delivery confirmation.
  • Clear code boundaries: This action should match an organization's departmental hierarchies. With different departments sponsoring product development in large enterprises, this can be a huge advantage.
  • Easier code changes: The code is done in a way that it is not dependent on the code of other modules and is only achieving isolated functionality. If it were done right, then the chances of a change in a microservice affecting another microservice are very minimal.
  • Easy deployment: Since the entire application is more like a group of ecosystems that are isolated from each other, deployment could be done one microservice at a time, if required. Failure in any one of these would not bring the entire system down.
  • Technology adaptation: You could port a single microservice or a whole bunch of them to a different technology overnight without your users even knowing about it. And yes, you still need to maintain those service contracts, of course.
  • Distributed system: This comes implied, but a word of caution is necessary here. Make sure that your asynchronous calls are used well and synchronous ones are not really blocking the whole flow of information. Use data partitioning well. We will come to this little later, so don't worry for now.
  • Quick market response: In a competitive world, this is a definite advantage, as users tend to quickly lose interest if you are slow to respond to new feature requests or to adopt a new technology within your system.

SOA versus microservices

You'll get confused between microservices and SOA if you don't have a complete understanding of both. On the surface of it, microservices' features and advantages sound almost like a slender version of SOA, with many experts suggesting that there is, in fact, no need for an additional term, such as microservices, and that SOA can fulfill all the attributes laid out by microservices. However, this is not the case. There is enough difference to isolate them miles apart technologically.

The underlying communication system of SOA inherently suffers from the following problems:

  • A system developed in SOA depends upon its components interacting with each other. So, no matter how hard you try, the message queue eventually becomes a bottleneck.
  • Another focal point of SOA is imperative programming. With this, we lose the path to making a unit of code reusable with respect to OOP.

We all know that organizations are spending more and more on infrastructure. The bigger the enterprise, the more complex the question of ownership of the application being developed. With an increasing number of stakeholders, it becomes impossible to accommodate all of their ever-changing business needs. This is where microservices clearly stand apart. Although cloud development is not in the current scope of our discussion, it is fair to say that the scalability, modularity, and adaptability of the microservice architecture can be extended further with the use of cloud platforms. It is time for a change.

Prerequisites of the microservice architecture

It is important to understand the ecosystem that results from a microservice architecture implementation. The impact of microservices is not just operational in nature: the changes in any organization opting for the microservice architecture are so profound that, if the organization is not well prepared to handle them, it won't be long before the advantages turn into disadvantages.

After the adoption of the microservice architecture is agreed upon, it would be wise to have the following prerequisites in place:

  • Deployment and QA: Requirements will become more demanding, with a quicker turnaround expected from development. You would need to deploy and test as quickly as possible. If there are just a small number of services, this is not a problem. However, if the number of services goes up, it can very quickly challenge the existing infrastructure and practices. For example, your QA and staging environments may no longer suffice to test the number of builds coming back from the development team.
  • A collaboration platform for the development and operations teams: As the application moves to the public domain, it won't be long before the age-old script of Dev versus QA is played out again. The difference this time is that the business will be at stake. So, you need to be prepared to respond quickly, and in an automated manner, to identify the root cause when required.
  • A monitoring framework: With the increasing number of microservices, you would quickly need a way to monitor the functioning and health of the entire system for any possible bottlenecks or issues. Without any means of monitoring the status of the deployed microservices and the resultant business function, it would be impossible for any team to take a proactive deployment approach.

Understanding problems with the monolithic architecture style

In this section, we will discuss the problems of a monolithic .NET-stack-based application. In a monolithic application, the core problem is this: scaling the monolith is difficult. The resultant application ends up having a very large code base and poses challenges with regard to maintainability, deployment, and modification.

Challenges in standardizing a .NET stack

In a monolithic application, the technology stack dependency stops the introduction of the latest technologies from the outside world. The present stack poses challenges because the web service itself suffers from the following:

  • Security: There is no way to identify the user via web services (no clear consensus exists on a strong authentication scheme). Just imagine a banking application sending data containing user credentials without encryption. All airports, cafes, and public places offering free Wi-Fi could easily see increased identity theft and other cybercrimes.
  • Response time: Though the web services themselves provide some flexibility in the overall architecture, this quickly diminishes due to the high processing time taken by the service itself. There is nothing wrong with the web service as such; it is the fact that a monolithic application involves huge amounts of code and complex logic that makes the response time of a web service high and, therefore, unacceptable.
  • Throughput rate: This takes a hit, and as a result, subsequent operations are hampered. A checkout operation relying on a call to the inventory web service that has to search a few million records is not a bad idea in itself. However, when the same inventory service also feeds the main product search/browse functionality for the entire portal, it could result in a loss of business. One service call failure out of ten calls would mean a 10 percent lower conversion rate for the business.
  • Frequent downtime: As the web services are part of the whole monolithic ecosystem, they are bound to be down and unavailable every time there is an upgrade or an application failure. This means that any B2B dependency from the outside world on the application's web services further complicates the decision-making around downtime. This makes even small upgrades of the system look expensive, which further increases the backlog of pending system upgrades.
  • Technology adoption: In order to adopt or upgrade a technology stack, it would require the whole application to be upgraded, tested, and deployed, since modules are interdependent and the entire code base of the project is affected. Consider the payment gateway module using a component that requires a compliance-related framework upgrade. The development team has no option but to upgrade the framework itself and carefully go through the entire code base to identify any code breaks preemptively. Of course, this would still not rule out a production crash, but this can easily make even the best of the architects and managers sweat and lose some sleep.

Availability is the percentage of time during which a service is operational.

Response time is the time a service takes to respond to a request.

Throughput is the rate at which a service processes requests.

Fault tolerance

Monolithic applications have high module interdependency, as they are tightly coupled. The different modules utilize each other's functionality in such a manner that even a single module failure brings the system down due to a cascading effect, very similar to dominoes falling. We all know that a user not getting results for a product search is far less severe than the entire system being brought to its knees.

Decoupling using web services has been traditionally attempted at the architecture level. For database-level strategies, ACID has been relied upon for a long time. Let's examine both these points further.

  • Web services: In the current monolithic application, customer experience is degraded for this very reason. Even as a customer tries to place an order, reasons such as the high response time of web services (or WCF services), or even a complete failure of the services themselves, result in a failure to place the order successfully. Not even a single failure is acceptable, as users tend to remember their last experience and assume a possible repeat. Not only is this a loss of possible sales, but also of future business prospects. Web service failures can cause cascading failures in the systems that rely on them.
  • ACID: ACID is the acronym for atomicity, consistency, isolation, and durability; it's an important concept in databases. It is in place, but whether it's a boon or a bane is to be judged by the combined overall performance. It takes care of failures at the database level, and there is no doubt that it provides some insurance against the database errors that creep in. But at the same time, every ACID operation hampers or delays operations by other components or modules. The point at which it brings the system to a state where it causes more harm than benefit needs to be judged very carefully.

Scaling

Factors such as availability of different means of communication, easy access to information, and open world markets are resulting in businesses growing rapidly and diversifying at the same time. With this rapid growth of business, there is an ever-increasing need to accommodate an increasing client base. Scaling is one of the biggest challenges that any business faces while trying to cater for an increased user base.

Scalability is the capability of a system or program to handle a growing amount of work. In other words, scalability is the ability of a system or program to grow with the demand placed on it.

Before starting the next section, let's discuss and understand scaling in detail, as this will be an integral part of our exercise as we work on transitioning from monolithic to microservices.

The scalability of a system is its capability to handle an increasing or increased load of work. There are two main strategies, or types of scalability, with which we can scale our application.

Vertical scaling or scale up

In vertical scaling, we analyze our existing application to find out the parts of modules that cause the application to slow down due to higher execution time. Making the code more efficient could be one strategy so that less memory is consumed. This exercise of reducing memory consumption could be for a specific module or the whole application. On the other hand, due to obvious challenges involved in this strategy, instead of changing the application, we could add more resources to our existing IT infrastructure, such as upgrading the RAM or adding more disk drives and so on. Both these paths in vertical scaling have a limit for the extent to which they could be beneficial. After a specific point in time, the resulting benefit will plateau out. It is important here to keep in mind that this kind of scaling requires downtime.

Horizontal scaling or scale out

In horizontal scaling, we dig deep into the modules that have a higher impact on overall performance due to factors such as high concurrency; doing so enables our application to serve an increased user base, which is now reaching the million mark. We also implement load balancing to process a greater amount of work. Adding more servers to the cluster does not require downtime, which is a definite advantage. Whether the additional cost of power, licenses, and cooling is worthwhile, and up to what point, differs from case to case.

Scaling will be covered in detail in Chapter 8, Scaling.

Deployment challenges

The current application also has deployment challenges. It is designed as a monolithic application, and any change in the order module requires the entire application to be deployed again. This is time-consuming, and the whole cycle has to be repeated with every change, which can make it a frequent exercise. Scaling can only be a distant dream in such a scenario.

As discussed in the section on scaling, the current application's deployment challenges require us to deploy the entire assembly. The modules are interdependent, and it is a single-assembly .NET application. Deploying the entire application in one go also makes it mandatory to test the entire functionality of our application. The impact of such an exercise is huge:

  • High-risk deployment: Deploying an entire solution or application in one go poses a high risk as all modules are going to be deployed even for a single change in one of the modules.
  • Higher testing time: As we have to deploy the complete application, we will have to test the functionality of the entire application. We can't go live without testing. Due to higher interdependency, the change might cause a problem in some other module.
  • Unplanned downtime: Complete production deployment needs code to be fully tested, and hence we need to schedule our production deployment. This is a time-consuming task that results in high downtime. Although this downtime is planned, both the business and its customers are affected due to the unavailability of the system, which can cause revenue loss to the business.
  • Production bugs: A bug-free deployment would be the dream of any project manager. However, this is far from reality, and every team dreads the possibility. Monolithic applications are no different, and the resolution of production bugs is easier said than done. The situation only becomes more complex when previous bugs remain unresolved.

Organizational alignment

In a monolithic application, having a large code base is not the only challenge that you'll face. Having a large team to handle such a code base is one more problem that will affect the growth of the business and application.

  • Same goal: In a team, all the team members have the same goal, which is a timely, bug-free delivery at the end of each day. However, a large code base and the current monolithic architectural style do not make for a comfortable environment for the team members. With team members being interdependent due to the interdependent code and associated deliverables, the same effect that is experienced in the code is present in the development team as well. Here, everyone is just scrambling and struggling to get the job done. The question of helping each other out or trying something new does not arise. In short, the team is not a self-organizing team.

Roy Osherove defined three stages of a team in his book, Notes to a Software Team Leader:

Survival phase: No time to learn

Learning phase: Learning to solve your own problems

Self-organizing phase: Facilitate, experiment

  • A different perspective: The development team takes too much time on deliverables due to reasons such as feature enhancements, bug fixes, or module interdependencies that prevent easy development. The QA team depends on the development team, and the development team has its own problems. The QA team is stuck once developers start working on bugs, fixes, or feature enhancements, as there is no separate environment or build available for QA to proceed with their testing. This delay hampers overall delivery, and customers or end users do not get new features or fixes on time.

Modularity

In our monolithic application, where we have an Order module and a Stock module, a change in the Order module affects the Stock module, and so on. It is the absence of modularity that has resulted in such a condition.

This also means that we can't reuse the functionality of a module within another module. The code is not decomposed into structured pieces, which could be reused to save time and effort. There is no segregation within the code modules, and hence, no common code is available.

Business is growing and its customers are growing by leaps and bounds. New or existing customers from different regions have different preferences when it comes to the use of the application. Some like to visit the website, but others prefer to use mobile apps. The system is structured in a way that we can't share the components across a website and a mobile app. This makes introducing a mobile/device app for the business a challenging task. Business is affected, as in such scenarios, the company loses out on customers who prefer mobile apps.

There is also difficulty in replacing components of the application that use third-party libraries or external systems, such as payment gateways and an external order-tracking system. It is a tedious job to replace old components in the current monolithic-style application. For example, if we consider upgrading the library of the module that consumes an external order-tracking system, the whole change proves to be very difficult. Similarly, it would be an intricate task to replace our payment gateway with another one.

In any of the preceding scenarios, whenever we upgrade a component, we upgrade everything within the application, which calls for complete testing of the system and requires a lot of downtime. Apart from this, the upgrade could result in production bugs, which would require us to repeat the whole cycle of development, testing, and deployment.

Big database

Our current application has a mammoth database containing a single schema with plenty of indexes. This structure poses a challenging job when it comes down to fine-tuning the performance:

  • Single schema: All the entities in the database are clubbed under a single schema named dbo. This again hampers business due to the confusion a single schema creates about which tables belong to which modules; for example, the Customer and Supplier tables belong to the same schema, that is, dbo.
  • Numerous stored procedures: Currently, the database has a large number of stored procedures, which also contain a sizeable chunk of the business logic. Some of the calculations are being performed within the stored procedures. As a result, these stored procedures prove to be a baffling task to tend to when the time comes to optimize them or break them down into smaller units.

Whenever deployment is planned, the team will have to look closely at every database change. This again is a time-consuming exercise and many times would turn out to be even more complex than the build and deployment exercise itself.

Prerequisites for microservices

To understand better, let's take up an imaginary example of Flix One Inc. With this example as our base, let's discuss all the concepts in detail and see what it looks like to be ready for microservices.

FlixOne is an e-commerce player (selling books) that is spread all over India. They are growing at a very fast pace and diversifying their business at the same time. They have built their existing system on the .NET framework, and it is a traditional three-tier architecture. They have a massive database that is central to this system, and there are peripheral applications in their ecosystem. One such application is for their sales and logistics team, and it happens to be an Android app. These applications connect to their centralized data center and face performance issues. FlixOne has an in-house development team supported by external consultants. Refer to the following figure:

The preceding image depicts a broader sense of our current application, which is a single .NET assembly application. Here we have the user interfaces we use for search, order, products, tracking order, and checkout. Now check out the following diagram:

The preceding image depicts our Shopping cart module only. The application is built with C#, MVC5, and Entity Framework, and it has a single project application. This image is just a pictorial overview of the architecture of our application. This application is web-based and can be accessed from any browser. Initially, any request that uses the HTTP protocol will land on the user interface that is developed using MVC5 and JQuery. For cart activities, the UI interacts with the Shopping cart module, which is nothing but a business logic layer that further talks with the database layer (written in C#); data is persisted within the database (SQL Server 2008R2).

Functional overview of the application

Here we are going to understand the functional overview of the FlixOne bookstore application. This is only for the purpose of visualizing our application. The following is the simplified functional overview of the application until Shopping cart:

In the current application, the customer lands on the home page, where they see featured/highlighted books. They have the option to search for a book item if they do not find their favorite one. After getting the desired result, the customer can choose book items and add them to their shopping cart. Customers can verify the book items before the final checkout. As soon as the customer decides to check out, the existing cart system redirects them to an external payment gateway for the amount they need to pay for the book items in the shopping cart.

As discussed previously, our application is a monolithic application; it is structured to be developed and deployed as a single unit. This application has a large code base that is still growing. Even small updates require the whole application to be deployed at once.

Solutions for current challenges

Business is growing rapidly, so we decide to open our e-commerce website in 20 more cities; however, we are still facing challenges with the existing application and struggling to serve the existing user base properly. In this case, before we start the transition, we should make our monolithic application ready for its transition to microservices.

In the very first approach, the Shopping cart module is segregated into smaller modules, which are then able to interact with each other as well as with external or third-party software:

This proposed solution is not sufficient for our existing application. Developers would be able to divide the code and reuse it, but the internal processing of the business logic would remain the same, with no change in the way it interacts with the UI or the database. The new code would interact with the UI and the database layer, with the database still remaining the same old single database. With our database remaining undivided and our layers tightly coupled, the problems of having to update and deploy the whole code base remain. So this solution is not suitable for resolving our problem.

Handling deployment problems

In the preceding section, we discussed the deployment challenges we will face with the current .NET monolithic application. In this section, let's take a look at how we can overcome these challenges by making or adapting a few practices within the same .NET stack.

With our .NET monolithic application, our deployment consists of xcopy deployments. After dividing our modules into different submodules, we can adopt deployment strategies built on top of them: we can deploy just our business logic layer or some common functionality, and we can adopt continuous integration and deployment. An xcopy deployment is a process where all the files are simply copied to the server; it is mostly used for web projects.

Making much better monolithic applications

We understand all the challenges with our existing monolithic application. We have to serve better with our new growth. As we are growing widely, we can't miss the opportunity to get new customers. If we miss fixing any challenge, then we would lose business opportunities as well. Let's discuss a few points to solve these problems.

Introducing dependency injection

Our modules are interdependent, so we face issues such as poor reusability of code and unresolved bugs caused by changes in one module, as well as the deployment challenges discussed earlier. To tackle these issues, let's segregate our application in such a way that we are able to divide modules into submodules. We can divide our Order module so that it implements an interface, which can be injected through the constructor. Here is a small code snippet that shows how we can apply this in our existing monolithic application.

Here is a code example that shows our Order class, where we use the constructor injection:

    using System;

    namespace FlixOne.BookStore.Common
    {
        public class Order : IOrder
        {
            private readonly IOrderRepository _orderRepository;

            // The repository is injected through the constructor (constructor injection).
            public Order(IOrderRepository orderRepository)
            {
                _orderRepository = orderRepository;
            }

            public OrderModel GetBy(Guid orderId)
            {
                return _orderRepository.Get(orderId);
            }
        }
    }
Inversion of control, or IoC, is nothing but a way in which objects do not create the other objects on which they rely to do their work; instead, those objects are supplied from the outside.

In the preceding code snippet, we abstracted our Order module in such a way that it uses the IOrder interface: our Order class implements IOrder, and with the use of inversion of control the dependency is resolved and the object is created for us automatically.
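
The IOrder interface itself is not shown in the snippet above; a minimal sketch consistent with the Order class, offered here as an assumption about its shape, could be:

    using System;

    namespace FlixOne.BookStore.Common
    {
        // Assumed shape of the contract implemented by the Order class above.
        public interface IOrder
        {
            OrderModel GetBy(Guid orderId);
        }
    }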

Furthermore, the code snippets of IOrderRepository and OrderRepository are as follows:

    using System;

    namespace FlixOne.BookStore.Common
    {
        public interface IOrderRepository
        {
            OrderModel Get(Guid orderId);
        }
    }

    namespace FlixOne.BookStore.Common
    {
        public class OrderRepository : IOrderRepository
        {
            public OrderModel Get(Guid orderId)
            {
                // Call the data access method here; default values are returned
                // purely to demonstrate the approach.
                return new OrderModel
                {
                    OrderId = Guid.NewGuid(),
                    OrderDate = DateTime.Now,
                    OrderStatus = "In Transit"
                };
            }
        }
    }

Here we are trying to showcase how our Order module gets abstracted. In the preceding code snippet, we return default values for our order just to demonstrate the solution to the actual problem.

Finally, our presentation layer (the MVC controller) will use the available methods, as shown in the following code snippet:

    using System;
    using System.Web.Mvc;
    using FlixOne.BookStore.Common;

    namespace FlixOne.BookStore.Controllers
    {
        public class OrderController : Controller
        {
            private readonly IOrder _order;

            // IOrder is supplied through constructor injection.
            public OrderController(IOrder order)
            {
                _order = order;
            }

            // GET: Order
            public ActionResult Index()
            {
                return View();
            }

            // GET: Order/Details/5
            public ActionResult Details(string id)
            {
                var orderId = Guid.Parse(id);
                var orderModel = _order.GetBy(orderId);
                return View(orderModel);
            }
        }
    }

The following is a class diagram that depicts how our interfaces and classes are associated with each other and how they expose their methods, properties, and so on:

Here again, we used constructor injection, where an IOrder instance is passed in to initialize the controller's Order dependency; hence, all of its methods are available within our controller.
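
How the IOrder instance actually reaches the controller is decided in a composition root or by an IoC container. As a minimal sketch (assuming manual wiring of the chapter's own types rather than any particular container), the object graph could be composed like this:

    using FlixOne.BookStore.Common;
    using FlixOne.BookStore.Controllers;

    public static class CompositionRootSketch
    {
        public static OrderController CreateOrderController()
        {
            // Manual ("poor man's") dependency injection: the concrete types are
            // chosen in one place, while Order and OrderController depend only
            // on the IOrderRepository and IOrder abstractions.
            IOrderRepository repository = new OrderRepository();
            IOrder order = new Order(repository);
            return new OrderController(order);
        }
    }

    // In a real MVC application, an IoC container (Unity, Autofac, and so on)
    // would perform this wiring automatically via a dependency resolver.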

By achieving this, we would overcome a few problems such as:

  • Reduced module dependency: With the introduction of IOrder in our application, we are reducing the interdependency of the Order module. This way, if we are required to add or remove anything from/to this module, then other modules would not be affected, as IOrder is only implemented by the Order module. Let's say we want to make an enhancement to our Order module; it would not affect our Stock module. This way, we reduce module interdependency.
  • Introducing code reusability: If you are required to get the order details of any of the application modules, you can easily do so using the IOrder type.
  • Improvements in code maintainability: We have divided our modules into submodules or classes and interfaces now. We can now structure our code in such a manner that all the types, that is, all the interfaces, are placed under one folder and follow the suit for the repositories. With this structure, it would be easier for us to arrange and maintain code.
  • Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt the system of test-driven development with ease.

Database refactoring

As discussed in the preceding section, our application database is huge and depends on a single schema. This huge database should be considered while refactoring, and we will approach it as follows:

  • Schema correction: In general practice (though not mandatory), our schema should depict our modules. As discussed in previous sections, our huge database has a single schema, dbo, and not every table or piece of code should be related to dbo. There might be several modules that interact with specific tables. For example, our Order module should use a related schema name, such as Order. So whenever we need to use a table, we can refer to it with its own schema instead of the general dbo schema. This will not impact any functionality related to how data is retrieved from the database, but it will structure and arrange our tables in such a way that we are able to identify and correlate each and every table with a specific module. This exercise will be very helpful when we are at the stage of transitioning the monolithic application to microservices (a small Entity Framework mapping sketch follows this list). Refer to the following image:

In the preceding figure, we see how the database schema is separated logically. It is not separated physically--our Order Schema and Stock Schema belong to the same database. So here we separate the database schema logically, not physically.

We can also take the example of our users: not all users are admins or belong to a specific zone, area, or region. But our user table should be structured in such a way that we are able to identify users by the table name or the way the tables are structured. Here we can structure our user tables on the basis of regions, mapping the user table to a region table in such a way that it does not impact or require any changes in the existing code base.

  • Moving business logic to code from stored procedures: In the current database, we have thousands of lines of stored procedures carrying a lot of business logic; some of the calculations are performed within the stored procedures. We should move this business logic to our code base. In our monolithic application, we are using Entity Framework, so we can avoid creating stored procedures and incorporate all of our business logic into code.
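
The following is a minimal sketch of both ideas, assuming an EF6-style Entity Framework model and a hypothetical OrderEntity type: the entity is mapped to an Order schema rather than dbo, and a calculation that might otherwise live in a stored procedure is expressed as a LINQ query in code.

    using System;
    using System.Data.Entity;
    using System.Linq;

    // Hypothetical entity used only for illustration.
    public class OrderEntity
    {
        public Guid Id { get; set; }
        public DateTime OrderDate { get; set; }
        public decimal Total { get; set; }
    }

    public class FlixOneContext : DbContext
    {
        public DbSet<OrderEntity> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Schema correction: map the table to the Order schema instead of dbo.
            modelBuilder.Entity<OrderEntity>().ToTable("Orders", "Order");
        }
    }

    public static class OrderCalculations
    {
        // Business logic expressed in code (LINQ) rather than in a stored procedure.
        public static decimal TotalSalesForDay(FlixOneContext context, DateTime day)
        {
            return context.Orders
                .Where(o => DbFunctions.TruncateTime(o.OrderDate) == day.Date)
                .Sum(o => (decimal?)o.Total) ?? 0m;
        }
    }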

Database sharding and partitioning

Between database sharding and partitioning, we can go with database sharding, where we break the database into smaller databases. These smaller databases are deployed on separate servers:

In general, database sharding is simply defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve a new level of high performance and scalability. The word sharding comes from shard: dividing a database into chunks (shards) and spreading them across different servers.

The preceding diagram is a pictorial overview of how our database is divided into smaller databases. Take a look at the following diagram:
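
To give a flavor of what sharding can look like in application code, here is a toy sketch that routes data access to one of several shard connection strings based on a hash of the customer ID. The connection strings and the modulo-based routing rule are assumptions for illustration; real sharding strategies and tooling vary considerably.

    using System;

    public static class ShardRouterSketch
    {
        // Hypothetical connection strings, one per shard (each on its own server).
        private static readonly string[] ShardConnectionStrings =
        {
            "Server=shard1;Database=FlixOne_Shard1;...",
            "Server=shard2;Database=FlixOne_Shard2;...",
            "Server=shard3;Database=FlixOne_Shard3;..."
        };

        // Pick a shard deterministically from the customer ID, so the same
        // customer's data always lives in the same smaller database.
        public static string GetConnectionStringFor(Guid customerId)
        {
            var shardIndex = (customerId.GetHashCode() & 0x7FFFFFFF)
                             % ShardConnectionStrings.Length;
            return ShardConnectionStrings[shardIndex];
        }
    }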

DevOps culture

In the preceding sections, we discussed the challenges and problems faced by the team. Here, we propose DevOps as the solution: the collaboration of the development team with the operations team should be emphasized. We should set up a system where the development, QA, and infrastructure teams work in collaboration.

Automation

Infrastructure setup can be a very time-consuming job, and developers would remain idle while the infrastructure is being readied for them; a new developer would take some time before joining the team and contributing. The process of infrastructure setup should not stop a developer from becoming productive, as that would reduce overall productivity. It should be an automated process. With the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, a developer can be ready to start work from day one of joining the team.

Chef is a DevOps tool that provides a framework to automate and manage your infrastructure.

PowerShell can be used to create our Azure machines and to set up TFS.

Testing

We are going to introduce automated testing as a solution to the problems we faced while testing during deployment. In this part of the solution, we divide our testing approach as follows:

  • Adopt Test-Driven Development (TDD). With TDD, developers are required to test their own code. The test is nothing but another piece of code that validates whether the functionality is working as intended. If any functionality is found not to satisfy the test code, the corresponding unit test fails. This functionality can then be easily fixed, as you know where the problem is. In order to achieve this, we can utilize frameworks such as MSTest or other unit testing frameworks (a minimal example follows this list).
  • The QA team can use scripts to automate their tasks. They can create scripts by utilizing QTP or the Selenium framework.
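As a minimal sketch of such a test, assuming MSTest and an invented PriceCalculator class with a simple discount rule (both are illustrations, not part of the FlixOne code), a failing test immediately points at the broken rule:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Production code under test (illustrative business rule: 10% discount above 100).
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal total)
    {
        return total > 100m ? total * 0.9m : total;
    }
}

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_TotalAbove100_Returns10PercentOff()
    {
        var calculator = new PriceCalculator();

        var discounted = calculator.ApplyDiscount(200m);

        // If the business rule is broken, this test fails and points at the problem.
        Assert.AreEqual(180m, discounted);
    }

    [TestMethod]
    public void ApplyDiscount_TotalAtOrBelow100_ReturnsOriginalTotal()
    {
        var calculator = new PriceCalculator();

        Assert.AreEqual(80m, calculator.ApplyDiscount(80m));
    }
}
```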

Versioning

The current system does not have any kind of version control, so there is no way to revert a change if something goes wrong. To resolve this issue, we need to introduce a version control mechanism; in our case, this would be either TFS or Git. With version control in place, we can revert a change if it is found to break functionality or introduce unexpected behavior in the application, and we can track the changes made by each team member working on the application at an individual level, a capability we did not have in our monolithic setup.

Deployment

In our application, deployment is a huge challenge. To resolve this, we introduce Continuous Integration (CI), which requires setting up a CI server. With CI in place, the process is automated: as soon as code is checked in by any team member, using version control (TFS or Git in our case), the CI process kicks into action. It ensures that the new code is built and that the unit tests and integration tests are run. Whether the build succeeds or fails, the team is alerted to the outcome, which enables it to respond quickly to any issue.

Next, we move to continuous deployment. Here we introduce various environments, namely a development environment, a staging environment, a QA environment, and so on. Now, as soon as code is checked in by any team member, CI kicks into action: it invokes the unit/integration test suites, builds the system, and pushes it out to the various environments we have set up. This way, the turnaround time for the development team to provide a suitable build for QA is reduced to a minimum.

Identifying decomposition candidates within the monolith

We have now clearly identified the various problems that the current FlixOne application architecture and its resulting code are posing for the development team. We also understand which business challenges the development team is not able to take up, and why.

It is not that the team is not capable enough; it is just the code. Let's move ahead and work out the best strategy for zeroing in on the parts of the FlixOne application that we need to move to the microservice-styled architecture. A part of a monolithic architecture is a candidate if it poses problems in one of the following areas:

  • Focused deployment: Although this comes at the final stage of the whole process, it demands more respect, and rightly so. It is important to understand that this factor shapes and defines the whole development strategy from the very initial stages of identification and design. Here's an example: the business asks you to resolve two problems of equal importance. One of them might require you to test many more associated modules, while the resolution of the other might let you get away with limited testing. Having to make such a choice would be wrong; a business shouldn't be put in the position of having to make it.
  • Code complexity: Having smaller teams is the key here. You should be able to assign a small development team to a change that is associated with a single functionality. Small teams comprise one or two members; any more than this, and the need for a project manager should ring a bell in your ears, because it means something is more interdependent across modules than it should be.
  • Technology adoption: You should be able to upgrade a component to a newer version or a different technology without breaking things. If you have to think about the components that depend on it, you have more than one candidate. Even if you only have to worry about the modules that this component depends upon, you still have more than one candidate. I remember one of my clients who had a dedicated team to test whether a newly released technology was a suitable candidate for their needs. I learned later that they would actually port one of the modules and measure the performance impact, effort required, and turnaround time for the whole system. I don't agree with this approach, though.
  • High resource usage: In my opinion, everything in a system, from memory and CPU time to I/O requirements, should be considered per module. If any one module consumes more resources than the others, or does so more frequently, it should be singled out. Any operation that involves higher-than-normal memory consumption, processing time that blocks other work, or I/O that keeps the system waiting makes that module a good candidate in our case.
  • Human dependency: If moving team members across modules seems like a herculean task that requires a lengthy handover, you have more candidates. Developers are smart, but if they have to struggle with a large system to become productive, it is not their fault. Break the system down into smaller units, and developers will become productive more easily.

Important microservices advantages

We have performed the first step of identifying our candidates for moving to microservices. It will be worthwhile going through the corresponding advantages that microservices provide.

Technology independence

With each microservice being independent of the others, we now have the power to use a different technology for each one. The payment gateway could use the latest .NET Framework, whereas the product search could be shifted to any other programming language.

Most of the application could be based on SQL Server for data storage, whereas the inventory could be based on a NoSQL store. The flexibility is limitless.

Interdependency removal

Since we try to achieve isolated functionality within each microservice, it is easy to add new features, fix bugs, or upgrade technology within each one, with no impact on the other microservices. You now have vertical code isolation that enables you to do all of this and still deploy just as quickly.

This doesn't end here. The FlixOne team now has the ability to release a new payment gateway option alongside the existing one. Both payment gateways can coexist until the team and the business owners are satisfied with the reports. This is where the immense power of this architecture comes into play.
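As a minimal sketch of how such coexistence might be wired up, assuming ASP.NET Core's built-in dependency injection and an invented IPaymentGateway contract with two implementations (the interface, class names, and the Payments:UseNewGateway setting are all illustrative):

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Both gateways implement the same contract, so calling code never changes.
public interface IPaymentGateway
{
    bool Charge(string orderId, decimal amount);
}

public class LegacyPaymentGateway : IPaymentGateway
{
    public bool Charge(string orderId, decimal amount) => /* existing provider */ true;
}

public class NewPaymentGateway : IPaymentGateway
{
    public bool Charge(string orderId, decimal amount) => /* provider under evaluation */ true;
}

public static class PaymentRegistration
{
    // Choose the implementation from configuration; flip the flag to roll back.
    public static IServiceCollection AddPaymentGateway(
        this IServiceCollection services, IConfiguration configuration)
    {
        if (configuration.GetValue<bool>("Payments:UseNewGateway"))
            services.AddScoped<IPaymentGateway, NewPaymentGateway>();
        else
            services.AddScoped<IPaymentGateway, LegacyPaymentGateway>();

        return services;
    }
}
```

Flipping the configuration flag switches implementations without touching the calling code, which is what lets both gateways run side by side during the evaluation period.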

Alignment with business goals

Understanding why a certain feature would be difficult or time-consuming to implement is not necessarily a forte of business owners; their responsibility is to keep driving and growing the business. The development team should become an enabler of the business goals, not a roadblock.

It is extremely important to understand that the capability to quickly respond to business needs and adapt to marketing trends is not a by-product of microservices, but their goal.

The capability to achieve this with smaller teams only makes it more suitable to business owners.

Cost benefits

Each microservice becomes an investment for the business since it can easily be consumed by other microservices without having to redo the same code again and again. Every time a microservice is reused, time is saved by avoiding the testing and deployment of that part.

User experience is enhanced, since downtime is either eliminated or reduced to a minimum.

Easy scalability

With vertical isolation in place and each microservice rendering a specific service to the whole system, it is easy to scale. Not only is it easier to identify scaling candidates, but the cost is also lower, because we only scale a part of the whole microservice ecosystem.

Scaling can be cost-intensive for the business; hence, prioritizing which microservice should be scaled first can now be a choice of the business team rather than a decision forced on the development team.

Security

Security is similar to what the traditional layered architecture provides; microservices can be secured just as easily. Different configurations can be used to secure different microservices: you can have part of the microservice ecosystem behind firewalls and another part using encryption, and web-facing microservices can be secured differently from the rest. You can tailor this to your needs, technology, or budget.

Data management

It is common for the majority of monolithic applications to have a single database, and almost always there is a database architect or a designated owner responsible for its integrity and maintenance. Any application enhancement that requires a change in the database has to go through this person. In my experience, this has never been an easy path, and it further slows down application enhancement, scalability, and technology adoption.

Because each microservice has its own independent database, the decision-making related to changes required in the database can be easily delegated to the respective team. We don't have to worry about the impact on the rest of the system, as there will not be any.

At the same time, this separation of the database brings forth the possibility for the team to become self-organized. They can now start experimenting.

For example, the team can now consider using Azure Table storage or Azure Redis Cache to store the massive product catalog, instead of the database currently being used. Not only can the team experiment, but their experience can easily be replicated across the whole system by other teams, on a schedule convenient to them.
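Here is a minimal sketch of what the Redis experiment could look like, assuming the StackExchange.Redis client, System.Text.Json for serialization, and a simple read-through cache over the existing product lookup; the Product type, key naming, connection string, and one-hour expiry are all illustrative assumptions:

```csharp
using System;
using System.Text.Json;
using StackExchange.Redis;

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Illustrative read-through cache for the product catalog.
public class ProductCatalogCache
{
    private readonly IDatabase _cache;

    public ProductCatalogCache(string redisConnectionString)
    {
        // For Azure Redis Cache, the connection string comes from the portal.
        _cache = ConnectionMultiplexer.Connect(redisConnectionString).GetDatabase();
    }

    public Product GetProduct(string productId, Func<string, Product> loadFromDatabase)
    {
        string key = $"product:{productId}";

        string cachedJson = _cache.StringGet(key);
        if (cachedJson != null)
            return JsonSerializer.Deserialize<Product>(cachedJson);

        // Cache miss: load from the product database and cache for an hour.
        Product product = loadFromDatabase(productId);
        _cache.StringSet(key, JsonSerializer.Serialize(product), TimeSpan.FromHours(1));
        return product;
    }
}
```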

In fact, nothing is stopping the FlixOne team from being innovative, using the multitude of technologies available at the same time, comparing their performance in the real world, and then making a final decision. Once each microservice has its own database, this is how FlixOne will look:

Integrating with the monolith

Whenever a choice is made to move away from the monolithic architecture in favor of the microservice-styled architecture, the time and cost of the initiative will pose some resistance. A business evaluation might rule against moving those parts of the monolithic application that do not make a business case for the transition.

It would have been a different scenario if we were developing the application from the beginning. However, this is also the power of microservices in my opinion. A correct evaluation of the entire monolithic architecture can safely identify the monolithic parts to be ported later.

However, to ensure that these isolated parts do not cause problems for other microservices in the future, we must take a safeguard against that risk.

The goal for such parts of the monolithic application is to make them communicate in the same way as the other microservices. Doing this involves various patterns and makes use of the technology stack in which the monolithic application was developed.

If you use the event-driven pattern, make sure that the monolithic application can publish and consume events; this may require detailed modification of the source code to make these actions possible. Alternatively, you can create an event proxy that publishes and consumes events and translates them for the monolithic application, keeping the changes in the source code to a minimum. Ultimately, the database would remain the same.
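Here is a minimal sketch of the event proxy idea, using invented contracts (IEventBus, OrderPlaced, ILegacyOrderModule) rather than any specific messaging library; the proxy consumes events on the monolith's behalf and forwards them to its existing entry points:

```csharp
using System;

// Invented contracts: any message broker client could sit behind IEventBus.
public class OrderPlaced
{
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
}

public interface IEventBus
{
    void Publish<TEvent>(TEvent @event);
    void Subscribe<TEvent>(Action<TEvent> handler);
}

// The monolith's existing entry point, called as-is (illustrative).
public interface ILegacyOrderModule
{
    void RecordOrder(string orderId, decimal amount);
}

// The proxy is the only new component: it consumes events on behalf of the
// monolith and forwards them to existing code, keeping source changes minimal.
public class MonolithEventProxy
{
    public MonolithEventProxy(IEventBus bus, ILegacyOrderModule legacyOrders)
    {
        bus.Subscribe<OrderPlaced>(e => legacyOrders.RecordOrder(e.OrderId, e.Amount));
    }
}
```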

If you plan to use the API gateway pattern, be sure that your gateway is able to communicate with the monolithic application. To achieve this, one option is to modify the source code of the application to expose RESTful services that can be consumed easily by the gateway. This can also be achieved by the creation of a separate microservice to expose the monolithic application procedures as REST services. The creation of a separate microservice avoids big changes in the source code. However, it demands the maintenance and deployment of a new component.
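And here is a minimal sketch of the second option from the preceding paragraph: a small ASP.NET Core Web API controller acting as a facade microservice so the gateway can reach the monolith over REST. The route, ILegacyInventoryModule, and its method are illustrative assumptions:

```csharp
using Microsoft.AspNetCore.Mvc;

// Illustrative wrapper over existing monolithic functionality.
public interface ILegacyInventoryModule
{
    int GetStockLevel(string productId);
}

// A thin facade microservice: the API gateway talks REST to this controller,
// which delegates to the monolith, so the monolith's source stays untouched.
[Route("api/inventory")]
public class InventoryFacadeController : ControllerBase
{
    private readonly ILegacyInventoryModule _legacyInventory;

    public InventoryFacadeController(ILegacyInventoryModule legacyInventory)
    {
        _legacyInventory = legacyInventory;
    }

    [HttpGet("{productId}/stock")]
    public IActionResult GetStock(string productId)
    {
        return Ok(_legacyInventory.GetStockLevel(productId));
    }
}
```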

Summary

In this chapter, we discussed the microservice architectural style in detail, its history, and how it differs from its predecessors: the monolithic architecture and SOA. We further outlined the various challenges that the monolithic architecture faces when dealing with large systems. Scalability and reusability are definite advantages that SOA provides over the monolithic approach. We also discussed the limitations of the monolithic architecture, including scaling problems, by implementing a real-life monolithic application. The microservice architectural style resolves all these issues by reducing code interdependency and isolating the dataset that any one microservice works upon; we utilized dependency injection and database refactoring for this. We further explored automation, CI, and deployment, which allow the development team to let the business sponsor choose which industry trends to respond to first. This results in cost benefits, better business responsiveness, timely technology adoption, effective scaling, and the removal of human dependency.

In the next chapter, we will go ahead and transition our existing application to the microservice-style architecture and put our knowledge to a real test.


Key benefits

  • Start your microservices journey and gain a broader perspective on microservices development
  • Build, deploy, and test microservices using ASP.NET MVC, Web API, and the Microsoft Azure cloud
  • Get started with reactive microservices and understand the fundamentals behind them

Description

Microservices is an architectural style that promotes the development of complex applications as a suite of small services based on business capabilities. This book will help you identify the appropriate service boundaries within the business. We'll start by looking at what microservices are and what their main characteristics are. Moving forward, you will be introduced to a real-life application scenario; after assessing its current issues, we will begin the journey of transforming this application by splitting it into a suite of microservices. You will identify the service boundaries, split the application into multiple microservices, and define the service contracts. You will find out how to configure, deploy, and monitor microservices, and configure scaling to allow the application to adapt quickly to increased demand in the future. With an introduction to reactive microservices, you will learn to keep your code base simple, focusing on what matters most rather than on messy asynchronous calls.

Who is this book for?

This book is for .NET Core developers who want to learn and understand microservices architecture and implement it in their .NET Core applications. It’s ideal for developers who are completely new to microservices or have just a theoretical understanding of this architectural approach and want to gain a practical perspective in order to better manage application complexity.

What you will learn

  • Compare microservices with monolithic applications and SOA
  • Identify the appropriate service boundaries by mapping them to the relevant bounded contexts
  • Define the service interface and implement the APIs using ASP.NET Web API
  • Integrate the services via synchronous and asynchronous mechanisms
  • Implement microservices security using Azure Active Directory, OpenID Connect, and OAuth 2.0
  • Understand the operations and scaling of microservices in .NET Core
  • Understand the testing pyramid and implement consumer-driven contracts using pact net core
  • Understand the key features of reactive microservices and implement them using Reactive Extensions
Product Details

Publication date: Jun 14, 2017
Length: 274 pages
Edition: 1st
Language: English
ISBN-13: 9781785887833
Vendor: Microsoft



Table of Contents

10 Chapters
What Are Microservices?
Building Microservices
Integration Techniques
Testing Strategies
Deployment
Security
Monitoring
Scaling
Reactive Microservices
Creating a Complete Microservice Solution

