Implementing Event-Driven Microservices Architecture in .NET 7

The Sample Application

Over the past several years, the emergence of high-volume, scalable, event-driven applications has caused an interesting shift in application development. Complementary design patterns have made writing and implementing event-driven architectures more appealing and have helped to reduce the learning curve when it comes to fully leveraging the elasticity and resiliency of cloud platform components. We will be looking at an application that utilizes an event-driven architecture, implemented using .NET 7 and leveraging cloud-native application and data constructs.

The purpose of this chapter is to outline the sample application we will be using throughout this book, along with the business drivers and goals it intends to satisfy. This will provide you with the opportunity to get a baseline understanding of the application's structure, source code, mechanics, and domains.

In this chapter, we'll cover the following main topics:

  • Exploring business drivers and the application
  • Architectural structures and paradigms
  • Implementation details

Technical requirements

There are several prerequisites you will need to have an understanding of or have installed on your machine to use the code base and follow along with the examples. These include the following:

  • Git
  • Visual Studio or Visual Studio Code
  • Docker
  • Kubernetes
  • Service-oriented architectures
  • Domain-Driven Design (DDD)

We will be using an application that has been custom-developed and is included with the source code for this book. The primary platform we will be developing on is .NET 7. All examples will use Visual Studio 2022 as the primary integrated development environment (IDE). Either Visual Studio 2022 or Visual Studio Code will be required to develop .NET 7 solutions.

Important note

The links to all the white papers and other sources mentioned in this chapter are provided in the Further reading section toward the end of the chapter.

Exploring business drivers and the application

It's always a good idea to have a solid understanding of why an application exists, how it came to be, and what problems or opportunities it looks to solve. This application is a concept application that involves Internet of Things (IoT) devices, distributed event ingestion at scale, and facial recognition features. The primary market for this application is turnstiles used at mass transit locations:

Figure 1.1 – Turnstiles in use at a transit station

In this scenario, the baseline events capture a simple count of customers who pass through the turnstiles at both the entrance and exit points of the mass transit system. Some drivers that contribute to the concept of the application, and the need for it, include the following:

  • To increase the visibility of equipment health and the need for proactive maintenance
  • To allow integration for facial recognition sensors that can scan law enforcement databases for potential fugitives or persons of interest
  • To manage costs associated with turnstile equipment, with options for expanding to fare payment interfaces
  • To analyze transit usage, turnstile placement, and the need for additional units in high-volume areas

Having the ability to capture foot traffic related to the entrance and exit points of a transit station has several benefits. First, it can be used to understand how busy any one station is. Second, with extended use, the equipment can wear down and eventually break. With a line of sight into how many people are using the equipment, technicians can make educated decisions regarding when units might need to be serviced or ultimately replaced. This could also lead to the deeper monitoring of other components besides the turnstile unit, such as the payment interfaces. Some units might only have a ticket scanner, while others might have a ticket scanner and an electronic payment interface, where contactless payments using mobile devices can be accepted. Monitoring normal usage and malfunctions, and scheduling the proactive servicing of those components, could also be beneficial.

An additional use case could be that of transit scheduling and vehicle availability. Generally, the number of vehicles (such as trains, trams, buses, and more) any transit authority might have in its fleet is a direct result of them already monitoring customer traffic demands. Using data that has been captured in real time can help accelerate the analysis of needed schedule adjustments, fleet adjustments, or reductions in services for less-traveled stations.

The addition of facial recognition software to the equipment is not a hard requirement but does offer a value-add in the ability to potentially identify criminals at large or suspects who are wanted for questioning. With any artificial intelligence, it is essential to both program and operate with ethics and security in mind. While closed-circuit cameras and more advanced video surveillance equipment can be found in many transit stations, those cameras do not immediately notify anyone if a person has been recognized based on an alert or a bulletin issued by a law enforcement agency. Data collected during facial scans must be treated as personally identifiable information and must be purged if no match has been found.

Unpacking this a bit more, other potential drivers could come into play. For example, examining the business requirements for the application would add clarity. Looking at the domain model and any domain-specific language (DSL) associated with the requirements would help remove any ambiguity around what is meant by a customer, an order, an item, or even a payment method. Let's take a look at the domain model to get a better understanding of the layout of the different services, contexts, and aggregates.

Reviewing the domain model

The application's domain model describes the functional areas (domains) that live within the confines of the application. Each is developed using a ubiquitous language that everyone—from business analysts to senior leadership, to junior developers—can easily understand and relate to. Figure 1.2 represents a simple domain model diagram that aligns to the structure of the application:

Figure 1.2 – A high-level domain model

The primary domains we will reference for this application are related to the primary pieces of functionality the application looks to offer. The following table offers a description of each domain:

Table 1.1 – Application functions

With these baseline domains defined, some simple rules of engagement can be derived. For example, a passenger could use a piece of equipment to enter a transit station while being run through facial recognition by the Identification domain. Equipment could raise an error noting a malfunction, which could then schedule a maintenance event. Equipment such as turnstiles could fire events per turn, allowing the aggregation of passenger throughput per turnstile and per station. These interactions can then be broken into areas of overlapping concern and, ultimately, help derive aggregate roots that are important to the model and the application. They include the following:

  • Passenger
  • Station
  • Turnstile
  • Camera
  • NotificationConfiguration
  • TurnstileMaintenanceSchedule
  • CameraMaintenanceSchedule

Each of the aggregates will contain common properties such as the name and the ID. Some differences between entities and value objects related to the aggregates will be required, as each one will have its own requirements for data, as prescribed by the domain. Figure 1.3 represents a high-level diagram of each aggregate, including properties (the list items), entities (the white rounded rectangles), and value objects (the green rounded rectangles):

Figure 1.3 – A high-level aggregate view
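
To make the aggregate structure more concrete, here is a minimal C# sketch of how the Turnstile aggregate root might be shaped, using hypothetical marker interfaces (IAggregateRoot, IValueObject) similar in spirit to the core platform library described later in this chapter; the names and properties are illustrative assumptions rather than the book's exact source code:

public interface IAggregateRoot { Guid Id { get; } }
public interface IValueObject { }

// Value object: immutable location details for the station the unit belongs to (illustrative).
public record StationLocation(string StationCode, string PlatformName) : IValueObject;

// Aggregate root: a common identity plus domain-specific state for a turnstile unit.
public class Turnstile : IAggregateRoot
{
    public Guid Id { get; init; } = Guid.NewGuid();
    public string Name { get; set; } = string.Empty;
    public StationLocation Location { get; set; } = new("UNKNOWN", "UNKNOWN");
    public long TotalRotations { get; private set; }

    // Invoked when a rotation event is applied to the aggregate.
    public void RecordRotation() => TotalRotations++;
}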

Chapter 4, Domain Model and Asynchronous Design, dives deeper into the domain model, including a review of events, event handlers, and asynchronous design.

With an understanding of the business relevance and the domain model that supports the business case, next, we can go one level deeper and examine some of the architectural structures and paradigms that help to define the event-driven nature of this application.

Assessing architectural structures and paradigms

Establishing an architectural baseline helps to drive decisions regarding how the application and its components will ultimately be implemented. It also provides an opportunity to evaluate different patterns and practices with the ultimate goal of selecting a path forward. This section covers the overall architectural design of the sample application and some core tenets that enable the creation and consumption of events.

A high-level logical architecture

The solution is predicated on the use of hardware interfaces (such as equipment) that can communicate with hosted services in the cloud via a standard network connection. A hardware gateway (such as a Raspberry Pi) hosts simple write-only services, which integrate with the relevant domain services to record turnstile usage, facial recognition hits, and possible malfunctions with the turnstile or camera. Any user interface can interact with a common API gateway layer, which allows for data exchange without needing to know all the particulars of the available APIs. The backend runtime is managed by Kubernetes (in this particular case, AKS), with containers for each of the available domain microservices. Each of these microservices interacts with the event bus to send events. The events are then handled by the domain's applicable event handlers. A reporting layer is used to access information captured via the event stream. SQL databases will be used to maintain the append-only activity log of events that come in via Kafka, and read models will be consumed from domain databases using read-oriented services.

The following reference diagram shows the logical construction of the application:

Figure 1.4 – A logical high-level reference architecture

The application uses the Producer-Consumer pattern to produce events, which are later consumed by components that need to know about them. You might also see this pattern referred to as Publish-Subscribe or pub-sub. The key point to take away from the use of this pattern is that any number of components could produce events containing relevant domain information, and any number of components could consume those events and act accordingly. We will dive into the producer-consumer pattern in much more detail in Chapter 2, The Producer-Consumer Pattern.
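
As a minimal, framework-free illustration of the pattern (not the book's Kafka-based implementation, which comes later), the following C# sketch uses System.Threading.Channels to decouple a producer from a consumer; the event type and channel are illustrative assumptions:

using System.Threading.Channels;

var channel = Channel.CreateUnbounded<TurnstileRotated>();

// Producer: publishes events without knowing who (or how many) will consume them.
var producer = Task.Run(async () =>
{
    for (var i = 0; i < 3; i++)
    {
        await channel.Writer.WriteAsync(new TurnstileRotated(Guid.NewGuid(), DateTimeOffset.UtcNow));
    }
    channel.Writer.Complete();
});

// Consumer: reacts to each event as it arrives, independently of the producer.
var consumer = Task.Run(async () =>
{
    await foreach (var evt in channel.Reader.ReadAllAsync())
    {
        Console.WriteLine($"Rotation recorded for turnstile {evt.TurnstileId} at {evt.OccurredAt:O}");
    }
});

await Task.WhenAll(producer, consumer);

// A hypothetical domain event raised once per completed turnstile rotation.
public record TurnstileRotated(Guid TurnstileId, DateTimeOffset OccurredAt);

In the full application, Kafka topics play the role of the channel, which is why any number of producers and consumers can be attached and scaled independently.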

Digging down a layer, there are two technology architecture specifications that we will be using. One is for the device board inside the turnstile unit, which hosts the Equipment domain service. The other is the layout of the cloud components, as mentioned in the reference architecture in Figure 1.4. The high-level flow between the turnstile device and the cloud components is as follows:

  • On the turnstile, after completing one turn, a message is sent to the equipment service indicating a completed rotation.
  • The equipment service will send an event to the IoT hub with the results of the turnstile action.
  • Using Kafka Connect, the message will be forwarded to Kafka, implemented within the Kubernetes cluster using the Confluent Platform.
  • The event will be written to the appropriate stream.
  • Any relevant event handlers will process the event.
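
As a sketch of the first two steps, the device-side code on the turnstile's gateway board might use the Azure IoT device SDK (Microsoft.Azure.Devices.Client) to publish a rotation message to the IoT hub; the connection string, payload shape, and property values here are illustrative assumptions:

using System.Text;
using System.Text.Json;
using Microsoft.Azure.Devices.Client;

// Connection string for this turnstile's device identity (placeholder value).
var deviceClient = DeviceClient.CreateFromConnectionString(
    "HostName=<iot-hub>.azure-devices.net;DeviceId=turnstile-001;SharedAccessKey=<key>",
    TransportType.Mqtt);

// Payload describing a single completed rotation (illustrative shape).
var payload = JsonSerializer.Serialize(new
{
    turnstileId = "turnstile-001",
    stationCode = "CENTRAL",
    occurredAt = DateTimeOffset.UtcNow
});

using var message = new Message(Encoding.UTF8.GetBytes(payload))
{
    ContentType = "application/json",
    ContentEncoding = "utf-8"
};

await deviceClient.SendEventAsync(message);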

A more detailed diagram of the technology architecture can be seen in Figure 1.5, where both the turnstile unit and the cloud components are represented:

Figure 1.5 – The technology architecture for turnstile-to-cloud communication

Next, we will move on to the design of the event sourcing technique.

Event sourcing

Event sourcing is a technique that allows an application to append data to a log or stream in order to capture a definitive list of changes related to an object. One of the benefits of using event sourcing over traditional create, retrieve, update, and delete (CRUD) methods with relational databases is that writes become simple appends, so performance can be tuned and increased at the service level without the update-in-place overhead that CRUD methods carry. It also facilitates a separation of concerns and the single responsibility principle, as outlined by the SOLID development practices (https://en.wikipedia.org/wiki/SOLID).
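
As a minimal illustration of the idea (an in-memory stand-in for a real event store, with illustrative type names rather than the book's implementation), the following sketch appends events to a stream and derives current state by replaying them:

// Illustrative domain events for a turnstile stream.
public abstract record TurnstileEvent(DateTimeOffset OccurredAt);
public record RotationCompleted(DateTimeOffset OccurredAt) : TurnstileEvent(OccurredAt);
public record MalfunctionReported(DateTimeOffset OccurredAt, string Code) : TurnstileEvent(OccurredAt);

// An append-only, in-memory stand-in for an event store stream.
public class TurnstileStream
{
    private readonly List<TurnstileEvent> _events = new();

    public void Append(TurnstileEvent evt) => _events.Add(evt);

    // Current state is derived by replaying every event, optionally only up to a point in time.
    public int CountRotations(DateTimeOffset? asOf = null) =>
        _events.OfType<RotationCompleted>()
               .Count(e => asOf is null || e.OccurredAt <= asOf);
}

Replaying with an asOf timestamp is the same mechanism that later enables the point-in-time debugging and "what if" testing described in the following paragraphs.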

Another benefit of using event sourcing is its ability to achieve high message throughput while maintaining a high degree of resiliency. Technologies such as Kafka inherently allow for multiple message brokers and multiple partitions within topics. This design helps ensure that at least one broker is available to communicate with; multiple partitions within a topic allow producers and consumers to work in parallel, while partition replicas, distributed across the brokers according to the topic's replication factor, provide data redundancy.
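
For example, a topic for turnstile events could be created with several partitions for parallelism and a replication factor for redundancy. This sketch uses the Confluent.Kafka client's admin API; the bootstrap address, topic name, and counts are illustrative assumptions:

using Confluent.Kafka;
using Confluent.Kafka.Admin;

var adminConfig = new AdminClientConfig { BootstrapServers = "localhost:9092" };
using var adminClient = new AdminClientBuilder(adminConfig).Build();

// Three partitions for parallel consumers, three replicas for broker-level redundancy.
await adminClient.CreateTopicsAsync(new[]
{
    new TopicSpecification
    {
        Name = "equipment.turnstile-rotations",
        NumPartitions = 3,
        ReplicationFactor = 3
    }
});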

Event stores with streaming capabilities also enable you to inspect point-in-time data and replay events to aid in debugging. For example, if an event has data that causes an error in the service code, you can go back to the point in time before that error was thrown and replay events to help identify potential bugs. Additionally, replay can be used to perform "what if" testing. In some cases, normal use cases might have related edge cases that could either cause issues or introduce complexities that the services were not originally designed for. Using "what if" testing allows you to go to a certain point in time and begin issuing new events that correlate to the edge case while also monitoring application performance and potential failures.

Command-Query Responsibility Segregation

Command-Query Responsibility Segregation (CQRS) is a design pattern introduced by Greg Young that describes the logical and physical separation of concerns for reading and writing data. Normally, you will see specific functionality implemented to only allow writing to an event store (commands) or to only allow reading from it (queries). This allows for the independent scaling of read and write operations depending on the needs of the application or the needs of a presentation layer, whether that is business intelligence software such as Power BI or web applications accessible from desktop and mobile clients.
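
A minimal sketch of that separation, using hypothetical handler interfaces rather than the book's actual service code, might look like this, where commands express intent to change state and queries only ever read:

// Write side: commands express intent and result in appended events.
public record RecordRotationCommand(Guid TurnstileId);

public interface ICommandHandler<TCommand>
{
    Task HandleAsync(TCommand command, CancellationToken cancellationToken = default);
}

// Read side: queries return data from a read model and never modify it.
public record GetRotationCountQuery(Guid TurnstileId);

public interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> HandleAsync(TQuery query, CancellationToken cancellationToken = default);
}

// Example read-side handler backed by a read-optimized store (illustrative).
public class GetRotationCountHandler : IQueryHandler<GetRotationCountQuery, long>
{
    private readonly IReadOnlyDictionary<Guid, long> _rotationCounts;

    public GetRotationCountHandler(IReadOnlyDictionary<Guid, long> rotationCounts) =>
        _rotationCounts = rotationCounts;

    public Task<long> HandleAsync(GetRotationCountQuery query, CancellationToken cancellationToken = default) =>
        Task.FromResult(_rotationCounts.TryGetValue(query.TurnstileId, out var count) ? count : 0L);
}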

Details around how CQRS impacts the design of the application's domain services are covered in the next section. It's important to note that having that distinct separation of concerns is vital to leverage the pattern effectively.

Reviewing the implementation details

After looking at the patterns that will support the business use cases for the application, we can now move on to the more specific implementation details. While some of the implementation constructs used in this solution will seem familiar, there are some technical details that might be new to you. We will be exploring several topics in this section, which are intended to prepare you for the journey ahead.

The Visual Studio solution topology

The solutions within the source folder are broken up by domain, with a separate solution for each. Additionally, there is a solution for core platform needs, such as marker interfaces to identify value objects, entities, aggregates, and other objects. The intent is to allow each of the services to run as an independent solution, which can eventually be moved into its own repository if so desired.

Each of the domains will have API services that can be communicated with. These projects in Visual Studio are not overly complex, nor do they stray far from the general project template that is created when you create a new .NET Core API app. There are separate project types for queries, which read data, and commands, which affect data. Each domain will have a domain library, an infrastructure library, and test projects where applicable. Also, each domain will have a persistent consumer, in the form of an executable, that will run continuously to listen for domain messages and handle them accordingly.

Solution folders will also be present to house Docker files, Docker Compose files, and any relevant Infrastructure-as-Code (IaC) or Configuration-as-Code (CaC) required to deploy the necessary components. Eventually, this will also be the location of the YAML file that defines the build and release pipeline.

Important note

The namespaces in each solution all start with a common acronym: MTAEDA. This stands for Mass Transit Authority Event-Driven Application.

Identity and Access Management considerations

Managing access to an application can be a daunting task. Many different options are available, from standalone implementations to platform-native solutions such as Azure Active Directory. Sometimes, the choice of identity provider can be left to the application team; other times, it is driven by an enterprise strategy for authentication and authorization.

In this case, authentication will be handled at two layers. One layer is for transmitting events to applicable services, and the other layer is for users to log in and access management tools, such as dashboards and reports. As the dashboards and reports will be hosted in PowerBI, Azure Active Directory will be used to manage the authentication and authorization of those assets. For communication to the gateway and subsequent domain services for read and write operations, certificates will be used to govern traffic from the equipment to the gateway.

Event structure and schema

To help simplify and streamline event constructs, we have selected the CloudEvents open specification as the baseline for all events being transmitted. This allows you to capture relevant metadata about the operation while still sending over the event data itself. Additionally, using the CloudEvents schema enables you to potentially leverage platform tooling such as Azure Log Analytics and Azure Monitor. Of course, if your cloud target is different, there might be other ways the event schema could be useful. However, in this book, we will focus on the Azure cloud platform.

The CloudEvents schema is rather simple. There are fields for Data, Subject, Type, Source, Time, and DataContentType. They do not all require values; however, we will be using them to help better define the intent and content of each event we raise. It is entirely possible to not use this construct and still use the domains and domain services. The primary reason this design decision was made was to ensure there is consistency in the message format, along with a capacity to understand metadata associated with the event itself. Table 1.2 illustrates the CloudEvent fields and how they will be used to contain pertinent information when an event is raised:

Table 1.2 – The CloudEvent schema and field mappings
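
To show what this looks like on the wire, the following sketch serializes a turnstile rotation event into a CloudEvents-style JSON envelope using System.Text.Json; the event type, source path, and payload shape are illustrative assumptions rather than the book's exact schema:

using System.Text.Json;

var cloudEvent = new
{
    specversion = "1.0",
    id = Guid.NewGuid().ToString(),
    type = "mtaeda.equipment.turnstile.rotation-completed",
    source = "/stations/central/turnstiles/001",
    subject = "turnstile-001",
    time = DateTimeOffset.UtcNow,
    datacontenttype = "application/json",
    data = new { rotations = 1, stationCode = "CENTRAL" }
};

Console.WriteLine(JsonSerializer.Serialize(cloudEvent,
    new JsonSerializerOptions { WriteIndented = true }));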

Local development and debugging

For local development, using Visual Studio is the easiest option to ensure any prerequisites for the solution can be installed and managed. Additionally, you can use Visual Studio Code, or even GitHub Codespaces, to leverage a fully encapsulated development environment in the cloud.

If you are using Windows as your primary operating system, you will likely also leverage the Windows Subsystem for Linux (WSL), which allows for Linux-native builds and tooling to be directly run from Windows. In the event that any SDKs are missing, Visual Studio will alert you to that, and allow you to install them by clicking on a link next to the message.

There are a couple of different options that you can use to debug the application locally:

  • Start debugging directly from Visual Studio (F5).
  • Run the application using docker compose and attach to the Docker processes via Visual Studio.
  • Deploy the application to Kubernetes and attach to the application using the Kubernetes extension in Visual Studio.

New .NET 7 features

With the rollout of .NET 7, many improvements have been made to the underlying framework, along with language-specific updates. In this application, we will be taking advantage of some of the latest updates from a framework and language perspective. The implementations of minimal APIs and the asynchronous streaming of JSON data will come in handy for simplifying service implementations, and the ability to leverage Hot Reload will allow for faster and more meaningful debugging during the development life cycle.

Minimal APIs

One of the more exciting features available in .NET 7 is minimal APIs. This allows you to develop an ASP.NET Core Web API app with very little code. The .NET team has made common using directives global constructs, meaning that directives such as using System or using Microsoft.AspNetCore.Mvc can be declared once (or supplied implicitly) for the whole Web API project and are not required in each file as a result. Additionally, the Startup.cs file is no longer required, as you can configure the app directly from the main Program.cs file. The following example code illustrates a code block that is valid and will create an ASP.NET Core Web API app when it is compiled:

var app = WebApplication.Create(args);
app.MapGet("/api/testing", () => Results.Content("Testing"));
app.Run();

For a very simple API, you can map Get, Post, Put, Patch, and Delete operations directly in the Program.cs file, and they will be added to the routes for the Web API app. Additionally, you can call app.MapControllers() if you wish to keep controller code in separate files, as found in traditional Web API project layouts. On startup, the application will look for items derived from the Controller base class. If you choose this option, you will need to create the app with WebApplication.CreateBuilder() and register the controllers with the builder's service collection, telling the application to add controllers to the configuration services, as demonstrated in the following code block:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();
app.MapControllers();
app.Run();

JSON transcoding for gRPC

While support for gRPC services was added in earlier versions of .NET, further improvements have been introduced to enhance the experience. Previously, in order to connect to a gRPC service for testing purposes, you had to build a client for that service and interact with it via that client. With the addition of JSON transcoding support, you can now launch a Swagger page that contains all of the available methods you are exposing via Protobuf and perform tests against them. This doesn't replace the need to have a client built for communication purposes when deployed, but it does help the experience of testing locally.
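
Enabling transcoding is mostly a matter of registering the feature and annotating the service's .proto methods with HTTP bindings. As a rough sketch (assuming the Microsoft.AspNetCore.Grpc.JsonTranscoding package and a GreeterService generated from a .proto file, both illustrative here), the C# side looks something like this:

var builder = WebApplication.CreateBuilder(args);

// Registers gRPC and enables JSON transcoding so methods annotated with
// google.api.http options in the .proto file are also exposed as HTTP/JSON endpoints.
builder.Services.AddGrpc().AddJsonTranscoding();

var app = builder.Build();
app.MapGrpcService<GreeterService>();
app.Run();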

Observability

With .NET 7, the integration with OpenTelemetry allows developers to leverage out-of-the-box instrumentation as well as telemetry exporters for popular site reliability platforms such as Prometheus and Jaeger. OpenTelemetry is a platform-agnostic framework that enables developers to expose both stack metrics (such as ASP.NET Core instrumentation) and custom metrics based on counters, histograms, and meters. While there is active work being done on these libraries, there are versions available that can be installed via NuGet and make adding baseline telemetry capture straightforward.
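
As an illustration of what that wiring can look like (the package names and builder API have been evolving, so treat this as a sketch based on the OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore, and exporter packages rather than a definitive setup):

using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    // Collect ASP.NET Core request metrics and expose a Prometheus scrape endpoint.
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddPrometheusExporter())
    // Trace incoming requests and export spans to Jaeger.
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddJaegerExporter());

var app = builder.Build();
app.MapPrometheusScrapingEndpoint();
app.Run();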

Hot reload

One bit of functionality that has been present in other web development stacks for years but not in Visual Studio itself is the option to hot reload when debugging. For example, if you were to change a line of code in a controller, you would need to stop debugging, change the line of code, then resume debugging. With Hot Reload support in .NET 7, this is no longer an obstacle. In Visual Studio 2022, there is now a new icon that invokes hot reload once a change has been detected in the underlying source code.

Summary

This chapter provided an overview of the sample transit application, including the underlying business drivers, architectures, and implementation patterns. We have taken a quick lap around the domain model along with aggregates, entities, and value objects. Additionally, we have covered some key areas within the application's architecture, along with some specific implementation details, including new features in .NET 7 that will make development and debugging easier for us. All of these core topics will be covered in more detail in the coming chapters.

The next chapter takes a look at the producer-consumer pattern, which is an essential underpinning of the application and is what helps event-driven systems work at scale. We will be looking at the underlying usage of this design pattern, how it benefits applications that operate at scale, how it is implemented, and how to validate that communications are properly being routed and sent.

Questions

Answer the following questions to test your knowledge of this chapter:

  1. What potential insights can be gained when examining the business perspective behind an application?
  2. Are there other domains that you can identify for the application that are not already listed in the primary domain model?
  3. Are any of the aggregates misrepresented? Or do they contain information that might be irrelevant within the scope of the domain?
  4. How is event sourcing different from using a relational database or NoSQL database to store and retrieve application data?
  5. Is there an advantage to separating read operations from write operations?
  6. What benefits can be gained by separating domain solutions from the overall application solution? Are there potential drawbacks to separating the domain solutions?
  7. What other authentication and authorization mechanisms are available to secure access to reporting data and/or the write services that send data to Kafka?
  8. Is using a standard schema for events, such as CloudEvents, unnecessarily complicating the overall design of the application? Why or why not?
  9. What are some alternative implementations for these services aside from Docker or Kubernetes?

Further reading
