Having looked at the patterns that will support the application's business use cases, we can now move on to more specific implementation details. While some of the implementation constructs used in this solution will seem familiar, some technical details might be new to you. This section explores several topics intended to prepare you for the journey ahead.
The Visual Studio solution topology
The solutions within the source folder are broken up by domain, with a separate solution for each. Additionally, there is a solution for core platform needs, such as marker interfaces that identify value objects, entities, aggregates, and other objects. The intent is to allow each service to run as an independent solution, which can eventually be moved into its own repository if so desired.
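To make the core platform solution concrete, here is a minimal sketch of what those marker interfaces might look like. The namespace and interface names are illustrative assumptions, not the solution's actual definitions:

namespace MTAEDA.Core.Domain
{
    // Marker interface for entities, which are identified by an ID.
    public interface IEntity { }

    // Marker interface for value objects, which are compared by value.
    public interface IValueObject { }

    // Marker interface for aggregate roots, which own a cluster of entities.
    public interface IAggregateRoot : IEntity { }
}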
Each domain will expose API services that can be communicated with. The corresponding Visual Studio projects are not overly complex and do not stray far from the general project template created for a new .NET Core API app. There are separate project types for queries, which read data, and commands, which modify data. Each domain will have a domain library, an infrastructure library, and test projects where applicable. Each domain will also have a persistent consumer, in the form of an executable, that runs to listen for domain messages and handle them accordingly.
Solution folders will also be present to house Dockerfiles, Docker Compose files, and any Infrastructure-as-Code (IaC) or Configuration-as-Code (CaC) assets required to deploy the necessary components. Eventually, this will also be the location of the YAML file that defines the build and release pipeline.
Important note
The namespaces in each solution all start with a common acronym: MTAEDA. This stands for Mass Transit Authority Event-Driven Application.
Identity and Access Management considerations
Managing access to an application can be a daunting task. Many different options are available, from standalone implementations to platform-native solutions such as Azure Active Directory. Sometimes, the choice of identity provider can be left to the application team; other times, it is driven by an enterprise strategy for authentication and authorization.
In this case, authentication will be handled at two layers: one for transmitting events to the applicable services, and one for users logging in to access management tools such as dashboards and reports. As the dashboards and reports will be hosted in Power BI, Azure Active Directory will be used to manage authentication and authorization for those assets. For communication with the gateway and the downstream domain services for read and write operations, certificates will be used to govern traffic from the equipment to the gateway.
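As a rough sketch of the certificate layer, the following shows how an ASP.NET Core gateway could require client certificates through Kestrel. This is an assumed approach for illustration, not the book's actual gateway configuration:

using Microsoft.AspNetCore.Server.Kestrel.Https;

var builder = WebApplication.CreateBuilder(args);

// Require a client certificate on every TLS connection so that only
// equipment provisioned with a valid certificate can reach the gateway.
builder.WebHost.ConfigureKestrel(kestrel =>
    kestrel.ConfigureHttpsDefaults(https =>
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate));

var app = builder.Build();
app.Run();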
Event structure and schema
To help simplify and streamline event constructs, we have selected the CloudEvents open specification as the baseline for all events being transmitted. This allows you to capture relevant metadata about the operation while still sending the event data itself. Additionally, using the CloudEvents schema enables you to potentially leverage platform tooling such as Azure Log Analytics and Azure Monitor. Of course, if your cloud target is different, there might be other ways the event schema could be useful; in this book, however, we will focus on the Azure cloud platform.
The CloudEvents schema is rather simple. It defines fields for Data, Subject, Type, Source, Time, and DataContentType. Not all of them require values; however, we will use them to better define the intent and content of each event we raise. It is entirely possible to forgo this construct and still use the domains and domain services. The primary reason for this design decision was to ensure consistency in the message format, along with the capacity to understand the metadata associated with each event. Table 1.2 illustrates the CloudEvents fields and how they will be used to contain pertinent information when an event is raised:
Table 1.2 – The CloudEvent schema and field mappings
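To make the field mappings concrete, the following is a minimal sketch of constructing an event with these fields populated, assuming the CloudNative.CloudEvents NuGet package; the event type, source, and subject values are hypothetical examples:

using CloudNative.CloudEvents;

// A hypothetical "station entry" event raised by a piece of fare equipment.
var cloudEvent = new CloudEvent
{
    Id = Guid.NewGuid().ToString(),
    Type = "mtaeda.station.entry",            // what kind of event occurred
    Source = new Uri("urn:mtaeda:station-1"), // where the event originated
    Subject = "turnstile-42",                 // the resource the event concerns
    Time = DateTimeOffset.UtcNow,
    DataContentType = "application/json",
    Data = "{ \"riderId\": \"1234\" }"
};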
Local development and debugging
For local development, Visual Studio is the easiest option for ensuring that any prerequisites for the solution can be installed and managed. Alternatively, you can use Visual Studio Code, or even GitHub Codespaces, to leverage a fully encapsulated development environment in the cloud.
If you are using Windows as your primary operating system, you will likely also leverage the Windows Subsystem for Linux (WSL), which allows Linux-native builds and tooling to run directly from Windows. In the event that any SDKs are missing, Visual Studio will alert you and allow you to install them by clicking on a link next to the message.
There are a couple of different options that you can use to debug the application locally:
- Start debugging directly from Visual Studio (F5).
- Run the application using docker compose and attach to the Docker processes via Visual Studio.
- Deploy the application to Kubernetes and attach to it using the Kubernetes extension in Visual Studio.
New .NET 7 features
With the rollout of .NET 7, many improvements have been made to the underlying functionality, along with language-specific updates. In this application, we will take advantage of some of the latest updates from a framework and language perspective. Language-wise, minimal APIs and the asynchronous streaming of JSON data will come in handy for simplifying service implementations, and Hot Reload will allow for faster and more meaningful debugging during the development life cycle.
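Of these, asynchronous JSON streaming does not get its own subsection, so here is a brief sketch of the idea: a minimal API handler can return an IAsyncEnumerable<T>, and ASP.NET Core serializes the items to the response as they are produced. The endpoint and data below are hypothetical:

var app = WebApplication.Create(args);

// Items are serialized to the response stream as they are yielded,
// rather than buffering the entire collection in memory first.
app.MapGet("/api/stations", () => GetStationsAsync());

app.Run();

static async IAsyncEnumerable<string> GetStationsAsync()
{
    foreach (var station in new[] { "Central", "Riverside", "Airport" })
    {
        await Task.Delay(100); // simulate asynchronous work per item
        yield return station;
    }
}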
Minimal APIs
One of the more exciting features available in .NET 7 (first introduced in .NET 6) is minimal APIs. This allows you to develop an ASP.NET Core Web API app with very little code. The .NET team has made using directives a global construct, meaning that common imports, such as using System or using Microsoft.AspNetCore.Mvc, are assumed to be required by all files within a Web API project and, as a result, are not repeated in each file. Additionally, the Startup.cs file is no longer required, as you can configure the app directly from the main Program.cs file. The following example illustrates a code block that is valid and will create an ASP.NET Core Web API app when it is compiled:
var app = WebApplication.Create(args);

// Map a GET endpoint that returns a plain-text response.
app.MapGet("/api/testing", () => Results.Content("Testing"));

app.Run();
For a very simple API, you can map Get, Post, Put, Patch, and Delete operations directly in the Program.cs file, and they will be added to the routes for the Web API app. Additionally, you can call app.MapControllers() if you wish to keep controller code in separate files, as found in traditional Web API project layouts. On startup, the application will look for classes derived from the Controller base class. If you choose this option, you will need to create the app through the WebApplication.CreateBuilder() method and register controllers with the builder's service collection, as demonstrated in the following code block:
var builder = WebApplication.CreateBuilder(args);
// Register controller support with the service collection.
builder.Services.AddControllers();
var app = builder.Build();
// Discover controllers and add their actions as routes.
app.MapControllers();
app.Run();
JSON transcoding for gRPC
While support for gRPC services has been part of ASP.NET Core since .NET Core 3.0, further improvements have been introduced to enhance the experience. Previously, in order to test a gRPC service, you had to build a client for that service and interact with it through that client. With the addition of JSON transcoding support, you can now launch a Swagger page that lists all of the methods you expose via Protobuf and test them directly. This doesn't replace the need for a client when communicating with the service in a deployed environment, but it does improve the local testing experience.
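As a brief sketch, enabling JSON transcoding in a .NET 7 service looks something like the following, assuming the Microsoft.AspNetCore.Grpc.JsonTranscoding package and a .proto service annotated with HTTP bindings; GreeterService is a placeholder for any generated gRPC service class:

var builder = WebApplication.CreateBuilder(args);

// Register gRPC and enable JSON transcoding so that annotated RPC
// methods are also reachable as RESTful JSON endpoints.
builder.Services.AddGrpc().AddJsonTranscoding();

var app = builder.Build();

// GreeterService stands in for a generated gRPC service implementation.
app.MapGrpcService<GreeterService>();

app.Run();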
Observability
With .NET 7, the integration with OpenTelemetry allows developers to leverage out-of-the-box instrumentation, as well as telemetry exporters for popular site reliability platforms such as Prometheus and Jaeger. OpenTelemetry is a platform-agnostic framework that enables developers to expose both stack metrics (such as ASP.NET Core instrumentation) and custom metrics based on counters, histograms, and meters. While active work continues on these libraries, versions are available via NuGet that make adding baseline telemetry capture straightforward.
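The following is a minimal sketch of wiring up tracing, assuming the OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore, and OpenTelemetry.Exporter.Console packages; because these libraries were still evolving at the time of writing, the exact extension-method names may vary between versions:

using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Capture ASP.NET Core request traces and export them to the console;
// Prometheus or Jaeger exporters can be swapped in for production use.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService("MTAEDA.Gateway")) // illustrative service name
        .AddAspNetCoreInstrumentation()
        .AddConsoleExporter());

var app = builder.Build();
app.Run();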
Hot reload
One bit of functionality that has been present in other web development stacks for years, but not in Visual Studio itself, is the ability to hot reload code while debugging. Previously, if you wanted to change a line of code in a controller, you would need to stop debugging, change the code, and then resume debugging. With Hot Reload support in .NET 7, this is no longer an obstacle. Visual Studio 2022 now has an icon that invokes Hot Reload once a change has been detected in the underlying source code.