Distributed .NET with Microsoft Orleans

Distributed .NET with Microsoft Orleans: Build robust and highly scalable distributed applications without worrying about complex programming patterns

By Bhupesh Guptha Muthiyalu, Kumar Kunani

$41.99
4.3 (6 Ratings)
Paperback | May 2022 | 262 pages | 1st Edition

eBook: $41.99
Paperback: $41.99
Subscription: Free Trial (renews at $19.99/month)

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning

Distributed .NET with Microsoft Orleans

Chapter 1: An Introduction to Distributed Applications

In today's world of digital platforms, all applications are expected to be highly scalable, available, and reliable, with world-class performance. Many IT companies use the term ARP, which stands for availability, reliability, and performance, and guarantee a service-level agreement (SLA) for each. An SLA below 99.99% (four nines) is no longer acceptable for an application in today's challenging and highly competitive market, and some mission-critical applications guarantee an even higher SLA of >=99.999% (five nines). To meet these increasing user demands and requirements, an application needs to be highly scalable and distributed.

Distributed applications are applications or programs that run on multiple computers and communicate with each other through a well-defined interface over a network to process data and produce the desired output. Although they appear to be a single application from the end user's perspective, distributed applications typically consist of multiple components.

This chapter will cover the following:

  • Monolithic applications versus distributed applications
  • Challenges with distributed applications
  • Designing your application for scalability
  • Designing your application for high availability

Technical requirements

A basic understanding of Azure is all that is required to read this chapter.

Monolithic applications versus distributed applications

The following diagram shows, on the left, a classic monolithic hotel booking application with all the UX and business processing services deployed on a single application server, tightly coupled with the database. On the right is a basic N-tier distributed hotel booking application with the UX, business processing services, and database decoupled and deployed on separate servers.

Figure 1.1 – Monolithic application (left) versus an N-tier distributed application (right)


Monolithic architecture was widely adopted 15-20 years ago, but as systems grew and business needs expanded over time, it created plenty of problems for software engineering teams. Let's look at some of the common issues with this approach.

Common issues with monolithic apps

Let's have a look at the scaling issues:

  • In a monolithic app, there is no option to scale the UX and services separately because they are tightly coupled. Sometimes scaling up doesn't help at all because different components have conflicting resource needs.
  • As most components use common backend storage, locks are likely when they all try to access the data at the same time, leading to high latency. You can scale the storage up, but there are physical limits to how far a single storage instance can scale.

Here are some issues associated with availability, reliability, and performance SLAs:

  • Any change in the system requires deploying all UX and business components, leading to downtime and reduced availability.
  • Any non-persistent state, such as session state stored in the web app, is lost after every deployment, abandoning all the workflows the user had triggered.
  • A bug such as a memory leak or a security flaw in any module makes all modules vulnerable and has the potential to impact the whole system.
  • Due to the highly coupled nature of the modules and the resources shared between them, there will always be resource starvation or unoptimized resource usage, leading to high latency in the system.

Lastly, let's see what the impacts are on the business and engineering team:

  • The impact of a change is difficult to quantify and needs extensive testing, which slows down the rate of delivery to production. Even a small change requires the entire system to be deployed.
  • With a single, highly coupled system, there are always practical limits on how teams can collaborate to deliver a feature.
  • New scenarios such as mobile apps, chatbots, and analysis engines take more effort because there are no independent, reusable components and services.
  • Continuous deployment is almost impossible.

Let's see how these issues are addressed in a distributed application.

N-tier distributed applications

N-tier architecture divides the application into n tiers:

  • Presentation (also known as the UX layer, UI layer, or work surface)
  • Business (also known as the business rules layer or services layer)
  • Data (also known as the data storage and access layer)

Let's have a look at the advantages of a distributed application:

  • These tiers can be owned, managed, and deployed separately. For example, any bug fixes or changes in the UX or service will need regression testing and deployment of only that portion.
  • Multiple presentation layers, such as web, mobile, and bots, can leverage the same business and data tiers as they are decoupled.
  • Better scalability: I can scale up my UX, services, and database independently. For example, in the following diagram, I have horizontally scaled out each of the tiers independently.
Figure 1.2 – N-tier Distributed application scaled out


  • The separation of concerns has been taken care of. The presentation tier containing the user interface is separated from the services tier containing the business logic, which is in turn separated from the data access tier containing the data store. Lower-level components are unaware of the higher-level components consuming them: the data access tier is unaware of the services consuming it, and the services are unaware of the UX consuming them. Each service is separated based on the business logic and functionality it is supposed to provide.
  • Encapsulation has been taken care of. Each component in the architecture interacts with other components through a well-defined interface and contract. We should be able to replace any component in the diagram without worrying about its internal implementation, as long as it adheres to the contract (see the sketch after this list). This loosely coupled architecture also helps deliver features to customers faster. Multiple teams can work on their components independently and in parallel: they agree on the contracts and integration timelines at the beginning of the project, and once internal implementation and unit tests are done, they can start integration testing.
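
As a minimal sketch of this contract-based decoupling, consider a hypothetical room-availability service in a hotel booking application: the presentation tier depends only on the interface, so the implementation behind it can be swapped without touching the caller. The interface and class names here are illustrative and are not taken from the book's sample code.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Contract shared between the presentation tier and the services tier.
public interface IRoomAvailabilityService
{
    Task<IReadOnlyList<string>> GetAvailableRoomTypesAsync(DateTime checkIn, DateTime checkOut);
}

// One possible implementation; it could be backed by SQL, a cache, or a remote API.
public class SqlRoomAvailabilityService : IRoomAvailabilityService
{
    public Task<IReadOnlyList<string>> GetAvailableRoomTypesAsync(DateTime checkIn, DateTime checkOut)
        => Task.FromResult<IReadOnlyList<string>>(new[] { "Standard", "Deluxe" });
}

// The presentation tier depends only on the contract, so the implementation
// can be replaced without changing this class.
public class BookingPage
{
    private readonly IRoomAvailabilityService _availability;

    public BookingPage(IRoomAvailabilityService availability) => _availability = availability;

    public Task<IReadOnlyList<string>> LoadRoomTypesAsync(DateTime checkIn, DateTime checkOut)
        => _availability.GetAvailableRoomTypesAsync(checkIn, checkOut);
}
```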

In this section, we discussed the advantages of distributed applications over monolithic applications and how easy it is to scale each of the tiers independently. In the next section, we will see challenges with distributed applications.

Challenges with distributed applications

When you start developing distributed applications, you will run into the following common set of challenges that all developers face:

  • Design and implementation
  • Data management
  • Messaging

Design and implementation

The decisions made during the design and implementation phase are critical. There are several challenges, such as designing for high availability and scalability, and design changes in the later stages of a project can incur huge costs in development, testing, and so on, depending on the nature of the change. Hence, it is very important to arrive at the right design choices at the beginning.

Data management

As data is spread across different regions and servers in distributed applications, there are several challenges, such as data availability, maintaining data consistency in different locations across multiple servers, optimizing your queries and data store for good performance, caching, security, and many more.

Messaging

As all components are loosely coupled in distributed applications, asynchronous messaging is widely used for functionality such as sending emails and uploading files. The user doesn't have to wait for these operations to complete; they happen asynchronously in the background, and the user is notified on completion. While there are several benefits, such as higher performance and better scaling, asynchronous messaging also brings challenges, such as handling large messages, processing messages in a defined order, idempotency, handling failed messages, and many more.
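
The following is a minimal, in-process C# sketch of this asynchronous messaging idea using System.Threading.Channels. A real distributed application would typically use an external broker (for example, Azure Service Bus or a storage queue); the message type and handler shown here are hypothetical.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// A hypothetical notification message.
public record EmailMessage(string To, string Subject);

public class EmailQueue
{
    private readonly Channel<EmailMessage> _channel = Channel.CreateUnbounded<EmailMessage>();

    // The caller returns to the user immediately after enqueueing the work.
    public ValueTask EnqueueAsync(EmailMessage message) => _channel.Writer.WriteAsync(message);

    // A background worker drains the queue and sends notifications.
    public async Task ProcessAsync()
    {
        await foreach (var message in _channel.Reader.ReadAllAsync())
        {
            // Idempotency checks and failed-message (retry/poison) handling would go here.
            Console.WriteLine($"Sending '{message.Subject}' to {message.To}");
        }
    }
}
```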

In the next section, we will see how to design applications for scalability.

Designing applications for scalability

Scalability is the ability of a system to handle a growing number of incoming requests successfully by increasing the resources available to it. It is measured by the total number of requests your application can process and respond to successfully. How do you know your application has reached its maximum capacity? It is busy processing the current requests in the pipeline and can no longer accept and process new incoming requests successfully; the application may also stop performing as expected, and some requests will start to fail by timing out. At this stage, we must scale the application for business continuity. Let's look at the options available.

Vertical scaling or scaling up

Vertical scaling, or scaling up, means adding more resources to individual application servers, increasing their hardware capacity. Users send requests, and the application processes them, reads from and writes to the database, and sends responses back to the users. If the user base grows and the number of incoming requests becomes high, the application server will be overloaded, resulting in longer processing times and higher latency in responding to users. In this case, we can scale the application server up to a higher hardware capacity, as shown in the following diagram.

Figure 1.3 – Vertical scaling (scaling up)


Horizontal scaling or scaling out

Horizontal scaling, or scaling out, means adding more processing servers/machines to a system. Let's say my application is running on one server and can process up to 1,000 requests per minute. I could scale out by adding four more servers, handling an additional 4,000 requests per minute (5,000 in total), as shown in the following diagram.

Figure 1.4 – Horizontal scaling (scaling out)


Tip

A single server always becomes a bottleneck beyond a certain load, no matter how many CPU cores and how much memory it has. That's when horizontal scaling, or scaling out, helps.

Load balancers

Load balancers increase scalability by distributing incoming traffic across healthy servers within a region as the amount of simultaneous traffic grows. A load balancer uses health probe monitors that check a given port on each server; if a server is found to be unhealthy, it is taken out of the load balancer rotation and stops receiving incoming traffic. When the next health probe passes, the server is added back to the load balancer.
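
A minimal C# sketch of this behavior follows: requests are distributed round robin across the servers that the health probe currently reports as healthy, and an unhealthy server is skipped until its probe passes again. The server names are hypothetical, and the class is deliberately simplified (for example, it is not fully thread-safe).

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class RoundRobinLoadBalancer
{
    private readonly List<string> _servers;
    private readonly HashSet<string> _unhealthy = new(); // not thread-safe; a sketch only
    private int _next = -1;

    public RoundRobinLoadBalancer(IEnumerable<string> servers) => _servers = servers.ToList();

    // Called by the health probe monitor when a probe on a server passes or fails.
    public void MarkHealthy(string server) => _unhealthy.Remove(server);
    public void MarkUnhealthy(string server) => _unhealthy.Add(server);

    // Picks the next healthy server for an incoming request (round robin).
    public string PickServer()
    {
        var healthy = _servers.Where(s => !_unhealthy.Contains(s)).ToList();
        if (healthy.Count == 0)
            throw new InvalidOperationException("No healthy servers available.");

        var index = Interlocked.Increment(ref _next) & int.MaxValue; // keep non-negative
        return healthy[index % healthy.Count];
    }
}

// Usage (hypothetical server names):
// var lb = new RoundRobinLoadBalancer(new[] { "web-01", "web-02", "web-03" });
// lb.MarkUnhealthy("web-02");    // health probe on web-02 failed
// var target = lb.PickServer();  // routes only to web-01 or web-03
```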

Caching

Caching is one of the key system design patterns that help in scaling any application, as well as improving response times. A typical application reads and writes data to a data store, usually a relational database such as SQL Server or a NoSQL database such as Cosmos DB. However, reading from the database on every request is not efficient, especially when the data is not changing: databases usually persist data to disk, and loading data from disk and returning it to the browser client (or device, in the case of mobile/desktop applications) is a costly operation. This is where caching comes into play. A cache store can be used as the primary source for retrieving data, falling back to the original data store only when the data is not available in the cache, thus giving a faster response to the consuming application. While doing this, we also need to ensure that cached data is expired or refreshed whenever the data in the original data store is updated.
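
The following is a minimal C# sketch of this cache-aside idea, assuming a hypothetical data store loader delegate: reads hit the cache first and fall back to the data store only on a miss, and an invalidation method is provided for when the source data changes.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Cache-aside sketch: the cache is checked before the data store, and
// populated on a miss. A production system would also apply expiry.
public class CacheAside<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, TValue> _cache = new();
    private readonly Func<TKey, Task<TValue>> _loadFromDataStore;

    public CacheAside(Func<TKey, Task<TValue>> loadFromDataStore)
        => _loadFromDataStore = loadFromDataStore;

    public async Task<TValue> GetAsync(TKey key)
    {
        if (_cache.TryGetValue(key, out var cached))
            return cached;                         // cache hit: no database round trip

        var value = await _loadFromDataStore(key); // cache miss: hit the data store
        _cache[key] = value;                       // populate the cache for later reads
        return value;
    }

    // Call this when the source data changes so stale entries are not served.
    public void Invalidate(TKey key) => _cache.TryRemove(key, out _);
}
```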

Distributed caching

As we know, in a distributed system the data store is split across multiple servers; similarly, distributed caching is an extension of traditional caching in which cached data is stored on more than one server in a network. Before we get into distributed caching, here's a quick recap of the CAP theorem:

  • C: Stands for consistency, meaning all nodes have the same copy of the data
  • A: Stands for availability, meaning the system stays available and the failure of one node doesn't bring the whole system down
  • P: Stands for partition tolerance, meaning the system keeps working even if communication between nodes breaks down

As per the CAP theorem, any distributed system can only achieve two of the preceding principles, and as distributed systems must be partition-tolerant (P), we can only achieve either the consistency (C) of data or the high availability (A) of data. 

So, distributed caching is a caching strategy in which data is stored on multiple servers/nodes/shards outside the application server. Since data is distributed across multiple servers, if one server goes down, another server can be used as a backup to retrieve the data. For example, if our system wanted to cache countries, states, and cities, and there were three caching servers in the distributed caching system, hypothetically one cache server might cache countries, another states, and another cities (in a real application, data is partitioned in much more complex ways). Each server would additionally act as a backup for one or more entities. At a high level, one type of distributed cache system looks as follows:

Figure 1.5 – Distributed caching high-level representation 

 


As you can see, data is read from the primary server, and if the primary server is not available, the caching system falls back to the secondary server. For writes, a write operation is not complete until the data is written to both the primary and the secondary server, and until that operation completes, read operations can be blocked, compromising the availability of the system. Another write strategy is background synchronization, which results in eventual consistency, compromising the consistency of the data until synchronization is complete. Going back to the CAP theorem, most distributed caching systems fall into the CP or AP category.
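
A minimal C# sketch of this primary/secondary behavior is shown below, assuming a hypothetical cache node interface: reads fall back to the secondary node when the primary is unavailable, and a write only completes once both nodes have been updated (the synchronous, consistency-favoring strategy described above).

```csharp
using System.Threading.Tasks;

// Hypothetical cache node abstraction; not a real client library API.
public interface ICacheNode
{
    bool IsAvailable { get; }
    Task<string?> GetAsync(string key);
    Task SetAsync(string key, string value);
}

public class PrimarySecondaryCache
{
    private readonly ICacheNode _primary;
    private readonly ICacheNode _secondary;

    public PrimarySecondaryCache(ICacheNode primary, ICacheNode secondary)
        => (_primary, _secondary) = (primary, secondary);

    // Reads prefer the primary and fall back to the secondary when it is down.
    public Task<string?> GetAsync(string key)
        => _primary.IsAvailable ? _primary.GetAsync(key) : _secondary.GetAsync(key);

    // Synchronous replication: the caller waits until both copies agree (CP).
    // A background-synchronization strategy would instead return after the
    // primary write and replicate later, giving eventual consistency (AP).
    public async Task SetAsync(string key, string value)
    {
        await _primary.SetAsync(key, value);
        await _secondary.SetAsync(key, value);
    }
}
```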

Sharding

Sharding can improve scalability when storing and accessing large amounts of data. It is achieved by splitting a single data store into multiple horizontal partitions, or shards. As the data is split across a cluster of databases, the system can store a large amount of data and, at the same time, handle additional requests. We can continue to scale the system out by adding further shards.
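
As a rough illustration of how data might be routed to a shard, the following C# sketch hashes a shard key and maps it onto one of a fixed set of shards. The connection strings are hypothetical placeholders, and a real system would also need a stable hash function and a rebalancing strategy.

```csharp
using System;
using System.Collections.Generic;

public class ShardRouter
{
    private readonly IReadOnlyList<string> _shardConnectionStrings;

    public ShardRouter(IReadOnlyList<string> shardConnectionStrings)
        => _shardConnectionStrings = shardConnectionStrings;

    // Maps a shard key (for example, a customer ID) to one of the shards.
    // Note: string.GetHashCode() is not stable across .NET processes;
    // use a stable hash (e.g. a custom FNV/Murmur implementation) in practice.
    public string GetShardFor(string shardKey)
    {
        var hash = (uint)shardKey.GetHashCode();
        var index = (int)(hash % (uint)_shardConnectionStrings.Count);
        return _shardConnectionStrings[index];
    }
}

// Usage (placeholder connection strings):
// var router = new ShardRouter(new[] { "Server=shard0;...", "Server=shard1;...", "Server=shard2;..." });
// var connectionString = router.GetShardFor("customer-42");
```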

Here are a few important considerations:

  • Keep shards balanced for even load distribution, and periodically rebalance them as data is updated and removed from each shard.
  • Avoid queries that retrieve data from multiple shards, as they are inefficient and create performance bottlenecks. You can use parallel tasks to fetch data from different shards more efficiently, but this adds complexity.
  • A large number of small shards is better for load balancing than a small number of large shards.

In the next section, we will see how to design applications for high availability.

Designing applications for high availability

High availability ensures business continuity by reducing outages and disruption for customers even when some components fail in a distributed application. Let's look at some of the ways to achieve high availability.

In the scaled-out N-tier distributed application, we added more servers to all the tiers, but I did not mention anything about Azure data centers, Azure regions, Azure Load Balancer, Azure Traffic Manager, Azure availability sets, Azure availability zones, or SQL Always On availability groups. Let's discuss each of these Microsoft Azure offerings and see what benefits they give our distributed application to make it highly available.

Azure data centers

Azure data centers are physical infrastructures or buildings located all over the world where Microsoft Azure servers/VMs and services are managed.

Azure regions

An Azure region is a set of data centers connected through a dedicated low-latency network. Microsoft has 60+ Azure regions all over the world – more than any other cloud provider – from which customers can choose to deploy their applications.

Azure paired regions

An Azure paired region, as the name suggests, is a set of two regions, each consisting of a set of data centers connected through a dedicated low-latency network. The main benefit of paired regions is that if a broader Azure outage affects multiple regions, at least one region in each pair is prioritized by Azure for quicker recovery. Planned Azure system patches and updates are also rolled out to one region of a pair at a time to minimize outages or downtime in the rare case of a bug or issue with an update.

Tip

You can read about paired regions in detail at https://docs.microsoft.com/en-us/azure/best-practices-availability-paired-regions, which covers the best practices, the different regional pairs available across the globe for you to choose from, and their benefits.

Azure Traffic Manager

Azure Traffic Manager is a DNS-based load balancer to distribute traffic to internet-facing endpoints across global regions.

Traffic Manager provides a wide range of options to route traffic. Let's look at some of the frequently used routing options that can be configured in your Traffic Manager profile:

  • Priority: This option lets you set a primary service endpoint to which all traffic is routed, and configure backup endpoints that take traffic when the primary endpoint is not available. This routing option is very useful when you want to provide reliable services to your customers by having backup endpoints.
  • Weighted: This option lets you distribute traffic across a set of endpoints based on pre-defined weights. The weight is an integer, and the higher the weight, the higher the priority. You can configure the same weight across all endpoints to distribute traffic evenly. This routing option is very useful when you want to gradually shift traffic to a new endpoint or give specific weight to certain endpoints when you are scaling out.
  • Performance: This option lets you route traffic to the "closest" endpoint for the user. The closest endpoint is determined not by geographic distance but by the lowest network latency; Traffic Manager maintains a latency lookup table between source IP address ranges and Azure data centers. This routing option is very useful when you want to improve the responsiveness of your applications.

Traffic Manager provides endpoint monitoring and automatic endpoint failover as well. Let's look at important settings to be configured in your Traffic Manager profile for endpoint monitoring:

  • Protocol: You can set HTTP, HTTPS, or TCP as the protocol Traffic Manager uses to probe your endpoints' health. Note that HTTPS monitoring only checks whether a certificate is present; it doesn't check whether the certificate is valid.
  • Port: You can set the port Traffic Manager uses to send the request.
  • Expected status code ranges: You can set success status code ranges in the format 200-299, 301-301. When these status codes are received in response to a health check, Traffic Manager marks the endpoint as healthy. If you don't set anything, a default value of 200 is used as the success status code.
  • Probing interval: You can set the interval that specifies how frequently Traffic Manager runs endpoint monitoring health checks. You can choose 30 seconds (normal probing) or 10 seconds (fast probing). If you don't set anything, a default of 30 seconds is used.
  • Tolerated number of failures: You can set the total number of failures Traffic Manager tolerates before marking an endpoint unhealthy. You can set it between 0 and 9; a value of 0 means the endpoint is marked unhealthy after even a single failure. If you don't set anything, a default of 3 is used.
  • Probe timeout: You can set how long Traffic Manager waits for a response before treating an endpoint check as failed. You can set the timeout between 5 and 10 seconds when the probing interval is 30 seconds. If you don't set anything, a default of 9 seconds is used.

A Traffic Manager probe initiates a GET request to the monitored endpoint using the configured protocol, port, and relative path. If the probing agent receives a 200-OK response, or any response within the expected status code ranges, it marks the endpoint as healthy. If the response falls outside the expected status code ranges, or no response is received within the timeout period, the probing agent retries until the tolerated number of failures is reached; the endpoint is marked unhealthy once the consecutive failure count exceeds the Tolerated number of failures setting.
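
The following C# sketch mimics that probing behavior (it illustrates the logic only and is not the actual Traffic Manager implementation): it issues a GET request with a timeout, treats status codes in the expected range as success, and only marks the endpoint unhealthy after the tolerated number of consecutive failures is exceeded.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class EndpointProbe
{
    private readonly HttpClient _http;
    private readonly Uri _endpoint;          // protocol + port + relative path
    private readonly int _toleratedFailures; // e.g. 3 (the default)
    private int _consecutiveFailures;

    public bool IsHealthy { get; private set; } = true;

    public EndpointProbe(Uri endpoint, int toleratedFailures = 3, int probeTimeoutSeconds = 9)
    {
        _endpoint = endpoint;
        _toleratedFailures = toleratedFailures;
        _http = new HttpClient { Timeout = TimeSpan.FromSeconds(probeTimeoutSeconds) };
    }

    // Call this on every probing interval (30 s normal, 10 s fast).
    public async Task ProbeOnceAsync()
    {
        bool success;
        try
        {
            var response = await _http.GetAsync(_endpoint);
            var code = (int)response.StatusCode;
            success = code >= 200 && code <= 299;   // expected status code range
        }
        catch (Exception)                            // timeout or connection failure
        {
            success = false;
        }

        if (success)
        {
            _consecutiveFailures = 0;
            IsHealthy = true;
        }
        else if (++_consecutiveFailures > _toleratedFailures)
        {
            IsHealthy = false;                       // taken out of rotation
        }
    }
}
```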

You can configure the routing and endpoint monitoring settings in your Traffic Manager profile as shown in the following screenshot. These are just sample values; set them based on your application's requirements.

Figure 1.6 – Traffic Manager profile settings


Availability sets and availability zones

An availability set is a logical group of VMs within a data center in an Azure region and promises availability of 99.95%. It doesn't provide resiliency or high availability in the event of an entire data center outage.

An availability zone is made up of one or more data centers with independent power, cooling, and networking. It's a physical location within an Azure region and provides high availability (99.99%) even in the event of data center failures.

SQL Always On availability groups

SQL Always On availability groups were introduced in SQL Server 2012 to increase database availability. An availability group supports a set of read-write primary databases and one to eight sets of secondary databases to which we can fail over; together, these are called availability databases. Having the primary and secondary databases in different Azure regions gives us high availability and resiliency against data center and Azure region failures. You can create a listener for an availability group and share that connection string for clients to connect to the database (a connection sketch using such a listener follows the list below). Commit mode and failover are two important factors to consider:

  • Synchronous commit mode: In this mode, confirmations are not sent back to the client until the data has been committed to the secondary database. This effectively guarantees that every transaction committed on a given primary database has also been committed on the corresponding secondary database. Hence, this is the preferred option for syncing data between databases within the same region, but not for databases across regions, due to latency.
  • Asynchronous commit mode: In this mode, confirmations are sent back to the client as soon as the data is committed to the primary database, without waiting for it to be committed to a secondary database. This mode is suitable when you want to reduce response latency or when the primary and secondary databases are separated by a considerable distance. Hence, this is the preferred option for syncing data between databases across two different regions.
  • Automatic failover: Automatic failover lets a secondary database automatically take over as the primary when the primary database becomes unavailable. It is the preferred option when the primary and secondary databases reside within the same region with data always synchronized between them. For cross-region setups, manual failover is preferred to avoid data loss, as data is usually committed asynchronously.
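
As a minimal sketch of connecting through an availability group listener from .NET, assuming the Microsoft.Data.SqlClient package and a hypothetical listener DNS name: MultiSubnetFailover speeds up reconnection to the new primary after a failover, and ApplicationIntent can be set to ReadOnly to route reporting queries to a readable secondary (if read-only routing is configured on the availability group).

```csharp
using Microsoft.Data.SqlClient;

// Read-write connection through the listener (hypothetical listener name and database).
var readWrite = new SqlConnection(
    "Server=tcp:ecommerce-aglistener.contoso.com,1433;" +
    "Database=eCommerce;Integrated Security=True;" +
    "MultiSubnetFailover=True;");

// Read-only connection: can be redirected to a readable secondary replica.
var readOnly = new SqlConnection(
    "Server=tcp:ecommerce-aglistener.contoso.com,1433;" +
    "Database=eCommerce;Integrated Security=True;" +
    "MultiSubnetFailover=True;ApplicationIntent=ReadOnly;");
```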

Architecture for high availability

Let's extend the scaled-out N-tier distributed application to be highly available based on the options we have discussed, as shown in the following diagram.

Figure 1.7 – N-tier distributed application scaled out and highly available


  • Leverage Azure paired regions for high availability across your UX, middle, and data tiers. One region will be the primary region and the other the secondary region; if one region goes down, the other is available as a backup. In this case, I have gone with the American regions, with Central US as the primary region and East US 2 as the secondary region. Regional pairs are also available in Asia, Europe, and Africa, so you can select a pair based on where your customers are located. When you combine Azure Traffic Manager with Azure Load Balancer, you get global traffic management combined with a local failover option.
  • Leverage stateless services so that any server in your application can handle any incoming request. Stateful services maintain contextual information during transactions, and subsequent requests within a transaction need to hit the same server, which makes designing for high availability and scalability a challenge.
  • Leverage Active-Active mode, which routes traffic to both regions and load-balances incoming requests. If one region becomes unavailable, it is automatically taken out of rotation. Active-Passive mode routes traffic to only one region at a time and requires manual failover to the secondary region when the primary region goes down, so it is not the right option for high availability unless your service is stateful and needs sticky sessions, where requests must hit the same server for the duration of the active user session.
  • Leverage deployment of multiple instances of your service in each region. The DNS names of these two instances are eCommerce-CUS.cloudapp.net and eCommerce-EUS2.cloudapp.net. Create a Traffic Manager profile with the name eCommerce-trafficmanager.net and configure it to use the weighted routing method across the two endpoints, eCommerce-CUS.cloudapp.net and eCommerce-EUS2.cloudapp.net. Configure the domain name eCommerce.com to point to eCommerce-trafficmanager.net using a DNS CNAME record.
  • Leverage availability zones to get high availability (99.99%) and resiliency against data center failures. Each of the two endpoints, eCommerce-CUS.cloudapp.net and eCommerce-EUS2.cloudapp.net, is configured to run on multiple servers within its region, and all the servers run within availability zones.
  • Leverage a SQL Always On high availability setup with synchronous commit and automatic failover between databases/nodes in the same region, and asynchronous commit and manual failover across regions, as shown in the following diagram, which is a magnified view of the database from the architecture diagram. When the application connects to the SQL availability group listener, calls are routed to Node 1 (N1) in Datacenter 1 (DC1), which is in the primary region and holds the primary read/write database.
Figure 1.8 – SQL high availability setup


Let's look at two different scenarios when there is an outage and how this setup will help with high availability:

Summary

In this chapter, we discussed the difference between monolithic and distributed applications and why distributed applications are the way forward. We also discussed challenges with distributed applications and how to architect your distributed applications for high availability and scalability. In the next chapter, we will learn in detail about proven design patterns and principles to handle the different challenges with design, data management, and messaging in distributed applications.

Questions

  1. What are the options available to increase the scalability of the system?

A. Vertical scaling or scaling up

B. Horizontal scaling or scaling out

C. Both

D. None of the above

Answer – C

  2. What is vertical scaling or scaling up?

A. Vertical scaling or scaling up means adding more resources to a single application server and increasing its hardware capacity, typically achieved by increasing the capacity of the CPU or memory.

B. Vertical scaling or scaling up means adding more processing servers/machines to a system.

C. Vertical scaling or scaling up means adding caching to your system to avoid database calls for frequently used objects.

D. None of the above.

Answer – A

  3. What is an availability zone?

A. An availability zone is a logical group of VMs within a data center in an Azure region and promises availability of 99.95%. They don't provide resiliency and high availability in the event of the outage of an entire data center.

B. An availability zone is made up of one or more data centers with independent power, cooling, and networking. It's a physical location within an Azure region and provides high availability (99.99%) even in the case of data center failures.

C. An availability zone is a pair of regions and each region consists of a set of data centers connected through a dedicated low-latency network.

D. An availability zone is a DNS-based load balancer to distribute traffic to internet-facing endpoints across global regions.

Answer – B

  4. Which of the following statements is correct?

A. SQL Always On availability groups support a set of read-write primary databases and 1 to 12 sets of secondary databases to which we can fail over.

B. SQL Always On availability groups support a set of read-write primary databases and one to four sets of secondary databases to which we can fail over.

C. SQL Always On availability groups support a set of read-write primary databases and one set of secondary databases to which we can fail over.

D. SQL Always On availability groups support a set of read-write primary databases and one to eight sets of secondary databases to which we can fail over.

Answer – D

  5. Which principle states that services can be deployed and scaled independently, so that issues in one service have only a local impact and can be fixed by deploying just the impacted service?

A. Domain-driven design principle

B. Single-responsibility principle

C. Stateless service principle

D. Resiliency principle

Answer – B

  6. What are stateful services?

A. Stateful services maintain contextual information during transactions and subsequent requests within a transaction need to hit the same server, hence designing for high availability and scalability becomes a challenge.

B. Stateful services do not maintain contextual information during transactions and subsequent requests within a transaction can hit any server, hence designing for high availability and scalability is not a challenge.

C. Stateful services are the right approach to build your services for a highly scalable distributed application.

Answer – A


Key benefits

  • Explore the Orleans cross-platform framework for building robust, scalable, and distributed applications
  • Handle concurrency, fault tolerance, and resource management without complex programming patterns
  • Work with essential components such as grains and silos to write scalable programs with ease

Description

Building distributed applications in this modern era can be a tedious task as customers expect high availability, high performance, and improved resilience. With the help of this book, you'll discover how you can harness the power of Microsoft Orleans to build impressive distributed applications. Distributed .NET with Microsoft Orleans will demonstrate how to leverage Orleans to build highly scalable distributed applications step by step in the least possible time and with minimum effort. You'll explore some of the key concepts of Microsoft Orleans, including the Orleans programming model, runtime, virtual actors, hosting, and deployment. As you advance, you'll become well-versed with important Orleans assets such as grains, silos, timers, and persistence. Throughout the book, you'll create a distributed application by adding key components to the application as you progress through each chapter and explore them in detail. By the end of this book, you'll have developed the confidence and skills required to build distributed applications using Microsoft Orleans and deploy them in Microsoft Azure.

Who is this book for?

This book is for .NET developers and software architects looking for a simplified guide for creating distributed applications, without worrying about complex programming patterns. Intermediate web developers who want to build highly scalable distributed applications will also find this book useful. A basic understanding of .NET Classic or .NET Core with C# and Azure will be helpful.

What you will learn

  • Get to grips with the different cloud architecture patterns that can be leveraged for building distributed applications
  • Manage state and build a custom storage provider
  • Explore Orleans key design patterns and understand when to reuse them
  • Work with different classes that are created by code generators in the Orleans framework
  • Write unit tests for Orleans grains and silos and create mocks for different parts of the system
  • Overcome traditional challenges of latency and scalability while building distributed applications
Estimated delivery fee (Deliver to Turkey)

Standard delivery (10 - 13 business days): $12.95

Premium delivery (3 - 6 business days): $34.95 (includes tracking information)

Product Details

Publication date: May 27, 2022
Length: 262 pages
Edition: 1st
Language: English
ISBN-13: 9781801818971
Vendor: Google


Packt Subscriptions

See our plans and pricing

$19.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

$199.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
  • Exclusive print discounts

$279.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
  • Exclusive print discounts

Frequently bought together

  • Distributed .NET with Microsoft Orleans: $41.99
  • Microservices Communication in .NET Using gRPC: $38.99
  • High-Performance Programming in C# and .NET: $51.99

Total: $132.97

Table of Contents

16 Chapters
Section 1 - Distributed Applications Architecture
Chapter 1: An Introduction to Distributed Applications
Chapter 2: Cloud Architecture and Patterns for Distributed Applications
Section 2 - Working with Microsoft Orleans
Chapter 3: Introduction to Microsoft Orleans
Chapter 4: Understanding Grains and Silos
Chapter 5: Persistence in Grains
Chapter 6: Scheduling and Notifying in Orleans
Chapter 7: Engineering Fundamentals in Orleans
Section 3 - Building Patterns in Orleans
Chapter 8: Advanced Concepts in Orleans
Chapter 9: Design Patterns in Orleans
Section 4 - Hosting and Deploying Orleans Applications to Azure
Chapter 10: Deploying an Orleans Application in Azure Kubernetes
Chapter 11: Deploying an Orleans Application to Azure App Service
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.3 (6 Ratings)
5 star: 50%
4 star: 33.3%
3 star: 16.7%
2 star: 0%
1 star: 0%

Top Reviews

Ajmal Yazdani Jun 15, 2022
5 stars
A must book for everyone who are looking for distributed data processing in .NET world with amazing technology Orleans. A masterpiece!!! Kudos to the writers who did excellent job here to show the technology with very clear and detailed coding examples. 5 star, Highly recommended!!! Must BUY!!!
Amazon Verified review Amazon
J Jun 07, 2022
5 stars
I've read this cover to cover and downloaded the samples to look through the code. You definitely need to have a working knowledge of C#, but then I'd assume you wouldn't try to tackle Orleans without that. The book is really clear. It provides relevant information about architecture, but unlike many resources on actor type systems, it also provides details on how to actually make a decent program, starting with the basics, then going onto more advanced topics. It's far superior to trying to read the docs. There is a good tutorial on Pluralsight, but that doesn't go into the depth required to really 'know' Orleans like this book does. A genuine 5* book. EDIT: Further to my previous review, I've been actively using Orleans for about a month. I keep going back to this book and it almost always has the answers. It really is fantastic.
Amazon Verified review Amazon
Brady Gaster Aug 03, 2022
5 stars
As a member of the Orleans team, I was excited to see Bhupesh's book come out, as I know from talking with the growing Orleans community that folks wanted new books and docs and samples. We spent some time during the development of Orleans 3.6 working on re-organizing our samples and being more tactical about the docs we'd put out. This book complements those efforts, demonstrates some of the more complicated topics when developing with Orleans, and informs folks of various design decisions the Orleans team made on the road to creating a truly performant, enterprise-grade, distributed programming framework. In fact, my favorite part of the book was how, early on, the author refers to Orleans in the way I refer to it in every conversation: Orleans is "distributed .NET."

The first few chapters do a fantastic job laying the groundwork on what distributed application development is, and it iterates through some of the patterns and challenges developers face when building distributed apps. Patterns like caching and CQRS are discussed, and the author lays significant ground establishing the fact that distributed systems are complex. Then, Orleans is introduced and shown, from an architectural viewpoint rather than a code-first stance, how Orleans solves many of the challenges distributed developers face. Possibly my biggest surprise of the book was during this introduction to Orleans, when the author carefully lays out some of the important concepts developers need to consider when choosing Orleans and does a fantastic job of summarizing why the Orleans team made some of the decisions. What's more is the author takes the time to discuss not only when Orleans is an appropriate technology, but even when it isn't appropriate. This helps the reader understand from an opinionated, honest viewpoint if the tech is right for them.

From there, the book explores nearly every important concept a developer needs to be familiar with when developing distributed apps with Orleans. Using an easy-to-follow example of building a hotel software system (a great paradigm for a sample app, as we've all had to check into a hotel at some point!), the reader is shown how to implement the basics of the Orleans virtual actor model using cloud-native Grain implementations to persist object state. With regards to persistence, the example not only shows how to use the Cosmos DB persistence provider for world-class data distribution, but the example also goes on to show how to implement a fully custom storage persistence class for Orleans, without being intimidating or complex. One immediately sees the power and versatility of Orleans persistence.

After this deep exploration into Orleans data persistence, the latter chapters go into great exploration of how to perform reminders in Orleans, goes into fantastic exploration of how to achieve pub/sub using Orleans observers, and summarizes Orleans streaming into a few-line demo that anyone can follow. Streaming with SignalR and Orleans has always been an intimidating topic for new developers, but the example in this book on achieving event-driven using Orleans streams is second-to-none.

The last section takes the reader on a journey of deploying Orleans to Azure AKS and App Service, and carefully lays out all the individual steps required to make an Orleans cluster in the cloud.
I would love to see an update to the book one day that includes a discussion of Azure Container Apps, but I know the timing of the book made including this somewhat difficult (the two had a very-closely aligned release date, accidentally).For anyone getting started with Orleans, this book is great. It takes you from Orleans Day 1 through some very complicated topics but is never intimidating. In spite of a few spelling or auto-corrective errors in the code, I didn’t notice any issues with the code examples. Just be careful if you copy them letter-for-letter, as there are some places where auto-capitalization might be breaking some of the type names. I noticed IObserver being changed to Iobserver, for example.
Amazon Verified review Amazon
John Kattenhorn Jun 06, 2022
4 stars
Perfect for developers starting out with Orleans.

This book gives a good introduction to the challenges of building distributed applications and how Orleans can address some of the complexities.

If you are a beginner at building distributed applications with Orleans then this book is for you. It covers all of the major functional pieces of Orleans and provides some guidance on how best to use Orleans.

It would have been useful to have included more content around good actor designs and anti-patterns to avoid.

If the Orleans documentation is not to your liking then this book would be a good place to start learning Orleans and is well worth the money.

If you are already an experienced developer with Orleans then this book won't teach you much you don't already know.
Amazon Verified review Amazon
alex mccool Aug 02, 2022
4 stars
This is a good, concise introduction to the virtual actor library of Orleans. If you're unfamiliar with the actor pattern or working with clusters, this resource can get you started. While it does not go too deeply into the async/await details of the Orleans scheduler and things like long-running tasks, it does give you a feel for how to use actors to accomplish a goal. The patterns section is rather mundane, but good as a refresher for developers who have forgotten some of the basic best practices.
Amazon Verified review Amazon

FAQs

What is the delivery time and cost of a print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time over the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

A customs duty or localized taxes may be applicable to shipments to recipient countries outside the EU27. These are charged by the recipient country, are payable by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (that is, where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com who will be able to resolve this issue for you.
  3. You will have a choice of a replacement or a refund for the problem items (damaged, defective, or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal