Synchronizing data across threads
In this section, we will look at some of the methods that are available in .NET for synchronizing data across multiple threads. Sharing data across threads can be one of the primary pain points of multithreaded development if it is not handled properly. Classes in .NET that have protections in place for threading are said to be thread-safe.
Data in multithreaded applications can be synchronized in several different ways:
- Synchronized code regions: Only synchronize the block of code that is necessary, using the Monitor class or with some help from the .NET compiler.
- Manual synchronization: There are several synchronization primitives in .NET that can be used to manually synchronize data.
- Synchronized context: This is only available in .NET Framework and Xamarin applications.
- System.Collections.Concurrent classes: There are specialized .NET collections to handle concurrency. We will examine these in Chapter 9.
In this section, we’ll look at the first two methods. Let’s start by discussing how to synchronize code regions in your application.
Synchronizing code regions
There are several techniques you can use to synchronize regions of your code. The first one we will discuss is the Monitor class. You can surround a block of code that can be accessed by multiple threads with calls to Monitor.Enter and Monitor.Exit:
...
Monitor.Enter(order);
order.AddDetails(orderDetail);
Monitor.Exit(order);
...
In this example, imagine you have an order object that is being updated by multiple threads in parallel. The Monitor class will lock access from other threads while the current thread adds an orderDetail item to the order object. The key to minimizing the chance of introducing wait time for other threads is to lock only the lines of code that need to be synchronized.
Note
The Interlocked class, as discussed in this section, performs atomic operations in user mode rather than kernel mode. If you want to read more about this distinction, I recommend checking out this blog post by Nguyen Thai Duong: https://duongnt.com/interlocked-synchronization/.
The Interlocked class provides several methods for performing atomic operations on objects shared across multiple threads. The following methods are part of the Interlocked class:
- Add: This adds two integers, replacing the first one with the sum of the two
- And: This is a bitwise and operation for two integers
- CompareExchange: This compares two objects for equality and replaces the first if they are equal
- Decrement: This decrements an integer
- Exchange: This sets a variable to a new value
- Increment: This increments an integer
- Or: This is a bitwise or operation for two integers
These Interlocked operations lock access to the target object only for the duration of that single operation.
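As an illustration, here is a minimal sketch, assuming a shared counter field named processedOrderCount that several threads update while orders are processed:
private static int processedOrderCount = 0;
...
// Atomically increments the shared counter; no explicit lock is required.
Interlocked.Increment(ref processedOrderCount);
// Atomically adds a batch of five processed orders to the counter.
Interlocked.Add(ref processedOrderCount, 5);
// Atomically resets the counter to zero and returns the value it held before the reset.
int previousCount = Interlocked.Exchange(ref processedOrderCount, 0);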
Additionally, the lock statement in C# can be used to restrict access to a block of code to a single thread at a time. The lock statement is a language construct implemented using the .NET Monitor.Enter and Monitor.Exit operations.
There is built-in compiler support for the lock statement. If an exception is thrown inside a lock block, the lock is automatically released: the C# compiler generates a try/finally block around the synchronized code and makes the call to Monitor.Exit in the finally block. When you call Monitor.Enter and Monitor.Exit manually, as in the earlier example, you should add an equivalent try/finally block yourself.
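For example, the earlier order update could be written with a lock statement; the expansion shown after it is roughly what the compiler generates (the lockObject field is an assumption added for illustration):
private static readonly object lockObject = new object();
...
lock (lockObject)
{
    order.AddDetails(orderDetail);
}
// The compiler turns the lock statement above into approximately the following:
bool lockTaken = false;
try
{
    Monitor.Enter(lockObject, ref lockTaken);
    order.AddDetails(orderDetail);
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(lockObject);
    }
}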
Let’s finish up this section on synchronization by looking at some other .NET classes that provide support for manual data synchronization.
Manual synchronization
Manual synchronization is a common approach when synchronizing data across multiple threads. Some types of data cannot be protected in other ways, such as these:
- Global fields: These are variables that can be accessed globally across the application.
- Static fields: These are static variables in a class.
- Instance fields: These are instance variables in a class.
These fields do not have method bodies, so there is no way to put a synchronized code region around them. With manual synchronization, you can protect all the areas where these objects are used. These regions can be protected with lock statements in C#, but other synchronization primitives provide access to shared data and can coordinate the interactions between threads at a more granular level. The first construct we will examine is the System.Threading.Mutex class.
The Mutex class is similar to the Monitor class in that it blocks access to a region of code, but it can also grant access to other processes. When using the Mutex class, use the WaitOne() and ReleaseMutex() methods to acquire and release the lock. Let's look at the same order/order details example. This time, we'll use a Mutex declared at the class level:
private static Mutex orderMutex = new Mutex();
...
orderMutex.WaitOne();
order.AddDetails(orderDetail);
orderMutex.ReleaseMutex();
...
If you want to enforce a timeout period on the Mutex, you can call the WaitOne overload that accepts a timeout value:
orderMutex.WaitOne(500);
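WaitOne with a timeout returns a bool indicating whether the mutex was acquired before the timeout elapsed, so a more defensive version of this call (a sketch based on the same order example) checks the result before touching the shared state:
if (orderMutex.WaitOne(500))
{
    try
    {
        order.AddDetails(orderDetail);
    }
    finally
    {
        // Always release the mutex once the shared state has been updated.
        orderMutex.ReleaseMutex();
    }
}
else
{
    // The mutex was not acquired within 500 milliseconds; handle the timeout.
}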
It is important to note that Mutex is a disposable type. You should always call Dispose() on the object when you are finished using it, or enclose it in a using block so that it is disposed of automatically.
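For instance, because a named Mutex can also coordinate access across processes, you might create one in a using declaration so that it is disposed of when it goes out of scope; this is only a sketch, and the mutex name used here is an assumption:
using var sharedMutex = new Mutex(initiallyOwned: false, name: "Global\\OrderProcessingMutex");
sharedMutex.WaitOne();
try
{
    order.AddDetails(orderDetail);
}
finally
{
    sharedMutex.ReleaseMutex();
}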
The last .NET manual locking construct we are going to examine in this section is the ReaderWriterLockSlim class. You can use this type if you have an object that is used across multiple threads, where most of the code is reading data from the object. You don't want readers to block each other, but you do want to prevent reading while the object is being updated, and to prevent simultaneous writes. This is referred to as "multiple readers, single writer."
The following ContactListManager class contains a list of contacts that can be added to or retrieved by phone number. The class assumes that these operations can be called from multiple threads and uses the ReaderWriterLockSlim class to apply a read lock in the GetContactByPhoneNumber method and a write lock in the AddContact method. The locks are released in finally blocks to ensure they are always released, even when exceptions are encountered:
public class ContactListManager
{
    private readonly List<Contact> contacts;
    private readonly ReaderWriterLockSlim contactLock = new ReaderWriterLockSlim();

    public ContactListManager(List<Contact> initialContacts)
    {
        contacts = initialContacts;
    }

    public void AddContact(Contact newContact)
    {
        try
        {
            contactLock.EnterWriteLock();
            contacts.Add(newContact);
        }
        finally
        {
            contactLock.ExitWriteLock();
        }
    }

    public Contact GetContactByPhoneNumber(string phoneNumber)
    {
        try
        {
            contactLock.EnterReadLock();
            return contacts.FirstOrDefault(x => x.PhoneNumber == phoneNumber);
        }
        finally
        {
            contactLock.ExitReadLock();
        }
    }
}
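As a quick usage sketch (the Contact initializer properties shown here, such as Name, are assumptions about that type), the class can safely be exercised from several threads at once:
var manager = new ContactListManager(new List<Contact>
{
    new Contact { Name = "Ada", PhoneNumber = "555-0100" }
});

// Multiple readers can run in parallel; the writer briefly takes exclusive access.
Parallel.Invoke(
    () => manager.AddContact(new Contact { Name = "Grace", PhoneNumber = "555-0101" }),
    () => Console.WriteLine(manager.GetContactByPhoneNumber("555-0100")?.Name),
    () => Console.WriteLine(manager.GetContactByPhoneNumber("555-0101")?.Name));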
If you were to add a DeleteContact method to the ContactListManager class, you would leverage the same EnterWriteLock method to prevent any conflicts with the other operations in the class, as sketched below. If a lock is forgotten in one usage of contacts, it can cause any of the other operations to fail. Additionally, it is possible to apply a timeout to ReaderWriterLockSlim locks by using the TryEnterReadLock and TryEnterWriteLock methods, which return false if the lock cannot be acquired within the specified number of milliseconds:
if (contactLock.TryEnterWriteLock(1000))
{
    // The write lock was acquired within 1,000 milliseconds.
}
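Returning to the DeleteContact idea, a possible implementation might look like the following; this is only a sketch, and the removal logic shown is an assumption:
public void DeleteContact(string phoneNumber)
{
    try
    {
        // Removing items mutates the list, so it takes the same write lock as AddContact.
        contactLock.EnterWriteLock();
        contacts.RemoveAll(x => x.PhoneNumber == phoneNumber);
    }
    finally
    {
        contactLock.ExitWriteLock();
    }
}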
There are several other synchronization primitives that we have not covered in this section, but we have discussed some of the most common types that you will use. To read more about the available types for manual synchronization, you can visit Microsoft Docs at https://docs.microsoft.com/dotnet/standard/threading/overview-of-synchronization-primitives.
Now that we have examined different ways of synchronizing data when working with managed threads, let’s cover two more important topics before wrapping up this first chapter. We are going to discuss techniques to schedule work on threads and how to cancel managed threads cooperatively.