Concurrency design patterns
In software engineering, a design pattern is a solution to a common problem. This solution has been used many times, and it has proved to be an optimal solution to the problem. You can use design patterns to avoid 'reinventing the wheel' every time you have to solve one of these problems. Singleton and Factory are examples of common design patterns used in almost every application.
Concurrency also has its own design patterns. In this section, we describe some of the most useful concurrency design patterns and their implementation in the Java language.
Signaling
This design pattern explains how to implement the situation where one task has to notify another task of an event. The easiest way to implement this pattern is with a semaphore or a mutex, using the ReentrantLock or Semaphore classes of the Java API, or even the wait() and notify() methods included in the Object class.
See the following example:
private boolean eventHappened; // guard condition for the wait

public void task1() {
    section1();
    synchronized (commonObject) { // wait() and notify() require the monitor of commonObject
        eventHappened = true;
        commonObject.notify();
    }
}

public void task2() throws InterruptedException {
    synchronized (commonObject) {
        while (!eventHappened) { // guards against lost notifications and spurious wakeups
            commonObject.wait();
        }
    }
    section2();
}
Under these circumstances, the section2() method will always be executed after the section1() method.
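An alternative is to signal with a Semaphore: a release() made before anyone is waiting is stored as a permit, so no explicit guard flag is needed. A minimal runnable sketch, with illustrative names (SignalingDemo, the trace buffer, and the section strings stand in for real work):

```java
import java.util.concurrent.Semaphore;

public class SignalingDemo {

    private static final Semaphore event = new Semaphore(0); // no permits initially
    private static final StringBuffer trace = new StringBuffer(); // thread-safe appends

    static String run() throws InterruptedException {
        Thread task2 = new Thread(() -> {
            try {
                event.acquire();           // wait for the event; an earlier release is not lost
                trace.append("section2;"); // section2()
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        task2.start();

        trace.append("section1;");         // section1()
        event.release();                   // signal the event
        task2.join();
        return trace.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // section1;section2;
    }
}
```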
Rendezvous
This design pattern is a generalization of the Signaling pattern. In this case, the first task waits for an event of the second task and the second task waits for an event of the first task. The solution is similar to that of Signaling, but in this case you must use two objects instead of one.
See the following example:
private boolean task1Arrived; // guard conditions, as in the Signaling pattern
private boolean task2Arrived;

public void task1() throws InterruptedException {
    section1_1();
    synchronized (commonObject1) {
        task1Arrived = true;
        commonObject1.notify();
    }
    synchronized (commonObject2) {
        while (!task2Arrived) {
            commonObject2.wait();
        }
    }
    section1_2();
}

public void task2() throws InterruptedException {
    section2_1();
    synchronized (commonObject2) {
        task2Arrived = true;
        commonObject2.notify();
    }
    synchronized (commonObject1) {
        while (!task1Arrived) {
            commonObject1.wait();
        }
    }
    section2_2();
}
Under these circumstances, section2_2() will always be executed after section1_1(), and section1_2() after section2_1(). Take into account that, if you put the call to the wait() method before the call to the notify() method, you will have a deadlock.
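The same rendezvous can be expressed more compactly with two semaphores; because permits are remembered, no monitors or guard flags are needed. A runnable sketch (all names are illustrative):

```java
import java.util.concurrent.Semaphore;

public class RendezvousDemo {

    private static final Semaphore task1Arrived = new Semaphore(0);
    private static final Semaphore task2Arrived = new Semaphore(0);
    private static final StringBuffer trace = new StringBuffer(); // thread-safe appends

    static void task1() throws InterruptedException {
        trace.append("1_1;");    // section1_1()
        task1Arrived.release();  // announce arrival at the rendezvous
        task2Arrived.acquire();  // wait for the other task
        trace.append("1_2;");    // section1_2()
    }

    static void task2() throws InterruptedException {
        trace.append("2_1;");    // section2_1()
        task2Arrived.release();
        task1Arrived.acquire();
        trace.append("2_2;");    // section2_2()
    }

    static String run() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            try { task1(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread t2 = new Thread(() -> {
            try { task2(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return trace.toString(); // 2_2 always after 1_1, and 1_2 always after 2_1
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```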
Mutex
A mutex is a mechanism that you can use to implement a critical section, ensuring mutual exclusion. That is to say, only one task can execute the portion of code protected by the mutex at a time. In Java, you can implement a critical section using the synchronized keyword (which allows you to protect a portion of code or a full method), the ReentrantLock class, or the Semaphore class.
Look at the following example:
public void task() {
    preCriticalSection();
    lockObject.lock(); // the critical section begins
    try {
        criticalSection();
    } finally {
        lockObject.unlock(); // the critical section ends
    }
    postCriticalSection();
}
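To see the mutual exclusion at work, the sketch below (class and field names are my own) has four threads each increment a shared counter 10,000 times; with the ReentrantLock the final value is always exactly 40,000, which an unsynchronized counter++ would not guarantee:

```java
import java.util.concurrent.locks.ReentrantLock;

public class MutexDemo {

    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter;

    private static void task() {
        for (int i = 0; i < 10_000; i++) {
            lock.lock();       // the critical section begins
            try {
                counter++;     // only one thread at a time executes this
            } finally {
                lock.unlock(); // the critical section ends
            }
        }
    }

    static int run() throws InterruptedException {
        counter = 0;
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(MutexDemo::task);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 40000
    }
}
```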
Multiplex
The Multiplex design pattern is a generalization of the mutex. In this case, a determined number of tasks can execute the critical section at once. It is useful, for example, when you have multiple copies of a resource. The easiest way to implement this design pattern in Java is using the Semaphore class initialized to the number of tasks that can execute the critical section at once.
Look at the following example:
public void task() throws InterruptedException {
    preCriticalSection();
    semaphoreObject.acquire();
    try {
        criticalSection();
    } finally {
        semaphoreObject.release();
    }
    postCriticalSection();
}
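The sketch below (illustrative names throughout) starts ten tasks against a Semaphore initialized to 3 and records how many tasks are inside the critical section at once; the high-water mark never exceeds the three permits:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiplexDemo {

    private static final Semaphore resources = new Semaphore(3); // 3 copies of the resource
    private static final AtomicInteger inside = new AtomicInteger();
    private static final AtomicInteger maxInside = new AtomicInteger();

    private static void task() throws InterruptedException {
        resources.acquire();
        try {
            int now = inside.incrementAndGet();         // entered the critical section
            maxInside.accumulateAndGet(now, Math::max); // remember the high-water mark
            Thread.sleep(10);                           // simulate work with the resource
            inside.decrementAndGet();
        } finally {
            resources.release();
        }
    }

    static int run() throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                try { task(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return maxInside.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max concurrent tasks: " + run());
    }
}
```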
Barrier
This design pattern explains how to implement the situation where you need to synchronize some tasks at a common point. None of the tasks can continue with their execution until all the tasks have arrived at the synchronization point. The Java concurrency API provides the CyclicBarrier class, which is an implementation of this design pattern.
Look at the following example:
public void task() throws InterruptedException, BrokenBarrierException {
    preSyncPoint();
    barrierObject.await(); // blocks until all the parties have called await()
    postSyncPoint();
}
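A runnable sketch with three tasks (names are illustrative); the barrier action passed to the CyclicBarrier constructor runs once, after the last task arrives and before any task is released:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {

    private static final StringBuffer trace = new StringBuffer(); // thread-safe appends
    private static final CyclicBarrier barrier =
            new CyclicBarrier(3, () -> trace.append("all-arrived;")); // barrier action

    private static void task() {
        try {
            trace.append("pre;");  // preSyncPoint()
            barrier.await();       // block until all 3 parties have arrived
            trace.append("post;"); // postSyncPoint()
        } catch (InterruptedException | BrokenBarrierException e) {
            Thread.currentThread().interrupt();
        }
    }

    static String run() throws InterruptedException {
        Thread[] threads = new Thread[3];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(BarrierDemo::task);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return trace.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```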
Double-checked locking
This design pattern provides a solution to the problem that occurs when you acquire a lock and then check a condition. If the condition is false, you have incurred the overhead of acquiring the lock needlessly. An example of this situation is the lazy initialization of objects. If you have a class implementing the Singleton design pattern, you may have some code like this:
public class Singleton {

    private static Singleton reference;
    private static final Lock lock = new ReentrantLock();

    private Singleton() { }

    public static Singleton getReference() {
        lock.lock();
        try {
            if (reference == null) {
                reference = new Singleton();
            }
        } finally {
            lock.unlock();
        }
        return reference;
    }
}
A possible solution is to check the condition before acquiring the lock, and then check it again once the lock is held:
public class Singleton {

    private static Singleton reference;
    private static final Lock lock = new ReentrantLock();

    private Singleton() { }

    public static Singleton getReference() {
        if (reference == null) {         // first check, without the lock
            lock.lock();
            try {
                if (reference == null) { // second check, with the lock held
                    reference = new Singleton();
                }
            } finally {
                lock.unlock();
            }
        }
        return reference;
    }
}
This solution still has problems. Because the reference field is not declared volatile, the Java memory model does not guarantee that a task reading it outside the lock sees a fully constructed object; it may observe a reference to a partially initialized instance. The best solution to this problem doesn't use any explicit synchronization mechanism:
public class Singleton {

    private static class LazySingleton {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getSingleton() {
        return LazySingleton.INSTANCE;
    }
}
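When the holder idiom cannot be used (for example, when the instance needs runtime arguments), double-checked locking can be made correct on Java 5 and later by declaring the field volatile, which forbids the reordering that publishes a half-constructed object. A sketch (the class name is illustrative):

```java
public class VolatileSingleton {

    // volatile: safe publication under the Java 5+ memory model
    private static volatile VolatileSingleton reference;

    private VolatileSingleton() { }

    public static VolatileSingleton getReference() {
        VolatileSingleton local = reference; // one volatile read on the fast path
        if (local == null) {
            synchronized (VolatileSingleton.class) {
                local = reference;           // re-check with the lock held
                if (local == null) {
                    reference = local = new VolatileSingleton();
                }
            }
        }
        return local;
    }
}
```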
Read-write lock
When you protect access to a shared variable with a lock, only one task can access that variable, independently of the operation you are going to perform on it. Sometimes, you will have variables that you modify a few times but read many times. In this circumstance, a lock provides poor performance because all the read operations can be made concurrently without any problem. To solve this problem, there exists the read-write lock design pattern. This pattern defines a special kind of lock with two internal locks: one for read operations and the other for write operations. The behavior of this lock is as follows:
- If one task is doing a read operation and another task wants to do another read operation, it can do it
- If one task is doing a read operation and another task wants to do a write operation, it's blocked until all the readers finish
- If one task is doing a write operation and another task wants to do an operation (read or write), it's blocked until the writer finishes
The Java concurrency API includes the ReentrantReadWriteLock class, which implements this design pattern. If you want to implement this pattern from scratch, you have to be very careful with the priority between read tasks and write tasks. If too many read tasks exist, write tasks may wait too long (writer starvation).
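A minimal sketch of the pattern with ReentrantReadWriteLock (the PriceStore class and its fields are illustrative); any number of threads may hold the read lock at once, while the write lock is exclusive:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PriceStore {

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private double price = 1.0;

    public double getPrice() {
        rwLock.readLock().lock();  // shared: many readers at once
        try {
            return price;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void setPrice(double newPrice) {
        rwLock.writeLock().lock(); // exclusive: blocks readers and other writers
        try {
            price = newPrice;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        PriceStore store = new PriceStore();
        store.setPrice(2.5);
        System.out.println(store.getPrice()); // 2.5
    }
}
```

ReentrantReadWriteLock also accepts a fairness flag in its constructor, which reduces the writer starvation described above at some cost in throughput.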
Thread pool
This design pattern tries to remove the overhead introduced by creating a thread for every task you want to execute. It is formed by a set of threads and a queue of tasks you want to execute. The set of threads usually has a fixed size. When a thread completes the execution of a task, it doesn't finish its execution; it looks for another task in the queue. If there is one, it executes it. If not, the thread waits until a task is inserted in the queue, but it is not destroyed.
The Java concurrency API includes some classes that implement the ExecutorService interface, which internally use a pool of threads.
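A sketch using a fixed-size pool from the Executors factory class (the task bodies are illustrative): ten tasks are submitted but only four worker threads exist, so the workers are reused as they take tasks from the queue:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPoolDemo {

    static int run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 reusable worker threads
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 1; i <= 10; i++) {
            final int n = i;
            results.add(pool.submit(() -> n * n)); // tasks wait in the queue for a free thread
        }
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get();                        // blocks until the task is done
        }
        pool.shutdown();                           // no new tasks; workers exit when idle
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // 385, the sum of the first ten squares
    }
}
```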
Thread local storage
This design pattern defines how to use global or static variables locally to tasks. When you have a static attribute in a class, all the objects of the class access the same instance of the attribute. If you use thread-local storage, each thread accesses a different instance of the variable.
The Java concurrency API includes the ThreadLocal class to implement this design pattern.
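A sketch (names are illustrative): each thread increments what looks like the same static counter, but ThreadLocal gives every thread its own copy, so increments made in one thread are not visible in another:

```java
public class ThreadLocalDemo {

    // every thread sees its own copy, initialized to 0
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    private static int incrementTwice() {
        counter.set(counter.get() + 1);
        counter.set(counter.get() + 1);
        return counter.get();
    }

    static int run() throws InterruptedException {
        int[] fromWorker = new int[1];
        Thread worker = new Thread(() -> fromWorker[0] = incrementTwice());
        worker.start();
        worker.join();
        int fromMain = incrementTwice(); // the main thread's copy also starts at 0
        return fromWorker[0] + fromMain; // 2 + 2: the copies are independent
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 4
    }
}
```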