Concurrency design patterns

In software engineering, a design pattern is a proven solution to a common problem: a solution that has been used many times and has shown itself to work well. You can use design patterns to avoid 'reinventing the wheel' every time you have to solve one of these problems. Singleton and Factory are examples of common design patterns used in almost every application.

Concurrency also has its own design patterns. In this section, we describe some of the most useful concurrency design patterns and their implementation in the Java language.

Signaling

This design pattern covers the situation where one task has to notify another task of an event. The easiest way to implement it is with a semaphore or a mutex, using the ReentrantLock or Semaphore classes of the Java language, or even the wait() and notify() methods of the Object class (which must be called while holding the object's monitor, that is, inside a synchronized block).

See the following example:

private boolean eventSignaled = false; // shared flag, guarded by commonObject's monitor

public void task1() {
  section1();
  synchronized (commonObject) {
    eventSignaled = true;      // record the event so it is not lost if task2 isn't waiting yet
    commonObject.notify();
  }
}

public void task2() throws InterruptedException {
  synchronized (commonObject) {  // wait() and notify() require the object's monitor
    while (!eventSignaled) {     // the loop protects against spurious wakeups
      commonObject.wait();
    }
  }
  section2();
}

Under these circumstances, the section2() method will always be executed after the section1() method; the shared eventSignaled flag makes sure the notification isn't lost if task1 calls notify() before task2 reaches wait().
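
The same signaling can also be implemented with the Semaphore class mentioned above (java.util.concurrent.Semaphore), creating the semaphore with zero permits. This is only a minimal sketch; the semaphore field name is a placeholder:

private final Semaphore semaphore = new Semaphore(0); // no permits until the event happens

public void task1() {
  section1();
  semaphore.release();   // signal the event by adding one permit
}

public void task2() throws InterruptedException {
  semaphore.acquire();   // blocks until task1 releases a permit
  section2();
}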

Rendezvous

This design pattern is a generalization of the Signaling pattern. In this case, the first task waits for an event from the second task, and the second task waits for an event from the first. The solution is similar to that of Signaling, but here you must use two synchronization objects instead of one.

See the following example:

public void task1() throws InterruptedException {
  section1_1();
  synchronized (commonObject1) {
    commonObject1.notify();   // signal that section1_1() has finished
  }
  synchronized (commonObject2) {
    commonObject2.wait();     // wait for task2 to finish section2_1()
  }
  section1_2();
}
public void task2() throws InterruptedException {
  section2_1();
  synchronized (commonObject2) {
    commonObject2.notify();   // signal that section2_1() has finished
  }
  synchronized (commonObject1) {
    commonObject1.wait();     // wait for task1 to finish section1_1()
  }
  section2_2();
}

Under these circumstances, section2_2() will always be executed after section1_1() and section1_2() after section2_1(). Take into account that if you put the call to the wait() method before the call to the notify() method, you will have a deadlock.
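
An alternative sketch uses two Semaphore objects created with zero permits; unlike wait()/notify(), a permit released before the other task calls acquire() is not lost, so the order of the calls is less delicate. The field names here are placeholders:

private final Semaphore semaphore1 = new Semaphore(0);
private final Semaphore semaphore2 = new Semaphore(0);

public void task1() throws InterruptedException {
  section1_1();
  semaphore1.release();   // signal that section1_1() has finished
  semaphore2.acquire();   // wait until task2 has finished section2_1()
  section1_2();
}

public void task2() throws InterruptedException {
  section2_1();
  semaphore2.release();   // signal that section2_1() has finished
  semaphore1.acquire();   // wait until task1 has finished section1_1()
  section2_2();
}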

Mutex

A mutex is a mechanism that you can use to implement a critical section, ensuring mutual exclusion: only one task at a time can execute the portion of code protected by the mutex. In Java, you can implement a critical section using the synchronized keyword (which allows you to protect a block of code or a whole method), the ReentrantLock class, or the Semaphore class.

Look at the following example:

public void task() {
  preCriticalSection();
  lockObject.lock();      // the critical section begins
  try {
    criticalSection();
  } finally {
    lockObject.unlock();  // the critical section ends, even if an exception was thrown
  }
  postCriticalSection();
}
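
As mentioned above, you can protect the same critical section with the synchronized keyword instead of an explicit lock. A minimal sketch, assuming lockObject is any shared object used as the monitor:

public void task() {
  preCriticalSection();
  synchronized (lockObject) {  // only one task at a time can execute this block
    criticalSection();
  }
  postCriticalSection();
}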

Multiplex

The Multiplex design pattern is a generalization of the Mutex. In this case, a fixed number of tasks can execute the critical section at once. It is useful, for example, when you have multiple copies of a resource. The easiest way to implement this design pattern in Java is to use the Semaphore class, created with as many permits as tasks that may execute the critical section at once.

Look at the following example:

private final Semaphore semaphoreObject = new Semaphore(4); // for example, four copies of the resource

public void task() throws InterruptedException {
  preCriticalSection();
  semaphoreObject.acquire();    // blocks while all the permits are in use
  try {
    criticalSection();
  } finally {
    semaphoreObject.release();  // always return the permit
  }
  postCriticalSection();
}

Barrier

This design pattern explains how to implement the situation where you need to synchronize some tasks at a common point. None of the tasks can continue until all of them have arrived at the synchronization point. The Java Concurrency API provides the CyclicBarrier class, which is an implementation of this design pattern.

Look at the following example:

private final CyclicBarrier barrierObject = new CyclicBarrier(NUMBER_OF_TASKS); // NUMBER_OF_TASKS is a placeholder for how many tasks must meet at the barrier

public void task() throws InterruptedException, BrokenBarrierException {
  preSyncPoint();
  barrierObject.await();   // blocks until all the tasks have called await()
  postSyncPoint();
}

Double-checked locking

This design pattern addresses the problem that occurs when you acquire a lock and only then check a condition. If the condition is false, you have paid the cost of acquiring the lock needlessly. An example of this situation is the lazy initialization of objects. If you have a class implementing the Singleton design pattern, you may have some code like this:

public class Singleton {
  private static Singleton reference;
  private static final Lock lock = new ReentrantLock();

  public static Singleton getReference() {
    lock.lock();               // the lock is acquired even when the instance already exists
    try {
      if (reference == null) {
        reference = new Singleton();
      }
    } finally {
      lock.unlock();
    }
    return reference;
  }
}

A possible solution is to check the condition before acquiring the lock, and check it again once the lock is held:

public class Singleton {
  private static Singleton reference;
  private static final Lock lock = new ReentrantLock();

  public static Singleton getReference() {
    if (reference == null) {        // first check, without the lock
      lock.lock();
      try {
        if (reference == null) {    // second check, with the lock held
          reference = new Singleton();
        }
      } finally {
        lock.unlock();
      }
    }
    return reference;
  }
}

This solution still has problems. Although the second check prevents two objects from being created, without declaring the reference as volatile the Java Memory Model doesn't guarantee that another task sees a fully constructed object, so the pattern remains unsafe. The best solution to this problem doesn't use any explicit synchronization mechanisms:

public class Singleton {

  // The nested class is initialized the first time getSingleton() is called;
  // the JVM guarantees that class initialization is thread-safe.
  private static class LazySingleton {
    private static final Singleton INSTANCE = new Singleton();
  }

  public static Singleton getSingleton() {
    return LazySingleton.INSTANCE;
  }

}
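
For completeness, the double-checked locking idiom itself can be made correct on Java 5 and later by declaring the reference as volatile, although the holder idiom shown above is usually simpler. This is only a sketch:

public class Singleton {
  private static volatile Singleton reference; // volatile guarantees that a fully constructed object is seen

  public static Singleton getReference() {
    if (reference == null) {                   // first check, without locking
      synchronized (Singleton.class) {
        if (reference == null) {               // second check, with the lock held
          reference = new Singleton();
        }
      }
    }
    return reference;
  }
}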

Read-write lock

When you protect access to a shared variable with a lock, only one task can access the variable at a time, regardless of the operation it performs on it. Sometimes you will have variables that you modify only occasionally but read many times. In this situation, an exclusive lock provides poor performance, because the read operations could safely run concurrently. To solve this problem, we can use the read-write lock design pattern. This pattern defines a special kind of lock with two internal locks: one for read operations and another for write operations. The behavior of this lock is as follows:

  • If one task is doing a read operation and another task wants to read as well, the second task can proceed
  • If one task is doing a read operation and another task wants to do a write operation, the writer is blocked until all the readers finish
  • If one task is doing a write operation and another task wants to do any operation (read or write), it is blocked until the writer finishes

The Java Concurrency API includes the ReentrantReadWriteLock class, which implements this design pattern. If you want to implement this pattern from scratch, you have to be very careful with the priority between read tasks and write tasks: if too many read tasks exist, write tasks may wait for a long time.
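
The following minimal sketch uses ReentrantReadWriteLock (from java.util.concurrent.locks); the sharedData field and the method names are illustrative placeholders:

private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
private int sharedData;

public int readData() {
  rwLock.readLock().lock();     // many readers can hold the read lock at the same time
  try {
    return sharedData;
  } finally {
    rwLock.readLock().unlock();
  }
}

public void writeData(int newValue) {
  rwLock.writeLock().lock();    // exclusive: blocks readers and other writers
  try {
    sharedData = newValue;
  } finally {
    rwLock.writeLock().unlock();
  }
}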

Thread pool

This design pattern removes the overhead of creating a new thread for every task you want to execute. It is formed by a set of threads, usually of a fixed size, and a queue of tasks waiting to be executed. When a thread finishes the execution of a task, it doesn't terminate; it takes the next task from the queue and executes it, or, if the queue is empty, waits until a new task is inserted.

The Java Concurrency API includes several classes that implement the ExecutorService interface and use a pool of threads internally.
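
For example, the Executors factory class returns ExecutorService implementations backed by a thread pool. In this minimal sketch the pool size and the task body are arbitrary:

ExecutorService executor = Executors.newFixedThreadPool(4); // pool of four threads

for (int i = 0; i < 10; i++) {
  final int taskId = i;
  executor.submit(() -> {
    // the task body runs on one of the pool's threads
    System.out.println("Running task " + taskId);
  });
}

executor.shutdown(); // stop accepting new tasks; queued tasks still run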

Thread local storage

This design pattern defines how to use global or static variables locally to tasks. When you have a static attribute in a class, all the objects of the class access the same instance of that attribute. If you use thread-local storage, each thread accesses a separate instance of the variable.

The Java Concurrency API includes the ThreadLocal class to implement this design pattern.
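
A minimal sketch using ThreadLocal, where the field name and the initial value are illustrative:

private static final ThreadLocal<Integer> counter =
    ThreadLocal.withInitial(() -> 0);   // each thread starts with its own value of 0

public void incrementCounter() {
  counter.set(counter.get() + 1);       // updates only the current thread's copy
}

public int getCounter() {
  return counter.get();                 // reads only the current thread's copy
}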
