Thread Synchronization with Locks
In Chapter 2, we learned that threads can read and write memory shared by the process they belong to. While the operating system protects the memory of one process from access by another, there is no such protection between threads of the same process. Concurrent writes to the same memory location from multiple threads therefore require synchronization mechanisms to avoid data races and ensure data integrity.
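As a brief illustration of the problem this chapter addresses, consider the following minimal sketch (not one of the book's examples; the names `counter` and `increment` are purely illustrative). Two threads increment a shared counter; without the mutex, the unsynchronized `++counter` is a data race and the final value is unpredictable, while guarding it with `std::lock_guard` makes the increments mutually exclusive.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    int counter = 0;
    std::mutex counter_mutex;

    // Each thread performs 100,000 increments, acquiring the mutex
    // for every increment so that no two threads modify counter at once.
    auto increment = [&counter, &counter_mutex] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(counter_mutex);
            ++counter;
        }
    };

    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();

    // With the mutex the result is always 200000; without it,
    // the program has undefined behavior and the value may vary.
    std::cout << "counter = " << counter << '\n';
}
```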
In this chapter, we will describe in detail the problems created by concurrent access to shared memory by multiple threads and how to fix them. We are going to cover the following topics:
- Race conditions – what they are and how they can happen
- Mutual exclusion as a synchronization mechanism and how it is implemented in C++ by std::mutex
- Generic lock management
- What condition variables are and how to use them with mutexes
- Implementing a fully synchronized queue using std::mutex and std::condition_variable