Locks, alternatives, and their performance
Once we have accepted that some data sharing is going to happen, we also have to accept the need to synchronize concurrent accesses to the shared data. Remember that any concurrent access to the same data without such synchronization leads to data races and undefined behavior.
The most common way to guard shared data is with a mutex:
std::mutex m;
size_t count; // Guarded by m
… on the threads …
{
    std::lock_guard l(m);
    ++count;
}
Here, we take advantage of C++17 class template argument deduction for std::lock_guard; in C++14, we would have to specify the template argument explicitly.
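For comparison, the pre-C++17 spelling of the same guard looks like this:

std::lock_guard<std::mutex> l(m); // C++14: the mutex type must be spelled out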
Using mutexes is usually fairly straightforward: any code that accesses the shared data should be inside a critical section, that is, sandwiched between the calls to lock and unlock the mutex. The mutex implementation comes with the correct memory barriers to ensure that the code in the critical section cannot be reordered, by the compiler or the hardware, to execute outside of the locked region.
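As a minimal sketch of how the pieces fit together (the thread count and iteration count here are arbitrary choices, not taken from the text above), the counter example can be turned into a complete program:

#include <cstddef>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::size_t count = 0; // Guarded by m

void work() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard l(m); // Lock is held until the end of this scope
        ++count;              // Critical section: the only access to count
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(work);
    for (auto& t : threads) t.join();
    std::cout << count << std::endl; // Always 400000: no increments are lost
}

Because every access to count happens inside the critical section, the final value is exactly the number of increments performed; removing the std::lock_guard would produce a data race and an unpredictable result.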