What is needed to use concurrency effectively?
Fundamentally, using concurrency to improve performance is simple: you need to do just two things. The first is to have enough work for the concurrent threads and processes so that they are busy at all times. The second is to reduce the use of shared data, since, as we saw in the previous chapter, accessing a shared variable concurrently is very expensive. The rest is just a matter of implementation.
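To make the two guidelines concrete, here is a minimal sketch (not taken from the book's examples; the array-summation task and all names are assumptions for illustration). Each thread gets a large, independent chunk of work, accumulates into a private local variable, and touches the shared result array only once at the end:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t N = 1 << 24;          // enough work to keep every thread busy
    std::vector<int> data(N, 1);

    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(nthreads, 0);   // one private accumulator per thread
    std::vector<std::thread> threads;

    for (unsigned t = 0; t < nthreads; ++t) {
        threads.emplace_back([&, t] {
            const std::size_t begin = t * N / nthreads;
            const std::size_t end   = (t + 1) * N / nthreads;
            long long local = 0;             // no shared writes inside the hot loop
            for (std::size_t i = begin; i < end; ++i) local += data[i];
            partial[t] = local;              // the only write visible to other threads
        });
    }
    for (auto& th : threads) th.join();

    const long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "sum = " << total << '\n';
}
```

The design choice worth noting is that the expensive part (the loop) never touches shared state; a version that incremented a single shared atomic counter inside the loop would serialize the threads on that one variable and lose most of the potential speedup.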
Unfortunately, the implementation tends to be quite difficult, and the difficulty grows as the desired performance gains become larger and the hardware becomes more powerful. The reason is Amdahl's Law, which every programmer working with concurrency has heard of, but not everyone fully appreciates its implications.
The law itself is simple enough. It states that, for a program with a parallel (scalable) part and a single-threaded part, the maximum possible speedup is limited by the single-threaded part: no matter how many processors you add, the serial fraction of the work does not run any faster.
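For reference, the standard form of the law (the formula itself is well known; the specific numbers below are an illustrative assumption, not from the text) can be written as:

```latex
% Amdahl's Law: speedup on N processors when a fraction s of the work is serial
S(N) = \frac{1}{\, s + \dfrac{1 - s}{N} \,},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{s}

% Example: if s = 0.1 (10\% of the work is single-threaded) and N = 16 processors,
% S(16) = 1 / (0.1 + 0.9/16) = 1 / 0.15625 = 6.4,
% and no number of processors can push the speedup past 1/0.1 = 10.
```

The worked example shows why the difficulty grows with more powerful hardware: even a modest serial fraction caps the benefit of additional processors long before the hardware runs out.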