In this section, we'll discuss a few different ways to parallelize computations. We will start with a comparison between threads and processes, after which we'll show you the tools available in the C++ standard, and last but not least, we'll say a few words about the OpenMP and MPI frameworks.
Before we start, let's say a few words on how to estimate the maximum possible gains you can have from parallelizing your code. There are two laws that can help us here. The first is Amdahl's law. It states that if we want to speed up our program by throwing more cores at it, the part of our code that must remain sequential (cannot be parallelized) will limit our scalability. For instance, if 90% of your code is parallelizable, then even with infinite cores you can still get only up to a 10x speedup. Even if we cut the time spent in that 90% down to zero, the sequential 10% would still take a tenth of the original runtime, so the program can never run more than ten times faster.
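In its usual formulation, Amdahl's law states that with a parallelizable fraction $p$ of the work and $n$ cores, the overall speedup is bounded by:

$$S(n) = \frac{1}{(1 - p) + \frac{p}{n}}$$

For $p = 0.9$, letting $n$ grow without bound gives $S \to \frac{1}{1 - 0.9} = 10$, which is where the 10x ceiling above comes from.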
The second law is Gustafson's law...