Partitioning Monte Carlo simulations for better pmap performance
In the Parallelizing processing with pmap recipe, we found that while using pmap
is easy enough, knowing when to use it is more complicated. Processing each item in the collection has to take enough time to make the costs of threading, coordinating the processing, and communicating the data worth it. Otherwise, the program will spend more time on the overhead of parallelization than on the task itself.
One way to get around this is to make sure that pmap
has enough to do at each step it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap
on groups of the input, rather than on individual items.
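As a sketch of this chunking pattern (the names `chunked-pmap` and `expensive-step` are hypothetical stand-ins, not functions from the recipe), the idea looks something like this:

```clojure
;; A stand-in for real per-item work; any sufficiently expensive
;; pure function would do here.
(defn expensive-step [x]
  (reduce + (map #(Math/sin (double %)) (range x))))

(defn chunked-pmap
  "Maps f over coll in parallel, but hands pmap whole chunks of
  the input so each parallel task does a meaningful amount of work."
  [f chunk-size coll]
  (->> coll
       (partition-all chunk-size)  ; break the input into chunks
       (pmap #(doall (map f %)))   ; each task processes one whole chunk
       (apply concat)))            ; flatten back into a single sequence

;; Usage: the results are the same as (map f coll); only the
;; scheduling granularity differs.
(chunked-pmap expensive-step 512 (range 10000))
```

The `doall` inside the `pmap` function matters: without it, each task would return an unrealized lazy sequence and the actual work would happen later, on the calling thread, defeating the parallelism.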
For this recipe, we'll use Monte Carlo methods to approximate pi. We'll compare a serial version against a naïve parallel version, as well as a version that parallelizes over a partitioned input.
Monte Carlo methods work by attacking a deterministic problem, such as computing pi, nondeterministically. That is...