Parallel processing of data using Python multiprocessing
When dealing with lots of data, one strategy is to process it in parallel so that we make use of all available central processing unit (CPU) power, given that modern machines have many cores. In the theoretical best case, if your machine has eight cores, parallel processing can yield an eight-fold speedup.
Unfortunately, typical Python code makes use of only a single core, because the Global Interpreter Lock (GIL) prevents threads from executing Python bytecode in parallel. That being said, Python has built-in functionality to use all available CPU power; in fact, it provides several avenues for doing so. In this recipe, we will be using the built-in multiprocessing module, which sidesteps the GIL by spawning separate processes. The solution presented here works well on a single computer when the dataset fits into memory, but if you want to scale to a cluster or the cloud, you should consider Dask, which we will introduce in the next two recipes.
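As a minimal sketch of the fan-out/fan-in pattern that multiprocessing enables, the following splits a genotype matrix into one chunk per core and counts missing calls in parallel. The count_missing helper, the -1 encoding for missing calls, and the randomly generated data are assumptions for illustration, not the recipe's actual code:

```python
import multiprocessing

import numpy as np


def count_missing(chunk):
    # Count missing genotype calls (encoded here as -1) in one chunk
    return int((chunk == -1).sum())


if __name__ == "__main__":
    # Fake genotype matrix: rows are variants, columns are samples
    rng = np.random.default_rng(42)
    genotypes = rng.integers(-1, 3, size=(100_000, 100))

    # Fan out: split the rows into one chunk per available core
    chunks = np.array_split(genotypes, multiprocessing.cpu_count())
    with multiprocessing.Pool() as pool:
        partial_counts = pool.map(count_missing, chunks)

    # Fan in: combine the per-chunk results
    print("missing calls:", sum(partial_counts))
```

Note the if __name__ == "__main__" guard: on platforms that start worker processes with spawn (Windows and, by default, recent macOS), the module is re-imported in each worker, and the guard prevents the pool from being created again inside the workers.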
Our objective here will again be to compute some statistics around missingness and heterozygosity...
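Before parallelizing, it helps to pin down the statistics themselves. Assuming genotypes coded as alternate-allele counts (0, 1, 2, with -1 for a missing call, an encoding chosen here purely for illustration), per-variant missingness is the fraction of missing calls, and observed heterozygosity is the fraction of heterozygous calls among the non-missing ones. A sketch of both:

```python
import numpy as np


def variant_stats(genotypes):
    """Per-variant missingness and observed heterozygosity.

    genotypes: 2D int array (variants x samples) holding alternate-allele
    counts 0, 1, 2, with -1 marking a missing call (an assumed encoding).
    """
    missing = genotypes == -1
    n_samples = genotypes.shape[1]
    n_missing = missing.sum(axis=1)

    # Fraction of samples with no call at each variant
    missingness = n_missing / n_samples

    # Heterozygotes carry exactly one copy of the alternate allele
    n_called = n_samples - n_missing
    n_het = (genotypes == 1).sum(axis=1)

    # Guard against division by zero when every call is missing
    heterozygosity = np.where(n_called > 0, n_het / np.maximum(n_called, 1), np.nan)
    return missingness, heterozygosity
```

Because each variant's statistics depend only on its own row, a function like this is embarrassingly parallel: the rows can be split into chunks and handed to a multiprocessing pool exactly as in the earlier sketch.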