Introduction
One of the great payoffs in data analysis comes from an intelligent approach to parallel and concurrent design. As we collect more and more data, we are able to discover more and more patterns. However, this comes at a cost in time and space: more data may take longer to process and occupy more memory. This is a very real problem, and one that this chapter will try to solve.
The first few recipes will cover how to invoke pure functions in parallel and in sequence. The following recipes on forking will deal with concurrency using I/O actions. We will then delve deeper by learning how to access list and tuple elements in parallel. Then, we will implement MapReduce in Haskell to solve a time-consuming problem efficiently.
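As a taste of what these recipes build toward, here is a minimal sketch of evaluating a pure function over a list in parallel. It assumes the `parallel` package (which provides `Control.Parallel.Strategies`) is installed and the program is compiled with `-threaded`; the function `expensive` is a hypothetical stand-in for real work:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately expensive pure function standing in for real work.
expensive :: Int -> Integer
expensive n = sum (map fromIntegral [1 .. n])

-- parMap sparks one parallel evaluation per list element;
-- rdeepseq forces each result fully before it is collected.
main :: IO ()
main = print (parMap rdeepseq expensive [100000, 200000, 300000])
```

Compile with `ghc -threaded` and run with `+RTS -N` so the runtime can actually use multiple cores.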
We will end the review of parallel and concurrent design by learning how to benchmark runtime performance. Sometimes, the easiest way to discover whether code is successfully running in parallel is to time it against a nonparallel version of the same code.
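One rough way to take such a timing, sketched here using `Data.Time.Clock` from the `time` package (the benchmarking recipes later in the chapter are more rigorous than this), is to sample the clock before and after the work:

```haskell
import Data.Time.Clock (diffUTCTime, getCurrentTime)

main :: IO ()
main = do
  start <- getCurrentTime
  -- print forces the computation; due to lazy evaluation, an
  -- unforced expression here would measure nothing at all.
  print (sum [1 .. 10000000 :: Int])
  end <- getCurrentTime
  putStrLn ("Elapsed: " ++ show (diffUTCTime end start))
```

Running the same measurement against a parallel variant of the computation gives a quick, if crude, sanity check that parallelism is paying off.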