So far, we have learned about the basics of parallel programming, tasks, and task parallelism. In this chapter, we will cover another important aspect of parallel programming: data parallelism, which deals with the parallel execution of data. While task parallelism creates a separate unit of work for each participating thread, data parallelism creates a common task that every participating thread executes on a portion of the source collection. The source collection is partitioned so that multiple threads can work on different parts of it concurrently. Understanding data parallelism is therefore important for getting the maximum performance out of loops and collections.
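As a quick illustration of this idea, the following minimal sketch shows a data-parallel loop in which the same body runs concurrently over partitions of a source collection. It assumes the .NET/C# Task Parallel Library context suggested by the chapter topics; the sample array and console output are illustrative and not taken from this chapter:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class DataParallelismSketch
{
    static void Main()
    {
        // An example source collection (illustrative data).
        int[] numbers = Enumerable.Range(1, 20).ToArray();

        // Parallel.ForEach partitions the source collection and executes the
        // same body (the common task) on each element, using multiple threads.
        Parallel.ForEach(numbers, number =>
        {
            long square = (long)number * number;
            Console.WriteLine(
                $"{number}^2 = {square} on thread {Thread.CurrentThread.ManagedThreadId}");
        });
    }
}
```

Note that the order of the output is not deterministic, because different threads process different partitions of the collection at the same time.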
In this chapter, we will discuss the following topics:
- Handling exceptions in parallel loops
- Creating custom partitioning strategies in parallel loops
- Canceling loops
- Understanding thread storage in parallel loops...