We have seen that the GPU provides a great performance improvement through data parallelism, in which a single instruction operates on multiple data items. We have not yet seen task parallelism, in which two or more kernel functions that are independent of each other run in parallel. For example, one function may be computing pixel values while another is downloading something from the internet. We know that the CPU provides a very flexible mechanism for this kind of task parallelism. The GPU also provides this capability, though it is not as flexible as the CPU's. On the GPU, task parallelism is achieved using CUDA streams, which we will see in detail in this section.
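As a minimal sketch of this idea, the following program launches the same kernel in two different streams so that the two pieces of work may overlap on the device. The kernel name, array sizes, and launch configuration here are illustrative choices, not part of any fixed API; only the stream-management calls (`cudaStreamCreate`, the fourth kernel launch parameter, `cudaStreamSynchronize`, `cudaStreamDestroy`) are the CUDA runtime functions being demonstrated.

```cuda
#include <cuda_runtime.h>

// A trivial kernel used only to give each stream some work (hypothetical example).
__global__ void scaleKernel(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int N = 1 << 20;
    float *d_a, *d_b;
    cudaMalloc(&d_a, N * sizeof(float));
    cudaMalloc(&d_b, N * sizeof(float));

    // Create two independent streams; operations queued in different
    // streams have no ordering guarantee between them and may overlap.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // The fourth launch parameter selects the stream for each kernel.
    scaleKernel<<<(N + 255) / 256, 256, 0, s1>>>(d_a, N, 2.0f);
    scaleKernel<<<(N + 255) / 256, 256, 0, s2>>>(d_b, N, 3.0f);

    // Wait for both streams to drain before releasing resources.
    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(d_a);
    cudaFree(d_b);
    return 0;
}
```

Within each stream the launches would execute in the order they were queued; it is only across the two streams that the device is free to run the kernels concurrently.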
A CUDA stream is nothing but a queue of GPU operations that execute in a specific order. These operations include kernel launches, memory copy operations, and CUDA event operations. The order in which they are...