CUDA has a hierarchical parallel execution model: a kernel is launched as a collection of blocks, and each block is further divided into multiple threads. In the last chapter, we saw that the CUDA runtime carries out parallel operations by launching multiple copies of the same kernel, and that this can be done in two ways: either by launching multiple blocks in parallel, with one thread per block, or by launching a single block with many threads in parallel. So, two questions you might ask are: which method should I use in my code? And is there any limitation on the number of blocks and threads that can be launched in parallel?
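As a reminder of what these two launch configurations look like, here is a minimal sketch. The kernel name `gpuAdd`, the array size `N`, and all variable names are illustrative assumptions, not code from earlier chapters; the index formula `blockIdx.x * blockDim.x + threadIdx.x` works for both configurations:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

#define N 512

// Illustrative kernel (names are assumptions): each copy of the kernel
// adds one element of d_a and d_b and stores the result in d_c.
__global__ void gpuAdd(int *d_a, int *d_b, int *d_c) {
    // Compute a global index that is valid for either launch configuration.
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < N)
        d_c[tid] = d_a[tid] + d_b[tid];
}

int main(void) {
    int h_a[N], h_b[N], h_c[N];
    int *d_a, *d_b, *d_c;

    for (int i = 0; i < N; i++) {
        h_a[i] = i;
        h_b[i] = 2 * i;
    }

    cudaMalloc((void **)&d_a, N * sizeof(int));
    cudaMalloc((void **)&d_b, N * sizeof(int));
    cudaMalloc((void **)&d_c, N * sizeof(int));

    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Option 1: N blocks in parallel, one thread per block.
    gpuAdd<<<N, 1>>>(d_a, d_b, d_c);

    // Option 2: a single block with N threads in parallel.
    // Both launches compute the same result; only the configuration differs.
    gpuAdd<<<1, N>>>(d_a, d_b, d_c);

    cudaMemcpy(h_c, d_c, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h_c[10] = %d\n", h_c[10]);

    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
    return 0;
}
```

This sketch requires a CUDA-capable GPU and the `nvcc` compiler to run. Which of the two configurations is preferable, and what limits apply to each, is exactly what this chapter goes on to discuss.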
The answers to these questions are pivotal. As we will see later in this chapter, threads within the same block can communicate with each other via shared memory. So, there is an advantage to launching...