Chapter 1, Why GPU Programming?, gives some motivation for learning this field and shows how to apply Amdahl's Law to estimate the potential performance gains from porting a serial program to a GPU.
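Amdahl's Law itself is simple enough to sketch here: if a fraction p of a program's runtime can be parallelized and we have N processors, the maximum overall speedup is 1 / ((1 - p) + p / N). A minimal Python illustration (the function name is ours, not from the text):

```python
def amdahl_speedup(parallel_fraction, num_processors):
    """Estimate the maximum overall speedup using Amdahl's Law:
    S = 1 / ((1 - p) + p / N), where p is the parallelizable
    fraction of runtime and N is the number of processors."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / num_processors)

# A program that is 90% parallelizable, run on 1,000 GPU cores:
# the serial 10% dominates, capping the speedup near 10x.
print(amdahl_speedup(0.9, 1000))
```

Note that as N grows without bound, the speedup approaches 1 / (1 - p), which is why the serial portion of a program limits what a GPU can do for us.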
Chapter 2, Setting Up Your GPU Programming Environment, explains how to set up an appropriate Python and C++ development environment for CUDA under both Windows and Linux.
Chapter 3, Getting Started with PyCUDA, shows the most essential skills we will need for programming GPUs from Python. In particular, we will see how to transfer data to and from a GPU using PyCUDA's gpuarray class, and how to compile simple CUDA kernels with PyCUDA's ElementwiseKernel function.
Chapter 4, Kernels, Threads, Blocks, and Grids, teaches the fundamentals of writing effective CUDA kernels, which are parallel functions that are launched on the GPU. We will see how to write CUDA device functions ("serial" functions called directly by CUDA kernels), and learn about CUDA's abstract grid/block structure and the role it plays in launching kernels.
Chapter 5, Streams, Events, Contexts, and Concurrency, covers the notion of CUDA Streams, a feature that allows us to launch many kernels on a GPU concurrently and synchronize them. We will see how to use CUDA Events to time kernel launches, and how to create and use CUDA Contexts.
Chapter 6, Debugging and Profiling Your CUDA Code, fills in some of the gaps we have in terms of pure CUDA C programming, and shows us how to use the NVIDIA Nsight IDE for debugging and development, as well as how to use the NVIDIA profiling tools.
Chapter 7, Using the CUDA Libraries with Scikit-CUDA, gives us a brief tour of some of the important standard CUDA libraries by way of the Python Scikit-CUDA module, including cuBLAS, cuFFT, and cuSOLVER.
Chapter 8, The CUDA Device Function Libraries and Thrust, shows us how to use the cuRAND and CUDA Math API libraries in our code, as well as how to use CUDA Thrust C++ containers.
Chapter 9, Implementation of a Deep Neural Network, serves as a capstone in which we learn how to build an entire deep neural network from scratch, applying many of the ideas we have learned in the text.
Chapter 10, Working with Compiled GPU Code, shows us how to interface our Python code with pre-compiled GPU code, using both PyCUDA and Python's built-in ctypes module.
Chapter 11, Performance Optimization in CUDA, teaches some very low-level performance optimization tricks, especially in relation to CUDA, such as warp shuffling, vectorized memory access, using inline PTX assembly, and atomic operations.
Chapter 12, Where to Go from Here, is an overview of some of the educational and career paths that will build on your now-solid foundation in GPU programming.