This is quite a technical subject, so we will have to make a few assumptions about the reader's programming background. Specifically, we will assume the following:
- You have an intermediate level of programming experience in Python.
- You are familiar with standard Python scientific packages, such as NumPy, SciPy, and Matplotlib.
- You have an intermediate ability in any C-based programming language (C, C++, Java, Rust, Go, and so on).
- You understand the concept of dynamic memory allocation in C (particularly how to use the C malloc and free functions); a short refresher sketch of this pattern follows this list.
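As a quick refresher on the last point, the following is a minimal C sketch of the allocate-use-free pattern; the array size and contents here are arbitrary choices for illustration and are not taken from any example in this book.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 10;

    /* Request a block of heap memory large enough for n doubles. */
    double *values = (double *) malloc(n * sizeof(double));
    if (values == NULL)
    {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }

    /* Use the memory: fill it with the squares of 0..n-1 and print them. */
    for (int i = 0; i < n; i++)
        values[i] = (double) i * i;
    for (int i = 0; i < n; i++)
        printf("%f\n", values[i]);

    /* Return the memory to the system when we are done with it. */
    free(values);
    return 0;
}
```

The same discipline of explicitly allocating memory, using it, and then releasing it reappears throughout GPU programming, where memory on the device must also be managed explicitly.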
GPU programming is mostly applied in fields that are highly scientific or mathematical in nature, so many (if not most) of the examples will make use of some mathematics. For this reason, we assume that the reader has some familiarity with first- or second-year college mathematics, including:
- Trigonometry (the sinusoidal functions: sin, cos, tan, and so on)
- Calculus (integrals, derivatives, gradients)
- Statistics (uniform and normal distributions)
- Linear algebra (vectors, matrices, vector spaces, dimensionality)
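As a rough gauge of the expected level, notation such as the following should be readable without pausing; these particular formulas are generic illustrations rather than excerpts from any chapter:

```latex
% Gradient of a scalar function of two variables
\nabla f(x, y) = \left( \frac{\partial f}{\partial x}, \; \frac{\partial f}{\partial y} \right)

% Probability density of the normal distribution with mean \mu and standard deviation \sigma
f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}

% Matrix-vector product written in component form
y = A x, \qquad y_i = \sum_{j} A_{ij} \, x_j
```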
We will make one more set of assumptions here. Recall that we will be working only with CUDA in this text, a proprietary platform for NVIDIA hardware. We will therefore need some specific hardware before we get started, so we will assume that the reader has access to the following:
- A 64-bit x86 Intel/AMD-based PC
- 4 gigabytes (GB) of RAM or more
- An entry-level NVIDIA GTX 1050 GPU (Pascal architecture) or better
The reader should be aware that most older GPUs will probably work fine with most, if not all, of the examples in this text, but they have only been tested on a GTX 1050 under Windows 10 and a GTX 1070 under Linux. Specific instructions regarding setup and configuration are given in Chapter 2, Setting Up Your GPU Programming Environment.