Leaving the CPU – using device memory
With the proliferation of large language models (LLMs) and other machine learning (ML) use cases, more and more libraries and workflows are being adapted to take advantage of GPUs or other hardware devices. Leveraging these devices requires adapting to an entirely new paradigm of engineering, which can be difficult to learn and adopt. To facilitate this transition, the Arrow C++ library—and PyArrow by extension—provides a series of interfaces and building blocks for designing systems that utilize Arrow-formatted data both in main memory and in device memory.
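To give a sense of what this looks like in practice, here is a minimal sketch using PyArrow's CUDA interface. It assumes a PyArrow build with CUDA support and an Nvidia GPU available as device 0; the array contents and device index are purely illustrative, not part of the examples developed later in this section.

import pyarrow as pa
from pyarrow import cuda

# Create a context for the first CUDA device (device 0 is an assumption)
ctx = cuda.Context(0)

# Build an ordinary host-resident Arrow array
host_arr = pa.array([1, 2, 3, 4, 5], type=pa.int64())

# Copy its data buffer into device memory, producing a CudaBuffer
device_buf = ctx.buffer_from_data(host_arr.buffers()[1])
print(device_buf.size)  # size in bytes of the device-resident buffer

# Copy the bytes back to host memory and rebuild the array to verify
host_buf = device_buf.copy_to_host()
roundtrip = pa.Array.from_buffers(pa.int64(), 5, [None, host_buf])
print(roundtrip)

The CudaBuffer produced here behaves much like a regular Arrow Buffer, except that its bytes live in GPU memory rather than in host RAM, which is exactly the kind of building block this section is concerned with.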
Important warning!
While there will be examples in this section, and in later chapters that touch on device memory, that work with GPUs—specifically, in this case, Nvidia graphics cards and the Compute Unified Device Architecture (CUDA)—I will not be diving too deep into the actual programming for these devices. People much more experienced in that...