Progress in AI has always been tied to available computing power. In this section, we will discuss CPUs and GPUs for powering AI applications, and how to set up your system for GPU-accelerated processing.
The main computational hardware in your computer is known as the central processing unit (CPU); CPUs are designed for general computing workloads. While your local CPU can be used to train a deep learning model, you might find your computer hanging on the training process for hours. When training AI applications, it's smarter to use the CPU's cousin, the Graphics Processing Unit (GPU). GPUs are designed to process in parallel, just as an ANN processes in parallel. As we learned in the last chapter, AI applications require many linear algebra operations, the exact same type of operations that are required for...
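To make the linear algebra connection concrete, here is a minimal pure-Python sketch (the function name and values are illustrative, not from any framework) of the computation inside a single dense ANN layer. Each output neuron is an independent dot product, which is exactly why the work parallelizes so well on a GPU:

```python
# A single dense (fully connected) ANN layer is just a matrix-vector
# product plus a bias -- the kind of linear algebra GPUs excel at.
# Pure-Python sketch for illustration only; real frameworks dispatch
# this to vectorized CPU or GPU kernels.

def dense_layer(weights, bias, x):
    """Compute y = W @ x + b, one output value per row of W."""
    return [
        sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
        for row, b_i in zip(weights, bias)
    ]

# 2 output neurons, 3 inputs (hypothetical example values)
W = [[0.5, -1.0, 2.0],
     [1.0,  0.0, 0.5]]
b = [0.1, -0.2]
x = [1.0, 2.0, 3.0]

print(dense_layer(W, b, x))  # each output row computed independently
```

Because no row of the output depends on any other row, a GPU can assign each dot product (or each multiply-add within one) to a separate core, which is the parallelism this section is about.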