Hardware considerations for AI/ML workloads
Most of the topics in this book focus on the software- and service-level capabilities available on Google Cloud, but advanced practitioners will also want to understand the underlying hardware. If your use cases demand extreme performance, selecting the right hardware components on which to run your workloads is an important decision. Hardware selection and efficient utilization also drive cost, which is, of course, another important factor in your solution architecture. In this section, we'll shift the discussion to some of the hardware considerations for running AI/ML workloads on Google Cloud, beginning with an overview of central processing unit (CPU), graphics processing unit (GPU), and tensor processing unit (TPU) capabilities.
CPUs, GPUs, and TPUs
You are probably already familiar with CPUs and GPUs, but TPUs are more specific to Google Cloud...
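Before choosing between these processor types, it helps to see what your runtime environment can actually reach. The sketch below is a minimal, hedged example: it always reports the CPU count via the standard library, and, if TensorFlow happens to be installed, also counts visible GPU and TPU devices using `tf.config.list_physical_devices`. The function name `detect_accelerators` is illustrative, and TensorFlow is treated as an optional dependency; any framework with device enumeration would serve the same purpose.

```python
import os

def detect_accelerators():
    """Report the CPU count and, if TensorFlow is installed,
    any GPU/TPU devices visible to the process."""
    report = {"cpus": os.cpu_count(), "gpus": 0, "tpus": 0}
    try:
        import tensorflow as tf  # optional dependency; assumption for this sketch
        report["gpus"] = len(tf.config.list_physical_devices("GPU"))
        report["tpus"] = len(tf.config.list_physical_devices("TPU"))
    except ImportError:
        pass  # TensorFlow not available; report CPUs only
    return report

print(detect_accelerators())
```

On a plain Compute Engine VM with no accelerators attached, this would report only CPUs; on a GPU-attached VM or a TPU VM with the appropriate framework installed, the corresponding counts become nonzero.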