Speeding up your jobs with compilation
Remember that in Chapter 4, we learned about some basic concepts in GPU systems architecture. We covered the foundational Compute Unified Device Architecture (CUDA), NVIDIA's software platform that deep learning frameworks use to execute your Python code on GPUs. We talked about managed containers and deep learning frameworks, such as PyTorch and TensorFlow, which are already tested and proven to run nicely on the AWS cloud. The problem with most neural network implementations is that they aren't particularly optimized for GPUs. This is where compilation comes in; you can use it to eke out up to a two-times speedup for the same model!
In the context of compilers for deep learning, we're mostly interested in accelerated linear algebra (XLA). This is a compiler Google originally developed for TensorFlow, and it now also serves as the foundation of the JAX framework. PyTorch developers will be happy to know that major compilation techniques have been upstreamed into PyTorch itself: as of PyTorch 2.0, you can apply them with a single call to torch.compile().
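As a minimal sketch of what this looks like in PyTorch 2.x (the small stand-in model here is just for illustration; you would pass your own module and inputs), torch.compile wraps a standard nn.Module and compiles it lazily on the first forward pass:

```python
import torch
import torch.nn as nn

# A trivial stand-in model; substitute your own nn.Module.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# torch.compile (PyTorch 2.0+) returns an optimized version of the model.
# No other code changes are needed; the compiled model is called like the original.
compiled_model = torch.compile(model)

x = torch.randn(32, 512)
y = compiled_model(x)  # first call triggers compilation; later calls run the optimized kernels
```

Note that the first call pays a one-time compilation cost, so the speedup only shows up over repeated forward passes, which is exactly the pattern of a training loop.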