Now that we've been introduced to a tool that manages data pipelines, it's time to look completely under the hood. Our models ultimately run on the kinds of hardware we discussed in Chapter 5, Next Word Prediction with Recurrent Neural Networks, abstracted away by many layers of software until all we need to type is a command such as `go build -tags=cuda`.
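As a reminder of how that abstraction works at the source level, Go build tags let the toolchain include or exclude entire files depending on the tags passed to `go build`. The sketch below shows the general pattern split across three files in one package; the file names and the `backend` constant are hypothetical illustrations, not code from this book's project:

```go
// backend_cuda.go: compiled only when building with "go build -tags=cuda".
//go:build cuda

package main

const backend = "cuda" // hypothetical constant selecting the GPU code path

// backend_cpu.go: the fallback, compiled when the cuda tag is absent.
//go:build !cuda

package main

const backend = "cpu"

// main.go: shared code, compiled either way.
package main

import "fmt"

func main() {
	fmt.Println("compute backend:", backend)
}
```

Running `go build` produces a binary that prints `compute backend: cpu`, while `go build -tags=cuda` swaps in the CUDA variant, which is how a single command line selects hardware-specific implementations without changing any shared code.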
Our deployment of the image recognition pipeline built on top of Pachyderm was local. We did it in a way that was functionally identical to deploying it to cloud resources, but without examining what a cloud deployment actually involves. That detail will now be our focus.
By the end of this chapter, you should be able to do the following:
- Identify and understand cloud resources, including those specific to our example platform (AWS)
- Know how to migrate your local deployment to the cloud ...