Introduction
In the previous chapter, we learned how to create our own data pipelines and use Airflow to automate our jobs. This enables us to create more data pipelines at scale, which is great. However, as we scale the number of data pipelines on our local machine, we quickly run into the limits of a single machine. A machine with 16 CPUs and 32 GB of RAM can run at most 16 data pipelines in parallel, and only as long as their combined memory footprint stays under 32 GB. In reality, AI engineers need to run hundreds of data pipelines every day to train models, generate predictions, monitor system health, and so on. Therefore, we need many more machines to support operations at such a scale.
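To make this single-machine ceiling concrete, here is a minimal sketch of the capacity calculation. The per-pipeline CPU and memory figures are illustrative assumptions, not measurements from any real workload:

```python
# A minimal sketch: estimate how many pipelines one machine can run in parallel.
# The per-pipeline costs below are illustrative assumptions.

MACHINE_CPUS = 16           # CPUs on the machine, as in the example above
MACHINE_MEMORY_GB = 32      # RAM on the machine, as in the example above

CPUS_PER_PIPELINE = 1       # assumption: each pipeline occupies one CPU
MEMORY_GB_PER_PIPELINE = 2  # assumption: each pipeline needs about 2 GB of RAM


def max_parallel_pipelines(cpus: int, memory_gb: float) -> int:
    """Return how many pipelines fit on one machine, whichever resource runs out first."""
    by_cpu = cpus // CPUS_PER_PIPELINE
    by_memory = int(memory_gb // MEMORY_GB_PER_PIPELINE)
    return min(by_cpu, by_memory)


if __name__ == "__main__":
    limit = max_parallel_pipelines(MACHINE_CPUS, MACHINE_MEMORY_GB)
    print(f"At most {limit} pipelines can run in parallel on this machine.")
    # With these numbers both budgets cap out at 16 pipelines --
    # far below the hundreds an AI team typically needs every day.
```

However the per-pipeline numbers are tuned, one box quickly hits either its CPU or its memory ceiling, which is exactly why we turn to the cloud next.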
Nowadays, software engineers build their applications on the cloud, and doing so brings many benefits. Some of them are as follows:
- The cloud is flexible. We can scale the capacity up or down as we need.