Building a batch pipeline
For the batch pipeline, we will use the IMDB dataset we worked with in Chapter 5. We are going to automate the whole process, from data acquisition and ingestion into our data lake on Amazon Simple Storage Service (Amazon S3) to the delivery of consumption-ready tables in Trino. In Figure 10.1, you can see a diagram representing the architecture for this section’s exercise:
Figure 10.1 – Architecture design for a batch pipeline
Now, let’s get to the code.
Building the Airflow DAG
Let’s start developing our Airflow DAG as usual. The complete code is available in the https://github.com/PacktPublishing/Bigdata-on-Kubernetes/tree/main/Chapter10/batch/dags folder:
- The first lines of the Airflow DAG are shown next:
imdb_dag.py
from airflow.decorators import task, dag
from airflow.utils.task_group import TaskGroup
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator
...
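Before walking through the repository code, here is a minimal sketch of how these imports typically fit together in a DAG of this kind. The DAG ID, task names, namespace, schedule, and manifest filename below are hypothetical placeholders for illustration, not the exact values used in the book’s repository:

from datetime import datetime

from airflow.decorators import dag, task
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator
from airflow.utils.task_group import TaskGroup

@dag(
    dag_id="imdb_batch_pipeline",  # hypothetical DAG ID
    schedule_interval=None,        # triggered manually in this sketch
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def imdb_batch_pipeline():
    @task
    def ingest_to_s3():
        # Placeholder: download the raw IMDB files and upload them
        # to the S3 data lake (implemented in the repository code).
        ...

    with TaskGroup("spark_jobs") as spark_jobs:
        # Submits a SparkApplication manifest to Kubernetes through
        # the Spark operator; the manifest path is illustrative.
        SparkKubernetesOperator(
            task_id="process_imdb",
            namespace="spark-operator",
            application_file="imdb_spark_job.yaml",
            kubernetes_conn_id="kubernetes_default",
        )

    ingest_to_s3() >> spark_jobs

imdb_batch_pipeline()

In practice, a SparkKubernetesSensor from the same provider package is often paired with the operator to monitor the submitted application until it completes; we will see how the actual DAG handles this as we go through the code.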