Managing Spark jobs in a pipeline
Managing Spark jobs in a pipeline involves two aspects:
- Managing the attributes of the pipeline's runtime that launches the Spark activity: This is no different from managing any other activity in a pipeline. The Managing and Monitoring pages we saw in Figure 11.9, Figure 11.11, and Figure 11.12 apply to Spark activities as well, and you can use the options provided on those screens to manage your Spark activity.
- Managing Spark jobs and configurations: This involves understanding how Spark works, tuning the jobs, and so on. A complete chapter towards the end of this book is dedicated to optimizing Synapse SQL and Spark jobs; refer to Chapter 14, Optimizing and Troubleshooting Data Storage and Data Processing, to learn more about managing and tuning Spark jobs.
In this section, we'll learn how to add an Apache Spark job (via HDInsight) to our pipeline...
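Before walking through the screens, it helps to see what such an activity looks like under the hood. A Spark activity in a pipeline is ultimately defined as JSON; the following is a minimal sketch of an HDInsight Spark activity definition. The names used here (SampleSparkActivity, HDInsightLinkedService, StorageLinkedService, main.py) are illustrative placeholders, not values from this chapter's example:

```json
{
  "name": "SampleSparkActivity",
  "type": "HDInsightSpark",
  "linkedServiceName": {
    "referenceName": "HDInsightLinkedService",
    "type": "LinkedServiceReference"
  },
  "typeProperties": {
    "rootPath": "adfspark",
    "entryFilePath": "main.py",
    "sparkJobLinkedService": {
      "referenceName": "StorageLinkedService",
      "type": "LinkedServiceReference"
    },
    "sparkConfig": {
      "spark.executor.memory": "4g",
      "spark.sql.shuffle.partitions": "64"
    }
  }
}
```

The `sparkConfig` map is where per-job Spark settings of the kind discussed in Chapter 14 can be supplied; the two values shown are placeholders to be tuned for your workload rather than recommendations.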