Monitoring Spark jobs (data engineering and data science)
Data engineering and data science workloads are powered by the Fabric Spark Runtime (based on Apache Spark). Fabric gives you the flexibility to choose a notebook for interactive development, a Spark job definition for batch execution, or the REST APIs for submitting and running these jobs programmatically. In all these cases, the jobs are executed by the Fabric Spark Runtime, and telemetry is captured for you to examine.
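As an illustration of the programmatic path, a Spark job definition run can be triggered through Fabric's on-demand job scheduler REST API. The snippet below is a minimal sketch, assuming a valid Microsoft Entra access token and placeholder workspace and item GUIDs; the exact endpoint path, the `jobType` value, and the payload shape should be verified against the current Fabric REST API documentation.

```python
import requests

# Assumptions: WORKSPACE_ID and ITEM_ID are the GUIDs of your workspace and
# Spark job definition, and ACCESS_TOKEN is a Microsoft Entra token with the
# required Fabric scopes. The endpoint and jobType follow the Fabric
# on-demand job scheduler API; verify both against current documentation.
WORKSPACE_ID = "<workspace-guid>"
ITEM_ID = "<spark-job-definition-guid>"
ACCESS_TOKEN = "<entra-access-token>"

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{ITEM_ID}/jobs/instances?jobType=sparkjob"
)

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={},  # optional execution data (e.g., parameters) could be passed here
)
response.raise_for_status()

# A successful submission returns an accepted response whose Location header
# points to the new job instance, which then shows up in Monitoring hub.
print(response.status_code, response.headers.get("Location"))
```

Runs submitted this way appear in Monitoring hub alongside notebook and Spark job definition runs started from the UI, so the monitoring experience described next applies to them as well.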
You can monitor these jobs in Monitoring hub, both while they are still executing and after they have completed. You can scroll through the list of tracked activity, or use the filter at the top to search the logged information for a specific type of item. Figure 7.7 shows an example that uses a text-based search to find all the logs for notebook executions. You can hover over each row to see its details, or click it to drill into more granular details for the Spark...