Rerunning activities
When data transfers fail for one reason or another, we often need to rerun the affected pipelines so that the required data movement still completes, albeit with a delay. If a pipeline is complex, or moves large volumes of data, it is useful to be able to resume the run from the point of failure, minimizing the time lost to the failed execution.
In this section, we will look at two features of Data Factory that help us to troubleshoot our pipelines and rerun them with maximum efficiency. The first feature is breakpoints, which allow us to execute a pipeline up to an activity of our choice. The second feature is rerunning from the point of failure, which helps to minimize the time lost due to a failed execution.
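Under the hood, rerunning from the point of failure corresponds to Data Factory's CreateRun REST call, which accepts recovery-related query parameters (`referencePipelineRunId`, `isRecovery`, `startFromFailure`, and optionally `startActivityName`). As a minimal sketch, the helper below builds such a request URL; the subscription, resource group, factory, pipeline, and run ID values are placeholders, not names from this recipe:

```python
from urllib.parse import urlencode

def build_rerun_url(subscription_id, resource_group, factory, pipeline,
                    failed_run_id, start_activity=None):
    """Build the Data Factory CreateRun REST URL that reruns a pipeline
    as a recovery of a previous (failed) run."""
    base = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.DataFactory"
        f"/factories/{factory}"
        f"/pipelines/{pipeline}/createRun"
    )
    params = {
        "api-version": "2018-06-01",
        "referencePipelineRunId": failed_run_id,  # the run being recovered
        "isRecovery": "true",                     # mark this as a recovery run
    }
    if start_activity:
        # Rerun from a specific activity of our choice
        params["startActivityName"] = start_activity
    else:
        # Rerun only from the activities that failed
        params["startFromFailure"] = "true"
    return f"{base}?{urlencode(params)}"

# Hypothetical identifiers for illustration only
url = build_rerun_url("0000-sub", "my-rg", "my-factory", "CopyPipeline",
                      "11111111-2222-3333-4444-555555555555")
print(url)
```

In practice, this URL would be POSTed with an Azure AD bearer token (or the equivalent call made through the Azure SDK or the Data Factory monitoring UI, as shown later in this recipe); the sketch only illustrates which parameters control recovery behavior.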
Getting ready
Preparing your environment for this recipe is identical to the preparation required for the previous recipe in this chapter, Investigating failures – running in debug mode. We will be using the same Azure Data...