Summary
In this chapter, you have examined Synapse pipelines and Azure Data Factory. You have learned how to create a data movement pipeline using a wizard, as well as from scratch in the authoring environment. You have seen the orchestration capabilities offered by the many different activities provided.
You have further implemented your first mapping data flow to apply transformations to your data before it lands in your Data Lake Storage. You have also examined wrangling data flows and learned the difference between the two data flow types.
We have also examined the different integration runtime (IR) types and talked about managed virtual networks and managed private endpoints.
Finally, we have integrated our Data Factory with Azure DevOps and established source control over our artifacts.
In the next chapter, we are going to dive into another option to transform and process data using one of the main compute components in our modern data warehouse: the Spark engine.