Summary
In this chapter, we learned how to use different languages in a Synapse notebook to query data. Magic commands allow you to switch between the supported languages within the same notebook. We covered how to use Azure Open Datasets within a Synapse workspace and saw that a DataFrame or Spark table can be created with any of the languages supported in Azure Synapse Analytics. We also learned how to read data from Azure Data Lake Storage Gen2 accounts, how to create Spark DataFrames, and how to create Spark tables using PySpark, Scala, or .NET, as well as how to write data back to an Azure Data Lake Storage Gen2 account. Although we only covered Azure Data Lake Storage Gen2, a similar approach can be used to access data in Azure Blob storage containers.
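As a quick refresher, the following sketch pulls these pieces together: a PySpark cell (selected with the %%pyspark magic command) reads a file from Data Lake Storage Gen2, registers it as a Spark table, and writes the data back, and a second cell switches to SQL with the %%sql magic to query that table. The storage account, container, folder, and file names are placeholders and should be replaced with your own values.

%%pyspark
# Hypothetical ADLS Gen2 path -- substitute your own storage account,
# container, and file name.
adls_path = "abfss://users@contosodatalake.dfs.core.windows.net/NYCTripSmall.parquet"

# Read the Parquet file into a Spark DataFrame.
df = spark.read.parquet(adls_path)

# Persist the DataFrame as a Spark table so other cells (and languages) can query it.
df.write.mode("overwrite").saveAsTable("nyc_trip_small")

# Write the data back to the Data Lake Storage Gen2 account under a different folder.
df.write.mode("overwrite").parquet(
    "abfss://users@contosodatalake.dfs.core.windows.net/curated/NYCTripSmall"
)

%%sql
-- Switch languages within the same notebook and query the table created above.
SELECT COUNT(*) FROM nyc_trip_small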
So far, we have learned how to use Spark pools and SQL pools, and how to work with different languages against these pools. Our next area of focus will be the reporting tool.
In the next chapter...