Compacting small files
Small files are the nightmare of big data processing systems. Analytical engines such as Spark, Synapse SQL, and Hive, and cloud storage services such as Azure Blob Storage and ADLS Gen2, are all optimized for large files. Hence, to keep our data pipelines efficient, it is better to merge, or compact, the small files into bigger ones. In Azure, this can be achieved using Azure Data Factory or Synapse Pipelines. Let's look at an example that uses Azure Data Factory to concatenate a set of small CSV files in a directory into one big file. The steps for Synapse Pipelines will be very similar:
- From the Azure Data Factory portal, select the Copy Data activity as shown in the following screenshot. In the Source tab, either choose an existing source dataset or create a new one pointing to the storage location that contains the small files. Next, choose the Wildcard file path option for the File path type. In the Wildcard paths field, provide a folder path ending with...
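As a conceptual aside, the merge that the Copy Data activity performs on CSV files can be sketched in plain Python. This is only an illustration of the compaction idea, not what Data Factory runs internally; the directory layout, file names, and the `compact_csvs` helper are all hypothetical:

```python
import csv
import glob
import os
import tempfile

def compact_csvs(src_dir: str, dest_path: str) -> None:
    """Merge every CSV file in src_dir into one file, keeping a single header row."""
    parts = sorted(glob.glob(os.path.join(src_dir, "*.csv")))
    header_written = False
    with open(dest_path, "w", newline="") as out:
        writer = csv.writer(out)
        for part in parts:
            with open(part, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                # Write the header only once, from the first small file
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                # Append the remaining data rows
                writer.writerows(reader)

# Demo: create three hypothetical small files in a temporary directory
src = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(src, f"part-{i}.csv"), "w", newline="") as f:
        f.write(f"id,value\n{i},{i * 10}\n")

# Compact them into one big file in a separate directory
merged = os.path.join(tempfile.mkdtemp(), "merged.csv")
compact_csvs(src, merged)
```

The key point the sketch makes is the same one that matters in the pipeline: downstream readers now open one file handle instead of many, which is what makes analytical engines and cloud storage happier.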