Moving data between a data lake and Redshift
Moving data between a data lake and a data warehouse, such as Amazon Redshift, is a common requirement. For example, data may be cleansed and processed with Glue ETL jobs in the data lake, and then hot data can be loaded into Redshift so that BI tools can query it with optimal performance.
Similarly, in some use cases data is further processed in the data warehouse, and this newly processed data then needs to be exported back to the data lake so that other users and processes can consume it.
In this section, we will examine some best practices and recommendations for both ingesting data from the data lake into Redshift and exporting data from Redshift back to the data lake.
Optimizing data ingestion in Redshift
While there are various ways that you can insert data into Redshift, the recommended way is to bulk ingest data using the Redshift COPY command. The COPY command enables optimized, parallel loading: it reads multiple files from Amazon S3 simultaneously and distributes the work across the compute nodes of the cluster.
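As an illustration, the following minimal sketch uses the boto3 Redshift Data API to submit a COPY statement that bulk loads Parquet files from an S3 prefix. The cluster identifier, database, user, IAM role, table name, and S3 path are all hypothetical placeholders; substitute your own values.

```python
import time

import boto3

# Hypothetical placeholders -- replace with your own values
REDSHIFT_CLUSTER = "my-redshift-cluster"
DATABASE = "dev"
DB_USER = "awsuser"
IAM_ROLE_ARN = "arn:aws:iam::123456789012:role/MyRedshiftRole"

# COPY reads all Parquet files under the S3 prefix in parallel,
# spreading the load across the slices of the cluster
copy_sql = f"""
    COPY sales
    FROM 's3://my-data-lake/curated/sales/'
    IAM_ROLE '{IAM_ROLE_ARN}'
    FORMAT AS PARQUET;
"""

client = boto3.client("redshift-data")

# Submit the COPY statement asynchronously via the Redshift Data API
response = client.execute_statement(
    ClusterIdentifier=REDSHIFT_CLUSTER,
    Database=DATABASE,
    DbUser=DB_USER,
    Sql=copy_sql,
)

# Poll until the statement completes
while True:
    status = client.describe_statement(Id=response["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print(f"COPY completed with status: {status}")
```

Because COPY splits the load across all of the compute slices in the cluster, you get the best throughput when the source data is broken into multiple files of roughly equal size, rather than loading a single large file.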