Data ingestion using AWS Glue ETL
In the previous section, we learned how to use various features of AWS Glue Crawler and the AWS Glue Data Catalog to create a centralized data catalog for data discovery. In this section, we will explore using AWS Glue ETL to ingest data from various sources, such as data lakes (Amazon S3), databases, streaming sources, and SaaS data stores. Additionally, we will learn how to use job bookmarks to perform incremental data loads from data lakes (S3) and JDBC sources.
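As a preview of job bookmarks, the sketch below builds the parameters you might pass to the Glue `create_job` API via `boto3`, with the documented `--job-bookmark-option` default argument set to `job-bookmark-enable` so that reruns process only new data. The job name, IAM role ARN, and script location are hypothetical placeholders, and the `boto3` call itself is left commented out since it requires an AWS account:

```python
# Sketch: defining a Glue Spark ETL job with job bookmarks enabled.
# The role ARN, bucket, and job name below are hypothetical examples.

def build_glue_job_definition(job_name, role_arn, script_s3_path):
    """Build the keyword arguments for glue_client.create_job(**params)."""
    return {
        "Name": job_name,
        "Role": role_arn,
        "Command": {
            "Name": "glueetl",              # the Spark ETL framework
            "ScriptLocation": script_s3_path,
            "PythonVersion": "3",
        },
        "DefaultArguments": {
            # Turns on job bookmarks, enabling incremental loads
            "--job-bookmark-option": "job-bookmark-enable",
        },
        "GlueVersion": "4.0",
    }

params = build_glue_job_definition(
    "daily-orders-ingest",                      # hypothetical job name
    "arn:aws:iam::123456789012:role/GlueRole",  # hypothetical IAM role
    "s3://my-bucket/scripts/orders_job.py",     # hypothetical script path
)

# In a real account you would then create the job with:
#   import boto3
#   boto3.client("glue").create_job(**params)
print(params["DefaultArguments"]["--job-bookmark-option"])
```

With bookmarks enabled, Glue tracks which S3 objects (or which values of a JDBC bookmark key column) each run has already processed, so subsequent runs pick up only the delta.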
AWS Glue enables users to create ETL jobs using three types of ETL frameworks: Spark ETL, Spark Streaming, and Python Shell. In the section introducing AWS Glue DataBrew, we learned how AWS Glue has evolved and that AWS Glue Studio is now available for building ETL pipelines.
The AWS Glue user interface lets you build your ETL pipeline visually and includes a useful feature that converts the UI-defined job into a script, which helps you scale when building similar pipelines or...