Discovering your datasets on S3 using AWS Glue Crawlers
Let's say that you have a lot of data that you are outputting to S3, and you want to query it. Before you can, you need to register that data in the AWS Glue Data Catalog. However, the data sitting in S3 is often in many different formats and schemas. Going through each dataset, inspecting files, and determining the file format, partitions, and columns by hand is time-consuming and error-prone. If a table is registered with incorrect column names, an incorrect column order, or any other error, it may not be queryable until it is corrected. AWS Glue Crawlers solve these problems: a crawler can scan data on S3, inspect the directory structure and the data within it, and automatically populate the Data Catalog. This section looks at how crawlers work and then sets up a Glue crawler to discover a sample dataset.
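As a rough sketch of what this looks like in practice, a crawler can be created and started with a few boto3 calls. The bucket path, IAM role ARN, database name, and crawler name below are all placeholders for illustration, not values from this section's sample dataset:

```python
# Hypothetical sketch: defining and launching a Glue crawler with boto3.
# All names, ARNs, and paths below are placeholders.

def build_crawler_config(name, role_arn, database, s3_path):
    """Assemble the parameters for Glue's CreateCrawler API call."""
    return {
        "Name": name,
        "Role": role_arn,              # IAM role the crawler assumes
        "DatabaseName": database,      # catalog database to populate
        "Targets": {"S3Targets": [{"Path": s3_path}]},
    }

config = build_crawler_config(
    name="sample-dataset-crawler",
    role_arn="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder
    database="sample_db",
    s3_path="s3://my-sample-bucket/datasets/",  # placeholder
)

# With credentials configured, the crawler would be created and run like this:
# import boto3
# glue = boto3.client("glue")
# glue.create_crawler(**config)
# glue.start_crawler(Name=config["Name"])
```

Once the crawler finishes, the tables it discovered appear in the specified Data Catalog database and can be queried, for example, from Athena.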
How do AWS Glue Crawlers work?
There are three actions that a Glue crawler takes when scanning S3:
- It scans S3 directories for data files...