Chapter 2: Data Ingestion
Data ingestion is the process of moving data from disparate operational systems into a central location, such as a data warehouse or a data lake, where it can be processed and prepared for analytics. It is the first step of the data analytics process and provides the centrally accessible, persistent storage that data engineers, data scientists, and data analysts rely on to access, process, and analyze data and generate business insights.
You will be introduced to the capabilities of Apache Spark as a data ingestion engine for both batch and real-time processing, and you will see how to access the various data sources Spark supports through its DataFrame interface, as illustrated in the sketch that follows.
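The following is a minimal PySpark sketch of the kinds of reads and writes this chapter covers: a batch read from a file-based source, a batch read from an RDBMS over JDBC, a streaming read from an Apache Kafka topic, and writes into a data lake. The paths, JDBC connection details, broker address, and topic name are hypothetical placeholders, and the exact options will depend on your environment.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("data-ingestion-sketch")
         .getOrCreate())

# Batch read from a file-based source in the landing zone (path is hypothetical)
sales_df = (spark.read
            .format("parquet")
            .load("/data/landing/sales/"))

# Batch read from an RDBMS over JDBC (connection details are hypothetical;
# the matching JDBC driver must be available to Spark)
customers_df = (spark.read
                .format("jdbc")
                .option("url", "jdbc:postgresql://db-host:5432/sales")
                .option("dbtable", "public.customers")
                .option("user", "ingest_user")
                .option("password", "<password>")
                .load())

# Streaming read from an Apache Kafka topic (broker and topic are hypothetical)
events_stream = (spark.readStream
                 .format("kafka")
                 .option("kafka.bootstrap.servers", "broker:9092")
                 .option("subscribe", "clickstream-events")
                 .load())

# Batch write into the data lake as Parquet files
(sales_df.write
 .mode("append")
 .format("parquet")
 .save("/data/lake/sales/"))

# Continuously append the streaming records to the data lake
# (checkpoint and output paths are hypothetical)
query = (events_stream.writeStream
         .format("parquet")
         .option("checkpointLocation", "/data/lake/_checkpoints/clickstream")
         .option("path", "/data/lake/clickstream/")
         .start())

Whatever the source, the same DataFrameReader and DataFrameWriter pattern applies: specify a format, set source-specific options, and then load or save the data.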
Additionally, you will learn how to use Apache Spark's built-in functions to access data from external data sources, such as a Relational Database Management System (RDBMS) or a message queue such as Apache Kafka, and ingest that data into data lakes. The different...