Data lakes
Simply put, a data lake is a centralized repository for storing all kinds of data. Data can be structured (such as relational database data in tabular format), semi-structured (such as JSON), or unstructured (such as images, PDFs, and so on). Data from all the heterogeneous source systems is collected into this single repository, processed there, and consumed from it.

In its early days, Apache Hadoop became the go-to framework for setting up data lakes. Hadoop provided a storage layer called the Hadoop Distributed File System (HDFS) and a data processing layer called MapReduce. Organizations started using such data lakes as a central place for storing and processing all kinds of data, and the data lake provided a great alternative for storing and processing data outside relational databases and data warehouses. Soon, however, maintaining a data lake on on-premises infrastructure became a nightmare. We will look at those challenges as we progress through this chapter.
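To make the three data categories concrete, here is a minimal sketch of a toy "data lake": a single location holding structured, semi-structured, and unstructured files side by side. The directory, file names, and contents are all hypothetical; a real data lake would sit on HDFS or cloud object storage rather than a local temporary directory.

```python
import csv
import json
import os
import tempfile

# A toy "data lake": one directory holding heterogeneous data side by side.
# (Illustrative only; real data lakes use HDFS or cloud object storage.)
lake = tempfile.mkdtemp(prefix="lake_")

# Structured: tabular data, e.g. exported from a relational database.
with open(os.path.join(lake, "orders.csv"), "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "amount"])
    writer.writerow([1, 99.50])

# Semi-structured: JSON with a flexible, nested schema.
with open(os.path.join(lake, "clickstream.json"), "w") as f:
    json.dump({"user": "u42", "events": [{"page": "/home"}]}, f)

# Unstructured: raw bytes, e.g. an image or a PDF.
with open(os.path.join(lake, "scan.pdf"), "wb") as f:
    f.write(b"%PDF-1.4 ...")

print(sorted(os.listdir(lake)))
# -> ['clickstream.json', 'orders.csv', 'scan.pdf']
```

The point of the sketch is that, unlike a relational database or a data warehouse, the repository imposes no single schema up front: each source system can land its data in whatever shape it naturally has.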
The following diagram shows...