Investigating the data lake era
The genesis of the data lake dates back to 2004, when Google researchers Jeffrey Dean and Sanjay Ghemawat published a paper titled MapReduce: Simplified Data Processing on Large Clusters. This paper laid the foundation for a new technology that evolved into Hadoop, originally authored by Doug Cutting and Mike Cafarella.
Hadoop was later incorporated into the Apache Software Foundation, a decentralized open source community of developers, and it has remained one of the top open source projects in the Apache ecosystem.
Hadoop was based on a simple concept: divide and conquer. The idea, sketched in code after this list, entailed three steps:
- Split data into multiple files and distribute them across the various nodes in a cluster.
- Use compute nodes to process the data locally on each node of the cluster.
- Use an orchestrator that communicates with each node and aggregates the results into the final output.
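To make the three steps concrete, here is a minimal, self-contained Python sketch of the divide-and-conquer idea, using a word count as the example job. This is not Hadoop code; the chunking scheme, the worker pool standing in for cluster nodes, and the function names are illustrative assumptions.

```python
# A minimal sketch of the divide-and-conquer idea behind MapReduce,
# shown as a word count. Not Hadoop's API; purely illustrative.
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Step 2: each 'node' processes its chunk of data locally."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def run_job(lines, workers=3):
    # Step 1: split the data into chunks, one per worker (stand-in for cluster nodes).
    chunks = [lines[i::workers] for i in range(workers)]
    # Step 2: process each chunk in parallel.
    with Pool(workers) as pool:
        partial_counts = pool.map(map_chunk, chunks)
    # Step 3: the 'orchestrator' aggregates partial results into the final output.
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    data = ["the quick brown fox", "the lazy dog", "the fox jumps"]
    print(run_job(data))  # e.g. Counter({'the': 3, 'fox': 2, ...})
```

In an actual Hadoop deployment, the chunks are blocks stored on separate machines and the aggregation is handled by the framework's shuffle and reduce phases rather than a single in-process loop, but the shape of the computation is the same.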
Over the years, this concept gained traction, and a new architecture paradigm emerged for analytics: the data lake. A typical data lake pattern is depicted in the following figure:
This pattern addressed the challenges prevalent in the EDW pattern. The key advantages that the data lake architecture offers are as follows:
- The data lake caters to both structured and unstructured data. The Hadoop ecosystem was primarily developed to store and process data formats such as JSON, text, and images. The EDW pattern was not designed to store or analyze these data types.
- The data lake pattern can process large volumes of data at a relatively low cost. Data lakes can store and process volumes in the order of hundreds of terabytes (TB) or petabytes (PB), volumes that the EDW pattern struggled to store and process efficiently.
- Data lakes can better address fast-changing business requirements, and evolving AI technologies can leverage them more effectively.
This pattern is widely adopted because it meets current needs. However, it has its own challenges, a few of which are as follows:
- It is easy for a data lake to become a data swamp. Data lakes take in data of any form and store it raw; the philosophy is to ingest data first and figure out what to do with it later. Governance easily slips under this approach, and without proper data governance, data mushrooms all over the place until the data lake becomes a data swamp.
- Data lakes also face challenges from the rapid evolution of technology. The data lake paradigm relies mainly on open source software, which evolves rapidly into behemoths that can become too difficult to manage. The software is predominantly community-driven and lacks proper enterprise support, which causes significant maintenance overhead and implementation complexity. Many features demanded by enterprises, such as a robust security framework, are missing from open source software.
- Data lakes focus much more on AI enablement than on BI. It was natural that open source software evolution focused on enabling AI: AI was on its own journey, riding a wave that crested together with Hadoop, while BI was seen as retro because it was already mature in its life cycle.
Soon, it became evident that the data lake pattern alone wouldn't be sustainable in the long run. There was a need for a new paradigm that fused these two patterns.