The story of Hadoop started with HDFS and MapReduce. Hadoop version 1 provided the basic features for storing and processing data over a distributed platform, and it has evolved a lot since then. Hadoop version 2 introduced major changes, such as NameNode high availability and a new resource management framework called YARN. However, the high-level flow of MapReduce processing did not change, despite various changes to its API.
MapReduce consists of two major steps, map and reduce, along with several minor steps that make up the processing flow from the map tasks to the reduce tasks. Mappers are responsible for performing the map tasks, while reducers are responsible for the reduce tasks. The job of a mapper is to process the blocks stored on HDFS, the distributed storage layer; a short code sketch of these two roles follows below.
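To make the mapper and reducer roles concrete, here is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API. The class names (WordCount, TokenizerMapper, SumReducer) and the command-line input and output paths are illustrative, not taken from this chapter's examples.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map task: invoked once per input record (a line from the HDFS block
    // assigned to this mapper); emits intermediate (word, 1) pairs.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce task: after the shuffle and sort phase, receives all values
    // emitted for a given key and writes the aggregated count.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    // Driver: configures and submits the job; the high-level
    // map -> shuffle -> reduce flow is the same across Hadoop versions.
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The driver submits the job, mappers emit (word, 1) pairs for the records in their assigned HDFS blocks, and reducers sum the counts they receive after the shuffle and sort phase. With these roles in mind, let's look at the following MapReduce flow diagram: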
We...