Starting with the basics
The Apache Hadoop 2.x release consists of three key components:
- Hadoop Distributed File System (HDFS)
- Yet Another Resource Negotiator (YARN)
- The MapReduce API (job execution, the MRApplicationMaster, the JobHistoryServer, and so on)
Two master processes manage a Hadoop 2.x cluster: the NameNode and the ResourceManager. All the slave nodes run the DataNode and NodeManager processes as the cluster's worker daemons. The NameNode and DataNode daemons are part of HDFS, whereas the ResourceManager and NodeManager belong to YARN.
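To check which of these daemons a node is running, you can use the jps utility that ships with the JDK. On a single-node setup with all daemons started, the output might look like this (the process IDs are illustrative):

```
$ jps
2865 NameNode
3012 DataNode
3311 ResourceManager
3460 NodeManager
3590 Jps
```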
When we configure Hadoop-YARN on a single node, all four of these processes run on the same system. A single-node installation is generally used for learning; if you are a beginner who needs to understand Hadoop-YARN concepts, a single-node Hadoop-YARN cluster is sufficient, and a minimal configuration for one is sketched below.
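The following is a minimal sketch of the four configuration files for such a pseudo-distributed setup, assuming Hadoop 2.x defaults; the property names are the standard ones, and localhost stands in for your node's hostname:

```xml
<!-- core-site.xml: point HDFS clients at the local NameNode -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- hdfs-site.xml: a single DataNode can hold only one replica of each block -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- yarn-site.xml: enable the shuffle service that MapReduce jobs require -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

<!-- mapred-site.xml: submit MapReduce jobs to YARN instead of running them locally -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

With these files in place, formatting the NameNode (hdfs namenode -format) and then running start-dfs.sh and start-yarn.sh brings up all four daemons on the one machine.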
In a production environment, a multi-node cluster is used. It is recommended to have separate nodes for the NameNode and...