HDFS is built on a master/worker architecture in which the NameNode is the master and the DataNodes are the workers. DataNodes follow the NameNode's instructions for operations such as block creation, replication, and deletion. Read and write requests from clients are served by the DataNodes. Every file in HDFS is split into blocks, and the actual data is stored on the DataNodes. Each DataNode periodically sends a heartbeat to the NameNode to confirm that it is still alive and functioning properly. DataNodes also send block reports to the NameNode; a sketch of this exchange follows.
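To make the heartbeat and block-report exchange concrete, here is a minimal, illustrative Java sketch. It is not Hadoop's actual DatanodeProtocol code; the `MasterStub` class, the `HeartbeatSketch` worker, and the chosen intervals are assumptions made purely for illustration of the periodic worker-to-master messaging pattern described above.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical stand-in for the NameNode side of the exchange: it simply
 * records heartbeats and block reports that a worker sends to it.
 */
class MasterStub {
    void heartbeat(String dataNodeId, long timestampMillis) {
        System.out.printf("heartbeat from %s at %d%n", dataNodeId, timestampMillis);
    }

    void blockReport(String dataNodeId, List<Long> blockIds) {
        System.out.printf("block report from %s: %s%n", dataNodeId, blockIds);
    }
}

/**
 * Simplified DataNode-like worker that periodically sends a heartbeat
 * ("I am alive") and, less frequently, a full block report
 * ("these are the blocks I hold") to the master.
 */
public class HeartbeatSketch {
    private final String id;
    private final MasterStub master;   // stands in for the NameNode RPC proxy
    private final List<Long> storedBlockIds = new CopyOnWriteArrayList<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    HeartbeatSketch(String id, MasterStub master) {
        this.id = id;
        this.master = master;
    }

    void start() {
        // Heartbeat every 3 seconds: tells the master this node is still functioning.
        scheduler.scheduleAtFixedRate(
                () -> master.heartbeat(id, System.currentTimeMillis()),
                0, 3, TimeUnit.SECONDS);
        // Block report every 10 seconds: tells the master which blocks this node stores.
        scheduler.scheduleAtFixedRate(
                () -> master.blockReport(id, storedBlockIds),
                0, 10, TimeUnit.SECONDS);
    }

    void stop() {
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        HeartbeatSketch node = new HeartbeatSketch("datanode-1", new MasterStub());
        node.storedBlockIds.add(1001L);     // pretend this node holds one block
        node.start();
        Thread.sleep(12_000);               // let a few heartbeats and a block report go out
        node.stop();
    }
}
```

In real HDFS the intervals, message formats, and transport are handled by Hadoop's RPC layer; the sketch only shows the shape of the interaction: frequent lightweight heartbeats plus periodic, heavier block reports.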
When a DataNode receives a new block, it sends a block-received acknowledgement to the NameNode. The DataNode.java class contains the majority of the implementation of the DataNode's functionality. This class holds the code for communicating with the following:
- Client code for read and write operations...