The inner workings of HDFS
In Chapter 1, Introduction, we gave a very high-level overview of HDFS; we will now explore it in a little more detail. As mentioned in that chapter, HDFS can be viewed as a filesystem, though one with very specific performance characteristics and semantics. It is implemented by two main server processes, the NameNode and the DataNodes, configured in a master/slave setup. A good starting point is to view the NameNode as holding all the filesystem metadata and the DataNodes as holding the actual filesystem data, the blocks. Every file placed onto HDFS is split into multiple blocks that may be stored across numerous DataNodes, and it is the NameNode that knows how these blocks are combined to reconstruct the files.
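To make this concrete, the following is a toy sketch, in Python rather than Hadoop's own Java, of the two ideas in the paragraph above: splitting a file into fixed-size blocks and recording, NameNode-style, which DataNodes hold each block. The block size and replication factor mirror common HDFS defaults; the round-robin placement is a deliberate simplification, as real HDFS uses rack-aware replica placement.

```python
BLOCK_SIZE = 128 * 1024 * 1024   # 128 MiB, a common default for dfs.blocksize
REPLICATION = 3                  # the default dfs.replication


def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the size in bytes of each block a file would occupy."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        blocks.append(min(block_size, remaining))
        remaining -= block_size
    return blocks


def place_replicas(block_index, datanodes, replication=REPLICATION):
    """Pick DataNodes for one block's replicas (simple round-robin,
    unlike HDFS's rack-aware placement policy)."""
    count = min(replication, len(datanodes))
    return [datanodes[(block_index + j) % len(datanodes)] for j in range(count)]


def build_block_map(filename, file_size, datanodes):
    """NameNode-style metadata: filename -> list of (block, size, replica hosts)."""
    return {
        filename: [
            ("blk_%d" % i, size, place_replicas(i, datanodes))
            for i, size in enumerate(split_into_blocks(file_size))
        ]
    }
```

For example, a 300 MiB file yields two full 128 MiB blocks plus a 44 MiB remainder, and with four DataNodes each block ends up with three replicas on different hosts, which is exactly the kind of mapping the NameNode consults when a client asks to read the file.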
Cluster startup
Let's explore the various responsibilities of these nodes, and the communication between them, by considering an HDFS cluster that was previously shut down and examining what happens as it starts up.