As a big data practitioner or enthusiast, you have probably read or heard about the HDFS architecture. The goal of this section is to explore that architecture in depth, covering both its main components and the essential supporting ones. By the end of this section, you will have a deep understanding of the HDFS architecture, along with how its components communicate with one another. But first, let's establish a definition of HDFS (Hadoop Distributed File System). HDFS is the storage layer of the Hadoop platform; it is distributed, fault-tolerant, and immutable in nature. HDFS is designed specifically for large datasets, ones too large to fit on a single commodity machine. Because it targets large datasets on cheaper commodity hardware, HDFS deliberately mitigates many of the bottlenecks associated with storing data at that scale.
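To make the definition a little more concrete before we dive into the architecture, here is a minimal sketch of what "immutable" looks like in practice, using Hadoop's Java FileSystem API. The cluster address, path, and file contents below are placeholder assumptions, and the append step only succeeds if the cluster permits appends; the point is simply that HDFS exposes create, append, and delete operations, but no in-place edits.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteOnceSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; adjust to your own cluster.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/events.log"); // hypothetical path

            // Files in HDFS are written once; there is no API to edit bytes in place.
            try (FSDataOutputStream out = fs.create(file, /* overwrite = */ true)) {
                out.writeBytes("first record\n");
            }

            // The only ways to change an existing file are to append to it
            // (if the cluster allows appends) or to rewrite it entirely.
            try (FSDataOutputStream out = fs.append(file)) {
                out.writeBytes("second record\n");
            }

            // Metadata such as replication factor and block size comes from the NameNode.
            FileStatus status = fs.getFileStatus(file);
            System.out.printf("size=%d bytes, replication=%d, blockSize=%d%n",
                    status.getLen(), status.getReplication(), status.getBlockSize());
        }
    }
}
```

This write-once, read-many model is part of what lets HDFS replicate data cheaply across many commodity machines, and it is a theme we will return to as we walk through the components.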
We will understand...