HDFS is designed to run on a cluster of commodity hardware. It is a fault-tolerant, scalable file system that handles node failures without data loss and can scale horizontally by adding nodes. The initial goal of HDFS was to serve large data files with high read and write throughput.
The following are a few essential features of HDFS:
- Fault tolerance: Downtime due to machine failure or data loss can result in a huge loss to a company; therefore, companies want a highly available, fault-tolerant system. HDFS is designed to handle failures and ensure data availability through corrective and preventive actions.
Files stored in HDFS are split into chunks, and each chunk is referred to as a block. The default block size is 128 MB (64 MB in older Hadoop versions), and it is configurable. Blocks are replicated across the cluster based on the replication factor...
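To make the block and replication settings concrete, here is a minimal sketch using the standard Hadoop Java FileSystem API. It assumes an HDFS client configuration (core-site.xml/hdfs-site.xml) is on the classpath and uses a hypothetical file path, /data/sample.txt; it prints the file's block size, replication factor, and the DataNodes holding each block's replicas:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS, dfs.blocksize, and dfs.replication from the
        // client configuration files on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file = new Path("/data/sample.txt");
        FileStatus status = fs.getFileStatus(file);

        System.out.println("Block size (bytes): " + status.getBlockSize());
        System.out.println("Replication factor: " + status.getReplication());

        // Each BlockLocation lists the DataNodes holding a replica of that block.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("Block at offset " + block.getOffset()
                    + " replicated on: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```

Because each block is stored on several DataNodes, the loss of any single machine leaves at least one replica available, which is how HDFS turns replication into the fault tolerance described above.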