Over time, when you're running a Hadoop cluster in production, there is always a need to manage disks on DataNodes. You may have to replace corrupted disks, or add more disks for additional data volume. It is also possible that disk capacities vary within the same DataNode. All of these cases result in uneven data distribution across the disks of a DataNode. Round robin-based disk writes combined with random deletes can also leave the disks unevenly filled.
Prior to the release of Hadoop 3, Hadoop administrators dealt with these problems using methods that were far from ideal. One solution was to shut down the DataNode and use the UNIX mv command to move block replicas, along with their supporting metadata files, from one directory to another. Each of those directories...
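The manual workaround described above can be sketched roughly as follows. This is a hypothetical illustration, not an exact procedure: the temporary directories stand in for two volumes configured in dfs.datanode.data.dir, and the block and metadata file names are made up for the example.

```shell
# Simulate two DataNode volumes (in reality these would be paths like
# /data1/hdfs/current/BP-.../finalized and /data2/hdfs/current/BP-.../finalized).
SRC=$(mktemp -d)   # assumed source volume directory
DST=$(mktemp -d)   # assumed destination volume directory

# Create a fake block replica and its checksum metadata file.
touch "$SRC/blk_1073741825" "$SRC/blk_1073741825_1001.meta"

# With the DataNode stopped, move the replica AND its .meta file together:
# a block file without its metadata file is treated as corrupt on restart.
mv "$SRC/blk_1073741825" "$SRC/blk_1073741825_1001.meta" "$DST/"
```

The key constraint is that the replica and its metadata file must stay side by side in the same directory, and the DataNode must be down for the duration of the move, which is exactly why this approach scaled so poorly.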