HDFS plays a major role in the performance of batch or micro-batch jobs that use it to read and write data. Any bottleneck in the application while reading a file from or writing a file to HDFS will lead to overall performance issues.
HDFS
DFSIO
DFSIO is a benchmark used to measure the read and write performance of MapReduce jobs. It performs file-based operations, running the read and write tasks in parallel, while a reduce task collects all the performance parameters and statistics. You can pass different parameters to test the throughput, the total number of bytes processed, the average I/O rate, and much more. The key is to match these results against the number of cores, disks, and the amount of memory in your Hadoop cluster.
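As a rough sketch, the commands below show one common way to run the TestDFSIO benchmark. The jar name and path vary by Hadoop version and distribution, the file count and size are placeholders you should tune to your cluster, and older releases expect -fileSize as a plain number in MB rather than with a unit suffix.

    # Write benchmark: 10 files of 1 GB each (adjust for your cluster)
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
        TestDFSIO -write -nrFiles 10 -fileSize 1GB -resFile /tmp/dfsio_write.txt

    # Read benchmark against the files created by the write run
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
        TestDFSIO -read -nrFiles 10 -fileSize 1GB -resFile /tmp/dfsio_read.txt

    # Remove the benchmark files when you are done
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar \
        TestDFSIO -clean

The result file reports throughput in MB/s, the average I/O rate, and its standard deviation, which you can then compare against the cores, disks, and memory available in the cluster.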