Factors affecting the performance of MapReduce
The processing time of input data with MapReduce may be affected by many factors. One of these factors is the algorithm you use while implementing your map and reduce functions. Other external factors may also affect MapReduce performance. Based on our experience and observation, the following are the major factors that may affect MapReduce performance:
Hardware (or resources) such as CPU clock, disk I/O, network bandwidth, and memory size.
The underlying storage system.
Data size for input data, shuffle data, and output data, which are closely correlated with the runtime of a job.
Job algorithms (or program) such as map, reduce, partition, combine, and compress. Some algorithms may be hard to conceptualize in MapReduce, or may be inefficient to express in terms of MapReduce.
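To make these job algorithms concrete, the following is a minimal WordCount-style sketch in Java, assuming the Hadoop 2.x API (the compression property mapreduce.map.output.compress is the newer spelling of the older mapred.compress.map.output). It shows where the map, reduce, combine, partition, and compress choices plug into a job; it is an illustration, not a recommended configuration.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

public class WordCountSketch {

  // map: emit (word, 1) for every token in the input line
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        ctx.write(word, ONE);
      }
    }
  }

  // reduce (reused below as the combine algorithm): sum the counts per word
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "wordcount-sketch");
    job.setJarByClass(WordCountSketch.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);          // combine: local pre-aggregation
    job.setPartitionerClass(HashPartitioner.class);  // partition: the default key routing
    job.setReducerClass(SumReducer.class);
    // compress: shrink shuffle data at the cost of extra CPU cycles
    job.getConfiguration().setBoolean("mapreduce.map.output.compress", true);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}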
While running a map task, the intermediate map output is stored in a memory buffer to reduce disk I/O. However, since this output may exceed the capacity of the memory buffer and overflow it, the spill subphase is needed to flush the data to the local filesystem. This subphase may affect MapReduce performance and is often implemented using multithreading to maximize the utilization of disk I/O and to reduce the runtime of jobs.
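The size of this buffer and the threshold at which spilling starts are both configurable. A minimal sketch, assuming the Hadoop 2.x parameter names (older releases call these io.sort.mb and io.sort.spill.percent); the values are illustrative, not recommendations:

import org.apache.hadoop.conf.Configuration;

public class SpillTuningSketch {
  public static Configuration tuned() {
    Configuration conf = new Configuration();
    // A larger sort buffer means fewer spill files per map task (default is 100 MB).
    conf.setInt("mapreduce.task.io.sort.mb", 256);
    // Begin spilling in the background once the buffer is 80% full,
    // so record collection and spilling overlap instead of blocking the mapper.
    conf.setFloat("mapreduce.map.sort.spill.percent", 0.80f);
    return conf;
  }
}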
The MapReduce programming model enables users to specify data transformation logic using their own map and reduce functions. The model does not specify how intermediate pairs produced by map functions are grouped for reduce functions to process; therefore, the merge-sort algorithm is employed as the default grouping algorithm. However, merge-sort is not always the most efficient algorithm, especially for analytical tasks such as aggregation and equi-join, which do not care about the order of intermediate keys.
Note
In the MapReduce programming model, grouping/partitioning is a serial task! This means the framework needs to wait for all map tasks to complete before any reduce tasks can be run.
To learn more about the merge-sort algorithm, refer to the URL http://en.wikipedia.org/wiki/Merge_sort.
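Although the merge-sort pass itself cannot be skipped, Hadoop does let you replace the comparators it uses for sorting and grouping intermediate keys, which can lower comparison cost. A minimal sketch, assuming the standard Job API and Hadoop's built-in byte-level Text comparator:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class GroupingSketch {
  public static Job sketch() throws Exception {
    Job job = Job.getInstance(new Configuration(), "grouping-sketch");
    // The merge-sort pass cannot be skipped, but its comparators can be replaced.
    job.setSortComparatorClass(Text.Comparator.class);     // byte-level key ordering
    job.setGroupingComparatorClass(Text.Comparator.class); // which keys share one reduce() call
    return job;
  }
}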
MapReduce performance is based on the runtime of both the map and reduce functions. This is because parameters such as the number of nodes in a cluster or the number of slots in a node are not modifiable in a typical environment.
Other factors that may potentially affect the performance of MapReduce are:
The I/O mode: This is the way data is retrieved from the storage system. There are two modes to read data from the underlying storage system:
Direct I/O: This is used to read directly from the local disk cache to memory through hardware controllers; therefore, no inter-process communication costs are incurred.
Streaming I/O: This allows you to read data from another running process (typically the storage system process) through certain inter-process communication schemes such as TCP/IP and JDBC.
Tip
To enhance performance, direct I/O may be more efficient than streaming I/O when the data is available locally.
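The difference between the two modes is visible in client code. In the following sketch, the local path is hypothetical, and the short-circuit read property (dfs.client.read.shortcircuit, which in practice also requires dfs.domain.socket.path to be configured on the cluster) is HDFS's way of approximating direct I/O for blocks stored on the local node:

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IoModeSketch {
  public static void main(String[] args) throws Exception {
    // Direct I/O: read a local file straight through the OS,
    // with no other process involved (path is hypothetical).
    try (InputStream direct = new FileInputStream("/data/local/input.txt")) {
      direct.read(new byte[8192]);
    }

    // Streaming I/O: read the same bytes from the HDFS DataNode process over TCP.
    Configuration conf = new Configuration();
    // HDFS can approximate direct I/O for local blocks ("short-circuit reads"):
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    try (FileSystem fs = FileSystem.get(conf);
         InputStream streamed = fs.open(new Path("/data/input.txt"))) {
      streamed.read(new byte[8192]);
    }
  }
}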
Input data parsing: This is the conversion of raw data into key/value pairs when data is retrieved from the storage system. The parsing process decodes raw data from its native format and transforms it into data objects that can be processed by a programming language such as Java.
Input data can be decoded into (Java or other) objects whose content can be altered after an instance is created, typically when you use a reference to the instance (these are called mutable objects), or into objects whose content cannot be altered after creation (called immutable objects). In the case of a million records, the immutable decoding process is significantly slower than the mutable decoding process because it may produce a huge number of immutable objects; this can lead to poor system performance.
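The cost difference shows up clearly with Hadoop's own Writable types: a single mutable Text instance can be refilled for every record, while immutable decoding allocates a fresh String per record. A minimal sketch; the record content and loop counts are illustrative:

import org.apache.hadoop.io.Text;

public class DecodingSketch {
  public static void main(String[] args) {
    byte[] record = "a raw record".getBytes();

    // Mutable decoding: one Text instance, refilled in place for every record.
    Text reused = new Text();
    for (int i = 0; i < 1_000_000; i++) {
      reused.set(record, 0, record.length);   // no new object per record
    }

    // Immutable decoding: a fresh String per record; a million records
    // means a million allocations for the garbage collector to reclaim.
    for (int i = 0; i < 1_000_000; i++) {
      String fresh = new String(record, 0, record.length);
    }
  }
}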
Input data storage: The underlying storage system must ensure high-speed access and data availability (such as HDFS and HBase) when data is retrieved by MapReduce for processing. If you choose a storage filesystem other than those recommended for use with MapReduce, access to the input data may affect MapReduce performance.
When using the Hadoop framework, many factors may affect the overall system performance and the runtime of a job. These factors may be part of the Hadoop MapReduce engine or may be external to it.
The Hadoop configuration parameters usually indicate how many tasks can run concurrently, and they largely determine the runtime of a job, since other factors are not modifiable after the Hadoop cluster is set up and the job starts executing. A misconfigured Hadoop framework may underutilize the cluster resources and therefore degrade MapReduce job performance; this is easy to do, given the large number of configuration parameters that control the framework's behavior.
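As an example of such parameters, the following sketch sets two per-job values using classic MRv1 property names (these parameters are covered in Chapter 2); in practice they usually live in mapred-site.xml, and the values shown are illustrative, not recommendations:

import org.apache.hadoop.conf.Configuration;

public class JobTuningSketch {
  public static Configuration sketch() {
    Configuration conf = new Configuration();
    // Two of the many parameters that shape a job's runtime.
    conf.setInt("mapred.reduce.tasks", 8);           // how many reduce tasks the job runs
    conf.set("mapred.child.java.opts", "-Xmx512m");  // heap size for each task JVM
    return conf;
  }
}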
A Hadoop job is often composed of many submodules that implement different algorithms; some of these submodules are connected in serial, while others are connected in parallel. A misconfiguration of the Hadoop framework may affect how all these internal tasks coordinate to complete a job. The impact of the settings of all these parameters (which will be covered in Chapter 2, An Overview of the Hadoop Parameters) depends on the map and reduce functions' code, the cluster resources, and, of course, the input data.
MapReduce job performance can also be affected by the number of nodes in the Hadoop cluster and by the resources available on those nodes to run map and reduce tasks. Each node's capacity determines the number of map and reduce tasks it can execute. Therefore, if the resources of nodes are underutilized or overutilized, the performance of MapReduce tasks is directly affected.
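For instance, in classic MRv1 deployments, each node's capacity is expressed as slot counts set in that node's mapred-site.xml. The following sketch only illustrates the property names and the shape of the values; the counts themselves must be sized against the node's actual cores and memory:

import org.apache.hadoop.conf.Configuration;

public class NodeCapacitySketch {
  public static Configuration sketch() {
    Configuration conf = new Configuration();
    // Classic MRv1 per-node slot counts (daemon-side settings, shown here
    // via the Configuration API only to illustrate the property names).
    conf.setInt("mapred.tasktracker.map.tasks.maximum", 6);    // e.g., an 8-core node
    conf.setInt("mapred.tasktracker.reduce.tasks.maximum", 2);
    return conf;
  }
}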