Implementing a user-defined counter in a MapReduce program
In this recipe, we are going to learn how to add a user-defined counter so that we can keep track of certain events easily.
Getting ready
To perform this recipe, you should have a running Hadoop cluster as well as Eclipse or a similar IDE.
How to do it...
After every MapReduce execution, you will see a set of system-defined counters getting published, such as File System counters, Job counters, and Map-Reduce Framework counters. These counters help us understand the execution in detail. They give very detailed information about the number of bytes written to HDFS, read from HDFS, the input records given to a map, the output records received from a map, and so on. In addition to these, we can also add our own user-defined counters, which help us track the execution in a better manner.
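As a minimal sketch of how such a user-defined counter can be declared and incremented, consider the mapper below. The class name `LogMapper`, the enum `LOG_COUNTERS`, and the "at least 7 fields" validity check are illustrative assumptions, not taken from the recipe; the pattern of declaring counters as an enum and incrementing them via `context.getCounter(...)` is the standard Hadoop approach (this requires the Hadoop client libraries on the classpath):

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LogMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // User-defined counters are typically declared as an enum; Hadoop
    // publishes one counter per enum constant, grouped under the enum's
    // fully qualified class name.
    public enum LOG_COUNTERS {
        VALID_RECORDS,
        MALFORMED_RECORDS
    }

    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\\s+");
        // Assumed format check: treat a line with fewer than 7 fields
        // as malformed (adjust to your actual log format).
        if (fields.length < 7) {
            // Count the bad record instead of failing the whole job.
            context.getCounter(LOG_COUNTERS.MALFORMED_RECORDS).increment(1);
            return;
        }
        context.getCounter(LOG_COUNTERS.VALID_RECORDS).increment(1);
        context.write(new Text(fields[0]), ONE);
    }
}
```

After the job finishes, these counters are printed alongside the system-defined ones, and the driver can also read them programmatically, for example with `job.getCounters().findCounter(LogMapper.LOG_COUNTERS.MALFORMED_RECORDS).getValue()`.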
In earlier recipes, we considered the use case of log analytics. There is a chance that the input we receive might not always be in the same...