In this section, we will cover compiling and running MapReduce jobs. We have already seen examples of how jobs can be run in standalone, pseudo-distributed, and cluster environments. Remember that you must compile your classes against the same versions of the Hadoop libraries and Java that you will use in production; otherwise, you may hit class version mismatch errors (such as UnsupportedClassVersionError) at run time. In almost all cases, the JAR for a program is created and then run directly through the following command:
hadoop jar <jarfile> <parameters>
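As a concrete illustration, the following sketch compiles a job class, packages it, and submits it. The class name `WordCount`, the jar name `wordcount.jar`, and the input/output paths are hypothetical placeholders; substitute your own. It assumes a working Hadoop installation with `hadoop` on the PATH.

```shell
# Compile against the same Hadoop and Java versions used in production.
# `hadoop classpath` prints the classpath of the installed Hadoop libraries.
javac -classpath "$(hadoop classpath)" -d classes WordCount.java

# Package the compiled classes into a runnable JAR.
jar cf wordcount.jar -C classes .

# Submit the job: main class, then job-specific parameters
# (here, hypothetical HDFS input and output paths).
hadoop jar wordcount.jar WordCount /user/input /user/output
```

If the JAR's manifest already specifies a main class, the class name argument can be omitted.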
Now let's look at the different alternatives available for running jobs.