Time for action – changing job priorities and killing a job
Let's explore job priorities by changing them dynamically and observing the effect of killing a running job.
Start a relatively long-running job on the cluster.
$ hadoop jar hadoop-examples-1.0.4.jar pi 100 1000
Open another window and submit a second job.
$ hadoop jar hadoop-examples-1.0.4.jar wordcount test.txt out1
Open another window and submit a third.
$ hadoop jar hadoop-examples-1.0.4.jar wordcount test.txt out2
List the running jobs.
$ hadoop job -list
You'll see the following lines on the screen:
3 jobs currently running
JobId                  State  StartTime      UserName  Priority  SchedulingInfo
job_201201111540_0005  1      1326325810671  hadoop    NORMAL    NA
job_201201111540_0006  1      1326325938781  hadoop    NORMAL    NA
job_201201111540_0007  1      1326325961700  hadoop    NORMAL    NA
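Note that every job was submitted with the default priority of NORMAL, so they will be scheduled in submission order. To change that order, you can raise or lower a job's priority with the `hadoop job -set-priority` command; the valid priority levels in Hadoop 1.x are VERY_HIGH, HIGH, NORMAL, LOW, and VERY_LOW. As a sketch (using one of the job IDs from the listing above):

```shell
# Promote the third job ahead of the second in the queue;
# requires a running Hadoop 1.x cluster with these job IDs active.
hadoop job -set-priority job_201201111540_0007 VERY_HIGH

# List the jobs again to confirm the Priority column has changed.
hadoop job -list
```

The change takes effect for tasks that have not yet been scheduled; tasks already running are unaffected.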
Check the status of the running job.
$ hadoop job -status job_201201111540_0005
You'll see the following lines on the screen:
Job: job_201201111540_0005 file: hdfs://head:9000...
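To see how the remaining jobs react when resources free up, you can kill the long-running job with `hadoop job -kill`. A sketch, again assuming the job ID shown above is still active on the cluster:

```shell
# Terminate the pi job; its map and reduce slots are released
# and become available to the queued wordcount jobs.
hadoop job -kill job_201201111540_0005

# Verify it no longer appears among the running jobs.
hadoop job -list
```

Unlike simply pressing Ctrl-C in the submitting terminal (which stops the client but, in Hadoop 1.x, leaves the job running on the cluster), `-kill` terminates the job itself.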