In this chapter, we took a deeper walk through various topics pertaining to MapReduce. We began by understanding the concept of MapReduce, along with an example of how it works. We then configured the config files for a MapReduce environment, including the Job History Server, and looked at Hadoop application URLs, ports, and so on. Post-configuration, we moved on to hands-on work: setting up a MapReduce project, going through the Hadoop packages, and taking a deeper dive into writing MapReduce programs. We also studied the different data formats needed for MapReduce. Later, we covered job compilation, running jobs remotely, and using utilities such as Tool to make life simpler. Finally, we studied unit testing and failure handling.
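As a quick refresher on the Job History Server configuration covered earlier, the relevant properties live in mapred-site.xml. A minimal sketch might look like the following, assuming a YARN-based cluster; the hostname historyserver.example.com is a placeholder, while 10020 and 19888 are Hadoop's default ports for the Job History Server RPC and web UI, respectively:

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- RPC address of the Job History Server (placeholder hostname) -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>historyserver.example.com:10020</value>
  </property>
  <!-- Web UI address of the Job History Server (placeholder hostname) -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>historyserver.example.com:19888</value>
  </property>
</configuration>
```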
Now that you are able to write applications in MapReduce, in the next chapter, we will start looking at building applications...