Optimizing Hadoop for MapReduce

You're reading from Optimizing Hadoop for MapReduce. This book is the perfect introduction to sophisticated concepts in MapReduce and will ensure you have the knowledge to optimize job performance. This is not an academic treatise; it's an example-driven tutorial for the real world.

Product type: Paperback
Published in: Feb 2014
ISBN-13: 9781783285655
Length: 120 pages
Author: Khaled Tannir

An overview of Hadoop MapReduce


Hadoop is the most popular open source Java implementation of the MapReduce programming model proposed by Google. There are many other implementations of MapReduce (such as Sphere, Starfish, Riak, and so on), which implement either all of the features described in the Google documentation or only a subset of them.

Hadoop consists of a distributed data storage engine and a MapReduce execution engine. It has been successfully used for processing highly distributable problems across huge datasets using a large number of nodes. These nodes collectively form a Hadoop cluster, which in turn consists of a single master node, called the JobTracker, and multiple worker (or slave) nodes; each worker node is called a TaskTracker. In this framework, a user program is called a job and is divided into two steps: map and reduce.
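
To make this concrete, here is a minimal driver sketch in Java, based on the classic word-count example and the org.apache.hadoop.mapreduce API; the class names (WordCountDriver, TokenizerMapper, IntSumReducer) are illustrative and not taken from this book. The driver packages the map and reduce steps into a single job and submits it to the cluster, where the master schedules the resulting map and reduce tasks onto the worker nodes. The map and reduce functions themselves are sketched after the next paragraph.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");       // one user program = one job
    job.setJarByClass(WordCountDriver.class);
    job.setMapperClass(TokenizerMapper.class);   // the map step (sketched below)
    job.setReducerClass(IntSumReducer.class);    // the reduce step (sketched below)
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // Submits the job to the cluster and waits; the master node schedules
    // the resulting map and reduce tasks onto the worker nodes.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}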

As in the MapReduce programming model, the user only has to define the map and reduce functions when using the Hadoop MapReduce implementation. The Hadoop MapReduce system automatically parallelizes the execution of these functions and ensures fault tolerance.
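
A minimal sketch of those two user-defined functions for the word-count job above might look as follows (both classes are assumed to sit in the same package as the driver). Everything else, such as input splitting, task scheduling, shuffling, and recovery from failed tasks, is handled by the framework.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// map: (byte offset, line of text) -> (word, 1) for every word in the line
class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    for (String token : value.toString().split("\\s+")) {
      if (!token.isEmpty()) {
        word.set(token);
        context.write(word, ONE);
      }
    }
  }
}

// reduce: (word, [1, 1, ...]) -> (word, total occurrence count)
class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
  private final IntWritable result = new IntWritable();

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable count : values) {
      sum += count.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}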

Tip

To learn more about the Hadoop MapReduce implementation, you can browse Hadoop's official website at http://hadoop.apache.org/.

Basically, the Hadoop MapReduce framework uses a distributed filesystem to read and write its data. This distributed filesystem is called the Hadoop Distributed File System (HDFS) and is the open source counterpart of the Google File System (GFS). Therefore, the I/O performance of a Hadoop MapReduce job strongly depends on HDFS.
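
As an illustration, the following sketch (the path and class name are hypothetical) writes and then reads a small file through the org.apache.hadoop.fs.FileSystem API. This is the same layer that MapReduce tasks go through for their input splits and output files, which is why job I/O performance is tied so closely to HDFS.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsIoExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);

    // Write a small file; the client streams the data to DataNodes block by block.
    Path out = new Path("/tmp/hdfs-io-example.txt");
    try (FSDataOutputStream os = fs.create(out, true)) {
      os.writeBytes("hello hdfs\n");
    }

    // Read it back; the NameNode tells the client which DataNodes hold the blocks.
    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(fs.open(out)))) {
      System.out.println(reader.readLine());
    }
  }
}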

HDFS consists of a master node, called the NameNode, and slave nodes, called DataNodes. Within HDFS, data is divided into fixed-size blocks (chunks) and spread across all the DataNodes in the cluster. Each data block is typically replicated twice: one replica is placed within the same rack and the other outside it. The NameNode keeps track of which DataNodes hold replicas of which blocks.
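
To observe this layout from a client's point of view, the following sketch (the file path and class name are hypothetical) asks HDFS for the block locations of a file; each BlockLocation entry lists the DataNodes that hold a replica of the corresponding block.

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/tmp/hdfs-io-example.txt"));

    // One entry per block of the file, with the hosts that store its replicas.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset=%d length=%d hosts=%s%n",
          block.getOffset(), block.getLength(),
          Arrays.toString(block.getHosts()));
    }
  }
}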
