MapReduce framework

An easy way to understand this concept is to imagine that you and your friends want to sort piles of fruit into boxes. You assign each person one basket of mixed-up fruit and ask them to separate it by type into different boxes; everyone performs the same task on their own basket, so in the end you have a lot of boxes of sorted fruit from all of your friends. You can then assign a group to bring together the boxes containing the same kind of fruit, weigh each box, and seal it for shipping. The first step corresponds to the map phase and the second to the reduce phase. A classic example of the MapReduce framework at work is the word count example: the input is first split across multiple worker nodes, each node counts the words in its own split (the map phase), and the per-split counts are then combined to produce the final output, the word counts (the reduce phase).
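
As a minimal sketch, the word count job described above can be written against the org.apache.hadoop.mapreduce API roughly as follows; the class and method names mirror the standard word count example, and the input and output HDFS paths are assumed to be passed as command-line arguments:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each worker tokenizes its own input split and emits (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: all counts for the same word are grouped together and summed.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // optional local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```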

The MapReduce framework consists of a single ResourceManager and multiple NodeManagers (usually, NodeManagers coexist with the DataNodes of HDFS). 

Task-level native optimization

MapReduce has added support for a native implementation of the map output collector. This new support can result in a performance improvement of about 30% or more, particularly for shuffle-intensive jobs.

The native library is built automatically when Hadoop is compiled with the -Pnative Maven profile. Users may choose the new collector on a job-by-job basis by setting mapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator in their job configuration.
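
For example, in a job driver this property can be set on the Configuration before the job is created. The property name and delegator class come from the setting above; the surrounding class and method names here are only an illustrative sketch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NativeCollectorExample {

  // Returns a Job that opts into the native map output collector for this job only.
  public static Job newJobWithNativeCollector() throws Exception {
    Configuration conf = new Configuration();
    conf.set("mapreduce.job.map.output.collector.class",
        "org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator");
    // Mapper, reducer, and input/output paths are configured as usual afterwards.
    return Job.getInstance(conf, "job with native map output collector");
  }
}
```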

The basic idea is to add a NativeMapOutputCollector to handle the key/value pairs emitted by the mapper, so that sort, spill, and IFile serialization can all be done in native code. A preliminary test (on a Xeon E5410 with JDK 6u24) showed the following promising results:

  • Sort is about 3 to 10 times faster than the Java implementation (only binary string comparison is supported)
  • IFile serialization is about three times faster than the Java implementation, at roughly 500 MB per second; with CRC32C hardware support, it can reach 1 GB per second or more
  • The merge code is not complete yet, so the test used a large enough io.sort.mb to prevent mid-spill