Shortcomings of MapReduce v1

Although the Hadoop MapReduce framework was widely used, it had the following limitations:

  • Batch processing only: The resources across the cluster are tightly coupled with the MapReduce programming model. The framework does not support the integration of other data-processing frameworks and forces every workload to be expressed as a MapReduce job. Emerging customer requirements demand support for real-time and near-real-time processing of the data stored on the distributed file system.
  • Nonscalability and inefficiency: The MapReduce framework depends entirely on its master daemon, the JobTracker, which manages cluster resources, job execution, and fault tolerance.

    It is observed that Hadoop cluster performance degrades drastically when the cluster grows beyond 4,000 nodes or the number of concurrent tasks crosses 40,000. Centralizing the job control flow in a single daemon created endless scalability concerns for the scheduler.

  • Unavailability and unreliability: Availability and reliability are critical aspects of a framework such as Hadoop, and the JobTracker daemon is a single point of failure for MapReduce. The JobTracker manages the jobs and resources across the cluster; if it goes down, information about running and queued jobs, as well as the job history, is lost, and the queued and running jobs are killed. The MapReduce v1 framework has no provision to recover the lost data or jobs.
  • Partitioning of resources: The MapReduce framework divides a job into multiple map and reduce tasks. The nodes running the TaskTracker daemon are considered resources, and the capability of a resource to execute MapReduce jobs is expressed as the number of map and reduce tasks it can run simultaneously. The framework forced the cluster's resources to be partitioned into fixed map and reduce task slots (see the configuration sketch after this list), and such partitioning resulted in underutilization of the cluster.

    Note

    If you have a running Hadoop 1.x cluster, you can refer to the JobTracker web interface to view the map and reduce task slots of the active TaskTracker nodes.

    The link for the active TaskTracker list is as follows: http://JobTrackerHost:50030/machines.jsp?type=active

  • Management of user logs and job resources: The user logs are the logs generated by a MapReduce job. These logs can be used to validate the correctness of a job or analyzed to tune the job's performance. In MapReduce v1, the user logs are generated and stored on the local file system of the slave nodes. Accessing logs on the slaves is painful because users might not have the required permissions, and since the logs are stored on a slave's local file system, they are lost if that disk fails.

A MapReduce job might require some extra resources during execution. In the MapReduce v1 framework, the client copies the job resources to HDFS with a replication factor of 10. Accessing resources remotely or through HDFS is not efficient, so there is a need for the localization of resources and a robust framework to manage job resources.
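
As a rough illustration of the last point, the snippet below shows the Hadoop 1.x property that controls how many HDFS replicas the client creates for the submitted job resources. Treat it as a sketch of where this behavior is configured rather than a recommendation; the value 10 simply reflects the default described above.

    <!-- mapred-site.xml (Hadoop 1.x), illustrative sketch only -->
    <property>
      <!-- Number of HDFS replicas created for the resources the client
           uploads at job submission time (job JAR, job.xml, and so on) -->
      <name>mapred.submit.replication</name>
      <value>10</value>
    </property>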
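
Similarly, the fixed slot partitioning described earlier is driven by static, per-TaskTracker limits in mapred-site.xml. The following sketch uses the Hadoop 1.x property names; the values are purely illustrative:

    <!-- mapred-site.xml (Hadoop 1.x), illustrative sketch only -->
    <property>
      <!-- Maximum number of map tasks this TaskTracker runs concurrently -->
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>
    <property>
      <!-- Maximum number of reduce tasks this TaskTracker runs concurrently -->
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>

Because these limits are static, map slots can sit idle during reduce-heavy phases and vice versa, which is exactly the underutilization described in the preceding list.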

Note

In January 2008, Arun C. Murthy logged a bug in JIRA against the MapReduce architecture, which ultimately resulted in a generic resource scheduler and a per-job, user-defined component that manages the application's execution.

You can see this at https://issues.apache.org/jira/browse/MAPREDUCE-279
