Cloudera Administration Handbook

Cloudera Administration Handbook: A complete, hands-on guide to building and maintaining large Apache Hadoop clusters using Cloudera Manager and CDH5

Cloudera Administration Handbook

Chapter 1. Getting Started with Apache Hadoop

Apache Hadoop is a widely used open source distributed computing framework that is employed to efficiently process large volumes of data using large clusters of cheap or commodity computers. In this chapter, we will learn more about Apache Hadoop by covering the following topics:

  • History of Apache Hadoop and its trends
  • Components of Apache Hadoop
  • Understanding the Apache Hadoop daemons
  • Introducing Cloudera
  • What is CDH?
  • Responsibilities of a Hadoop administrator

History of Apache Hadoop and its trends

We live in an era where almost everything around us generates some kind of data. A click on a web page is logged on the server. The flipping of channels while watching TV is captured by cable companies. A search on a search engine is logged. A patient's heartbeat in a hospital generates data. A single phone call generates data that is stored and maintained by telecom companies. An order of pizza generates data. It is very difficult these days to find processes that don't generate and store data.

Why would any organization want to store data? The present and the future belong to those who hold onto their data and work with it to improve their current operations and innovate to generate newer products and opportunities. Data, and the creative use of it, is at the heart of organizations such as Google, Facebook, Netflix, Amazon, and Yahoo!. They have proven that data, along with powerful analysis, helps in building fantastic and powerful products.

Organizations have been storing data for several years now. However, the data remained on backup tapes or drives. Once archived on storage devices such as tapes, it was retrieved only in emergencies, and processing or analyzing it efficiently to gain insight was very difficult. This is changing. Organizations now want to use this data to understand existing problems, seize new opportunities, and be more profitable. The study and analysis of these vast volumes of data has given birth to the term big data. It is a phrase often used to promote the importance of the ever-growing data and the technologies applied to analyze it.

Big and small companies now understand the importance of data and are adding loggers to their operations with the intention of generating more data every day. This has given rise to a very important problem: the storage and efficient retrieval of data for analysis. With data growing at such a rapid rate, traditional tools for storage and analysis fall short. Although the cost per byte has reduced considerably and the ability to store more data has increased, the disk transfer rate has remained the same, and this has been a bottleneck for processing large volumes of data. Data in many organizations has reached petabytes and is continuing to grow. Several companies have been working to solve this problem and have come out with a few commercial offerings that leverage the power of distributed computing. In this approach, multiple computers work together as a cluster to store and process large volumes of data in parallel, making the analysis of large volumes of data possible. Google, the Internet search engine giant, ran into issues when the data it acquired by crawling the Web grew to such large volumes that it was becoming increasingly difficult to process. It had to find a way to solve this problem, and this led to the creation of the Google File System (GFS) and MapReduce.

GFS, or GoogleFS, is a filesystem created by Google that enables it to store its large amounts of data easily across multiple nodes in a cluster. Once the data is stored, Google uses MapReduce, a programming model it developed, to process (or query) the data stored in GFS efficiently. The MapReduce programming model implements a parallel, distributed algorithm on the cluster, where the processing goes to the location where the data resides. This makes it faster to generate results than waiting for the data to be moved to the processing logic, which can be a very time-consuming activity. Google found tremendous success using this architecture and released white papers for GFS in 2003 and MapReduce in 2004.

Around 2002, Doug Cutting and Mike Cafarella were working on Nutch, an open source web search engine, and faced problems of scalability when trying to store the billions of web pages that Nutch crawled every day. In 2004, the Nutch team discovered that the GFS architecture was the solution to their problem and started working on an implementation based on the GFS white paper. They called their filesystem the Nutch Distributed File System (NDFS). In 2005, they also implemented MapReduce for NDFS based on Google's MapReduce white paper.

In 2006, the Nutch team realized that their implementations, NDFS and MapReduce, could be applied to more areas and could solve the problem of processing large data volumes. This led to the formation of a project called Hadoop. Under Hadoop, NDFS was renamed the Hadoop Distributed File System (HDFS). After Doug Cutting joined Yahoo! in 2006, Hadoop received a lot of attention within Yahoo!, and it became a very important system running successfully on top of a very large cluster (around 1,000 nodes). In 2008, Hadoop became one of Apache's top-level projects.

So, Apache Hadoop is a framework written in Java that:

  • Is used for the distributed storage and processing of large volumes of data, runs on top of a cluster, and can scale from a single computer to thousands of computers
  • Uses the MapReduce programming model to process data
  • Stores and processes data on every worker node (the nodes in the cluster that are responsible for the storage and processing of data) and handles hardware failures efficiently, providing high availability

Apache Hadoop has made distributed computing accessible to anyone who wants to try to process their large volumes of data without shelling out big bucks for commercial offerings. The success of Apache Hadoop implementations in organizations such as Facebook, Netflix, LinkedIn, Twitter, The New York Times, and many more has given Apache Hadoop much-deserved recognition and, in turn, has given other organizations the confidence to make it a core part of their systems. Having made large-scale data analysis possible, Hadoop has also given rise to many startups that build analytics products on top of it.

Components of Apache Hadoop

Apache Hadoop is composed of two core components. They are:

  • HDFS: HDFS is responsible for the storage of files. It is the storage component of Apache Hadoop, designed and developed to handle large files efficiently. It is a distributed filesystem designed to work on a cluster, and it makes it easy to store large files by splitting them into blocks and distributing them redundantly across multiple nodes. Users of HDFS need not worry about the underlying networking aspects, as HDFS takes care of them. HDFS is written in Java and runs within user space.
  • MapReduce: MapReduce is a programming model that was built from models found in the fields of functional programming and distributed computing. In MapReduce, the task is broken down into two parts: map and reduce. All data in MapReduce flows in the form of key and value pairs, <key, value>. Mappers emit key and value pairs and the reducers receive them, work on them, and produce the final result, as sketched in the example that follows this list. This model was specifically built to query and process the large volumes of data stored in HDFS.
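
The chapter does not show code at this point, so the following is only a minimal, self-contained Python sketch of the <key, value> flow: a toy mapper, an in-memory shuffle, and a reducer run in a single process. It is not Hadoop code; on a real cluster, the same logic would be packaged as a MapReduce job (or run through Hadoop Streaming), and the function and variable names here are invented for illustration.

```python
#!/usr/bin/env python
"""A toy, single-process illustration of the MapReduce <key, value> flow.
This is not Hadoop code; it only mimics the map -> shuffle/sort -> reduce
phases locally so that the data flow is easy to see."""
from collections import defaultdict

def mapper(line):
    # Map phase: emit a <word, 1> pair for every word in the input line.
    for word in line.strip().split():
        yield word, 1

def reducer(key, values):
    # Reduce phase: receive all values for one key and produce the result.
    return key, sum(values)

def run_job(lines):
    # "Shuffle" phase: group the mapper output by key.
    grouped = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            grouped[key].append(value)
    # One reducer call per key, in sorted key order.
    return [reducer(key, values) for key, values in sorted(grouped.items())]

if __name__ == "__main__":
    sample = ["hadoop stores data", "hadoop processes data"]
    for word, count in run_job(sample):
        print("%s\t%d" % (word, count))
```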

We will be going through HDFS and MapReduce in depth in the next chapter.

Understanding the Apache Hadoop daemons

Most Apache Hadoop clusters in production run Apache Hadoop 1.x (MRv1, MapReduce Version 1). However, the new version of Apache Hadoop, 2.x (MRv2, MapReduce Version 2), also referred to as Yet Another Resource Negotiator (YARN), is being actively adopted by many organizations. In this section, we shall go through the daemons for both of these versions.

Apache Hadoop 1.x (MRv1) consists of the following daemons:

  • Namenode
  • Secondary namenode
  • Jobtracker
  • Datanode
  • Tasktracker

All the preceding daemons are Java services, and each runs within its own JVM.

Apache Hadoop stores and processes data in a distributed fashion. To achieve this goal, Hadoop implements a master and slave model. The namenode and jobtracker daemons are master daemons, whereas the datanode and tasktracker daemons are slave daemons.

Namenode

The namenode daemon is a master daemon and is responsible for storing all the location information of the files present in HDFS. The actual data is never stored on a namenode. In other words, it holds the metadata of the files in HDFS.

The namenode maintains the entire metadata in RAM, which helps clients receive quick responses to read requests. Therefore, it is important to run namenode from a machine that has lots of RAM at its disposal. The higher the number of files in HDFS, the higher the consumption of RAM. The namenode daemon also maintains a persistent checkpoint of the metadata in a file stored on the disk called the fsimage file.

Whenever a file is placed, deleted, or updated in the cluster, an entry for the action is recorded in a file called the edits log. After the edits log is updated, the in-memory metadata is also updated accordingly. It is important to note that the fsimage file is not updated for every write operation.

In case the namenode daemon is restarted, the following sequence of events occurs at namenode boot-up:

  1. Read the fsimage file from the disk and load it into memory (RAM).
  2. Read the actions that are present in the edits log and apply each action to the in-memory representation of the fsimage file.
  3. Write the modified in-memory representation to the fsimage file on the disk.

The preceding steps make sure that the in-memory representation is up to date.
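
As a rough mental model only (not the namenode's actual on-disk formats, which are binary and managed by Hadoop itself), the following Python sketch mirrors the three boot-up steps: load a checkpointed image, replay the logged actions, and write the merged image back. The file names and the JSON record format are invented for illustration.

```python
#!/usr/bin/env python
"""Toy model of the namenode boot-up sequence: load the fsimage, replay the
edits log, and write the merged image back. The file names and the JSON
record format are invented; the real files are binary and managed by Hadoop."""
import json
import os

FSIMAGE = "fsimage.json"   # hypothetical checkpoint: {path: metadata}
EDITS = "edits.log"        # hypothetical log: one JSON action per line

def boot_namenode():
    # Step 1: read the checkpointed image from disk into memory.
    metadata = json.load(open(FSIMAGE)) if os.path.exists(FSIMAGE) else {}

    # Step 2: apply every action recorded in the edits log, in order.
    if os.path.exists(EDITS):
        with open(EDITS) as edits:
            for line in edits:
                action = json.loads(line)
                if action["op"] in ("create", "update"):
                    metadata[action["path"]] = action["meta"]
                elif action["op"] == "delete":
                    metadata.pop(action["path"], None)

    # Step 3: write the up-to-date image back and truncate the edits log.
    with open(FSIMAGE, "w") as checkpoint:
        json.dump(metadata, checkpoint)
    open(EDITS, "w").close()
    return metadata

if __name__ == "__main__":
    print("loaded metadata for %d paths" % len(boot_namenode()))
```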

The namenode daemon is a single point of failure in Hadoop 1.x, which means that if the node hosting the namenode daemon fails, the filesystem becomes unusable. To handle this, the administrator has to configure the namenode to write the fsimage file to the local disk as well as to a remote disk on the network. This backup on the remote disk can be used to restore the namenode on a freshly installed server. Newer versions of Apache Hadoop (2.x) now support High Availability (HA), which deploys two namenodes in an active/passive configuration; if the active namenode fails, control fails over to the passive namenode, which then becomes active. This configuration reduces the downtime in case of a namenode failure.

Since the fsimage file is not updated for every operation, the edits log can grow into a very large file. A restart of the namenode service would then become very slow, because all the actions in the large edits log have to be applied to the fsimage file. This slow boot-up time can be avoided by using the secondary namenode daemon.

Secondary namenode

The secondary namenode daemon is responsible for performing periodic housekeeping functions for the namenode. It only creates checkpoints of the filesystem metadata (fsimage) present on the namenode by merging the edits log and the fsimage file from the namenode daemon. In case the namenode daemon fails, this checkpoint can be used to rebuild the filesystem metadata. However, it is important to note that checkpoints are created at intervals, so the checkpoint data could be slightly outdated, and rebuilding the fsimage file from such a checkpoint can lead to data loss. The secondary namenode is not a failover node for the namenode daemon.

It is recommended that the secondary namenode daemon be hosted on a separate machine for large clusters.

The following are the steps carried out by the secondary namenode daemon:

  1. Get the edits log from the primary namenode daemon.
  2. Get the fsimage file from the primary namenode daemon.
  3. Apply all the actions present in the edits log to the fsimage file.
  4. Push the merged fsimage file back to the primary namenode.

This is done periodically, so whenever the namenode daemon is restarted, it has a relatively up-to-date version of the fsimage file and the boot-up time is significantly faster. The following diagram shows the communication between the namenode and the secondary namenode:

[Diagram: communication between the namenode and the secondary namenode]
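
Because a stale checkpoint means a slower namenode restart and a larger window of unmerged edits, administrators often keep an eye on the age of the latest checkpoint. The following is a minimal sketch, assuming a hypothetical metadata directory of /data/dfs/name/current; the real location is whatever dfs.name.dir (Hadoop 1.x) or dfs.namenode.name.dir (Hadoop 2.x) points to in hdfs-site.xml, and the exact file naming differs between versions.

```python
#!/usr/bin/env python
"""Report the age of the newest fsimage checkpoint on the namenode.
The directory is a placeholder; use the value of dfs.name.dir (Hadoop 1.x)
or dfs.namenode.name.dir (Hadoop 2.x) from hdfs-site.xml."""
import glob
import os
import time

NAME_DIR = "/data/dfs/name/current"   # hypothetical metadata directory
MAX_AGE_HOURS = 2                     # alert threshold, site-specific

def newest_fsimage_age_hours():
    # Hadoop 1.x names the file "fsimage"; 2.x uses "fsimage_<txid>".
    images = glob.glob(os.path.join(NAME_DIR, "fsimage*"))
    if not images:
        raise RuntimeError("no fsimage found under %s" % NAME_DIR)
    newest = max(images, key=os.path.getmtime)
    return (time.time() - os.path.getmtime(newest)) / 3600.0, newest

if __name__ == "__main__":
    age, path = newest_fsimage_age_hours()
    status = "OK" if age <= MAX_AGE_HOURS else "WARNING: stale checkpoint"
    print("%s: %s is %.1f hours old" % (status, path, age))
```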

Datanode

The datanode daemon acts as a slave node and is responsible for storing the actual file data in HDFS. Files are split into data blocks across the cluster; the blocks are typically 64 MB or 128 MB in size, and the block size is a configurable parameter. The file blocks in a Hadoop cluster are also replicated to other datanodes for redundancy so that no data is lost if a datanode daemon fails. The datanode daemon sends information to the namenode daemon about the files and blocks stored on that node and responds to the namenode daemon for all filesystem operations. The following diagram shows how files are stored in the cluster:

[Diagram: file blocks stored and replicated across the nodes of the cluster]

File blocks of files A, B, and C are replicated across multiple nodes of the cluster for redundancy. This ensures the availability of data even if one of the nodes fails. You can also see that blocks of file A are present on nodes 2, 4, and 6; blocks of file B are present on nodes 3, 5, and 7; and blocks of file C are present on nodes 4, 6, and 7. The replication factor configured for this cluster is 3, which means that each file block is replicated three times across the cluster. It is the responsibility of the namenode daemon to maintain a list of the files and their corresponding locations on the cluster. Whenever a client needs to access a file, the namenode daemon provides the location of the file to the client, and the client then accesses the file directly from the datanode daemon.
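
To make the block and replication numbers concrete, the following toy Python sketch splits a file of a given size into blocks and assigns each block to three nodes, mirroring the replication factor described above. The round-robin placement is purely illustrative; the real namenode uses a rack-aware placement policy and also considers free space.

```python
#!/usr/bin/env python
"""Toy illustration of HDFS block splitting and replica placement."""
import math

BLOCK_SIZE_MB = 64      # configurable in HDFS; commonly 64 MB or 128 MB
REPLICATION = 3         # the replication factor used in the example above
NODES = ["node1", "node2", "node3", "node4", "node5", "node6", "node7"]

def place_blocks(file_name, file_size_mb):
    # Split the file into fixed-size blocks, then pick REPLICATION distinct
    # nodes for each block. Round-robin is used here purely for illustration;
    # the real namenode applies a rack-aware placement policy.
    num_blocks = max(1, int(math.ceil(file_size_mb / float(BLOCK_SIZE_MB))))
    placement = {}
    for block_id in range(num_blocks):
        start = (block_id * REPLICATION) % len(NODES)
        replicas = [NODES[(start + r) % len(NODES)] for r in range(REPLICATION)]
        placement["%s-block%d" % (file_name, block_id)] = replicas
    return placement

if __name__ == "__main__":
    for block, replicas in sorted(place_blocks("fileA", 200).items()):
        print("%s -> %s" % (block, ", ".join(replicas)))
```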

Jobtracker

The jobtracker daemon is responsible for accepting job requests from a client and scheduling and assigning tasks to tasktrackers. The jobtracker daemon tries to assign each task to the tasktracker daemon on the node where the data to be processed is stored; this feature is called data locality. If that is not possible, it will at least try to assign tasks to tasktrackers within the same physical server rack. If, for some reason, the node hosting the datanode and tasktracker daemons fails, the jobtracker daemon assigns the task to another tasktracker daemon where a replica of the data exists. This is possible because of the replication factor configured for HDFS, where the data blocks are replicated across multiple datanodes. This ensures that the job does not fail even if a node fails within the cluster.
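
Data locality is easiest to picture as a preference order: a node that holds a replica of the block, then any node in the same rack, and only then any other node. The following toy Python sketch expresses that order; the node names, rack mapping, and data structures are invented for illustration and are not the jobtracker's actual implementation.

```python
#!/usr/bin/env python
"""Toy data-locality preference: node-local, then rack-local, then off-rack."""

# Hypothetical cluster layout: node -> rack.
RACK_OF = {"node1": "rack1", "node2": "rack1", "node3": "rack2", "node4": "rack2"}

def pick_tasktracker(block_replicas, free_tasktrackers):
    """block_replicas: nodes that hold a replica of the block.
    free_tasktrackers: nodes that currently have a free map slot."""
    # 1. Node-local: a free tasktracker on a node that stores the block.
    for node in block_replicas:
        if node in free_tasktrackers:
            return node, "node-local"
    # 2. Rack-local: a free tasktracker in the same rack as a replica.
    replica_racks = set(RACK_OF[node] for node in block_replicas)
    for node in free_tasktrackers:
        if RACK_OF[node] in replica_racks:
            return node, "rack-local"
    # 3. Off-rack: any free tasktracker at all, otherwise wait.
    return (free_tasktrackers[0], "off-rack") if free_tasktrackers else (None, "wait")

if __name__ == "__main__":
    # The block lives on node1 and node3; only node2 and node4 have free slots.
    print(pick_tasktracker(["node1", "node3"], ["node2", "node4"]))
```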

Tasktracker

The tasktracker daemon accepts tasks (map, reduce, and shuffle operations) from the jobtracker daemon and is the daemon that performs the actual work during a MapReduce operation. The tasktracker daemon periodically sends a heartbeat message to the jobtracker daemon to notify it that it is alive. Along with the heartbeat, it also reports the number of free slots it has available to process tasks. The tasktracker daemon starts and monitors the map and reduce tasks and sends progress and status information back to the jobtracker daemon.

In small clusters, the namenode and jobtracker daemons reside on the same node. However, in larger clusters, there are dedicated nodes for the namenode and jobtracker daemons. This can be easily understood from the following diagram:

[Diagram: placement of the namenode and jobtracker daemons in small and large clusters]

In a Hadoop cluster, these daemons can be monitored via specific URLs using a browser. These URLs are of the form http://<serveraddress>:port_number.

By default, the ports for the Hadoop daemons are:

  • Namenode: 50070
  • Secondary namenode: 50090
  • Jobtracker: 50030
  • Datanode: 50075
  • Tasktracker: 50060

These ports can be configured in the hdfs-site.xml and mapred-site.xml files.
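
A quick way to confirm that the daemons are up is to poll their web UIs on these ports. The following is a minimal sketch using only the Python standard library; the host names are placeholders, so substitute your own namenode, jobtracker, and worker addresses, along with any non-default ports you have configured.

```python
#!/usr/bin/env python
"""Poll the Hadoop 1.x daemon web UIs on their default ports.
The host names are placeholders; replace them with your cluster's addresses."""
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

DAEMONS = {
    "namenode":           ("namenode.example.com", 50070),
    "secondary namenode": ("snn.example.com", 50090),
    "jobtracker":         ("jobtracker.example.com", 50030),
    "datanode":           ("worker1.example.com", 50075),
    "tasktracker":        ("worker1.example.com", 50060),
}

def check(host, port, timeout=5):
    # A successful HTTP response is enough to show the daemon's UI is up.
    try:
        urlopen("http://%s:%d/" % (host, port), timeout=timeout)
        return "UP"
    except Exception as error:
        return "DOWN (%s)" % error

if __name__ == "__main__":
    for name, (host, port) in sorted(DAEMONS.items()):
        print("%-20s %s:%-6d %s" % (name, host, port, check(host, port)))
```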

Apache Hadoop 2.x (YARN)

YARN is a general-purpose, distributed application management framework for processing data in Hadoop clusters.

YARN was built to solve the following two important problems:

  • Support for large clusters (4000 nodes or more)
  • The ability to run other applications apart from MapReduce to make use of data already stored in HDFS, for example, MPI and Apache Giraph

In Hadoop Version 1.x, MapReduce can be divided into the following two parts:

  • The MapReduce user framework: This consists of the user's interaction with MapReduce, such as the application programming interface for MapReduce
  • The MapReduce system: This consists of system-level tasks such as monitoring, scheduling, and restarting failed tasks

The jobtracker daemon had these two parts tightly coupled within itself and was responsible for managing tasks and all related operations by interacting with the tasktracker daemon. This responsibility turned out to be overwhelming for the jobtracker daemon once clusters started growing and reached the 4,000-node mark; this was a scalability issue that needed to be fixed. Also, the investment in Hadoop was hard to justify, as MapReduce was the only way to process the data stored in HDFS and other tools were unable to use it. YARN was built to address these issues and is part of Hadoop Version 2.x. With the introduction of YARN, MapReduce is now just one of the clients that run on the YARN framework.

YARN addresses these issues by splitting the following two jobtracker responsibilities:

  • Resource management
  • Job scheduling/monitoring

The jobtracker daemon has been removed and the following two new daemons have been introduced in YARN:

  • ResourceManager
  • NodeManager

ResourceManager

The ResourceManager daemon is a global master daemon that is responsible for managing the resources for the applications in the cluster. The ResourceManager daemon consists of the following two components:

  • ApplicationsManager
  • Scheduler

The ApplicationsManager performs the following operations:

  • Accepts jobs from a client.
  • Creates the first container on one of the worker nodes to host the ApplicationMaster. A container, in simple terms, is the memory resource on a single worker node in the cluster.
  • Restarts the container hosting the ApplicationMaster on failure.

The scheduler is responsible for allocating system resources to the various applications in the cluster. It is a pure scheduler; the monitoring of each application is handled by that application's ApplicationMaster, described next.

Each application in YARN has an ApplicationMaster, which is responsible for communicating with the scheduler and for setting up and monitoring its resource containers.

NodeManager

The NodeManager daemon runs on the worker nodes and is responsible for monitoring the containers within the node and its system resources such as CPU, memory, and disk. It sends this monitoring information back to the ResourceManager daemon. Each worker node will have exactly one NodeManager daemon running.

Job submission in YARN

The following is the sequence of steps involved when a job is submitted to a YARN cluster:

  1. When a job is submitted to the cluster, the client first receives an application ID from the ResourceManager.
  2. Next, the client copies the job resources to a location in HDFS.
  3. The ResourceManager then starts the first container, under a NodeManager's management, to bring up the ApplicationMaster. For example, if a MapReduce job is submitted, the ResourceManager will bring up the MapReduce ApplicationMaster.
  4. The ApplicationMaster, based on the job to be executed, requests resources from the ResourceManager.
  5. Once the ResourceManager schedules a container with the requested resources, the ApplicationMaster contacts the NodeManager to start the container and execute the task. In the case of a MapReduce job, that task would be a map or reduce task.
  6. The client checks with the ApplicationMaster for status updates on the submitted job, as sketched in the example after this list.
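
From the client's side, this sequence usually reduces to submitting the job and then polling for status. The following sketch wraps the standard hadoop and yarn command-line tools with subprocess; the jar path, driver class, and HDFS paths are placeholders, and the exact CLI options can vary between Hadoop 2.x releases.

```python
#!/usr/bin/env python
"""Submit a MapReduce job to a YARN cluster and poll the running applications.
The jar, driver class, and HDFS paths are placeholders for illustration."""
import subprocess
import time

JAR = "/path/to/app.jar"                # hypothetical application jar
MAIN_CLASS = "com.example.WordCount"    # hypothetical driver class

def submit_job(input_path, output_path):
    # 'hadoop jar' submits the job; the ResourceManager then brings up the
    # ApplicationMaster as described in the steps above.
    return subprocess.Popen(
        ["hadoop", "jar", JAR, MAIN_CLASS, input_path, output_path])

def poll_applications(times=3, interval=30):
    for _ in range(times):
        # Ask the ResourceManager for the applications it currently tracks.
        subprocess.call(["yarn", "application", "-list"])
        time.sleep(interval)

if __name__ == "__main__":
    job = submit_job("/user/admin/input", "/user/admin/output")
    poll_applications()
    print("job client exited with status %d" % job.wait())
```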

The following diagram shows the interactions of the client and the different daemons in a YARN environment:

[Diagram: interactions between the client and the YARN daemons during job submission]

In a Hadoop cluster, the ResourceManager and NodeManager daemons can be monitored via specific URLs using a browser. These URLs are of the form http://<serveraddress>:port_number.

By default, the ports for these Hadoop daemons are:

  • ResourceManager: 8088
  • NodeManager: 8042

These ports can be configured in the yarn-site.xml file.
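
The same kind of availability check shown earlier for the MRv1 daemons also works for YARN. The following sketch polls the ResourceManager and a NodeManager on their default web ports; the host names are placeholders, and the /ws/v1/... REST paths and field names are the ones documented for Hadoop 2.x, so verify them against your specific version.

```python
#!/usr/bin/env python
"""Check the YARN daemon web interfaces and print basic cluster metrics.
Host names are placeholders; the REST paths are those documented for
Hadoop 2.x, so confirm them against your version."""
import json
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

RESOURCEMANAGER = "http://resourcemanager.example.com:8088"
NODEMANAGER = "http://worker1.example.com:8042"

def get_json(url, timeout=5):
    return json.loads(urlopen(url, timeout=timeout).read().decode("utf-8"))

if __name__ == "__main__":
    # Cluster-wide metrics from the ResourceManager REST API.
    metrics = get_json(RESOURCEMANAGER + "/ws/v1/cluster/metrics")["clusterMetrics"]
    print("active nodes: %s, applications running: %s"
          % (metrics["activeNodes"], metrics["appsRunning"]))
    # The NodeManager's info endpoint confirms that the daemon is serving.
    node_info = get_json(NODEMANAGER + "/ws/v1/node/info")
    print("nodemanager responded; reported fields: %s"
          % ", ".join(sorted(node_info.get("nodeInfo", {}))))
```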

This was a short introduction to YARN, but it is important for a Hadoop administrator to know about it, as YARN will soon be the way all Hadoop clusters function.

Introducing Cloudera

Cloudera Inc. is a Palo Alto-based American enterprise software company that provides Apache Hadoop-based software, support, services, and training to data-driven enterprises. It is often referred to as the commercial Hadoop company.

Cloudera was founded by three top engineers from Google, Yahoo!, and Facebook—Christophe Bisciglia, Amr Awadallah, and Jeff Hammerbacher.

Cloudera is the market leader in Hadoop and is one of the major code contributors to the Apache Hadoop ecosystem. With the help of Hadoop, Cloudera helps businesses and organizations interact with their large datasets and derive great insights.

They have also built a full-fledged Apache Hadoop distribution called Cloudera's Distribution Including Apache Hadoop (CDH) and a proprietary Hadoop cluster manager called Cloudera Manager, which helps users set up large clusters with ease.

Introducing CDH

CDH, or Cloudera's Distribution Including Apache Hadoop, is an enterprise-grade distribution comprising Apache Hadoop and several components of its ecosystem, such as Apache Hive, Apache Avro, HBase, and many more. CDH is 100 percent open source and is the most downloaded distribution in its space. At the time of writing this book, the current version of CDH is CDH 5.0.

Some of the important features of CDH are as follows:

  • All components are thoroughly tested by Cloudera to ensure that they work well with each other, making it a very stable distribution
  • Enterprise needs such as security and high availability are built in as part of the distribution
  • The distribution is very well documented, making it easy for anyone interested to get the services up and running quickly

Responsibilities of a Hadoop administrator

With the increasing interest in deriving insight from their big data, organizations are now planning and building their big data teams aggressively. To start working on their data, they need a good, solid infrastructure. Once they have this set up, they need several controls and system policies in place to maintain, manage, and troubleshoot their cluster.

There is an ever-increasing demand for Hadoop administrators in the market, as their function (setting up and maintaining Hadoop clusters) is what makes analysis really possible.

The Hadoop administrator needs to be very good at system operations, networking, operating systems, and storage, and needs a strong knowledge of computer hardware and its operation in a complex network.

Apache Hadoop runs mainly on Linux, so good Linux skills such as monitoring, troubleshooting, configuration, and security are a must.

Setting up nodes for clusters involves a lot of repetitive tasks, and the Hadoop administrator should use quick and efficient ways to bring up these servers, using configuration management tools such as Puppet, Chef, or CFEngine. Apart from these tools, the administrator should also have good capacity planning skills to design and plan clusters.

Several nodes in a cluster need duplication of data; for example, the fsimage file of the namenode daemon can be configured to be written to two different disks on the same node or to a disk on a different node. An understanding of NFS mount points and how to set them up within a cluster is required. The administrator may also be asked to set up RAID for disks on specific nodes.
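
When the namenode metadata is written to more than one directory (a local disk plus an NFS mount, for example), it is worth periodically confirming that the redundant copies exist and match. The following is a minimal sketch with hypothetical directory paths; point them at the directories actually listed in dfs.name.dir.

```python
#!/usr/bin/env python
"""Compare the newest fsimage file across redundant metadata directories.
The paths are placeholders; use the directories configured in dfs.name.dir."""
import glob
import hashlib
import os

METADATA_DIRS = ["/data/1/dfs/name/current", "/mnt/nfs/dfs/name/current"]

def newest_fsimage(directory):
    images = [path for path in glob.glob(os.path.join(directory, "fsimage*"))
              if not path.endswith(".md5")]
    return max(images, key=os.path.getmtime) if images else None

def md5sum(path, chunk_size=1024 * 1024):
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    checksums = {}
    for directory in METADATA_DIRS:
        image = newest_fsimage(directory)
        checksums[directory] = md5sum(image) if image else "MISSING"
        print("%-35s %s" % (directory, checksums[directory]))
    print("copies match" if len(set(checksums.values())) == 1
          else "WARNING: metadata copies differ")
```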

As all Hadoop services/daemons are built on Java, a basic knowledge of the JVM along with the ability to understand Java exceptions would be very useful. This helps administrators identify issues quickly.

The Hadoop administrator should possess the skills to benchmark the cluster to test performance under high traffic scenarios.

Clusters are prone to failures as they are up all the time and are processing large amounts of data regularly. To monitor the health of the cluster, the administrator should deploy monitoring tools such as Nagios and Ganglia and should configure alerts and monitors for critical nodes of the cluster to foresee issues before they occur.

Knowledge of a good scripting language such as Python, Ruby, or Shell would greatly help the function of an administrator. Often, administrators are asked to set up some kind of scheduled file staging from an external source to HDFS. Scripting skills help them execute these requests by building scripts and automating them.
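
Such a staging request is typically handled by a small script run from cron. The following is one possible shape for it, using the standard hadoop fs -put command through subprocess; the landing directory and the HDFS target path are placeholders.

```python
#!/usr/bin/env python
"""Stage new files from a local landing directory into HDFS.
Intended to be run from cron; the paths below are placeholders."""
import os
import subprocess

LANDING_DIR = "/data/incoming"          # hypothetical local source directory
HDFS_TARGET = "/user/etl/incoming"      # hypothetical HDFS destination

def stage_files():
    for name in sorted(os.listdir(LANDING_DIR)):
        local_path = os.path.join(LANDING_DIR, name)
        if not os.path.isfile(local_path):
            continue
        # 'hadoop fs -put' copies the file into HDFS; a non-zero exit code
        # means the copy failed, so the file is kept for the next run.
        result = subprocess.call(["hadoop", "fs", "-put", local_path, HDFS_TARGET])
        if result == 0:
            os.remove(local_path)
            print("staged %s to %s" % (name, HDFS_TARGET))
        else:
            print("failed to stage %s (exit code %d)" % (name, result))

if __name__ == "__main__":
    stage_files()
```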

Above all, the Hadoop administrator should have a very good understanding of the Apache Hadoop architecture and its inner workings.

The following are some of the key Hadoop-related operations that the Hadoop administrator should know:

  • Planning the cluster and deciding on the number of nodes based on the estimated amount of data the cluster is going to serve.
  • Installing and upgrading Apache Hadoop on a cluster.
  • Configuring and tuning Hadoop using the various configuration files available within Hadoop.
  • Understanding all the Hadoop daemons along with their roles and responsibilities in the cluster.
  • Reading and interpreting Hadoop logs.
  • Adding and removing nodes in the cluster.
  • Rebalancing nodes in the cluster (a sketch of a few routine command-line checks follows this list).
  • Employing security using an authentication and authorization system such as Kerberos.
  • Performing backups and recovery; almost all organizations follow the policy of backing up their data, and it is the responsibility of the administrator to perform this activity, so an administrator should be well versed in the backup and recovery of servers.
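
Several of the operations in this list map directly onto standard HDFS command-line tools. The following sketch strings a few routine ones together (cluster report, filesystem check, and rebalancing); it assumes a Hadoop 2.x/CDH5 node where the hdfs command is on the PATH, and the balancer threshold shown is only an example.

```python
#!/usr/bin/env python
"""Run a few routine administrative checks with the standard HDFS tools.
Assumes a Hadoop 2.x/CDH5 node where the 'hdfs' command is on the PATH."""
import subprocess

ROUTINE_COMMANDS = [
    # Capacity, live and dead datanodes, and per-node usage.
    ["hdfs", "dfsadmin", "-report"],
    # Filesystem health: missing, corrupt, or under-replicated blocks.
    ["hdfs", "fsck", "/"],
    # Rebalance data across datanodes until each node's usage is within
    # 10 percent of the cluster average (an example threshold).
    ["hdfs", "balancer", "-threshold", "10"],
]

if __name__ == "__main__":
    for command in ROUTINE_COMMANDS:
        print(">>> " + " ".join(command))
        exit_code = subprocess.call(command)
        if exit_code != 0:
            print("command failed with exit code %d" % exit_code)
```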

Summary

In this chapter, we started out by exploring the history of Apache Hadoop and moved on to understanding its specific components. We also introduced ourselves to the new version of Apache Hadoop. We learned about Cloudera and its Apache Hadoop distribution called CDH and finally looked at some important roles and responsibilities of an Apache Hadoop administrator.

In the next chapter, we will get a more detailed understanding of Apache Hadoop's distributed filesystem, HDFS, and its programming model, MapReduce.

Description

An easy-to-follow Apache Hadoop administrator’s guide filled with practical screenshots and explanations for each step and configuration. This book is great for administrators interested in setting up and managing a large Hadoop cluster. If you are an administrator, or want to be an administrator, and you are ready to build and maintain a production-level cluster running CDH5, then this book is for you.

What you will learn

  • Understand the Apache Hadoop architecture and the future of distributed processing frameworks
  • Use HDFS and MapReduce for all file-related operations
  • Install and configure CDH to bring up an Apache Hadoop cluster
  • Configure HDFS High Availability and HDFS Federation to prevent single points of failure
  • Install and configure Cloudera Manager to perform administrator operations
  • Implement security by installing and configuring Kerberos for all services in the cluster
  • Add, remove, and rebalance nodes in a cluster using cluster management tools
  • Understand and configure the different backup options to back up your HDFS

Product Details

Publication date : Jul 18, 2014
Length: 254 pages
Edition : 1st
Language : English
ISBN-13 : 9781783558971

Table of Contents

10 Chapters
1. Getting Started with Apache Hadoop
2. HDFS and MapReduce
3. Cloudera's Distribution Including Apache Hadoop
4. Exploring HDFS Federation and Its High Availability
5. Using Cloudera Manager
6. Implementing Security Using Kerberos
7. Managing an Apache Hadoop Cluster
8. Cluster Monitoring Using Events and Alerts
9. Configuring Backups
Index

Customer reviews

Rating distribution: 3.5 out of 5 (10 ratings)
5 star: 20%, 4 star: 50%, 3 star: 10%, 2 star: 0%, 1 star: 20%

william El Kaim, Sep 22, 2014, 5 stars (Amazon Verified review)

The Cloudera Administration Handbook written by Rohit Menon is a fantastic resource for anybody wanting to understand and manage a Cloudera platform. I have to admit that I'm a rookie and that this book was exactly what I was dreaming of: having all information in the same place, and code examples both for Linux and Windows. The book is mainly targeted at big data experts and system administrators. The first three chapters give the minimum background to understand MapReduce, Hadoop and YARN and Cloudera's Distribution Including Apache Hadoop (all services are listed and explained). Then, you enter into the "hard part". Chapter 4, discussing in detail HDFS Federation and its high availability, and chapter 7, describing "Managing an Apache Hadoop Cluster", were for me particularly valuable. Chapter 5, presenting Cloudera Manager, a web-browser-based administration tool to manage Apache Hadoop clusters, will show you how to manage the clusters with point and click instead of command lines. Chapter 6 is about configuring access and rights using the Kerberos services. It does show you how to implement the security services, but not how to manage user rights, which is a step requiring some planning. Monitoring and backup (using the Hadoop utility DistCp and the Cloudera Manager) are also presented in two distinct parts. What I like in this book is that it goes directly to the point, assuming you already know the basics of system administration and distributed architecture. It then shares many "tips" that only an experienced professional will know, and enabled the rookie I was to avoid mistakes. With this book, you will gain time. For example, the author tells you when a SPOF (single point of failure) exists and the solutions to avoid them. The only part of the book that was missing for me was the cloud deployment. I would have liked a chapter explaining how to set up Cloudera in the cloud, and get the code (Puppet or Chef) to automate the install. It is clearly a book worth buying for people wanting to set up and deploy a Cloudera platform correctly. I also like the fact that for the same price you can download the PDF, mobi, epub and Kindle versions.

A. Zubarev, Sep 25, 2014, 5 stars (Amazon Verified review)

Cloudera Administration Handbook is just another great what I call 'desk companion' book, especially a must for a beginner Cloudera administrator. Written with a well-balanced ratio of material volume to feature coverage, by a person from "the trenches", Rohit expands exactly on what a Hadoop admin needs and should be using, with respect to the Cloudera offerings in this area of expertise, to successfully accomplish one's day-to-day tasks. However, it is actually a lot more than just an admin's book; it also teaches how to install most of the Cloudera Hadoop ecosystem components, what components are typically in use by what in a business, and how to configure each. That is all done in a thorough, precise and professional manner without any extra fuss or foofaraw. I liked that the author expanded briefly, but nicely, on the new features in Hadoop 2.0. For me the coverage on MapReduce appeared the most valuable. I admit it is a rough area of Hadoop. The troubleshooting part must be the one to read and re-read, but also high availability, backup, balancing, and security. Especially the Kerberos setup: I deem it a very necessary, yet rarely covered topic that also appears very hard to understand, maybe at least to me, but it was very much worth going through. Overall, as an aside, the CDH distribution is very extensive and feature rich; no wonder a whole book can be dedicated to just this topic. The Cloudera Manager, I must say after reading the book, is an awesome tool to have on board. It is just a great helper, but it requires a good book such as Cloudera Administration Handbook by Rohit Menon to get acquainted with. Have it beside you, at your desk.

Robert Rapplean, Sep 12, 2014, 4 stars (Amazon Verified review)

This book provides an excellent overview of how to use the Cloudera Manager to build and maintain an industrial Hadoop cluster. I think its best audience is either someone who is already familiar with Hadoop and will need to start managing a Cloudera cluster, or someone who will mostly just be interacting with the Cloudera Manager interface while a primary system administrator handles the more complicated issues that revolve around unpredictable variations. It starts off with a relatively watered-down overview of the concepts behind Hadoop, but around Chapter 5 it really picks up and provides a great description of how to build and manage your cluster. The techniques provided in this book make use of Cloudera Manager wherever possible, as this is the preferred method of setting up and maintaining a Cloudera Hadoop cluster. One of the strongest points is that it contributes to the amount of published knowledge around Hadoop 2, which has been slow to catch up with the release of the technology. There are a few shortcomings that prevent me from giving it a full five stars. Since it focuses on the Cloudera Manager, it could leave a fledgling admin in a bad place if things aren't all lined up just right. The education base of the target audience is a little narrow since its tone is aimed at informing, not teaching, so it excludes those who are not familiar with system administration in general. In the other direction, it doesn't provide the in-depth, "under the hood" details that heavy-weight system administrators enjoy wading through (but which would require a much thicker book).

Si Dunn, Sep 17, 2014, 4 stars (Amazon Verified review)

This is a well-organized, well-written and solidly illustrated guide to building and maintaining large Apache Hadoop clusters using Cloudera Manager and CDH5. You can't get everything you need to know in one book, no matter how big it is. And I would have preferred seeing a bit more information about the Cloudera Manager nearer to the front (while not giving up the learn-to-use-the-command-line-first approach). But this book definitely can get you going, and it can show you a lot of what you will need to know as an administrator of Hadoop clusters. I have read and used a few other Hadoop how-to books in recent years. This one will be a keeper. (Thanks to Packt Publishing for providing a review copy).

Ganapathy Kokkeshwara, Oct 09, 2014, 4 stars (Amazon Verified review)

This is a must-read book for anyone who is in the process of learning the administration of the Cloudera distribution of Hadoop. Other than learning, the book can also be used for reference. In the first 2 chapters, the author talks about Apache Hadoop, its various components, HDFS and MapReduce. These chapters provide a very informative introduction to Apache Hadoop and the ecosystem associated with it. These chapters are useful for anyone interested in learning Hadoop technology, let alone Cloudera administrators. In the 3rd chapter, the author talks about the Cloudera distribution of Apache Hadoop and the other components that are distributed with the Cloudera Hadoop Distribution (CDH), such as Flume, Sqoop, Pig, Hive, ZooKeeper, etc. The components are explained in simple terms that can be understood by most technical persons. I liked the 'screenshots' of the UI for each of the components that made it a little bit easier to understand and comprehend. This chapter also covers the installation of CDH and the various components. I liked the fact that the author covers the installation from Cloudera Manager as well as from the operating system's package manager, thus providing more options for the administrator. The rest of the chapters cover administering high availability, implementing security using Kerberos, managing the cluster, and monitoring. Chapter 9 talks about backing up the Hadoop cluster. I liked the fact that the author took time to explain the various types of backup and storage media for backups before actually getting into the technical nitty-gritty of backing up big data. Throughout the book, the author also writes briefly about the impact on administering CDH when deployed in the cloud and provides relevant web links for further reference.

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and the rights of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.