HBase Administration Cookbook

Chapter 1. Setting Up HBase Cluster

In this chapter, we will cover:

  • Quick start

  • Getting ready on Amazon EC2

  • Setting up Hadoop

  • Setting up ZooKeeper

  • Changing the kernel settings

  • Setting up HBase

  • Basic Hadoop/ZooKeeper/HBase configurations

  • Setting up multiple High Availability (HA) masters

Introduction


This chapter explains how to set up an HBase cluster, from a basic standalone HBase instance to a fully distributed, highly available HBase cluster on Amazon EC2.

According to Apache HBase's home page:

HBase is the Hadoop database. Use HBase when you need random, real-time, read/write access to your Big Data. This project's goal is the hosting of very large tables—billions of rows X millions of columns—atop clusters of commodity hardware.

HBase can run on top of various filesystems. For example, you can run HBase on an EXT4 local filesystem, on Amazon Simple Storage Service (Amazon S3), or on the Hadoop Distributed File System (HDFS), which is the primary distributed filesystem for Hadoop. In most cases, a fully distributed HBase cluster runs on an instance of HDFS, so we will explain how to set up Hadoop before proceeding.
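To make the filesystem choice concrete, here is a hedged sketch of how the hbase.rootdir property in hbase-site.xml differs between a local filesystem and HDFS. The HDFS host, port, and path (master1:8020, /hbase) are assumptions for illustration and must match your own NameNode settings:

<!-- standalone: keep HBase data on the local filesystem -->
<property>
<name>hbase.rootdir</name>
<value>file:///usr/local/hbase/var/hbase</value>
</property>

<!-- fully distributed: keep HBase data on HDFS (host and port are assumptions) -->
<property>
<name>hbase.rootdir</name>
<value>hdfs://master1:8020/hbase</value>
</property>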

Apache ZooKeeper is open source software that provides a highly reliable, distributed coordination service. A distributed HBase depends on a running ZooKeeper cluster.

HBase, which is a database that runs on Hadoop, keeps a lot of files open at the same time. We need to change some Linux kernel settings to run HBase smoothly.
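The Changing the kernel settings recipe later in this chapter covers this in detail. As a minimal sketch of the kind of change involved, the open-file and process limits for the hadoop user can be raised in /etc/security/limits.conf; the values below are placeholders for illustration, not tuning advice from this recipe:

root# vi /etc/security/limits.conf
hadoop  soft  nofile  65535
hadoop  hard  nofile  65535
hadoop  soft  nproc   32000
hadoop  hard  nproc   32000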

A fully distributed HBase cluster has one or more master nodes (HMaster), which coordinate the entire cluster, and many slave nodes (RegionServer), which handle the actual data storage and requests. The following diagram shows a typical HBase cluster structure:

HBase can run multiple master nodes at the same time, and uses ZooKeeper to monitor the masters and fail over between them. But as HBase uses HDFS as its underlying filesystem, if HDFS is down, HBase is down too. The master node of HDFS, called the NameNode, is the Single Point Of Failure (SPOF) of HDFS, so it is also the SPOF of an HBase cluster. However, the NameNode software itself is very robust and stable. Moreover, the HDFS team is working hard on a real HA NameNode, which is expected to be included in Hadoop's next major release.
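As a minimal sketch of one common way to run extra masters (not necessarily the exact approach of the later recipe), additional HMaster processes can be started on other servers by listing their hostnames in the conf/backup-masters file; the hostname below is an assumption:

hadoop$ vi $HBASE_HOME/conf/backup-masters
master2

With this file in place, start-hbase.sh also starts a backup master on master2, which stays idle until the active master fails.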

The first seven recipes in this chapter explain how we can get HBase and all its dependencies working together, as a fully distributed HBase cluster. The last recipe explains an advanced topic on how to avoid the SPOF issue of the cluster.

We will start by setting up a standalone HBase instance, and then demonstrate setting up a distributed HBase cluster on Amazon EC2.

Quick start


HBase has two run modes: standalone mode and distributed mode. Standalone mode is the default mode of HBase. In standalone mode, HBase uses a local filesystem instead of HDFS, and runs all HBase daemons, together with an HBase-managed ZooKeeper instance, in the same JVM.

This recipe describes the setup of a standalone HBase. It leads you through installing HBase, starting it in standalone mode, creating a table via HBase Shell, inserting rows, and then cleaning up and shutting down the standalone HBase instance.

Getting ready

You are going to need a Linux machine to run the stack. Running HBase on top of Windows is not recommended. We will use Debian 6.0.1 (Debian Squeeze) in this book, because we have several Hadoop/HBase clusters running on top of Debian in production at my company, Rakuten Inc., and because 6.0.1 is the latest release for which an Amazon Machine Image (AMI) is available, at http://wiki.debian.org/Cloud/AmazonEC2Image.

As HBase is written in Java, you need to have Java installed first. HBase runs only on Oracle's JDK, so do not use OpenJDK for this setup. Although Java 7 is available, we do not recommend using it yet, because it needs more time to be tested. You can download the latest Java SE 6 from the following link: http://www.oracle.com/technetwork/java/javase/downloads/index.html.

Execute the downloaded bin file to install Java SE 6. We will use /usr/local/jdk1.6 as JAVA_HOME in this book:

root# ln -s /your/java/install/directory /usr/local/jdk1.6

We will add a user with the name hadoop, as the owner of all HBase/Hadoop daemons and files. We will have all HBase files and data stored under /usr/local/hbase:

root# useradd hadoop
root# mkdir /usr/local/hbase
root# chown hadoop:hadoop /usr/local/hbase

How to do it...

Get the latest stable HBase release from HBase's official site, http://www.apache.org/dyn/closer.cgi/hbase/. At the time of writing this book, the current stable release was 0.92.1.

You can set up a standalone HBase instance by following these instructions:

  1. Download the tarball and decompress it to our root directory for HBase. We will set an HBASE_HOME environment variable to make the setup easier, by using the following commands:

    root# su - hadoop
    hadoop$ cd /usr/local/hbase
    hadoop$ tar xfvz hbase-0.92.1.tar.gz
    hadoop$ ln -s hbase-0.92.1 current
    hadoop$ export HBASE_HOME=/usr/local/hbase/current
    
  2. Set JAVA_HOME in HBase's environment setting file, by using the following command:

    hadoop$ vi $HBASE_HOME/conf/hbase-env.sh
    # The java implementation to use. Java 1.6 required.
    export JAVA_HOME=/usr/local/jdk1.6
    
  3. Create a directory for HBase to store its data, and set the path in the HBase configuration file (hbase-site.xml), between the <configuration> tags, by using the following commands:

    hadoop$ mkdir -p /usr/local/hbase/var/hbase
    hadoop$ vi /usr/local/hbase/current/conf/hbase-site.xml
    <property>
    <name>hbase.rootdir</name>
    <value>file:///usr/local/hbase/var/hbase</value>
    </property>
    
  4. Start HBase in standalone mode by using the following command:

    hadoop$ $HBASE_HOME/bin/start-hbase.sh
    starting master, logging to /usr/local/hbase/current/logs/hbase-hadoop-master-master1.out
    
  5. Connect to the running HBase via HBase Shell, using the following command:

    hadoop$ $HBASE_HOME/bin/hbase shell
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 0.92.1, r1298924, Fri Mar 9 16:58:34 UTC 2012
    
  6. Verify HBase's installation by creating a table and then inserting some values. Create a table named test, with a single column family named cf1, as shown here:

    hbase(main):001:0> create 'test', 'cf1'
    0 row(s) in 0.7600 seconds
    

    i. In order to list the newly created table, use the following command:

    hbase(main):002:0> list
    TABLE
    test
    1 row(s) in 0.0440 seconds
    

    ii. In order to insert some values into the newly created table, use the following commands:

    hbase(main):003:0> put 'test', 'row1', 'cf1:a', 'value1'
    0 row(s) in 0.0840 seconds
    hbase(main):004:0> put 'test', 'row1', 'cf1:b', 'value2'
    0 row(s) in 0.0320 seconds
    
  7. Verify the data we inserted into HBase by using the scan command:

    hbase(main):003:0> scan 'test'
    ROW                        COLUMN+CELL
     row1                      column=cf1:a, timestamp=1320947312117, value=value1
     row1                      column=cf1:b, timestamp=1320947363375, value=value2
    1 row(s) in 0.2530 seconds
    
  8. Now clean up all that was done, by using the disable and drop commands:

    i. In order to disable the table test, use the following command:

    hbase(main):006:0> disable 'test'
    0 row(s) in 7.0770 seconds
    

    ii. In order to drop the table test, use the following command:

    hbase(main):007:0> drop 'test'
    0 row(s) in 11.1290 seconds
    
  9. Exit from HBase Shell using the following command:

    hbase(main):010:0> exit
    
  10. Stop the HBase instance by executing the stop script:

hadoop$ /usr/local/hbase/current/bin/stop-hbase.sh
stopping hbase.......

How it works...

We installed HBase 0.92.1 on a single server. We used a symbolic link named current to point to it, so that future version upgrades are easy to do.

In order to inform HBase where Java is installed, we set JAVA_HOME in hbase-env.sh, which is the environment setting file of HBase. You will also see some Java heap and HBase daemon settings in it. We will discuss these settings in the last two chapters of this book.
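For reference, the heap setting lives in the same file; a minimal sketch is shown below. HBASE_HEAPSIZE is a standard hbase-env.sh variable, but the 1000 MB value is only an illustrative placeholder, not a recommendation from this recipe:

hadoop$ vi $HBASE_HOME/conf/hbase-env.sh
# The maximum amount of heap to use, in MB (illustrative value only)
export HBASE_HEAPSIZE=1000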

In step 3, we created a directory on the local filesystem for HBase to store its data. For a fully distributed installation, HBase needs to be configured to use HDFS instead of a local filesystem. The HBase master daemon (HMaster) is started on the server where start-hbase.sh is executed. As we did not configure a region server here, HBase also starts a single slave daemon (HRegionServer) in the same JVM.
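If you want to confirm what is actually running, the JDK's jps tool lists the Java processes; in standalone mode you should see a single HMaster process, which also hosts the region server and the managed ZooKeeper. The process IDs below are, of course, illustrative:

hadoop$ jps
2120 HMaster
2301 Jps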

As we mentioned in the Introduction section, HBase depends on ZooKeeper as its coordination service. You may have noticed that we didn't start ZooKeeper in the previous steps. This is because HBase will start and manage its own ZooKeeper ensemble, by default.
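This behavior is controlled by the HBASE_MANAGES_ZK variable in hbase-env.sh. A short sketch follows; true (the default) lets HBase manage ZooKeeper itself, while false tells HBase to use an external ZooKeeper ensemble, which is what a fully distributed cluster normally does:

hadoop$ vi $HBASE_HOME/conf/hbase-env.sh
# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true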

Then we connected to HBase via HBase Shell. Using HBase Shell, you can manage your cluster, access data in HBase, and do many other jobs. Here, we just created a table called test, inserted some data into it, scanned the test table, and then disabled and dropped it, and exited the shell.
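A few other standard HBase Shell commands are handy while experimenting; the examples below simply reuse the test table and the row1 row key from this recipe:

hbase(main):001:0> status
hbase(main):002:0> describe 'test'
hbase(main):003:0> get 'test', 'row1'
hbase(main):004:0> count 'test'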

HBase can be stopped using its stop-hbase.sh script. This script stops both HMaster and HRegionServer daemons.

Getting ready on Amazon EC2


Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. By using Amazon EC2, we can practice HBase in fully distributed mode easily, at low cost. All the servers that we will use to demonstrate HBase in this book are running on Amazon EC2.

This recipe describes the setup of the Amazon EC2 environment, as a preparation for the installation of HBase on it. We will set up a name server and client on Amazon EC2. You can also use other hosting services such as Rackspace, or real servers to set up your HBase cluster.

Getting ready

You will need to sign up for, or create, an Amazon Web Services (AWS) account at http://aws.amazon.com/.

We will use EC2 command-line tools to manage our instances. You can download and set up the tools by following the instructions available at the following page:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?SettingUp_CommandLine.html.

You need a public/private key to log in to your EC2 instances. You can generate your key pairs and upload your public key to EC2, using these instructions:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/generating-a-keypair.html.

Before you can log in to an instance, you must authorize access. The following link contains instructions for adding rules to the default security group:

http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/adding-security-group-rules.html.

After all these steps are done, review the following checklist to make sure everything is ready:

  • X.509 certificates: Check if the X.509 certificates are uploaded. You can check this at your account's Security Credentials page.

  • EC2 key pairs: Check if EC2 key pairs are uploaded. You can check this at AWS Management Console | Amazon EC2 | NETWORK & SECURITY | Key Pairs.

  • Access: Check if the access has been authorized. This can be checked at AWS Management Console | Amazon EC2 | NETWORK & SECURITY | Security Groups | Inbound.

  • Environment variable settings: Check if the environment variable settings are done. As an example, the following snippet shows my settings; make sure you are using the right EC2_URL for your region:

$ cat ~/.bashrc
export EC2_HOME=~/opt/ec2-api-tools-1.4.4.2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=~/.ec2/pk-OWRHNWUG7UXIOPJXLOBC5UZTQBOBCVQY.pem
export EC2_CERT=~/.ec2/cert-OWRHNWUG7UXIOPJXLOBC5UZTQBOBCVQY.pem
export JAVA_HOME=/Library/Java/Home
export EC2_URL=https://ec2.us-west-1.amazonaws.com

We need to import our EC2 key pairs to manage EC2 instances via EC2 command-line tools:

$ ec2-import-keypair your-key-pair-name --public-key-file ~/.ssh/id_rsa.pub

Verify the settings by typing the following command:

$ ec2-describe-instances

If everything has been set up properly, the command will run without errors and list any instances you have already started.

Note

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

The last preparation is to find a suitable AMI. An AMI is a preconfigured operating system and software, which is used to create a virtual machine within EC2. We can find a registered Debian AMI at http://wiki.debian.org/Cloud/AmazonEC2Image.

For the purpose of practicing HBase, a 32-bit, EBS-backed AMI is the most cost-effective AMI to use. Make sure you are choosing an AMI for your region. As we are using US-West (us-west-1) for this book, the AMI ID for us is ami-77287b32. We will run it on small EC2 instances; a small instance is good for practicing HBase on EC2 because it is cheap. For production, we recommend you use at least a High-Memory Extra Large instance with EBS, or a real server.

How to do it...

Follow these instructions to get your EC2 instances ready for HBase. We will start two EC2 instances; one is a DNS/NTP server, and the other one is the client:

  1. Start a micro instance for the DNS/NTP server. We will use ns1.hbase-admin-cookbook.com (ns1) as its Fully Qualified Domain Name (FQDN), in a later section of this book:

    $ ec2-run-instances ami-77287b32 -t t1.micro -k your-key-pair
    
  2. Start a small instance for the client. We will use client1.hbase-admin-cookbook.com (client1) as its FQDN, later in this book:

    $ ec2-run-instances ami-77287b32 -t m1.small -k your-key-pair
    
  3. Verify the startup from AWS Management Console, or by typing the following command:

    $ ec2-describe-instances
    
    • You should see two instances in the output of the command. From the output of the ec2-describe-instances command, or from AWS Management Console, you can find the public DNS names of the instances that have already started. A public DNS name looks like ec2-xx-xx-xxx-xx.us-west-1.compute.amazonaws.com:

  4. Log in to the instances via SSH, using the following command:

    $ ssh root@ec2-xx-xx-xxx-xx.us-west-1.compute.amazonaws.com
    
  5. Update the package index files before we install packages on the server, by using the following command:

    root# apt-get update
    
  6. Change your instances' time zone to your local time zone, using the following command:

    root# dpkg-reconfigure tzdata
    
  7. Install the NTP server daemon on the DNS server, using the following command:

    root@ns# apt-get install ntp ntp-server ntpdate
    
  8. Install the NTP client on the client node (client1), using the following command:

    root@client1# apt-get install ntp ntpdate
    
  9. Configure /etc/ntp.conf on ns1 to run as an NTP server, and client1 to run as an NTP client, using ns1 as its server.

    Because there is no HBase-specific configuration for the NTP setup, we will skip the details. You can find the sample ntp.conf files for both the server and client, from the sample source of this book.

  10. Install BIND9 on ns1 to run as a DNS server, using the following command:

    root@ns# apt-get install bind9
    
    • You will need to configure BIND9 to run as a primary master server for internal lookups, and as a caching server for external lookups. You also need to configure the DNS server to allow other EC2 instances to update their records on the DNS server.

      We will skip the details as this is out of the scope of this book. For sample BIND9 configuration, please refer to the source, shipped with this book.

  11. For client1, just set it up using ns1 as its DNS server:

    root@client1# vi /etc/resolv.conf
    nameserver 10.160.49.250 #private IP of ns
    search hbase-admin-cookbook.com #domain name
    
  12. Update the DNS hostname automatically. Store the hostname in the client EC2 instance's user data. From the My Instances page of AWS Management Console, select client1 from the instances list, stop it, and then click Instance Actions | View | Change User Data; enter the hostname of the instance you want to use (here, client1) in the pop-up page:

  13. Create a script to update the client's record on the DNS server, using user data:

    root@client1# vi ec2-hostname.sh
    #!/bin/bash
    #you will need to set up your DNS server to allow update from this key
    DNS_KEY=/root/etc/Kuser.hbase-admin-cookbook.com.+157+44141.private
    DOMAIN=hbase-admin-cookbook.com
    USER_DATA=`/usr/bin/curl -s http://169.254.169.254/latest/user-data`
    HOSTNAME=`echo $USER_DATA`
    #set also the hostname to the running instance
    hostname $HOSTNAME
    #we only need to update for local IP
    LOCIP=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/local-ipv4`
    cat<<EOF | /usr/bin/nsupdate -k $DNS_KEY -v
    server ns.$DOMAIN
    zone $DOMAIN
    update delete $HOSTNAME.$DOMAIN A
    update add $HOSTNAME.$DOMAIN 60 A $LOCIP
    send
    EOF
    
  14. Finally, to run this script at boot time, add the following line to the rc.local file:

    root@client1# vi /etc/rc.local
    sh /root/bin/ec2-hostname.sh
    

How it works...

First we started two instances, a micro instance for the DNS/NTP server, and a small one for the client. To provide a name service to the other instances, the DNS name server has to be kept running. Using a micro instance can reduce your EC2 cost.

In steps 7 to 9, we set up the NTP server and client. We will run our own NTP server on the same server as the DNS server, and NTP clients on all other servers.

Note

Make sure that the clocks on the HBase cluster members are in basic alignment.
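If you want to verify that a node is actually synchronized against ns1, the following quick checks can help; ntpq is installed with the ntp package, and ntpdate -q only queries the server without changing the clock:

root@client1# ntpq -p
root@client1# ntpdate -q ns1.hbase-admin-cookbook.com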

EC2 instances can be started and stopped on demand; we don't need to pay for stopped instances. But, restarting an EC2 instance will change the IP address of the instance, which makes it difficult to run HBase. We can resolve this issue by running a DNS server to provide a name service to all EC2 instances in our HBase cluster. We can update name records on the DNS server every time other EC2 instances are restarted.

That's exactly what we have done in steps 10 to 14. Steps 10 and 11 are a normal DNS setup. In steps 12 to 14, we stored the instance name in its user data property, so that when the instance is restarted, we can get it back using the EC2 API. We also get the private IP address of the instance via the EC2 API. With this data, we can then send a DNS update command to our DNS server every time the instance is restarted. As a result, we can always use its fixed hostname to access the instance.

We will keep only the DNS instance running constantly. You can stop all other instances whenever you do not need to run your HBase cluster.

Setting up Hadoop


A fully distributed HBase runs on top of HDFS. In a fully distributed HBase cluster installation, the HBase master daemon (HMaster) typically runs on the same server as the master node of HDFS (NameNode), while the HBase slave daemon (HRegionServer) runs on the same server as an HDFS slave node, which is called a DataNode.

Hadoop MapReduce is not required by HBase, and the MapReduce daemons do not need to be started. We will cover the setup of MapReduce in this recipe too, in case you would like to run MapReduce on HBase. For a small Hadoop cluster, we usually have the MapReduce master daemon (JobTracker) run on the NameNode server, and the MapReduce slave daemons (TaskTracker) run on the DataNode servers.

This recipe describes the setup of Hadoop. We will have one master node (master1) run NameNode and JobTracker. We will set up three slave nodes (slave1 to slave3), each of which will run DataNode and TaskTracker.

Getting ready

You will need four small EC2 instances, which can be obtained by using the following command:

$ ec2-run-instances ami-77287b32 -t m1.small -n 4 -k your-key-pair

All these instances must be set up properly, as described in the previous recipe, Getting ready on Amazon EC2. Besides the NTP and DNS setups, Java installation is required by all servers too.

We will use the hadoop user as the owner of all Hadoop daemons and files. All Hadoop files and data will be stored under /usr/local/hadoop. Add the hadoop user and create a /usr/local/hadoop directory on all the servers, in advance.

We will set up one Hadoop client node as well. We will use client1, which we set up in the previous recipe. Therefore, the Java installation, hadoop user, and directory should be prepared on client1 too.
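If you prefer not to repeat the preparation by hand on every node, a small loop over SSH can do it; this simply reuses the useradd, mkdir, and chown commands from the Quick start recipe, and assumes the hostnames used in this chapter:

$ for host in master1 slave1 slave2 slave3 client1
do
ssh root@$host "useradd hadoop; mkdir -p /usr/local/hadoop; chown hadoop:hadoop /usr/local/hadoop"
done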

How to do it...

Here are the steps to set up a fully distributed Hadoop cluster:

  1. In order to log in to all nodes of the cluster via SSH, generate an SSH key pair for the hadoop user on the master node:

    hadoop@master1$ ssh-keygen -t rsa -N ""
    
    • This command will create a public key for the hadoop user on the master node, at ~/.ssh/id_rsa.pub.

  2. On all slave and client nodes, add the hadoop user's public key to allow SSH login from the master node:

    hadoop@slave1$ mkdir ~/.ssh
    hadoop@slave1$ chmod 700 ~/.ssh
    hadoop@slave1$ cat >> ~/.ssh/authorized_keys
    
  3. Copy the hadoop user's public key generated in step 1, paste it into ~/.ssh/authorized_keys, and then change its permissions as follows:

    hadoop@slave1$ chmod 600 ~/.ssh/authorized_keys
    
  4. Get the latest stable, HBase-supported Hadoop release from Hadoop's official site, http://www.apache.org/dyn/closer.cgi/hadoop/common/. While this chapter was being written, the latest HBase-supported, stable Hadoop release was 1.0.2. Download the tarball and decompress it to our root directory for Hadoop, then add a symbolic link and an environment variable:

    hadoop@master1$ ln -s hadoop-1.0.2 current
    hadoop@master1$ export HADOOP_HOME=/usr/local/hadoop/current
    
  5. Create the following directories on the master node:

    hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/name
    hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/data
    hadoop@master1$ mkdir -p /usr/local/hadoop/var/dfs/namesecondary
    
  6. You can skip the following step if you do not use MapReduce:

    hadoop@master1$ mkdir -p /usr/local/hadoop/var/mapred
    
  7. Set up JAVA_HOME in Hadoop's environment setting file (hadoop-env.sh):

    hadoop@master1$ vi $HADOOP_HOME/conf/hadoop-env.sh
    export JAVA_HOME=/usr/local/jdk1.6
    
  8. Add the hadoop.tmp.dir property to core-site.xml:

    hadoop@master1$ vi $HADOOP_HOME/conf/core-site.xml
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/var</value>
    </property>
    
  9. Add the fs.default.name property to core-site.xml:

    hadoop@master1$ vi $HADOOP_HOME/conf/core-site.xml
    <property>
    <name>fs.default.name</name>
    <value>hdfs://master1:8020</value>
    </property>
    
  10. If you need MapReduce, add the mapred.job.tracker property to mapred-site.xml:

    hadoop@master1$ vi $HADOOP_HOME/conf/mapred-site.xml
    <property>
    <name>mapred.job.tracker</name>
    <value>master1:8021</value>
    </property>
    
  11. Add a slave server list to the slaves file:

    hadoop@master1$ vi $HADOOP_HOME/conf/slaves
    slave1
    slave2
    slave3
    
  12. Sync all Hadoop files from the master node to the client and slave nodes. Do not sync ${hadoop.tmp.dir} after the initial installation:

    hadoop@master1$ rsync -avz /usr/local/hadoop/ client1:/usr/local/hadoop/
    hadoop@master1$ for i in 1 2 3
    do rsync -avz /usr/local/hadoop/ slave$i:/usr/local/hadoop/
    sleep 1
    done
    
  13. You need to format NameNode before starting Hadoop. Do it only for the initial installation:

    hadoop@master1$ $HADOOP_HOME/bin/hadoop namenode -format
    
  14. Start HDFS from the master node:

    hadoop@master1$ $HADOOP_HOME/bin/start-dfs.sh
    
  15. You can access your HDFS by typing the following command:

    hadoop@master1$ $HADOOP_HOME/bin/hadoop fs -ls /
    
    • You can also view the HDFS admin page in your browser. Make sure port 50070 is open. The HDFS admin page can be viewed at http://master1:50070/dfshealth.jsp:

  16. Start MapReduce from the master node, if needed:

    hadoop@master1$ $HADOOP_HOME/bin/start-mapred.sh
    
    • Now you can access the MapReduce admin page in your browser. Make sure port 50030 is open. The MapReduce admin page can be viewed at http://master1:50030/jobtracker.jsp:

  17. To stop HDFS, execute the following command from the master node:

    hadoop@master1$ $HADOOP_HOME/bin/stop-dfs.sh
    
  18. To stop MapReduce, execute the following command from the master node:

    hadoop@master1$ $HADOOP_HOME/bin/stop-mapred.sh
    

How it works...

To start/stop daemons on remote slaves from the master node, a passwordless SSH login for the hadoop user is required. We set this up in steps 1 to 3.

HBase must run on an HDFS that has a durable sync implementation. If HBase runs on an HDFS that has no durable sync implementation, it may lose data if its slave servers go down. Hadoop 0.20.205 and later versions, including Hadoop 1.0.2, which we have chosen, support this feature.
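Depending on the exact Hadoop and HBase versions you combine, you may also need to enable append/sync support explicitly in hdfs-site.xml. The property below existed in the Hadoop 1.0.x line; treat this as an assumption and check the compatibility notes of your HBase release rather than copying it blindly:

hadoop@master1$ vi $HADOOP_HOME/conf/hdfs-site.xml
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>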

HDFS and MapReduce use local filesystems to store their data. We created the directories required by Hadoop in steps 5 and 6, and set their base path (hadoop.tmp.dir) in Hadoop's configuration file in step 8.

In steps 9 to 11, we set up Hadoop so it could find HDFS, JobTracker, and slave servers. Before starting Hadoop, all Hadoop directories and settings need to be synced with the slave servers. The first time you start Hadoop (HDFS), you need to format NameNode. Note that you should only do this at the initial installation.

At this point, you can start/stop Hadoop using its start/stop script. Here we started/stopped HDFS and MapReduce separately, in case you don't require MapReduce. You can also use $HADOOP_HOME/bin/start-all.sh and stop-all.sh to start/stop HDFS and MapReduce using one command.
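Once HDFS is running, a couple of quick checks will confirm that the NameNode sees all three DataNodes and that the filesystem is writable; both are standard Hadoop 1.x client commands, and the /tmp/smoketest path is just an example:

hadoop@master1$ $HADOOP_HOME/bin/hadoop dfsadmin -report
hadoop@master1$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp/smoketest
hadoop@master1$ $HADOOP_HOME/bin/hadoop fs -ls /tmp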


Key benefits

  • Move large amounts of data into HBase and learn how to manage it efficiently
  • Set up HBase on the cloud, get it ready for production, and run it smoothly with high performance
  • Maximize the ability of HBase with the Hadoop ecosystem, including HDFS, MapReduce, ZooKeeper, and Hive

Description

As an open source distributed big data store, HBase scales to billions of rows, with millions of columns, and sits on top of clusters of commodity machines. If you are looking for a way to store and access a huge amount of data in real time, then look no further than HBase.

HBase Administration Cookbook provides practical examples and simple step-by-step instructions for you to administrate HBase with ease. The recipes cover a wide range of processes for managing a fully distributed, highly available HBase cluster on the cloud. Working with such a huge amount of data means that an organized and manageable process is key, and this book will help you to achieve that.

The recipes in this practical cookbook start from setting up a fully distributed HBase cluster and moving data into it. You will learn how to use all of the tools for day-to-day administration tasks, as well as for efficiently managing and monitoring the cluster to achieve the best performance possible. Understanding the relationship between Hadoop and HBase will allow you to get the best out of HBase, so the book will show you how to set up Hadoop clusters, configure Hadoop to cooperate with HBase, and tune its performance.

Who is this book for?

This book is for HBase administrators and developers, and will even help Hadoop administrators. You are not required to have HBase experience, but you are expected to have a basic understanding of Hadoop and MapReduce.

What you will learn

  • Set up a fully distributed, highly available HBase cluster and load data into it using the normal client API or your own MapReduce job
  • Access data in HBase via HBase Shell or Hive using its SQL-like query language
  • Back up and restore HBase tables, along with their data distribution, and move or replicate data between different HBase clusters
  • Gather metrics and then show them in graphs, monitor the cluster's status, and get notified if thresholds are exceeded
  • Tune your kernel settings, JVM GC, Hadoop, and HBase configuration to maximize performance
  • Discover troubleshooting tools and tips in order to avoid the most commonly-found problems with HBase
  • Gain optimum performance with data compression, region splits, and by manually managing compaction
  • Learn advanced configuration and tuning for read and write-heavy clusters
Product Details

Publication date: Aug 17, 2012
Length: 332 pages
Edition: 1st
Language: English
ISBN-13: 9781849517140

Table of Contents

9 Chapters
Setting Up HBase Cluster
Data Migration
Using Administration Tools
Backing Up and Restoring HBase Data
Monitoring and Diagnosis
Maintenance and Security
Troubleshooting
Basic Performance Tuning
Advanced Configurations and Tuning

