HBase High Performance Cookbook: Solutions for optimization, scaling and performance tuning
By Ruchir Choudhry

2.5 (2 Ratings) | eBook | Jan 2017 | 350 pages | 1st Edition
eBook NZ$14.99 (list price NZ$64.99) | Paperback NZ$80.99 | Subscription Free Trial

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

HBase High Performance Cookbook

Chapter 2. Loading Data from Various DBs

In this chapter, we will cover the following:

  • Extracting data from Oracle
  • Loading data using the Oracle Big Data Connector
  • Bulk utilities
  • Streaming data into Apache HBase using Hive and Apache Flume
  • Using Sqoop

These recipes will allow you to import data from different RDBMSs and flat files.

Introduction

As we know, HBase is very effective at enabling real-time platforms to read and write data randomly on commodity hardware, and there are many ways to load data into it, such as the following:

  • Put APIs
  • BulkLoad Tool
  • MapReduce jobs

Put APIs are the most straightforward way to place data into HBase, but they are only suitable for small sets of data; they are typically used in site-facing applications or other real-time scenarios/use cases.
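For illustration, a single-row insert via the HBase shell looks like the following sketch; the table name, column family, and values here are hypothetical:

```shell
# Create a hypothetical table 'wdi_country' with column family 'cf' (run once),
# then insert one row keyed by country code.
echo "create 'wdi_country', 'cf'" | hbase shell
echo "put 'wdi_country', 'NZL', 'cf:short_name', 'New Zealand'" | hbase shell
```

The Put API does the same thing programmatically, one (or a small batch of) mutation at a time, which is why it does not scale to bulk imports.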

The BulkLoad tool runs a MapReduce job behind the scenes and loads data into HBase tables. It generates files in HBase's internal file format (HFile), which allows the data to be imported directly into a live HBase cluster.
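As a sketch, a typical ImportTsv run followed by a bulk load looks like this; the table name, column mapping, and paths are placeholders for your own setup:

```shell
# Step 1: generate HFiles from a tab-separated input file on HDFS
# (placeholder table name, column mapping, and paths).
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:short_name \
  -Dimporttsv.bulk.output=/tmp/hfiles \
  wdi_country /input/wdi_country.tsv

# Step 2: hand the generated HFiles to the region servers.
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
  /tmp/hfiles wdi_country
```

Without the `-Dimporttsv.bulk.output` option, ImportTsv writes through the normal Put path instead of producing HFiles.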

Note

In the case of huge data volumes or very write-intensive jobs, it's advisable to use the ImportTsv tool. Using MapReduce jobs in conjunction with HFileOutputFormat is acceptable, but as the data grows this approach loses the performance, scalability, and maintainability that are necessary for any software to be successful...

Extracting data from Oracle

HBase doesn't provide direct interaction or a pipeline for importing data from Oracle or MySQL. The basic concept remains the same: first extract the data into flat/text files (in ImportTsv format), transform the data into HFiles, and then load them into HBase by telling the region servers where to find them.
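The "flat file" step simply means getting rows into the tab-separated form that ImportTsv reads. A minimal sketch with awk is shown below; note that this naive comma split breaks on quoted CSV fields containing commas, which WDI_Country.csv does contain, so a CSV-aware tool is safer for the real file:

```shell
# Convert comma-separated rows to tab-separated rows (naive split on ',').
printf 'NZL,New Zealand\nAUS,Australia\n' \
  | awk -F',' 'BEGIN{OFS="\t"} {$1=$1; print}'
```

The `$1=$1` assignment forces awk to rebuild each record using the tab output separator.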

Getting ready

Let's start with getting public data from the following URL:

http://databank.worldbank.org/data/download/WDI_csv.zip

This will have the following files:

  • WDI_Data.csv
  • WDI_Country.csv (this is the file we will use)
  • WDI_Series.csv
  • WDI_CS_Notes.csv
  • WDI_ST_Notes.csv
  • WDI_Footnotes.csv
  • WDI_Description.csv

We will use this data and nothing else; it is freely available on the aforementioned World Bank site.

We will then create a table in the Oracle schema from your SQL prompt. The column names used exactly match those in WDI_Country.csv:

CREATE TABLE WDI_COUNTRY
(
"COUNTRY_CODE" VARCHAR2(100 BYTE),
"SHORT_NAME" ...

Loading data using the Oracle Big Data Connector

If there is a very large volume of data in the system, it is vital to have an extremely efficient data-processing engine between the various touch points. The Oracle Big Data Connectors suite provides the following components; we will only touch upon the loading part:

  • Connector for HDFS
  • Loader for Hadoop
  • Data Integrator Adaptor for Hadoop
  • R Advanced Analytics for Hadoop
  • XQuery for Hadoop

Getting ready

Download the Oracle connector and its prerequisites:

  1. Oracle Big Data Connectors:

    http://www.oracle.com/technetwork/database/database-technologies/bdc/big-data-connectors/downloads/index.html

  2. The download for Linux x86-64
  3. Cloudera's Distribution (CDH3 or CDH4)
  4. JDK 1.6.08 or later
  5. Hive 0.7.0, 0.8.1, or 0.9.0
  6. Oracle DB release 11.2.0.2 or 11.2.0.3, with the matching version of CDH3/CDH4

How to do it…

  1. Configure CDH or Apache Hadoop as shown in the preceding section.

    Tip

    Don't change anything in the HBase setup. Indicate clearly that jars...

Bulk utilities

The process for loading data using Bulk utilities is very similar:

  1. Extracting data from the source.
  2. Transforming the data into HFiles.
  3. Loading the files into HBase by guiding the region servers as to where to find them.

Getting ready...

The following points have to be remembered when using Bulk utilities:

  • The HBase/Hadoop cluster with MapReduce/YARN should be running. You can run jps to check it.
  • Access rights (user/group) are needed to execute the program.
  • The table schema needs to be designed to match the input structure.
  • Split points need to be taken into consideration.
  • The entire stack (compaction, splits, block size, max file size, flush size, version compression, memstore size, block cache, garbage collection, nproc, and so on) needs to be fine-tuned to make the best of it.

The WAL is not written during bulk loads; thus, data lost during a failure may not be recoverable, as there is no replication performed by replaying the WAL.

How to do it…

There are multiple ways to do this work, such as writing your own...

Using Hive with Apache HBase

Hive is an ETL engine for HBase/Hadoop. It has an SQL-like query language, popularly known as HiveQL, for SELECT (read) and INSERT (write) operations. Its main objective is ad hoc analysis of petabyte-scale data. Hive integration with HBase was originally introduced in HIVE-705.

Getting ready

How to do it…

The first step is to use HBaseStorageHandler to register...
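A registration statement of this shape is the usual pattern; the Hive and HBase table names and column names below are hypothetical:

```shell
# Create a Hive table backed by an HBase table (hypothetical names),
# mapping the Hive 'key' column to the HBase row key.
hive -e "CREATE TABLE hbase_wdi_country(key string, short_name string)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:short_name')
  TBLPROPERTIES ('hbase.table.name' = 'wdi_country');"
```

After registration, ordinary HiveQL SELECT and INSERT statements on `hbase_wdi_country` read from and write to the underlying HBase table.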

Using Sqoop

Sqoop provides an excellent way to import data in parallel from existing RDBMSs into HDFS. Thanks to this parallel processing, we get an exact copy of the imported table structures as a set of files, which can contain text delimited by ',', '|', and so on. After manipulating the imported records using MapReduce or Hive, the resulting output can be exported back to the RDBMS. Imports can run in near real time or as a batch process (using a cron job).

Getting ready

Prerequisites:

HBase and Hadoop cluster must be up and running.

You can do a wget of http://mirrors.gigenet.com/apache/sqoop/1.4.6/sqoop-1.4.6.tar.gz

Untar it to /u/HbaseB using tar -zxvf sqoop-1.4.6.tar.gz

This will create a /u/HbaseB/sqoop-1.4.6 folder.

A Sqoop user should be created in the target DB, with read/write access and without strict CPU and memory (RAM, storage) limits imposed by the DBAs.

How to do it…

  1. Log in to MySQL by executing the following command:
    mysql -h yourMySqlHostName...
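A subsequent import into HBase typically looks like the following sketch; the host, database, table, credentials, and column names are placeholders:

```shell
# Import a MySQL table straight into an HBase table
# (placeholder connection details and names).
/u/HbaseB/sqoop-1.4.6/bin/sqoop import \
  --connect jdbc:mysql://yourMySqlHostName/yourDb \
  --username sqoopuser -P \
  --table wdi_country \
  --hbase-table wdi_country \
  --column-family cf \
  --hbase-row-key country_code \
  --hbase-create-table
```

The `-P` flag prompts for the password interactively, and `--hbase-create-table` creates the target HBase table if it does not already exist.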

Key benefits

  • Architect a good HBase cluster for a very large distributed system
  • Get to grips with the concepts of performance tuning with HBase
  • A practical guide full of engaging recipes and attractive screenshots to enhance your system’s performance

Description

Apache HBase is a non-relational NoSQL database management system that runs on top of HDFS. It is an open source, distributed, versioned, column-oriented store, written in Java to provide random real-time access to big data. We’ll start off by ensuring you have a solid understanding of the basics of HBase, followed by a thorough explanation of architecting an HBase cluster as per our project specifications. Next, we will explore the scalable structure of tables and see how to communicate with the HBase client. After this, we’ll show you the intricacies of MapReduce and the art of performance tuning with HBase. Following this, we’ll explain the concepts pertaining to scaling with HBase. Finally, you will get an understanding of how to integrate HBase with other tools such as ElasticSearch. By the end of this book, you will have learned enough to exploit HBase to boost system performance.

Who is this book for?

This book is intended for developers and architects who want to know all about HBase at a hands-on level. This book is also for big data enthusiasts and database developers who have worked with other NoSQL databases and now want to explore HBase as another futuristic scalable database solution in the big data space.

What you will learn

  • Configure HBase from a high-performance perspective
  • Grab data from various RDBMSs/flat files into the HBase system
  • Understand table design and perform CRUD operations
  • Find out how the communication between the client and server happens in HBase
  • Grasp when to use and avoid MapReduce and how to perform various tasks with it
  • Get to know the concepts of scaling with HBase through practical examples
  • Set up HBase in the cloud for a small-scale environment
  • Integrate HBase with other tools including ElasticSearch

Product Details

Publication date : Jan 31, 2017
Length: 350 pages
Edition : 1st
Language : English
ISBN-13 : 9781783983070




Table of Contents

12 Chapters
1. Configuring HBase
2. Loading Data from Various DBs
3. Working with Large Distributed Systems Part I
4. Working with Large Distributed Systems Part II
5. Working with Scalable Structure of tables
6. HBase Clients
7. Large-Scale MapReduce
8. HBase Performance Tuning
9. Performing Advanced Tasks on HBase
10. Optimizing Hbase for Cloud
11. Case Study
Index

Customer reviews

Rating distribution: 2.5 (2 Ratings)
5 star: 0% | 4 star: 0% | 3 star: 50% | 2 star: 50% | 1 star: 0%
GeneM, Jul 12, 2017, rated 3 out of 5 (Amazon verified review)
This book is in sad need of a proof reader and an editor. It has good topics. Too many times I have to imagine what he meant to write.
Adam, Aug 25, 2017, rated 2 out of 5 (Amazon verified review)
Makes a complex subject impossible to follow. There may be some good content here but its hidden amongst bad grammar, run on sentences, typos, and even review comments which were not removed. E.g. "if an inconsistency between the checksum and the block contents is observed, This does not make sense!, the communication is sent to the HDFS master".

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook or Bundle (Print+eBook) please follow below steps:

  1. Register on our website using your email address and the password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal)
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePubs. In the future, this may well change with trends and development in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply login to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.