Apache Hive Cookbook

Chapter 1. Developing Hive

In this chapter, we will cover the following recipes:

  • Deploying Hive on a Hadoop cluster
  • Deploying Hive Metastore
  • Installing Hive
  • Configuring HCatalog
  • Understanding different components of Hive
  • Compiling Hive from source
  • Hive packages
  • Debugging Hive
  • Running Hive
  • Changing configurations at runtime

Introduction

Hive, an Apache Hadoop ecosystem component, was developed by Facebook to query the data stored in the Hadoop Distributed File System (HDFS). HDFS is the data storage layer of Hadoop; at a very high level, it divides the data into small blocks (128 MB by default) and stores these blocks on different nodes.

Hive provides a SQL-like query language named Hive Query Language (HQL) to access and analyze big data. It is also termed the data warehousing framework of Hadoop and provides various analytical features, such as windowing and partitioning.
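
As a quick, illustrative sketch of the windowing support (the orders table and its columns are hypothetical, not objects defined in this chapter):

    SELECT product,
           amount,
           SUM(amount) OVER (PARTITION BY product ORDER BY sale_date) AS running_total
    FROM orders;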

Deploying Hive on a Hadoop cluster

Hive runs on a wide variety of platforms. GNU/Linux and Windows are commonly used as production environments, whereas Mac OS X is commonly used as a development environment.

Getting ready

In this book, we will assume a GNU/Linux-based environment for the Apache Hive installation and other instructions.

Before installing Hive, the first step is to make sure that a Java SE environment is installed properly. Hive requires Java 6 or later, which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html.

How to do it...

To install Hive, just download it from http://hive.apache.org/downloads.html and unpack it. Choose the latest stable version.

Note

At the time of writing this book, Hive 1.2.1 was the latest stable version available.

How it works…

By default, Hive is configured to use an embedded Derby database whose disk storage location is determined by the Hive configuration variable javax.jdo.option.ConnectionURL. By default, this location is ./metastore_db, as specified in the conf/hive-default.xml file. Hive with Derby as the metastore in embedded mode allows at most one user at a time.
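
For reference, a minimal sketch of the default Derby setting is shown below; the databaseName here reflects the usual default and should be verified against your own hive-default.xml:

    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
    </property>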

The other modes of installation are Hive with local metastore and Hive with remote metastore, which will be discussed later.

Deploying Hive Metastore

Apache Hive is a client-side library that provides a table-like abstraction on top of the data in HDFS for data processing. Hive jobs are converted into a MapReduce plan, which is then submitted to the Hadoop cluster. A Hadoop cluster is a set of nodes or machines with HDFS, MapReduce, and YARN deployed on them. MapReduce works on the distributed data stored in HDFS and processes large datasets in parallel, as compared with traditional processing engines that process a whole task on a single machine and wait for hours or days for a single query. Yet Another Resource Negotiator (YARN) is used to manage the RAM and CPU cores of the whole cluster, which are critical for running any process on a node.

The Hive table and database definitions, and their mapping to the data in HDFS, are stored in a metastore. A metastore is a central repository for Hive metadata. A metastore consists of two main components, which are really important for working with Hive. Let's take a look at these components:

  • Services to which the client connects and queries the metastore
  • A backing database to store the metadata

Getting ready

In this book, we will assume a GNU/Linux-based environment for the Apache Hive installation and other instructions.

Before installing Hive, the first step is to make sure that a Java SE environment is installed properly. Hive requires Java 6 or later, which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html.

How to do it…

In Hive, a metastore (the service and the backing RDBMS database) can be configured in one of the following ways:

  • An embedded metastore
  • A local metastore
  • A remote metastore

When we install Hive on a preinstalled Hadoop cluster, it gets the embedded database by default. This means that we need not configure any database as the Hive metastore. Let's check out what these configurations are and why we call them the embedded, local, and remote metastores.

By default, the metastore service and the Hive service run in the same JVM. Hive needs a database to store metadata. In the default mode, it uses an embedded Derby database stored on the local file system. The embedded mode of Hive has the limitation that only one session can be opened at a time from the same location on a machine, as only one embedded Derby database can acquire the lock and access the database files on disk.

An embedded metastore has a single service and a single JVM that cannot work with multiple nodes at a time.

To solve this limitation, a separate RDBMS database runs on the same node. The metastore service and the Hive service still run in the same JVM. This configuration mode is named local metastore. Here, local means that the metastore database runs on the same node as the JVM hosting the service.

There is one more configuration, in which one or more metastore servers run in JVM processes separate from the Hive service and connect to a database on a remote machine. This configuration is named remote metastore.

The Hive service is configured to use a remote metastore by setting hive.metastore.uris to the metastore server URIs, separated by commas. The Hive metastore can be configured using the properties specified in the following section.
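
As an illustration, a client pointing at two remote metastore servers might set the property as follows; the hostnames are placeholders, and 9083 is the conventional metastore port:

    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://metastore1.example.com:9083,thrift://metastore2.example.com:9083</value>
    </property>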

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>The directory, relative to fs.default.name, where managed tables are stored.</description>
</property>

<property>
    <name>hive.metastore.uris</name>
    <value></value>
    <description>The URIs specifying the remote metastore servers to connect to. If there are multiple remote servers, clients connect in a round-robin fashion.</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=hivemetastore;create=true</value>
    <description>The JDBC connection URL of the database.</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
    <description>The JDBC driver class name.</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>username</value>
    <description>The metastore username to connect with.</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
    <description>The metastore password to connect with.</description>
</property>

Installing Hive

We will now take a look at installing Hive along with all the prerequisites.

Getting ready

Let's download the stable version from one of the mirrors:

$ wget http://a.mbbsindia.com/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz

How to do it…

This can be achieved in three ways.

Hive with an embedded metastore

Once you have downloaded the Hive tarball, installing and setting up Hive is pretty simple and straightforward. Extract the compressed tar file:

$ tar -xzvf apache-hive-1.2.1-bin.tar.gz

Export the location where Hive is extracted as the environment variable HIVE_HOME:

$ cd apache-hive-1.2.1-bin
$ export HIVE_HOME=$(pwd)

Hive has all its installation scripts in the $HIVE_HOME/bin directory. Export this location to the PATH environment variable so that you can run all the scripts from any location directly on the command line:

$ export PATH=$HIVE_HOME/bin:$PATH

Alternatively, if you want to set the Hive path permanently for the user, then add the Hive environment variables to the .bashrc or .bash_profile file in the user's home folder (creating the file if it does not exist):

  1. Add the following to ~/.bash_profile:
    export HIVE_HOME=/home/hduser/apache-hive-1.2.1-bin
    export PATH=$PATH:$HIVE_HOME/bin
    
  2. Here, hduser is the name of the user you are logged in as, and apache-hive-1.2.1-bin is the Hive directory extracted from the tar file.
Run Hive from a terminal:
    hive
    
  3. Make sure that the Hive node has a connection to the Hadoop cluster, which means Hive should be installed on one of the Hadoop nodes, or the Hadoop configurations should be available in the node's classpath.
  4. This installation uses the embedded Derby database and stores the data on the local filesystem. Only one Hive session can be open on the node at a time.
  5. If a different user tries to run the Hive shell concurrently, the second session will fail with the Failed to start database 'metastore_db' error.
  6. Run Hive queries against the datastore to test the installation:
    hive> SHOW TABLES;
    hive> CREATE TABLE sales(id INT, product String, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    
  7. Logs are generated on a per-user basis in the /tmp/<username> folder.
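
To take the smoke test in step 6 one step further, you could load a few rows and read them back; the local file path below is a hypothetical example:

    hive> LOAD DATA LOCAL INPATH '/tmp/sales.txt' INTO TABLE sales;
    hive> SELECT * FROM sales;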

Hive with a local metastore

Follow these steps to configure Hive with a local metastore. Here, we are using a MySQL database as the metastore:

  1. Add the following to ~/.bash_profile:
    export HIVE_HOME=/home/hduser/apache-hive-1.2.1-bin
    export PATH=$PATH:$HIVE_HOME/bin
    

    Here, hduser is the user name, and apache-hive-1.2.1-bin is the Hive directory extracted from the tar file.

  2. Install a SQL database such as MySQL on the same machine where you want to run Hive.
  3. On Ubuntu, MySQL can be installed by running the following command on the node's terminal:
    sudo apt-get install mysql-server
    
  4. In the case of MySQL, Hive needs the mysql-connector JAR. Download the latest mysql-connector JAR from http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz and copy it to the lib folder of your Hive home directory.
  5. Create a file, hive-site.xml, in the conf folder of Hive and add the following entries to it:
    <configuration>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://localhost:3306/metastore_db?createDatabaseIfNotExist=true</value>
            <description>Metadata is stored in a MySQL server.</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>MySQL JDBC driver class.</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hduser</value>
            <description>Username for connecting to the MySQL server.</description>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>passwd</value>
            <description>Password for connecting to the MySQL server.</description>
        </property>
    </configuration>
  6. Run Hive from the terminal:
    hive
    
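Once the shell is up and you have run a DDL statement, you can verify that the metastore schema was created in MySQL. This is a sketch using the credentials configured above; you should see Hive's internal tables, such as DBS and TBLS:

    mysql -u hduser -ppasswd -e "USE metastore_db; SHOW TABLES;"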

Note

There is a known JLine JAR conflict issue with Hadoop 2.6.0 and Hive 1.2.1. If you are getting the error "unable to load class jline.terminal", you need to remove the older version of the JLine JAR from the YARN lib folder using the following command:

sudo rm -r $HADOOP_PREFIX/share/hadoop/yarn/lib/jline-0.9.94.jar

Hive with a remote metastore

Follow these steps to configure Hive with a remote metastore.

  1. Download the latest version of Hive from http://a.mbbsindia.com/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz.
  2. Extract the package:
    tar -xzvf apache-hive-1.2.1-bin.tar.gz
    
  3. Open ~/.bash_profile and add the following:
    sudo nano ~/.bash_profile
    export HIVE_HOME=/home/hduser/apache-hive-1.2.1-bin
    export PATH=$PATH:$HIVE_HOME/bin
    

    Here, hduser is the user name and apache-hive-1.2.1-bin is the Hive directory extracted from the tar file.

  4. Install a SQL database such as MySQL on a remote machine to be used for the metastore.
  5. For Ubuntu, MySQL can be installed with the following command:
    sudo apt-get install mysql-server
    
  6. In the case of MySQL, Hive needs the mysql-connector jar file. Download the latest mysql-connector jar from http://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz and copy it to the lib folder of your Hive home directory.
  7. Add the following entries to hive-site.xml:
    <configuration>
    <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://<ip_of_remote_host>:3306/metastore_db?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value><description>MySQL JDBC driver class</description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hduser</value>
    <description>user name for connecting to mysql server     
    </description>
    </property>
    <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>passwd</value>
    <description>password for connecting to mysql server</description>
    </property>
    </configuration>
  8. Start the Hive metastore interface:
    bin/hive --service metastore &
    
  9. Run Hive from the terminal:
    hive
    
  10. The Hive metastore interface listens on port 9083 by default; verify it with the following command:
    netstat -an | grep 9083
    
  11. Start the Hive shell and make sure that Data Definition Language and Data Manipulation Language (DDL and DML) operations work by creating tables in Hive, as in the sketch below.
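
A minimal verification sketch; the table name remote_test is a hypothetical example:

    hive> CREATE TABLE remote_test (id INT, name STRING);
    hive> SHOW TABLES;
    hive> DROP TABLE remote_test;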

Note

There is a known JLine JAR conflict issue with Hadoop 2.6.0 and Hive 1.2.1. If you are getting the error "unable to load class jline.terminal", you need to remove the older version of the JLine JAR from the YARN lib folder using the following command:

sudo rm -r $HADOOP_PREFIX/share/hadoop/yarn/lib/jline-0.9.94.jar

Configuring HCatalog

Assuming that Hive has been configured with a remote metastore, let's look at how to install and configure HCatalog.

Getting ready

The HCatalog CLI supports these command-line options:

  • -g (usage: hcat -g mygrp): The HCatalog table that is created must have the group "mygrp".
  • -p (usage: hcat -p rwxrwxr-x): The HCatalog table that is created must have the permissions "rwxrwxr-x".
  • -f (usage: hcat -f myscript.hcat): Tells HCatalog that myscript.hcat is a file containing DDL commands to execute.
  • -e (usage: hcat -e 'create table mytable(a int);'): Treats the given string as a DDL command and executes it.
  • -D (usage: hcat -Dkey=value): Passes the key-value pair to HCatalog as a Java system property.
  • (no option) (usage: hcat): Prints a usage message.

How to do it...

As of Hive 0.11.0, HCatalog is packaged with the Hive binaries. Because we have already configured Hive, we can access the HCatalog command-line tool, hcat, on the shell. The script is available in the hcatalog/bin directory.
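
As an illustrative sketch, the following runs a DDL statement through hcat using the -e option described above; the table name is hypothetical:

    $HIVE_HOME/hcatalog/bin/hcat -e 'CREATE TABLE hcat_test (id INT, name STRING);'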

Understanding different components of Hive

Besides the Hive metastore, Hive components can be broadly classified as Hive clients and Hive servers. Hive servers provide interfaces to make the metastore available to external applications and check for users' authorization and authentication, while Hive clients are the various applications used to access and execute Hive queries on the Hadoop cluster.

HiveServer

Let's take a look at its various components.

Hive metastore

Setting the Hive metastore URIs starts a metastore service on the specified port. The metastore provides APIs to query the databases, tables, schemas, and other entities stored in the RDBMS datastore.

How to do it...

The metastore service starts as a Java process in the backend. You can start the Hive metastore service with the following command:

hive --service metastore &

HiveServer2

HiveServer2 is an interface that allows clients to execute Hive queries and get the results. It is based on Thrift RPC and, unlike HiveServer, which supports only a single client, it supports multiple concurrent clients. It also provides for the authentication and authorization of users.

How to do it...

The HiveServer2 service also starts as a Java process in the backend. You can start HiveServer2 with the following command:

hive --service hiveserver2 &

Hive clients

The following are the different clients available in Hive to query metastore data or to submit Hive queries to Hive servers.

Hive CLI

The following are the various sections included in Hive CLI.

Getting ready

Hive Command-line Interface (CLI) can be used to run Hive queries in either interactive or batch mode.

How to do it...

To run Hive CLI, use the following command:

$ $HIVE_HOME/bin/hive

Queries are submitted under the username of the user logged in to the UNIX system.
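
In batch mode, a query string or a script file can be passed instead; the script path below is a placeholder:

    $ $HIVE_HOME/bin/hive -e 'SHOW TABLES;'
    $ $HIVE_HOME/bin/hive -f /path/to/script.hql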

Beeline

The following are the various sections included in Beeline.

Getting ready

If you have configured HiveServer2, then a Beeline client can be used to interact with Hive.

How to do it...

To run Beeline, use the following command:

$ $HIVE_HOME/bin/beeline

Using Beeline, a connection can be made to any HiveServer2 instance with a username and password.
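
For instance, a sketch of connecting to a HiveServer2 instance on the local machine (10000 is the default HiveServer2 port; the credentials are placeholders):

    beeline> !connect jdbc:hive2://localhost:10000 hduser passwd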

Compiling Hive from source

In this recipe, we will see how to compile Hive from source.

Getting ready

Apache Hive is an open source framework available for compilation and modification by any user. The Hive source code is a Maven project. The build runs some intermediate scripts on a UNIX platform during compilation.

The following prerequisites need to be installed:

  • UNIX OS: UNIX is preferable for compiling the Hive source. Although the source can also be compiled on Windows, you would need to comment out the execution of the intermediate scripts.
  • Maven: The following are the steps to configure Maven:
    1. Download the Apache maven binaries for Linux (.tar.gz) from https://maven.apache.org/download.cgi.
      wget http://mirror.olnevhost.net/pub/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
      
    2. Extract the tar file:
      tar -xzvf apache-maven-3.3.3-bin.tar.gz
      
    3. Create a folder and move maven binaries to that folder:
      sudo mkdir -p /usr/lib/maven
      mv apache-maven-3.3.3-bin /usr/lib/maven/
      
    4. Open /etc/profile:
      sudo nano /etc/profile
      
    5. Add the following variable for the environment PATH:
      export M2_HOME=/usr/lib/maven/apache-maven-3.3.3-bin
      export M2=$M2_HOME/bin
      export PATH=$M2:$PATH
      
    6. Use the command source /etc/profile to load the variables into PATH without a restart:
      source /etc/profile
      
    7. Check whether Maven is properly installed:
      mvn -version
      

How to do it...

Follow these steps to compile Hive on a Unix OS:

  1. Download the latest version of the Hive source tar file:
    sudo wget http://a.mbbsindia.com/hive/hive-1.2.1/apache-hive-1.2.1-src.tar.gz
    
  2. Extract the source folder:
    tar -xzvf apache-hive-1.2.1-src.tar.gz
    
  3. Move to the Hive directory:
    cd apache-hive-1.2.1-src
    
  4. To import the Hive packages into Eclipse, run the following command:
    mvn eclipse:eclipse
    
  5. To compile Hive with Hadoop 2 binaries, run the following command:
    mvn clean install -Phadoop-2,dist
    
  6. In case you want to skip the test execution, run the earlier command with the following switch:
    mvn clean install -DskipTests -Phadoop-2,dist
    
  7. To generate a tarball file from the source code, run the following command:
    mvn clean package -DskipTests -Phadoop-2 -Pdist
    
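Assuming the build succeeds, the generated tarball typically lands in the packaging module's target directory; the exact file name varies by version:

    ls packaging/target/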

Hive packages

The following are the various sections included in Hive packages.

Getting ready

The Hive source consists of different modules categorized by the features they provide or as submodules of other modules.

How to do it...

The following is the list of Hive modules and their usage in Hive:

  • accumulo-handler: Apache Accumulo is a distributed key-value datastore based on Google's Bigtable. This package includes the components responsible for mapping a Hive table to an Accumulo table. AccumuloStorageHandler and AccumuloPredicateHandler are the main classes responsible for mapping tables. For more information, refer to the official integration documentation available at https://cwiki.apache.org/confluence/display/Hive/AccumuloIntegration.
  • ant: This tool is used to build earlier versions of Hive source. Ant is also needed to configure the Hive Web Interface server.
  • beeline: A Hive client used to connect with HiveServer2 and run Hive queries.
  • bin: This package includes scripts to start Hive clients and services.
  • cli: This is a Hive Command-line Interface implementation.
  • common: These are utility classes used by other modules.
  • conf: This contains default configurations and user-defined configuration objects.
  • contrib: This contains SerDes, generic UDFs, and file formats contributed to Hive by third parties.
  • hbase-handler: This module allows Hive SQL statements to access HBase tables for SELECT and INSERT commands. It also provides interfaces to access HBase and Hive tables for join and union in a single query. More information is available at https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration.
  • hcatalog: This is a table management framework that helps other frameworks such as Pig or MapReduce to access the Hive metastore and table schema.
  • hwi: This module provides an implementation of a web interface to run Hive queries. Also, the WebHCat APIs provide REST APIs to access the Hive metastore.
  • jdbc: This is a connector that accepts JDBC connections and calls to execute Hive queries on the cluster.
  • metastore: This is the API that provides access to metastore entities including databases, tables, schemas, and SerDes.
  • odbc: This module implements the Open Database Connectivity (ODBC) API, enabling ODBC applications to connect and execute queries over Hive.
  • ql: This module provides an interface to clients that checks for query semantics and provides an implementation of the driver, parser, and query planner.
  • serde: This module has an implementation of serializers and deserializers used by Hive to read and write data. It helps in validating and parsing records and field types.
  • shims: This is the module that transparently intercepts and modifies calls to the Hive API, usually for compatibility purposes.
  • spark-client: This module provides an interface to execute Hive queries on the Spark framework.

Debugging Hive

Here, we will take a quick look at the command-line debugging option in Hive.

Getting ready

Hive code can be debugged by assigning a port to Hive and adding socket details to the Hive JVM. To add the debugging configuration to Hive, export the following properties in an OS terminal or add them to the .bash_profile of the user:

export HIVE_DEBUG_PORT=8000
export HIVE_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,address=${HIVE_DEBUG_PORT},server=y,suspend=y"
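
Because suspend=y halts the JVM until a debugger attaches, you then connect a debugger to the configured port. A minimal sketch using the JDK's jdb follows; an IDE remote-debug configuration works just as well:

    jdb -attach 8000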

How to do it...

Once a debug port is attached to Hive and suspension of the Hive server at startup is enabled, the following steps will help you debug Hive queries:

  1. After defining the previously mentioned properties, run the Hive CLI in debug mode:
    hive --debug
    
  2. If you have written your own test class and want to execute the unit test cases in that class, then run the following command, specifying the class name you want to execute:
    mvn test -Dtest=ClassName
    

Running Hive

Let's see how to run Hive from the command-line.

Getting ready

Once you have the Hive binaries, either compiled or downloaded, you need to configure a metastore for Hive, where it keeps information about different entities. Once that is configured, start the Hive metastore and HiveServer2 services to access the entities from different clients.

How to do it...

Follow these steps to start different components of Hive on a node:

  1. Run Hive CLI:
    $HIVE_HOME/bin/hive
    
  2. Run HiveServer2 and Beeline:
    $HIVE_HOME/bin/hiveserver2
    $HIVE_HOME/bin/beeline -u jdbc:hive2://$HiveServer2_HOST:$HiveServer2_PORT
    
  3. Run HCatalog and start up the HCatalog server:
    $HIVE_HOME/hcatalog/sbin/hcat_server.sh
    
  4. Run the HCatalog CLI:
    $HIVE_HOME/hcatalog/bin/hcat
    
  5. Run WebHCat:
    $HIVE_HOME/hcatalog/sbin/webhcat_server.sh
    

Changing configurations at runtime

Let's see how we can change various configuration settings at runtime.

How to do it...

Follow these steps to change any of the Hive configuration properties at runtime for a particular session or query:

  1. Configuration for Hive and the underlying MapReduce can be changed at runtime through Beeline or the CLI. The general syntax to set a property is as follows:
    SET key=value;
    
  2. The configuration set is applicable only for that session. If you want to set a property permanently, then you need to set it in hive-site.xml. The examples are as follows:
    beeline> SET mapred.job.tracker=example.host.com:50030;
    hive> SET hive.exec.mode.local.auto=false;
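
SET can also be used to inspect values: issuing it with just a key prints that property's current value, and SET with no arguments lists the properties that have been overridden, for example:

    hive> SET hive.exec.mode.local.auto;
    hive> SET;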
    