Apache Hive Essentials

You're reading from Apache Hive Essentials: Essential techniques to help you process, and get unique insights from, big data

Product type: Paperback
Published: Jun 2018
Publisher: Packt
ISBN-13: 9781788995092
Length: 210 pages
Edition: 2nd Edition
Author: Dayong Du
Table of Contents (12 chapters)

Preface
1. Overview of Big Data and Hive
2. Setting Up the Hive Environment
3. Data Definition and Description
4. Data Correlation and Scope
5. Data Manipulation
6. Data Aggregation and Sampling
7. Performance Considerations
8. Extensibility Considerations
9. Security Considerations
10. Working with Other Tools
11. Other Books You May Enjoy

Overview of the Hadoop ecosystem

Hadoop was first released by Apache in 2011 as Version 1.0.0, which contained only HDFS and MapReduce. Hadoop was designed as both a computing (MapReduce) and storage (HDFS) platform from the very beginning. With the increasing need for big data analysis, Hadoop has attracted many other software projects for solving big data problems, and these have merged into a Hadoop-centric big data ecosystem. The following diagram gives a brief overview of the Hadoop big data ecosystem in the Apache stack:

[Figure: Apache Hadoop ecosystem]

In the current Hadoop ecosystem, HDFS is still the major option for hard-disk storage, while Alluxio provides a virtual distributed-memory alternative. On top of HDFS, the Parquet, Avro, and ORC data formats can be used along with the Snappy compression algorithm for computing and storage optimization. YARN, as the first Hadoop general-purpose resource manager, is designed for better resource management and scalability. Spark and Ignite, as in-memory computing engines, are also able to run on YARN to work closely with Hadoop.
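To make the storage-format discussion concrete, here is a minimal HiveQL sketch (the table name and columns are hypothetical) that creates a table stored as ORC with Snappy compression:

-- A minimal sketch: a hypothetical table stored in the ORC format,
-- with Snappy compression enabled through a table property.
CREATE TABLE IF NOT EXISTS employee (
  name   STRING,
  salary DOUBLE
)
STORED AS ORC
TBLPROPERTIES ('orc.compress'='SNAPPY');

A Parquet-backed table can be declared the same way with STORED AS PARQUET and the parquet.compression table property.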

On the other hand, Kafka, Flink, and Storm dominate stream processing. HBase is a leading NoSQL database, especially on Hadoop clusters. For machine learning, the main options are Spark MLlib and MADlib, along with a revamped Mahout. Sqoop is still one of the leading tools for exchanging data between Hadoop and relational databases. Flume is a mature, distributed, and reliable log-collecting tool for moving or collecting data into HDFS. Impala and Drill are able to launch interactive SQL queries directly against the data on Hadoop.

In addition, Hive over Spark/Tez, along with Live Long And Process (LLAP), offers users the ability to run queries in long-lived processes on computing frameworks other than MapReduce, with in-memory data caching. As a result, Hive is playing a more important role in the ecosystem than ever. We are also glad to see that Ambari, as a new generation of cluster-management tool, provides more powerful cluster management and coordination in addition to ZooKeeper. For scheduling and workflow management, we can use either Airflow or Oozie. Finally, an open source governance and metadata service, Atlas, has come into the picture, empowering the compliance and lineage of big data in the ecosystem.
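As a hedged illustration of switching execution engines, the following session-level settings are a minimal sketch; which engines are actually available, and whether LLAP daemons are running, depends on how the cluster is configured:

-- A minimal sketch: choose the execution engine for the current Hive session.
-- Valid values include mr (MapReduce), tez, and spark, subject to cluster setup.
SET hive.execution.engine=tez;
-- With LLAP configured, queries can run in long-lived daemon processes:
SET hive.llap.execution.mode=all;  -- assumes LLAP daemons are available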
