Modern Big Data Processing with Hadoop

You're reading from Modern Big Data Processing with Hadoop: Expert techniques for architecting end-to-end big data solutions to get valuable insights

Product type: Paperback
Published: Mar 2018
Publisher: Packt
ISBN-13: 9781787122765
Length: 394 pages
Edition: 1st Edition
Authors (3): Manoj R Patil, Prashant Shindgikar, V Naresh Kumar
Table of Contents (12 Chapters)

Preface
1. Enterprise Data Architecture Principles
2. Hadoop Life Cycle Management
3. Hadoop Design Consideration
4. Data Movement Techniques
5. Data Modeling in Hadoop
6. Designing Real-Time Streaming Data Pipelines
7. Large-Scale Data Processing Frameworks
8. Building Enterprise Search Platform
9. Designing Data Visualization Solutions
10. Developing Applications Using the Cloud
11. Production Hadoop Cluster Deployment

Data Modeling in Hadoop

So far, we've learned how to create a Hadoop cluster and how to load data into it. In the previous chapter, we looked at various data ingestion tools and techniques. As we know by now, there are many open source tools available in the market, but there is no single silver-bullet tool that can handle all of our use cases. Each data ingestion tool has certain unique features that make it very productive and useful in particular use cases. For example, Sqoop is most useful for importing data into Hadoop from an RDBMS and exporting it back out.
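As a sketch of that Sqoop use case, the following command imports a single RDBMS table into HDFS. The JDBC URL, credentials, table name, and target directory are all hypothetical placeholders, not values from this book, and the command assumes a running MySQL instance and Hadoop cluster:

```shell
# Sketch only: import a hypothetical "customers" table from MySQL into HDFS.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user \
  -P \
  --table customers \
  --target-dir /user/hadoop/customers \
  --num-mappers 4
```

The corresponding sqoop export command reverses the direction, writing data from an HDFS directory back into an RDBMS table.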

In this chapter, we will learn how to store and model data in Hadoop clusters. As with data ingestion tools, there are various data stores available. These data stores support different data models, such as columnar storage and key-value pairs, and they support various file formats, such as ORC...
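To make the storage-format point concrete, here is a minimal sketch of a Hive table stored in the columnar ORC format, submitted through the hive CLI. The table and column names are invented for illustration, and the command assumes a working Hive installation:

```shell
# Sketch only: define a Hive table backed by the columnar ORC file format
# (table and column names are hypothetical).
hive -e "
CREATE TABLE web_logs (
  event_time TIMESTAMP,
  user_id    STRING,
  url        STRING
)
STORED AS ORC;
"
```

A columnar format such as ORC lets queries that touch only a few columns skip reading the rest of each row, which is why the choice of file format is part of data modeling rather than an afterthought.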
