Modern Big Data Processing with Hadoop

You're reading from Modern Big Data Processing with Hadoop: Expert techniques for architecting end-to-end big data solutions to get valuable insights

Product type: Paperback
Published: March 2018
Publisher: Packt
ISBN-13: 9781787122765
Length: 394 pages
Edition: 1st
Authors (3): Manoj R Patil, Prashant Shindgikar, V Naresh Kumar
Table of Contents (12)

Preface
1. Enterprise Data Architecture Principles
2. Hadoop Life Cycle Management
3. Hadoop Design Consideration
4. Data Movement Techniques
5. Data Modeling in Hadoop
6. Designing Real-Time Streaming Data Pipelines
7. Large-Scale Data Processing Frameworks
8. Building Enterprise Search Platform
9. Designing Data Visualization Solutions
10. Developing Applications Using the Cloud
11. Production Hadoop Cluster Deployment

Data Movement Techniques

In the last chapter, we learned how to create and configure a Hadoop cluster, and covered the HDFS architecture, various file formats, and best practices for a Hadoop cluster. We also learned about Hadoop high-availability techniques.

Now that we know how to create and configure a Hadoop cluster, in this chapter we will learn about various techniques for ingesting data into one. We know the advantages of Hadoop, but to harness its real power, we first need to get data into our cluster.

Data ingestion is the very first step in the Hadoop data life cycle. Data can be ingested into Hadoop either as a batch or as a (real-time) stream of records. Hadoop is a complete ecosystem, and MapReduce is its batch processing framework.
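To make the batch-versus-stream distinction concrete, here is a minimal toy sketch in Python. It is not real Hadoop code (the function names and record formats are invented for illustration): a batch loader collects a bounded set of records and loads them in one pass, while a stream loader handles each record as soon as it arrives.

```python
# Toy sketch (hypothetical names, not a real Hadoop API): contrasting the two
# ingestion styles described above.

def batch_ingest(records):
    """Load a bounded set of records in one pass (e.g. a nightly bulk load)."""
    staged = list(records)  # the whole batch is collected before loading
    return {"mode": "batch", "count": len(staged)}

def stream_ingest(record_source):
    """Process records one at a time as they arrive (e.g. a live event feed)."""
    for record in record_source:
        yield {"mode": "stream", "record": record}

# Usage: the same data, ingested both ways.
data = ["event-1", "event-2", "event-3"]
print(batch_ingest(data))                 # one summary for the whole batch
for result in stream_ingest(iter(data)):  # one result per arriving record
    print(result)
```

The practical consequence is latency versus throughput: a batch job sees all records at once and can optimize the whole load, while a streaming pipeline trades some of that efficiency for near-real-time availability of each record.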

The following diagram shows various data ingestion tools:

We will learn about each tool in detail in the next few sections...
