Java Data Analysis

You're reading from Java Data Analysis: Data mining, big data analysis, NoSQL, and data visualization

Product type: Paperback
Published: Sep 2017
Publisher: Packt
ISBN-13: 9781787285651
Length: 412 pages
Edition: 1st Edition
Author: John R. Hubbard
Table of Contents (14 chapters)

Preface
1. Introduction to Data Analysis
2. Data Preprocessing
3. Data Visualization
4. Statistics
5. Relational Databases
6. Regression Analysis
7. Classification Analysis
8. Cluster Analysis
9. Recommender Systems
10. NoSQL Databases
11. Big Data Analysis with Java
A. Java Tools
Index

Apache Hadoop

Apache Hadoop is an open-source software framework for the distributed storage and processing of very large datasets on clusters of commodity hardware. It includes an implementation of the MapReduce programming model.

The system includes these modules:

  • Hadoop Common: The common libraries and utilities that support the other Hadoop modules
  • Hadoop Distributed File System (HDFS™): A distributed filesystem that stores data on commodity machines, providing high-throughput access across the cluster
  • Hadoop YARN: A platform for job scheduling and cluster resource management
  • Hadoop MapReduce: An implementation of the Google MapReduce framework
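The MapReduce model that these modules implement can be sketched in plain Java, without Hadoop itself: a map phase emits (key, value) pairs, a shuffle groups the pairs by key, and a reduce phase aggregates each group. The class and method names below are illustrative only; a real Hadoop job would extend `org.apache.hadoop.mapreduce.Mapper` and `Reducer` instead.

```java
import java.util.*;

// A minimal, Hadoop-free sketch of the MapReduce word-count flow.
// Class and method names are illustrative, not part of the Hadoop API.
public class WordCountSketch {

    // Map phase: emit a (word, 1) pair for every word in every input line
    static List<Map.Entry<String, Integer>> mapPhase(List<String> lines) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    pairs.add(Map.entry(word, 1));
                }
            }
        }
        return pairs;
    }

    // Shuffle + reduce phase: group pairs by key and sum each group's values
    static Map<String, Integer> reducePhase(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static Map<String, Integer> wordCount(List<String> lines) {
        return reducePhase(mapPhase(lines));
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("to be or not to be")));
        // prints {be=2, not=1, or=1, to=2}
    }
}
```

In a real cluster, the shuffle step is what Hadoop distributes: pairs with the same key are routed across the network to the same reducer node, which is why MapReduce scales to datasets far larger than any single machine's memory.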

Hadoop's origins trace back to the Google File System paper, published in 2003. Its developer, Doug Cutting, named the project after his son's toy elephant. By 2006, its storage layer had become HDFS, the Hadoop Distributed File System.

In April 2006, using MapReduce, Hadoop set a record by sorting 1.8 TB of data, distributed across 188 nodes, in under 48 hours. Two years later, it set the world record by sorting one terabyte of data in 209...
