Apache Hadoop 3 Quick Start Guide

Learn about big data processing and analytics

Product type: Paperback
Published: Oct 2018
Publisher: Packt
ISBN-13: 9781788999830
Length: 220 pages
Edition: 1st Edition
Author: Hrishikesh Vijay Karambelkar

Table of Contents (10 chapters)

Preface
1. Hadoop 3.0 - Background and Introduction
2. Planning and Setting Up Hadoop Clusters
3. Deep Dive into the Hadoop Distributed File System
4. Developing MapReduce Applications
5. Building Rich YARN Applications
6. Monitoring and Administration of a Hadoop Cluster
7. Demystifying Hadoop Ecosystem Components
8. Advanced Topics in Apache Hadoop
9. Other Books You May Enjoy

What this book covers

Chapter 1, Hadoop 3.0 – Background and Introduction, gives you an overview of big data and Apache Hadoop. You will go through the history of Apache Hadoop's evolution, learn about what Hadoop offers today, and explore how it works. Also, you'll learn about the architecture of Apache Hadoop, as well as its new features and releases. Finally, you'll look at the commercial implementations of Hadoop.

Chapter 2, Planning and Setting Up Hadoop Clusters, covers the installation and setup of Apache Hadoop. We will start with learning about the prerequisites for setting up a Hadoop cluster. You will go through the different Hadoop configurations available to users, covering development (standalone) mode, pseudo-distributed single-node setup, and fully distributed cluster setup. You'll learn how each of these configurations can be set up, and you'll run an example application on each of them. Toward the end of the chapter, we will cover how you can diagnose Hadoop clusters by understanding log files and the different debugging tools available.
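To make the difference between these modes concrete, here is a minimal Java sketch (not taken from the book) that loads the active Hadoop configuration and prints which filesystem a client would talk to. The hdfs://localhost:9000 value mentioned in the comments is only the common pseudo-distributed convention and depends entirely on your core-site.xml.

```java
import org.apache.hadoop.conf.Configuration;

// Prints the filesystem the current Hadoop configuration points at.
// In local (development) mode this is typically file:///; in a
// pseudo-distributed setup it is usually something like
// hdfs://localhost:9000 (the exact host/port come from core-site.xml).
public class CheckHadoopMode {
    public static void main(String[] args) {
        Configuration conf = new Configuration(); // loads core-site.xml from the classpath
        String fsUri = conf.get("fs.defaultFS", "file:///");
        System.out.println("fs.defaultFS = " + fsUri);
    }
}
```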

Chapter 3, Deep Dive into the Hadoop Distributed File System, goes into how HDFS works and its key features. We will look at the different data flow patterns of HDFS, examining HDFS in different roles. Also, we'll take a look at various command-line interface commands for HDFS and the Hadoop shell. Finally, we'll look at the data structures that are used by HDFS with some examples.
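As a quick illustration of the programmatic side of HDFS, the following sketch uses the standard FileSystem API to write a small file and stream it back; it is a generic example rather than the book's own code, and the /tmp/hello.txt path is arbitrary.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Writes a small file to HDFS and reads it back using the FileSystem API.
public class HdfsRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);       // the filesystem named by fs.defaultFS
        Path file = new Path("/tmp/hello.txt");

        try (FSDataOutputStream out = fs.create(file, true)) {   // overwrite if it exists
            out.write("hello from the HDFS client API\n".getBytes(StandardCharsets.UTF_8));
        }

        // Stream the file's contents to stdout and close the stream afterwards.
        IOUtils.copyBytes(fs.open(file), System.out, 4096, true);
    }
}
```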

Chapter 4, Developing MapReduce Applications, looks in depth at various topics pertaining to MapReduce. We will start by understanding the concept of MapReduce. We will take a look at the Hadoop application URL ports. Also, we'll study the different data formats needed for MapReduce. Then, we'll take a look at job compilation, remote job runs, and using utilities such as Tool. Finally, we'll learn about unit testing and failure handling.
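For orientation, here is a conventional word-count job in Java, wired through the standard Tool/ToolRunner utilities mentioned above so that generic options (-D, -conf, -files, and so on) are parsed for free. It is a generic sketch, not an excerpt from the chapter.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Classic word count: the mapper emits (word, 1) pairs, the reducer sums them.
public class WordCount extends Configured implements Tool {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                ctx.write(word, ONE);                 // emit (word, 1) for every token
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));     // total occurrences of this word
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCount(), args));
    }
}
```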

Chapter 5, Building Rich YARN Applications, teaches you about the YARN architecture and the key features of YARN, such as resource models, federation, and RESTful APIs. Then, you'll configure a YARN environment in a Hadoop distributed cluster. Also, you'll study some of the additional properties of yarn-site.xml. You'll learn about the YARN distributed command-line interface. After this, we will delve into building YARN applications and monitoring them.
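As a small taste of the YARN client API, the following sketch (not from the book) connects to the ResourceManager configured in yarn-site.xml and lists the applications it currently tracks.

```java
import java.util.List;

import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Lists the applications known to the ResourceManager defined in yarn-site.xml.
public class ListYarnApps {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();   // reads yarn-site.xml
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(conf);
        yarn.start();
        try {
            List<ApplicationReport> apps = yarn.getApplications();
            for (ApplicationReport app : apps) {
                System.out.printf("%s  %s  %s%n",
                        app.getApplicationId(), app.getName(), app.getYarnApplicationState());
            }
        } finally {
            yarn.stop();
        }
    }
}
```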

Chapter 6, Monitoring and Administration of a Hadoop Cluster, explores the different activities performed by Hadoop administrators for the monitoring and optimization of a Hadoop cluster. You'll learn about the roles and responsibilities of an administrator, followed by cluster planning. You'll dive deep into key management aspects of Hadoop clusters, such as resource management through job scheduling with algorithms such as Fair Scheduler and Capacity Scheduler. Also, you'll discover how to ensure high availability and security for an Apache Hadoop cluster.

Chapter 7, Demystifying Hadoop Ecosystem Components, covers the different components that constitute Hadoop's overall ecosystem offerings to solve complex industrial problems. We will get a brief overview of the tools and software that run on Hadoop. Also, we'll take a look at some components, such as Apache Kafka, Apache Pig, Apache Sqoop, and Apache Flume. After that, we'll cover the SQL and NoSQL Hadoop-based databases: Hive and HBase, respectively.
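By way of illustration, here is a short sketch using the standard HBase Java client. It is not the book's code, and it assumes a table named demo with a column family cf has already been created (for example, from the HBase shell).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Writes one cell to an existing HBase table and reads it back.
public class HBaseRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();    // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("demo"))) {

            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
            table.put(put);

            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("greeting"));
            System.out.println(Bytes.toString(value));        // prints "hello"
        }
    }
}
```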

Chapter 8, Advanced Topics in Apache Hadoop, gets into advanced topics, such as using Apache Spark for analytics on Hadoop and processing streaming data with an Apache Storm pipeline. It provides an overview of real-world use cases for different industries, with some sample code for you to try out independently.
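For a flavour of what such analytics code looks like, here is a generic Spark word-count job written in Java; it is a sketch rather than an excerpt from the chapter, and the input path is simply whatever you pass on the command line (for example, a file on HDFS).

```java
import java.util.Arrays;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import scala.Tuple2;

// Word count over a text file, expressed as a Spark job in Java.
public class SparkWordCount {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("spark-word-count").getOrCreate();
        JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());

        JavaRDD<String> lines = jsc.textFile(args[0]);        // e.g. hdfs:///tmp/input.txt
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);

        // Print a small sample of the results to the driver's stdout.
        counts.take(20).forEach(t -> System.out.println(t._1 + "\t" + t._2));
        spark.stop();
    }
}
```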
