$19.99 per month | Video | Feb 2022 | 14hrs 39mins | 1st Edition
- Get to grips with the high-level architecture of Hadoop
- Understand the components available in the Hadoop ecosystem and how they fit together
- Get ready to manage big data using Hadoop and related technologies
Understanding Hadoop is a highly valuable skill for anyone working at companies that handle large amounts of data. Companies such as Amazon, eBay, Facebook, Google, LinkedIn, IBM, Spotify, Twitter, and Yahoo use Hadoop in some way to process huge volumes of data. This video course will familiarize you with the Hadoop ecosystem and help you understand how to apply Hadoop skills in the real world.
The course starts by taking you through the installation of Hadoop on your desktop. Next, you will manage big data on a cluster with the Hadoop Distributed File System (HDFS) and MapReduce, and use Pig and Spark to analyze data on Hadoop. Moving along, you will learn how to store and query your data using applications such as Sqoop, Hive, MySQL, Phoenix, and MongoDB. You will then design real-world systems using the Hadoop ecosystem and learn how to manage clusters with Yet Another Resource Negotiator (YARN), Mesos, ZooKeeper, Oozie, Zeppelin, and Hue. Towards the end, you will uncover techniques to handle and stream data in real time using Kafka, Flume, Spark Streaming, Flink, and Storm.
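To give a flavor of the style of analysis the course covers, here is a minimal PySpark sketch in that spirit (an illustrative assumption, not the course's own code; the application name and the hdfs:// path are hypothetical placeholders):

    # Minimal word-count sketch in PySpark (illustrative only).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read a text file from HDFS; the path below is a hypothetical placeholder.
    lines = spark.read.text("hdfs:///user/maria_dev/sample.txt")

    # Split each line into words and count occurrences of each word.
    words = lines.selectExpr("explode(split(value, ' ')) AS word")
    counts = words.groupBy("word").count().orderBy("count", ascending=False)
    counts.show(10)

    spark.stop()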
By the end of this course, you will be well-versed in the Hadoop ecosystem and will have developed the skills required to store, analyze, and scale big data using Hadoop.
All the code and supporting files for this course are available at https://github.com/packtpublishing/the-ultimate-hands-on-hadoop
This video course is designed for people at every level: software engineers and programmers who want to understand the Hadoop ecosystem, project managers who want to become familiar with Hadoop's lingo, and system architects who want to understand the components available in the Hadoop ecosystem. To get started with this course, a basic understanding of Python or Scala and ground-level knowledge of the Linux command line are recommended.
- Become familiar with Hortonworks and the Ambari User Interface (UI)
- Use Pig and Spark to create scripts to process data on a Hadoop cluster
- Analyze non-relational data using HBase, Cassandra, and MongoDB
- Query data interactively with Drill, Phoenix, and Presto
- Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume
- Consume streaming data using Spark Streaming, Flink, and Storm (see the sketch after this list)
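As a taste of the streaming topics in the last two points, here is a minimal Spark Structured Streaming sketch that consumes messages from Kafka (a hedged illustration, not the course's own code; the broker address and topic name are hypothetical, and running it requires the spark-sql-kafka connector package):

    # Minimal Kafka consumer sketch using Spark Structured Streaming
    # (illustrative only; broker address and topic name are hypothetical).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("KafkaConsumer").getOrCreate()

    # Subscribe to a Kafka topic on a local broker.
    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka values arrive as bytes; cast them to strings and echo to the console.
    query = (stream.selectExpr("CAST(value AS STRING) AS message")
             .writeStream
             .format("console")
             .start())

    query.awaitTermination()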