Apache Hadoop
Apache Hadoop is a set of tools for scaling data processing pipelines to thousands of machines. It includes:
- Hadoop MapReduce: This is a framework for distributed data processing
- HDFS: This is a distributed filesystem, which allows us to store data across multiple machines
- YARN: This is the resource manager that schedules and executes MapReduce and other jobs on the cluster
We will only cover MapReduce, as it is the core of Hadoop and the part most relevant to data processing. We will not cover the rest, nor will we discuss setting up or configuring a Hadoop cluster, as this is beyond the scope of this book. If you are interested in learning more, Hadoop: The Definitive Guide by Tom White is an excellent book for studying this subject in depth.
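To make the MapReduce programming model concrete, here is a minimal sketch of the classic word count job, following the structure of the standard Hadoop example. The class names (WordCount, TokenizerMapper, IntSumReducer) are our own choices for illustration; the Mapper and Reducer base classes come from the org.apache.hadoop.mapreduce API.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map phase: for every line of input, emit a (word, 1) pair per token
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: all counts for the same word arrive together; sum them up
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
```

The mapper and reducer never see the whole dataset: the framework splits the input, runs many mapper instances in parallel, and groups all values with the same key before handing them to a reducer. This is what allows the same code to scale from one machine to thousands.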
In our experiments, we will use the local mode; that is, we will emulate the cluster while still running the code on the local machine. This is very useful for testing, and once we are sure the code works correctly, it can be deployed to a cluster with no changes.
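As a sketch of how a job can be launched in local mode, the driver below sets the real Hadoop properties mapreduce.framework.name to local and fs.defaultFS to file:/// (these are also the defaults when no cluster configuration is on the classpath), so the whole job runs in a single JVM against the local filesystem. The class name WordCountLocal is our own; it reuses the WordCount classes sketched above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountLocal {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Local mode: run the job in a single JVM against the local
        // filesystem (also the defaults without a cluster config)
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");

        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountLocal.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input directory and output directory come from the command line
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that nothing in the mapper, reducer, or driver logic refers to local mode itself; only the two configuration properties do. Pointing the same driver at a cluster is a matter of configuration, not code, which is exactly why testing locally first is safe.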