Since the invention of Hadoop, many tools have been developed around its ecosystem. These tools are used for data ingestion, data processing, and storage, and they solve some of the problems Hadoop initially had. In this section, we will focus on Apache Pig, a distributed processing tool built on top of MapReduce. We will also look at two widely used ingestion tools, Apache Kafka and Apache Flume, and discuss how they are used to bring in data from multiple sources. Apache HBase will also be described in this chapter; we will cover its architecture and how it fits into the CAP theorem. In this chapter, we will cover the following topics:
- Apache Pig architecture
- Writing custom user-defined functions (UDFs) in Pig
- Apache HBase walkthrough
- CAP theorem
- Apache Kafka internals
- Building producer...