Chapter 5. Serialization and Hadoop I/O
Hadoop is about big data, and wherever data is handled, I/O becomes an integral part of the discussion. Data needs to be ingested over the network or loaded from external persistent storage media. The ingested data then needs to be staged during the extraction and transformation steps. Finally, the results need to be stored for consumption by downstream processes that serve data, generate reports, and drive visualizations. Each of these stages involves understanding the underlying storage structures, data formats, and data models, and that understanding helps in tuning the entire data-handling pipeline for storage efficiency and speed.
In this chapter, we will look at the I/O features and capabilities of Hadoop. Specifically, we will cover the following topics, each previewed by a short sketch after the list:
- Serialization and deserialization support within Hadoop, and why it is necessary
- Avro—an external serialization framework
- Data compression codecs available within Hadoop and their...
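To make the first topic concrete before we dive in, the following is a minimal sketch of Hadoop's serialization contract: the org.apache.hadoop.io.Writable interface. The PageView class and its fields are hypothetical, chosen purely for illustration.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Hypothetical record type implementing Hadoop's Writable contract:
// fields are written to a DataOutput and read back, in the same order,
// from a DataInput.
public class PageView implements Writable {
    private String url;      // page that was viewed
    private long timestamp;  // view time, epoch milliseconds

    public PageView() {}     // no-arg constructor required for reflective instantiation

    public PageView(String url, long timestamp) {
        this.url = url;
        this.timestamp = timestamp;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);        // length-prefixed UTF-8 string
        out.writeLong(timestamp); // 8-byte long
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();       // must mirror the order used in write()
        timestamp = in.readLong();
    }
}
```

The no-argument constructor matters because Hadoop instantiates Writable types reflectively when deserializing, and write() and readFields() must agree on field order because the wire format carries no field names.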
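As a taste of the second topic, here is a sketch of writing an Avro container file using Avro's generic, schema-driven API. The PageView schema and the output path are illustrative assumptions, not anything mandated by Avro or Hadoop.

```java
import java.io.File;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroPreview {
    public static void main(String[] args) throws Exception {
        // Hypothetical record schema, declared inline for brevity.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
          + "{\"name\":\"url\",\"type\":\"string\"},"
          + "{\"name\":\"timestamp\",\"type\":\"long\"}]}");

        // Build a record against the schema; no generated classes are needed.
        GenericRecord view = new GenericData.Record(schema);
        view.put("url", "http://example.com/index.html");
        view.put("timestamp", 1406815200000L);

        // Write a self-describing container file; the schema travels with the data.
        File out = new File("/tmp/pageviews.avro"); // illustrative path
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, out);
            writer.append(view);
        }
    }
}
```

Because the container file embeds the schema, downstream readers can deserialize the data without any compiled classes, which is a large part of Avro's appeal as an external serialization framework.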
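Finally, for the compression topic, the following sketch shows Hadoop's codec factory selecting a codec by file extension and wrapping a raw output stream with it. The output path is an illustrative assumption; in practice, the stream would typically target HDFS.

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecPreview {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Illustrative output path; the .gz extension lets the factory
        // pick the matching codec (GzipCodec here). getCodec() returns
        // null for extensions with no registered codec.
        Path out = new Path("/tmp/preview.txt.gz");
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(out);

        // Wrap the raw file stream with the codec's compressing stream.
        try (OutputStream raw = fs.create(out);
             OutputStream compressed = codec.createOutputStream(raw)) {
            compressed.write("hello, compressed Hadoop I/O\n"
                .getBytes(StandardCharsets.UTF_8));
        }
    }
}
```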