Processing Hive data in the Avro format
Avro is an evolvable, schema-driven binary data format hosted and maintained by the Apache Software Foundation (http://avro.apache.org/). It provides rich data structures in a compact, fast binary encoding, and it relies on schemas. Avro files store the schema together with the data, which makes reading faster because readers do not need to look up the schema anywhere else. Avro can also be used in Remote Procedure Calls (RPC), where the schema is exchanged during the handshake between client and server. In this recipe, we will take a look at how to process Avro files in Hive.
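To illustrate what "schema-driven" means in practice, here is a minimal Avro schema; per the Avro specification, schemas are written in JSON. The record name, namespace, and fields below are purely illustrative, not taken from this recipe:

```json
{
  "type": "record",
  "name": "Employee",
  "namespace": "com.example",
  "fields": [
    {"name": "id",    "type": "int"},
    {"name": "name",  "type": "string"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```

The union type `["null", "string"]` with a default is the conventional way to declare an optional field, which is also what allows the schema to evolve without breaking older readers.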
Getting ready
To perform this recipe, you should have a running Hadoop cluster as well as the latest version of Hive installed on it. Here, I am using Hive 1.2.1. Hive has built-in support for the Avro file format, so we don't need to import any third-party JARs.
How to do it...
Using Avro SerDe, we can either read data that is already in the Avro format or write new data in the Avro format.
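As a minimal sketch of both directions, the following HiveQL creates an Avro-backed table and populates it. It assumes Hive 0.14 or later (where the `STORED AS AVRO` shorthand is available); the table and column names are hypothetical:

```sql
-- Create a table stored in the Avro format; Hive derives the Avro
-- schema automatically from the column definitions (Hive 0.14+).
CREATE TABLE employee_avro (
  id    INT,
  name  STRING,
  email STRING
)
STORED AS AVRO;

-- Write new data in the Avro format by copying from an existing
-- table (employee_text is a hypothetical plain-text source table).
INSERT OVERWRITE TABLE employee_avro
SELECT id, name, email FROM employee_text;

-- Read the Avro data back with an ordinary query.
SELECT * FROM employee_avro;
```

On older Hive versions, the same table can be declared explicitly with `ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'` together with the matching Avro input and output format classes; `STORED AS AVRO` is simply shorthand for that.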