The art of a big data store - Parquet
An efficient and performant computing stack also needs an equally optimal storage mechanism. Parquet fits the bill and can be considered a best practice. The pattern uses the HDFS file system, curates the Datasets, and stores them in the Parquet format.
Parquet is a very efficient columnar format for data storage, initially developed by contributors from Twitter, Cloudera, Criteo, Berkeley AMPLab, LinkedIn, and Stripe. The Google Dremel paper (Dremel, 2010) inspired Parquet's basic algorithms and design. It is now a top-level Apache project, Apache Parquet, and is the default format for reading and writing operations on Spark DataFrames. Almost all big data products, from MPP databases to query engines to visualization tools, interface natively with Parquet. Let's take a quick look at the science of data storage in the big data domain, what we need from a store, and the capabilities of Parquet.
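Because Parquet is the default DataFrame format, a round trip through Spark needs no explicit format setting. The following is a minimal Scala sketch of writing a curated Dataset to HDFS as Parquet and reading it back; the HDFS path and the column names are illustrative assumptions, not taken from the text.

```scala
import org.apache.spark.sql.SparkSession

object ParquetRoundTrip {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ParquetRoundTrip")
      .master("local[*]") // local mode for illustration only
      .getOrCreate()
    import spark.implicits._

    // A tiny sample Dataset; schema and values are hypothetical
    val trades = Seq(
      ("AAPL", "2017-01-03", 116.15),
      ("MSFT", "2017-01-03", 62.58)
    ).toDF("symbol", "date", "close")

    // Parquet is the default, so no .format("parquet") is needed;
    // the target path is an assumed example location
    trades.write.mode("overwrite").parquet("hdfs:///curated/trades")

    // Reading back yields a DataFrame with the schema preserved,
    // since Parquet stores the schema alongside the data
    val restored = spark.read.parquet("hdfs:///curated/trades")
    restored.printSchema()

    spark.stop()
  }
}
```

Note that no schema is supplied on read: Parquet is self-describing, so Spark recovers column names and types from the file footer.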
Column projection and data partitioning
Column...