Analyzing Parquet files using Spark
Parquet is a columnar data file format that is used extensively. In this recipe, we are going to take a look at how to access this data from Spark and process it.
Getting ready
To perform this recipe, you should have Hadoop and Spark installed. You also need to install Scala. I am using Scala 2.11.0.
How to do it...
Spark supports accessing Parquet files through its SQL context, which lets you both read and write Parquet data. In this recipe, we are going to take a look at how to read a Parquet file from HDFS and process it.
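To get a first feel for the API before building the full project below, here is a minimal sketch of reading and writing Parquet through the SQL context from the spark-shell; the HDFS input path and the output location are assumptions used only for illustration:

import org.apache.spark.sql.SQLContext

// sc is the SparkContext that the spark-shell provides
val sqlContext = new SQLContext(sc)

// Read the Parquet file into a DataFrame (path assumed for illustration)
val users = sqlContext.read.parquet("hdfs:///parquet/users.parquet")
users.printSchema()
users.show()

// Write a projection back out as Parquet (output path assumed for illustration)
users.select("name").write.parquet("hdfs:///parquet/users_names")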
First of all, download the sample Parquet file, users.parquet, from https://github.com/deshpandetanmay/hadoop-real-world-cookbook/blob/master/data/users.parquet, and store it in the /parquet path in HDFS.
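Assuming the file has been downloaded to your local working directory, copying it into HDFS might look like this (the /parquet directory name follows the step above):

hdfs dfs -mkdir -p /parquet
hdfs dfs -put users.parquet /parquet/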
We will create a Scala project with the following files:
SparkParquet\build.sbt
SparkParquet\project\assembly.sbt
SparkParquet\src\main\scala\com\demo\SparkParquet.scala
The contents of build.sbt are as follows:
name :...