Creating Pandas DataFrames over Spark
A DataFrame is a distributed collection of data organized into named columns. It is equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide variety of sources, such as structured data files (JSON and Parquet files), Hive tables, external databases, or existing RDDs.
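For instance, once a SQLContext is available (it is created in the steps that follow), a DataFrame can be loaded from a structured file in a single call. This is a minimal sketch assuming Spark 1.4 or later; the file paths are hypothetical placeholders:
# Load a DataFrame from a JSON file (one JSON object per line); the path is a placeholder
df = sqlContext.read.json('/path/to/people.json')
# Load a DataFrame from a Parquet file; the path is a placeholder
df = sqlContext.read.parquet('/path/to/people.parquet')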
PySpark is the Python API for Apache Spark, which is designed to scale to huge amounts of data. This recipe shows how to make use of pandas over Spark.
Getting ready
To step through this recipe, you will need a running Spark cluster either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. Also, install Python and IPython on a Linux machine, for example, Ubuntu 14.04.
How to do it…
Invoke the IPython console with the PySpark profile:
ipython console --profile=pyspark
Then import the required modules as follows:
In [4]: from pyspark import SparkConf, SparkContext, SQLContext
In [5]: import pandas as pd
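As a minimal sketch of the round trip between pandas and Spark (assuming the pyspark profile already exposes a SparkContext as sc, which is the usual setup; the sample data is purely illustrative):
In [6]: sqlContext = SQLContext(sc)  # entry point for DataFrames in Spark 1.x
In [7]: pdf = pd.DataFrame({'name': ['Alice', 'Bob'], 'age': [30, 25]})  # local pandas DataFrame
In [8]: sdf = sqlContext.createDataFrame(pdf)  # distribute it as a Spark DataFrame
In [9]: sdf.show()  # print the first rows of the distributed DataFrame
In [10]: sdf.toPandas()  # collect back into a pandas DataFrame on the driver
Note that toPandas() pulls the entire distributed DataFrame onto the driver, so use it only on results small enough to fit in the driver's memory.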
Creating...