Datasets are strongly typed collections of objects. These objects are usually domain-specific and can be transformed in parallel using relational or functional operations.
These operations fall into two categories: transformations and actions. Transformations are lazy functions that describe a new Dataset without computing it, while actions trigger the computation and return a result to the driver or write it out. Typical transformations include map, flatMap, filter, select, and aggregate, while typical actions include count, show, and saving the result to a filesystem.
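As a minimal sketch of this lazy/eager distinction (the app name, the local master, and the sample sequence of numbers below are illustrative, not part of the example that follows):

//Scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("Transformations vs actions").master("local[*]").getOrCreate()
import spark.implicits._

// Transformation: lazily describes a new Dataset; nothing is executed yet
val evens = Seq(1, 2, 3, 4, 5).toDS().filter(_ % 2 == 0)

// Actions: trigger the computation and return or print results
println(evens.count())  // 2
evens.show()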
The following instructions will help you create a dataset from a CSV file:
- Initialize SparkSession:
//Scala
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder()
  .appName("Spark DataSet example")
  .config("spark.config.option", "value")
  .getOrCreate()
// For implicit conversions, such as converting RDDs and common Scala objects to DataFrames/Datasets
import spark.implicits._
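With the session in place, a typed Dataset can be created by reading the CSV file as a DataFrame and converting it with as[T]. The sketch below is only illustrative: the people.csv path, the column names, and the Person case class are assumptions, not values from the original example.

//Scala
// Assumes the SparkSession `spark` and `import spark.implicits._` from the step above
case class Person(name: String, age: Int)  // hypothetical schema matching the CSV columns

val peopleDS = spark.read
  .option("header", "true")        // first line holds column names
  .schema("name STRING, age INT")  // explicit schema instead of inferSchema
  .csv("people.csv")               // hypothetical path
  .as[Person]                      // convert the untyped DataFrame to Dataset[Person]

peopleDS.filter(_.age > 21).show() // a transformation followed by an action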