Introduction
The three workhorses of Spark for efficient processing of data at scale are the RDD, DataFrame, and Dataset APIs. While each can stand on its own merit, the recent paradigm shift favors Dataset as the unifying data API that meets all data wrangling needs in a single interface.
The new Spark 2.0 Dataset API is a type-safe collection of domain objects that can be operated on in parallel via transformations (similar to an RDD's filter(), map(), flatMap(), and so on) using functional or relational operations. For backward compatibility, Dataset has a view called DataFrame, which is an untyped collection of rows. In this chapter, we demonstrate all three API sets. The figure ahead summarizes the pros and cons of the key components of Spark for data wrangling:
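As a minimal sketch of how the three APIs relate, the snippet below filters the same collection of domain objects three ways. The `Person` case class, the sample data, and the local `SparkSession` configuration are illustrative assumptions, not part of the text:

```scala
import org.apache.spark.sql.SparkSession

object ThreeApisDemo {
  // A hypothetical domain object for illustration
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ThreeApisDemo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val people = Seq(Person("Ann", 34), Person("Bob", 19))

    // RDD: low-level, functional transformations on JVM objects
    val rdd = spark.sparkContext.parallelize(people)
    val adultsRdd = rdd.filter(_.age >= 21)

    // DataFrame: untyped rows with a schema, relational operations
    val df = people.toDF()
    val adultsDf = df.filter($"age" >= 21)

    // Dataset: type-safe domain objects, functional or relational ops
    val ds = people.toDS()
    val adultsDs = ds.filter(_.age >= 21)

    adultsDs.show()
    spark.stop()
  }
}
```

Note that the Dataset version keeps compile-time type checking (the lambda sees a `Person`), while the DataFrame version resolves the `age` column only at runtime.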
An advanced machine learning developer must understand and be able to use all three API sets without difficulty, whether for algorithmic augmentation or for legacy reasons. While we recommend that every developer migrate toward the high...