Spark SQL operations
Working in Spark SQL primarily happens in three stages: creating a dataset, applying SQL operations, and finally persisting the dataset. So far we have created datasets from RDDs and other data sources (refer to Chapter 5, Working with Data and Storage) and persisted them as discussed in the previous section. Now let's look at some of the ways in which SQL operations can be applied to a dataset.
Untyped dataset operations
Once we have created the dataset, Spark provides a couple of handy functions that perform basic SQL operations and analysis, such as the following:
show()
: This displays the top 20 rows of the dataset in tabular form. Strings longer than 20 characters are truncated, and all cells are right-aligned:
emp_ds.show();
Another variant of the show()
function lets the user disable the 20-character truncation by passing a Boolean value of false:
emp_ds.show(false);
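Besides these two forms, the Dataset API also offers show() overloads that limit the number of rows printed, with or without truncation. A minimal sketch follows; the employees.json path and the local SparkSession are illustrative assumptions, standing in for however emp_ds was created earlier:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ShowVariants {
    public static void main(String[] args) {
        // Local session for illustration only; any existing SparkSession works.
        SparkSession spark = SparkSession.builder()
                .appName("show-variants")
                .master("local[*]")
                .getOrCreate();

        // Hypothetical input file; in the book's running example this
        // would be the emp_ds dataset created earlier.
        Dataset<Row> emp_ds = spark.read().json("employees.json");

        emp_ds.show();          // first 20 rows, strings truncated at 20 chars
        emp_ds.show(5);         // first 5 rows, with truncation
        emp_ds.show(false);     // first 20 rows, no truncation
        emp_ds.show(5, false);  // first 5 rows, no truncation

        spark.stop();
    }
}
```

Combining the row count and the truncation flag, as in show(5, false), is handy when inspecting a few rows of wide, text-heavy datasets.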