Spark SQL provides a convenient way to explore data and gain a deeper understanding of it. Spark's DataFrame construct can be registered as a temporary table, and you can then run SQL against these registered tables, performing all of the usual operations such as joining tables and filtering data.
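For example, a DataFrame can be registered as a temporary view and then queried with plain SQL. The following is a minimal sketch, assuming a Spark 2.x or later shell where spark is the SparkSession; the people view name and the sample rows are placeholders:

scala> val df = Seq(("Jon", 30), ("Jane", 25)).toDF("name", "age")   // build a small DataFrame
scala> df.createOrReplaceTempView("people")                          // register it as a temporary view
scala> spark.sql("SELECT name FROM people WHERE age > 26").show()    // query the view with ordinary SQL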
Let's look at an example in the Spark shell to learn how to explore data using the following steps:
- Start the Spark shell in a Terminal as follows:
$ spark-shell
- Define a Scala case class called Person with the following three attributes:
- fname: String
- lname: String
- age: Int
scala> case class Person(fname: String, lname: String, age: Int)
defined class Person
- Create a Scala list consisting of a few persons and put it into a Spark Dataset of Person as follows:
scala> val personsDS = List(Person("Jon", "Doe...
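A complete version of this step might look like the following sketch; the specific names and ages are placeholders, and toDS relies on spark.implicits._, which the Spark shell imports automatically:

scala> val personsDS = List(Person("Jon", "Doe", 21), Person("Jane", "Doe", 25), Person("Sam", "Smith", 33)).toDS()   // convert the Scala list into a Dataset[Person]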