Spark SQL how-to in a nutshell
Prior to Spark 2.0.0, the heart of Spark SQL was SchemaRDD, which, as you can guess, associates a schema with an RDD. Internally, of course, it does a lot of work, leveraging Spark's ability to scale and distribute processing while providing flexible storage.
In many ways, data access via Spark SQL is deceptively simple: you create one or more appropriate RDDs, paying attention to the layout, data types, and so on, and then access them via SchemaRDDs. You get to use all the interesting features of Spark to create the RDDs: structured data from Hive or Parquet, unstructured data from any source, and the ability to apply RDD operations at scale. You then overlay the respective schemas onto the RDDs by creating SchemaRDDs. Voilà! You now have the ability to run SQL over RDDs, and you can see the SchemaRDDs being created in the log entries.
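To make this flow concrete, here is a minimal sketch in the Spark 1.x style the preceding paragraph describes: a plain RDD is built from unstructured text, a schema is overlaid via a case class to obtain a SchemaRDD, and SQL is run over it. The Person case class, the people.txt file, and the query are illustrative assumptions, not examples from this book.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Case class whose fields define the schema to overlay on the RDD
case class Person(name: String, age: Int)

object SchemaRDDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SchemaRDDSketch"))
    val sqlContext = new SQLContext(sc)
    // Implicit conversion from RDD[Person] to SchemaRDD (pre-Spark 2.0 API)
    import sqlContext.createSchemaRDD

    // Create a plain RDD from unstructured text (people.txt is a placeholder),
    // paying attention to layout and data types
    val people = sc.textFile("people.txt")
      .map(_.split(","))
      .map(p => Person(p(0), p(1).trim.toInt))

    // Overlay the schema and register the SchemaRDD as a table for SQL
    people.registerTempTable("people")

    // Run SQL over the RDD; the result is itself a SchemaRDD
    val teenagers = sqlContext.sql(
      "SELECT name FROM people WHERE age >= 13 AND age <= 19")
    teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
  }
}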
Spark SQL with Spark 2.0
The preceding section was true until Spark 2.0 (actually Datasets have been...