Now that we have produced our final Spark data frame, we can write it to disk. Then, from the next chapter onwards, we will read it back into the workspace rather than having to recreate it from scratch. If you are proceeding directly to the next chapter, you can skip this step for now:
- We will save the data frame in the Parquet file format, a columnar format that is very efficient for Spark SQL. The %fs (file system) directive lets you issue file system commands from a notebook cell; %fs ls produces a directory listing, much like the ls operating system command.
- Once the file is saved, you can validate its integrity by reading it back in and assigning it to the out_sd data frame (again).
- Use the head command to verify that the data was read back in (a minimal sketch of this check follows the code below):
saveAsParquetFile(out_sd, "/tmp/temp.parquet")
%fs ls
out_sd <- parquetFile(sqlContext, "/tmp/temp.parquet")
...
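As a quick sanity check, the verification step might look like the following minimal sketch. It assumes out_sd is the SparkR DataFrame read back above; head() collects the first few rows of a SparkR DataFrame to the driver as a local R data.frame:

# Re-read the Parquet file and inspect the first rows of the result
out_sd <- parquetFile(sqlContext, "/tmp/temp.parquet")
head(out_sd)  # returns the first six rows as a local R data.frame

If the columns and values match what you wrote out, the save-and-reload round trip was successful.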
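Note that saveAsParquetFile() and parquetFile() were deprecated in later SparkR releases. If you are running Spark 2.x or later, the equivalent calls are write.parquet() and read.parquet(); a sketch under that assumption, using the same path:

write.parquet(out_sd, "/tmp/temp.parquet")    # replaces saveAsParquetFile()
out_sd <- read.parquet("/tmp/temp.parquet")   # replaces parquetFile()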