Techniques for visualizing data using PySpark
Apache Spark is a unified data processing engine and doesn't ship with a graphical user interface out of the box. As discussed in the previous sections, data that's been processed by Apache Spark can be stored in data warehouses and visualized using BI tools, or visualized natively using notebooks. In this section, we will focus on how to leverage notebooks to interactively process and visualize data using PySpark. As we have done throughout this book, we will be making use of the notebooks that come with Databricks Community Edition, though Jupyter and Zeppelin notebooks can also be used.
PySpark native data visualizations
There aren't any data visualization libraries that work natively with PySpark DataFrames. However, the notebook implementations of cloud-based Spark distributions such as Databricks and Qubole support natively visualizing Spark DataFrames using the built-in display() function. Let's see how this works with an example.