Summary
In this chapter, we learned how to import data from various sources into a Spark environment as a Spark DataFrame. We also learned how to carry out various SQL operations on that DataFrame, and how to compute various statistical measures, such as correlations, explore the distribution of the data, and build a feature importance model. Finally, we looked at how to generate effective graphs using Plotly in offline mode, producing the plots needed for an analysis report.
This book has hopefully offered a stimulating journey through big data. We started with Python and covered several libraries from the Python data science stack, NumPy and Pandas, and we also looked at how to use Jupyter Notebooks. We then saw how to create informative data visualizations, with some guiding principles on what makes a good graph, and used Matplotlib and Seaborn to produce the figures. Then we made a start with the big data tools, Hadoop and Spark, thereby...