Summary
After reviewing what big data is, we learned about some of the tools designed to store and process very large volumes of data. Hadoop is an entire ecosystem of frameworks and tools, including HDFS, which stores data in a distributed fashion across a large number of commodity computing nodes, and YARN, a resource and job manager. We saw how to manipulate data directly on HDFS using the HDFS fs commands.
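As a quick refresher, here is a minimal sketch of driving those fs commands from Python; it assumes a configured Hadoop client with the hdfs binary on the PATH, and the /user/data directory and file names are hypothetical. The same operations can, of course, be typed directly at the shell.

```python
import subprocess

def hdfs(*args):
    # Run an HDFS fs command and return its output as text.
    # Assumes the `hdfs` binary is available on a configured Hadoop client.
    result = subprocess.run(["hdfs", "dfs", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Hypothetical paths, for illustration only.
hdfs("-mkdir", "-p", "/user/data")             # create a directory
hdfs("-put", "local_file.csv", "/user/data")   # copy a local file into HDFS
print(hdfs("-ls", "/user/data"))               # list its contents
```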
We also learned about Spark, a very powerful and flexible parallel processing framework that integrates well with Hadoop. Spark offers several APIs, such as Spark SQL, GraphX, and Spark Streaming. We learned how Spark represents data in its DataFrame API, whose operations are similar to pandas methods. We also saw how to store data efficiently using the Parquet file format, and how to improve performance by partitioning the data before analyzing it. To finish up, we saw how to handle unstructured data files, such as plain text.
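To tie those pieces together, the following is a minimal PySpark sketch, under the assumption of a local Spark installation; the file names (sales.csv, logs.txt) and the amount and year columns are hypothetical. It reads a CSV into a DataFrame, applies pandas-like operations, writes the data as Parquet partitioned by a column, and reads an unstructured text file.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("summary-example").getOrCreate()

# Read a (hypothetical) CSV file into a DataFrame, inferring the schema.
df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# pandas-like operations: filter, group, and aggregate.
df.filter(df["amount"] > 0).groupBy("year").count().show()

# Store the data efficiently as Parquet, partitioned by the `year` column,
# so that queries filtering on year only scan the relevant directories.
df.write.partitionBy("year").parquet("sales_parquet")

# Unstructured data: each line of the text file becomes a row in a `value` column.
lines = spark.read.text("logs.txt")
lines.show(5, truncate=False)
```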
In the next chapter, we will go more deeply...