Summary
This chapter covered Dataproc, the GCP component that allows you to build a data lake. As we've seen, learning about Dataproc means learning about Hadoop. We learned about and practiced with the core and most popular Hadoop components: HDFS and Spark.
By combining Hadoop with the benefits of the cloud, we also learned about new concepts. The ephemeral Hadoop cluster is relatively new and is only possible because of cloud technology: compute is provisioned for a job and deleted when the job finishes, so you pay nothing for idle capacity. In a traditional on-premises Hadoop cluster, this highly efficient pattern is never an option.
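As a minimal sketch of the ephemeral pattern, the snippet below uses the google-cloud-dataproc Python client to create a cluster that Dataproc deletes automatically after 30 minutes of idleness. The project ID, region, and cluster name are placeholder assumptions, not values from this chapter.

```python
from google.cloud import dataproc_v1

# Hypothetical values for illustration only.
project_id = "my-project"
region = "us-central1"

# The Dataproc API endpoint is regional.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "ephemeral-cluster",
    "config": {
        # Auto-delete the cluster after 30 minutes with no running jobs,
        # which is what makes the cluster "ephemeral".
        "lifecycle_config": {"idle_delete_ttl": {"seconds": 1800}},
    },
}

operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
operation.result()  # Block until the cluster is provisioned.
```

The same behavior is available from the command line via the `--max-idle` flag on `gcloud dataproc clusters create`.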
In this chapter, we focused on the core concepts of Spark. We learned about using RDDs and Spark DataFrames; a short sketch contrasting the two follows below. These two concepts are the entry point before learning about other features such as Spark ML and Spark Streaming. As you get more experienced, you will need to start thinking about optimization – for example, how to manage parallelism, how to speed up joins, and how to...
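To recap the distinction in code, here is a minimal PySpark sketch (the data and column names are illustrative, not from the chapter) that performs the same aggregation first with the low-level RDD API and then with the schema-aware DataFrame API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("chapter-recap").getOrCreate()
sc = spark.sparkContext

events = [("logs", 3), ("clicks", 5), ("logs", 7)]

# RDD API: low-level, untyped transformations on Python objects.
rdd_totals = sc.parallelize(events).reduceByKey(lambda a, b: a + b)
print(rdd_totals.collect())  # [('logs', 10), ('clicks', 5)]

# DataFrame API: columns have names and types, so Spark's Catalyst
# optimizer can plan the aggregation for you.
df = spark.createDataFrame(events, schema=["source", "count"])
df.groupBy("source").sum("count").show()

spark.stop()
```

In practice, the DataFrame API is usually preferred because the optimizer handles much of the tuning; dropping down to RDDs makes sense when you need fine-grained control over how records are processed.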