Exercise – Building a data lake on a Dataproc cluster
In this exercise, we will use Dataproc to store and process log data. Log data is a good example of unstructured data, and organizations often need to analyze it to understand their users' behavior.
In this exercise, we will learn how to use HDFS and PySpark through several different methods. We will start in Cloud Shell to gain a basic understanding of the technologies, and in the later sections we will use the Cloud Shell Code Editor and submit jobs to Dataproc. But first, let's create our Dataproc cluster.
Creating a Dataproc cluster on GCP
To create a Dataproc cluster, open the navigation menu and find Dataproc. Click the CREATE CLUSTER button, which leads to the Create a cluster page:
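As an alternative to clicking through the console, a cluster can also be created from Cloud Shell with the gcloud CLI. A minimal sketch is shown below; the cluster name, region, and machine sizes are placeholder choices, not values prescribed by this exercise:

```shell
# Create a small Dataproc cluster for experimentation.
# "my-dataproc-cluster" and "us-central1" are illustrative values;
# adjust the region and machine types to match your project and budget.
gcloud dataproc clusters create my-dataproc-cluster \
    --region=us-central1 \
    --master-machine-type=n1-standard-2 \
    --worker-machine-type=n1-standard-2 \
    --num-workers=2

# Verify the cluster is up and running.
gcloud dataproc clusters list --region=us-central1
```

When you are done with the exercise, remember to delete the cluster (`gcloud dataproc clusters delete my-dataproc-cluster --region=us-central1`) to avoid unnecessary charges.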
There are many configurations in Dataproc. We don't need to set everything. Most of them are optional. For...