You can create a Spark cluster from the Google Cloud Platform Console. Select your project, and then click Continue to open the Clusters page. If you have already created any Cloud Dataproc clusters, they are listed on this page.
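If you prefer to work programmatically, the same cluster list can be fetched with the google-cloud-dataproc Python client. The following is a minimal sketch, assuming a hypothetical project ID (my-project) and region (us-central1); substitute your own values:

```python
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder: your GCP project ID
region = "us-central1"     # placeholder: your Dataproc region

# The client must point at the regional endpoint it queries.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

for cluster in client.list_clusters(
    request={"project_id": project_id, "region": region}
):
    print(cluster.cluster_name, cluster.status.state.name)
```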
Click on the Create a cluster button to open the Create a Cloud Dataproc cluster page. Refer to the following screenshot:
Once you click on Create a cluster, a detailed form appears, as shown in the following screenshot:
The previous screenshot shows the Create a Cloud Dataproc cluster page with the default fields automatically filled in for a new cluster named cluster-1. Take a look at the following screenshot:
You can expand the workers, bucket, network, version, initialization, and access options panel to specify one or more worker nodes, a staging bucket, a network, the Cloud Dataproc image version, initialization actions, and project-level access for your cluster. Providing these values is optional.
The default cluster is created with no worker nodes, an auto-created staging bucket, and a default network. It also uses the latest released Cloud Dataproc image version. You can change these default settings.
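These optional settings map onto fields of the cluster definition accepted by the Dataproc API. The following sketch shows where each one lives, using the Python client's dictionary form; the project ID, cluster name, bucket names, machine types, image version, and initialization script path are all illustrative placeholders:

```python
# Placeholder values throughout; substitute your own names.
cluster = {
    "project_id": "my-project",
    "cluster_name": "cluster-1",
    "config": {
        "config_bucket": "my-staging-bucket",              # staging bucket
        "gce_cluster_config": {"network_uri": "default"},  # network
        "master_config": {
            "num_instances": 1,
            "machine_type_uri": "n1-standard-4",
        },
        "worker_config": {                                 # worker nodes
            "num_instances": 2,
            "machine_type_uri": "n1-standard-4",
        },
        "software_config": {"image_version": "2.1"},       # image version
        "initialization_actions": [
            # Script run on each node as it starts; the path is a placeholder.
            {"executable_file": "gs://my-bucket/my-init-action.sh"}
        ],
    },
}
```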
Once you have configured all the fields on the page, click on the Create button to create the cluster. The new cluster's name appears on the Clusters page, and its status is updated to Running once the Spark cluster is ready.
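The same create step can be performed through the API; it returns a long-running operation that completes once the cluster has been provisioned. A minimal sketch, again with placeholder project and region values:

```python
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder
region = "us-central1"     # placeholder

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# An empty config asks Dataproc for its defaults; reuse the dictionary
# from the previous sketch to apply custom settings instead.
cluster = {"project_id": project_id, "cluster_name": "cluster-1", "config": {}}

# create_cluster returns a long-running operation; result() blocks until
# the cluster is provisioned (the Running status in the console).
operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
created = operation.result()
print(f"Cluster {created.cluster_name} is {created.status.state.name}")
```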
Click on the name of the cluster you created earlier to open the cluster details page. The Overview tab is selected by default and shows the CPU utilization graph.
You can examine the cluster's jobs, VM instances, and configuration from the other tabs.
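The information on these tabs is also available through the API. As a rough sketch, with the same placeholder project and region, the cluster details and the job list can be fetched as follows:

```python
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder
region = "us-central1"     # placeholder
endpoint = {"api_endpoint": f"{region}-dataproc.googleapis.com:443"}

# Cluster details, roughly what the Overview tab displays.
cluster_client = dataproc_v1.ClusterControllerClient(client_options=endpoint)
info = cluster_client.get_cluster(
    request={
        "project_id": project_id,
        "region": region,
        "cluster_name": "cluster-1",
    }
)
print(info.status.state.name, info.config.worker_config.num_instances)

# Jobs in the project and region, roughly what the Jobs tab displays.
job_client = dataproc_v1.JobControllerClient(client_options=endpoint)
for job in job_client.list_jobs(
    request={"project_id": project_id, "region": region}
):
    print(job.reference.job_id, job.status.state.name)
```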