A Spark cluster is made up of two types of processes: a driver program and multiple executors. In local mode, all of these processes run within the same JVM. In a cluster, they usually run on separate nodes.
For example, a typical cluster that runs in Spark's standalone mode (that is, using Spark's built-in cluster management modules) will have the following:
- A master node that runs the Spark standalone master process as well as the driver program
- A number of worker nodes, each running an executor process
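To make the distinction concrete, here is a minimal sketch of a driver program running in local mode (the object name, application name, and the toy counting job are purely illustrative, not part of any Spark API). With the local[*] master URL, the driver and the executor threads all share this single JVM, using as many worker threads as there are available cores:

import org.apache.spark.{SparkConf, SparkContext}

object LocalModeSketch {
  def main(args: Array[String]): Unit = {
    // "local[*]" runs the driver and the executor threads inside this one JVM,
    // with as many worker threads as there are available cores
    val conf = new SparkConf()
      .setAppName("LocalModeSketch")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)

    // A trivial job to confirm the context is working
    val count = sc.parallelize(1 to 1000).count()
    println(s"Counted $count elements; default parallelism = ${sc.defaultParallelism}")

    sc.stop()
  }
}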
While we will be using Spark's local standalone mode throughout this book to illustrate concepts and examples, the same Spark code that we write can run unchanged on a Spark cluster. For example, to run the preceding example on a Spark standalone cluster, we simply pass in the URL of the master node, as follows:
$ ./bin/spark-submit --master spark://IP:PORT \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.11-2.0.0.jar 100
Here, IP is the IP address and PORT is the port on which the Spark master is listening. Passing this URL as the --master argument tells Spark to run the program on the cluster managed by that Spark master process.
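This portability works because the master URL does not have to be hard-coded in the application itself. The following sketch (the object name, application name, and the toy sum job are illustrative assumptions) leaves the master unset so that it can be supplied at submit time with the --master argument, exactly as in the preceding command:

import org.apache.spark.{SparkConf, SparkContext}

object PortableApp {
  def main(args: Array[String]): Unit = {
    // No setMaster() call here: the master URL (local[*], spark://IP:PORT, and so on)
    // is supplied on the command line via spark-submit --master
    val conf = new SparkConf().setAppName("PortableApp")
    val sc = new SparkContext(conf)

    val sum = sc.parallelize(1 to 100).sum()
    println(s"Sum of 1 to 100 = $sum")

    sc.stop()
  }
}

The same application JAR could then be submitted with --master local[*] for local testing or with --master spark://IP:PORT to run it on the standalone cluster, without changing the code.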
A full treatment of Spark's cluster management and deployment is beyond the scope of this book. However, we will briefly teach you how to set up and use an Amazon EC2 cluster later in this chapter.
For an overview of Spark cluster deployment and submitting applications to a cluster, take a look at the following links: