Deploying on a cluster in standalone mode
Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark ships with its own cluster manager, conveniently called standalone mode. Spark also supports working with the YARN and Mesos cluster managers.
The choice of cluster manager is mostly driven by legacy concerns and by whether other frameworks, such as MapReduce, share the same compute resource pool. If your cluster has legacy MapReduce jobs running, and not all of them can be converted to Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system that conveniently manages jobs across frameworks, and it works very well with Spark.
If the Spark framework is the only framework in your cluster, then standalone mode is good enough. As Spark evolves as a technology, you will see more and more use cases of Spark being used as the standalone framework serving all big data compute needs. For example, some jobs may be using Apache Mahout at present because MLlib does not yet have a specific machine-learning algorithm that the job needs. As soon as MLlib gets that algorithm, this particular job can be moved to Spark.
Getting ready
Let's consider a cluster of six nodes as an example setup: one master and five slaves (replace them with the actual node names in your cluster):

Master: m1.zettabytes.com
Slaves: s1.zettabytes.com, s2.zettabytes.com, s3.zettabytes.com, s4.zettabytes.com, s5.zettabytes.com
How to do it...
- Since Spark's standalone mode is the default, all you need to do is have the Spark binaries installed on both the master and slave machines. Put /opt/infoobjects/spark/sbin in the path on every node:

$ echo 'export PATH=$PATH:/opt/infoobjects/spark/sbin' >> /home/hduser/.bashrc

Note the single quotes: they prevent $PATH from being expanded when the line is written, so the variable is resolved at login instead.
- Start the standalone master server (SSH to master first):
hduser@m1.zettabytes.com~] start-master.sh
The master, by default, starts on port 7077, which slaves use to connect to it. It also has a web UI at port 8080.
- SSH to each slave node and start the worker, pointing it at the master:
hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker spark://m1.zettabytes.com:7077
For fine-grained configuration, the following parameters work with both the master and slaves:

Argument                           Meaning
-i <ipaddress>, --ip <ipaddress>   IP address/DNS name the service listens on
-p <port>, --port <port>           Port the service listens on
--webui-port <port>                Port for the web UI (by default, 8080 for the master and 8081 for a worker)
-c <cores>, --cores <cores>        Total CPU cores on the machine that Spark applications can use (worker only)
-m <memory>, --memory <memory>     Total RAM on the machine that Spark applications can use (worker only)
-d <dir>, --work-dir <dir>         The directory to use for scratch space and job output logs
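As a quick illustration of these flags, the following sketch assembles a worker launch command with explicit resource limits. The flag values and web UI port are hypothetical, and running the command for real requires a live master at spark://m1.zettabytes.com:7077, so the sketch only builds and echoes the command for inspection:

```shell
# Sketch: a worker launch command with explicit resource limits.
# The values below are illustrative, not recommendations.
WORKER_CMD="spark-class org.apache.spark.deploy.worker.Worker \
  --cores 8 \
  --memory 16g \
  --webui-port 8082 \
  spark://m1.zettabytes.com:7077"
echo "$WORKER_CMD"
```

On a slave node, you would run the assembled command directly instead of echoing it.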
- Rather than manually starting and stopping the master and slave daemons on each node, you can also accomplish this using the cluster launch scripts.
- First, create the conf/slaves file on the master node and add one line per slave hostname (this example uses five slave nodes; replace them with the DNS names of the slave nodes in your cluster):

hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves
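Since the slave hostnames in this example follow a pattern, the five echo commands can also be written as a loop. A minimal sketch (it writes to a temporary conf directory, a hypothetical path chosen so the sketch can be tried anywhere; on the master you would point SPARK_CONF at Spark's real conf/ directory):

```shell
# Sketch: generate conf/slaves in a loop (hostnames assumed to follow
# the sN.zettabytes.com pattern from the example cluster).
SPARK_CONF=${SPARK_CONF:-/tmp/demo-conf}   # hypothetical path for the sketch
mkdir -p "$SPARK_CONF"
: > "$SPARK_CONF/slaves"                   # start with an empty file
for i in 1 2 3 4 5; do
  echo "s${i}.zettabytes.com" >> "$SPARK_CONF/slaves"
done
cat "$SPARK_CONF/slaves"
```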
Once the slaves file is set up, you can call the following scripts to start or stop the cluster:
Script name       Purpose
start-master.sh   Starts a master instance on the host machine
start-slaves.sh   Starts a slave instance on each node listed in the slaves file
start-all.sh      Starts both the master and the slaves
stop-master.sh    Stops the master instance on the host machine
stop-slaves.sh    Stops the slave instances on all nodes listed in the slaves file
stop-all.sh       Stops both the master and the slaves
- Connect an application to the cluster through the Scala code:
val sparkContext = new SparkContext(new SparkConf().setMaster("spark://m1.zettabytes.com:7077"))
- Connect to the cluster through Spark shell:
$ spark-shell --master spark://m1.zettabytes.com:7077
How it works...
In standalone mode, Spark follows a master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called the Spark master and runs on one master node. The Spark master can be made highly available using ZooKeeper, and you can also add more standby masters on the fly, if needed.
The compute slave daemon is called the worker and runs on each slave node. The worker daemon does the following:
- Reports the availability of compute resources on a slave node, such as the number of cores, memory, and others, to Spark master
- Spawns the executor when asked to do so by Spark master
- Restarts the executor if it dies
There is, at most, one executor per application per slave machine.
Both the Spark master and worker daemons are very lightweight. Typically, a memory allocation between 500 MB and 1 GB is sufficient. This value can be set in conf/spark-env.sh via the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 gigabyte for both the master and worker daemons. Make sure you run it with sudo (superuser) privileges:

$ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh
By default, each slave node has one worker instance running on it. Sometimes, you may have a few machines that are more powerful than others. In that case, you can spawn more than one worker on such a machine with the following configuration (only on those machines):
$ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh
Spark worker, by default, uses all cores on the slave machine for its executors. If you would like to limit the number of cores the worker can use, you can set that limit (for example, 12) with the following configuration:
$ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh
Spark worker, by default, offers all the available RAM on the machine to executors (leaving 1 GB aside). Note that you cannot specify from the worker how much memory each individual executor will use (you control this from the driver configuration). To assign another value for the total memory (for example, 24 GB) to be used by all executors combined, execute the following setting:
$ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh
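Putting the preceding worker settings together, a consolidated spark-env.sh for one powerful slave node might look like the following sketch. The values are illustrative, and the file is written to a /tmp path here so the sketch is self-contained; on a real node you would append to /opt/infoobjects/spark/conf/spark-env.sh instead:

```shell
# Sketch: a consolidated spark-env.sh for one powerful slave node.
# Written to /tmp for illustration; the real path is conf/spark-env.sh.
SPARK_ENV=/tmp/demo-spark-env.sh
cat > "$SPARK_ENV" <<'EOF'
export SPARK_DAEMON_MEMORY=1g      # RAM for the worker daemon itself
export SPARK_WORKER_INSTANCES=2    # two workers on this machine
export SPARK_WORKER_CORES=12       # cores each worker offers to executors
export SPARK_WORKER_MEMORY=24g     # total RAM executors of a worker may use
EOF
cat "$SPARK_ENV"
```

Note that when SPARK_WORKER_INSTANCES is greater than one, SPARK_WORKER_CORES and SPARK_WORKER_MEMORY apply to each worker instance, so size them accordingly.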
There are also some settings you can configure at the driver level:
- To specify the maximum number of CPU cores to be used by a given application across the cluster, you can set the spark.cores.max configuration in Spark submit or Spark shell as follows:

$ spark-submit --conf spark.cores.max=12
- To specify the amount of memory each executor should be allocated (the minimum recommendation is 8 GB), you can set the spark.executor.memory configuration in Spark submit or Spark shell as follows:

$ spark-submit --conf spark.executor.memory=8g
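The two driver-level settings are commonly passed together when submitting an application. The following sketch assembles such a submit command (SimpleApp.jar is a hypothetical application jar; the command is only built and echoed here, since actually running it requires a live cluster):

```shell
# Sketch: capping cluster-wide cores and per-executor memory at submit time.
# SimpleApp.jar is a hypothetical application jar.
SUBMIT_CMD="spark-submit \
  --master spark://m1.zettabytes.com:7077 \
  --conf spark.cores.max=12 \
  --conf spark.executor.memory=8g \
  SimpleApp.jar"
echo "$SUBMIT_CMD"
```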
The following diagram depicts the high-level architecture of a Spark cluster:
See also
- http://spark.apache.org/docs/latest/spark-standalone.html to find more configuration options