Apache Spark 2.x Cookbook
Over 70 cloud-ready recipes for distributed Big Data processing and analytics

Author: Rishi Yadav
Product type: Paperback
Published: May 2017
ISBN-13: 9781787127265
Length: 294 pages
Edition: 1st Edition

Table of Contents (13 chapters)
Preface
1. Getting Started with Apache Spark
2. Developing Applications with Spark
3. Spark SQL
4. Working with External Data Sources
5. Spark Streaming
6. Getting Started with Machine Learning
7. Supervised Learning with MLlib - Regression
8. Supervised Learning with MLlib - Classification
9. Unsupervised Learning
10. Recommendations Using Collaborative Filtering
11. Graph Processing Using GraphX and GraphFrames
12. Optimizations and Performance Tuning

Deploying Spark on a cluster in standalone mode

Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark comes with its own cluster manager, which is conveniently called standalone mode. Spark also supports working with YARN and Mesos cluster managers.

The cluster manager you choose should be driven mostly by legacy concerns and by whether other frameworks, such as MapReduce, share the same pool of compute resources. If your cluster has legacy MapReduce jobs running and not all of them can be converted into Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system that conveniently manages jobs across frameworks, and it works very well with Spark.

If Spark is the only framework in your cluster, then standalone mode is good enough. As Spark evolves as a technology, you will see more and more use cases of Spark being used as a standalone framework, serving all of your big data compute needs. For example, some jobs may use Apache Mahout at present because MLlib does not yet have the specific machine learning algorithm that those jobs need. As soon as MLlib gets that algorithm, such jobs can be moved to Spark.
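In practice, the choice of cluster manager shows up in the master URL that an application is submitted with. A minimal sketch, using the hostnames introduced later in this recipe; the application JAR is a placeholder, and 5050 is just the usual Mesos default:

    # Standalone cluster manager (Spark's own)
    $ spark-submit --master spark://m1.zettabytes.com:7077 myapp.jar
    # YARN; the ResourceManager is located via the Hadoop configuration on the client
    $ spark-submit --master yarn myapp.jar
    # Mesos (hostname is illustrative)
    $ spark-submit --master mesos://mesos1.zettabytes.com:5050 myapp.jar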

Getting ready

Let's consider a cluster of six nodes as an example setup: one master and five slaves (replace these with the actual node names in your cluster):

    Master:
        m1.zettabytes.com
    Slaves:
        s1.zettabytes.com
        s2.zettabytes.com
        s3.zettabytes.com
        s4.zettabytes.com
        s5.zettabytes.com
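The cluster launch scripts used later in this recipe (start-slaves.sh, start-all.sh, and their stop counterparts) log in to every slave over SSH, so the hduser account on the master should have passwordless SSH access to each slave. A minimal sketch of setting that up; the RSA key type and empty passphrase are assumptions, not requirements of the recipe:

    # On the master, generate a key pair for hduser if one does not exist yet
    hduser@m1.zettabytes.com~] ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # Copy the public key to every slave so the launch scripts can connect without a password prompt
    hduser@m1.zettabytes.com~] for host in s1 s2 s3 s4 s5; do ssh-copy-id hduser@$host.zettabytes.com; done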

How to do it...

  1. Since Spark's standalone mode is the default, all you need to do is have Spark binaries installed on both master and slave machines. Put /opt/infoobjects/spark/sbin in the path on every node:
        $ echo "export PATH=$PATH:/opt/infoobjects/spark/sbin" >> /home/hduser/.bashrc
  2. Start the standalone master server (SSH to the master first):
        hduser@m1.zettabytes.com~] start-master.sh
The master, by default, starts on port 7077, which the slaves use to connect to it. It also has a web UI, at port 8080.
  3. Connect to each slave node using a Secure Shell (SSH) connection and start the worker, pointing it at the master:
        hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker spark://m1.zettabytes.com:7077
Argument                                        Meaning
-h <ip/host>, --host <ip/host>                  IP address or hostname to listen on
-p <port>, --port <port>                        Port for the service to listen on
--webui-port <port>                             Port for the web UI (by default, 8080 for the master and 8081 for the worker)
-c <cores>, --cores <cores>                     Total CPU cores that Spark applications can use on the machine (worker only)
-m <memory>, --memory <memory>                  Total RAM that Spark applications can use on the machine (worker only)
-d <dir>, --work-dir <dir>                      Directory to use for scratch space and job output logs

For fine-grained configuration, the preceding parameters work with both the master and the slaves (a worker started with explicit resource limits is shown in the example after the steps). Rather than manually starting the master and slave daemons on each node, you can also use Spark's cluster launch scripts, described next. Automating cluster provisioning beyond that is outside the scope of this book; please refer to books on Chef or Puppet.

  4. First, create the conf/slaves file on the master node and add one line per slave hostname (this example uses five slave nodes; replace the following DNS names with those of the slave nodes in your cluster):
        hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
        hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
        hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
        hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
        hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves

Once the slaves file is set up, you can call the following scripts to start or stop the cluster:

Script name        Purpose
start-master.sh    Starts a master instance on the host machine
start-slaves.sh    Starts a slave instance on each node listed in the slaves file
start-all.sh       Starts both the master and the slaves
stop-master.sh     Stops the master instance on the host machine
stop-slaves.sh     Stops the slave instances on all the nodes listed in the slaves file
stop-all.sh        Stops both the master and the slaves
  5. Connect an application to the cluster through Scala code:
        // an application name is required before a SparkContext can start; "CookbookApp" is just a placeholder
        val sparkContext = new SparkContext(new SparkConf().setAppName("CookbookApp").setMaster("spark://m1.zettabytes.com:7077"))
  6. Connect to the cluster through the Spark shell:
        $ spark-shell --master spark://m1.zettabytes.com:7077
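The worker arguments listed in step 3 can be combined when you need fine-grained control over a single machine. A quick sketch, where the specific limits (4 cores, 8 GB, a scratch directory under /tmp) are arbitrary examples rather than recommendations, followed by a check with the JDK's jps tool that the daemons are actually up:

    # Start a worker with explicit resource limits instead of the defaults
    hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker --cores 4 --memory 8g --work-dir /tmp/spark-work spark://m1.zettabytes.com:7077
    # jps lists running JVMs; the master node should show a Master process and each slave a Worker process
    hduser@m1.zettabytes.com~] jps
    hduser@s1.zettabytes.com~] jps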

How it works...

In standalone mode, Spark follows the master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called the Spark master and runs on one master node. The Spark master can be made highly available using ZooKeeper, and you can also add more standby masters on the fly if needed.
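Spark exposes the standalone master's recovery settings through system properties passed in SPARK_DAEMON_JAVA_OPTS. A minimal sketch of the ZooKeeper-based setup, assuming a ZooKeeper ensemble reachable at zk1.zettabytes.com:2181 (the ZooKeeper host and the /spark znode path are assumptions):

    # Add recovery settings on every node that may run a master, then restart the masters
    $ echo 'export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1.zettabytes.com:2181 -Dspark.deploy.zookeeper.dir=/spark"' >> /opt/infoobjects/spark/conf/spark-env.sh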

The compute slave daemon is called a worker, and it exists on each slave node. The worker daemon does the following:

  • Reports the availability of compute resources on the slave node, such as the number of cores and the amount of memory, to the Spark master
  • Spawns the executor when asked to do so by the Spark master
  • Restarts the executor if it dies

There is, at most, one executor per application, per slave machine.

Both the Spark master and the worker are very lightweight. Typically, a memory allocation of 500 MB to 1 GB is sufficient. This value can be set in conf/spark-env.sh via the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 GB for both the master and the worker daemons. Make sure you run it with superuser (sudo) privileges:

    $ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh

By default, each slave node has one worker instance running on it. Sometimes, you may have a few machines that are more powerful than others. In that case, you can spawn more than one worker on that machine with the following configuration (only on those machines):

    $ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh

The Spark worker, by default, uses all the cores on the slave machine for its executors. If you would like to limit the number of cores the worker could use, you can set it to the number of your choice (for example, 12), using the following configuration:

    $ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh

The Spark worker, by default, uses all of the available RAM on the machine (minus 1 GB) for its executors. Note that this setting does not control how much memory each individual executor uses (that is controlled from the driver configuration). To assign a different value for the total memory (for example, 24 GB) to be used by all the executors combined, execute the following setting:

    $ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh

There are some settings you can configure at the driver level:

  • To specify the maximum number of CPU cores to be used by a given application across the cluster, you can set the spark.cores.max configuration in spark-submit or the Spark shell as follows:
        $ spark-submit --conf spark.cores.max=12
  • To specify the amount of memory that each executor should be allocated (the minimum recommendation is 8 GB), you can set the spark.executor.memory configuration in Spark submit or Spark shell as follows:
        $ spark-submit --conf spark.executor.memory=8g
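Both driver-level settings can be passed together on a single submission; the application class and JAR below are placeholders, not artifacts from this recipe:

    # Cap the application at 12 cores across the cluster and give each executor 8 GB
    $ spark-submit --master spark://m1.zettabytes.com:7077 --conf spark.cores.max=12 --conf spark.executor.memory=8g --class com.example.MyApp myapp.jar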

The following diagram depicts the high-level architecture of a Spark cluster:

See also
