Launching Spark on Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 provides the following features:

  • On-demand delivery of IT resources via the Internet
  • Provisioning of as many instances as you like
  • Payment only for the hours during which you use instances, much like a utility bill
  • No setup cost, no installation, and no overhead at all
  • Shutting down or terminating instances when you no longer need them
  • Availability of instances running familiar operating systems

EC2 provides different instance types to meet your compute needs, such as general-purpose instances, micro instances, memory-optimized instances, storage-optimized instances, and others. Amazon also offers a Free Tier of micro instances for trial purposes.

Getting ready

The spark-ec2 script comes bundled with Spark and makes it easy to launch, manage, and shut down clusters on Amazon EC2.

Before you start, log in to your Amazon AWS account at http://aws.amazon.com and do the following (the sketch after this list shows the same steps consolidated into one shell session):

  1. Click on Security Credentials under your account name in the top-right corner.
  2. Click on Access Keys and then Create New Access Key.
  3. Download the key file (let's save it in the /home/hduser/kp folder as spark-kp1.pem).
  4. Set permissions on the key file to 600.
  5. Set environment variables to reflect the access key ID and secret access key (replace the sample values with your own):
        $ echo "export AWS_ACCESS_KEY_ID=AKIAOD7M2LOWATFXFKQ" >> /home/hduser/.bashrc
        $ echo "export AWS_SECRET_ACCESS_KEY=+Xr4UroVYJxiLiY8DLT4DLT4D4sxc3ijZGMx1D3pfZ2q" >> /home/hduser/.bashrc
  6. Add the spark-ec2 script to the path:
        $ echo "export PATH=$PATH:/opt/infoobjects/spark/ec2" >> /home/hduser/.bashrc

How to do it...

  1. Spark comes bundled with scripts to launch the Spark cluster on Amazon EC2. Let's launch the cluster using the following command:
        $ cd /home/hduser
        $ spark-ec2 -k <key-pair> -i <key-file> -s <num-slaves> launch <cluster-name>
     Here:
      • <key-pair>: name of the EC2 key pair created in AWS
      • <key-file>: the private key file you downloaded
      • <num-slaves>: number of slave nodes to launch
      • <cluster-name>: name of the cluster
  2. Launch the cluster with example values:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 -s 3 launch spark-cluster
  3. Sometimes the default availability zones are not available; in that case, retry the request, specifying the availability zone you want:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem -z us-east-1b --hadoop-major-version 2 -s 3 launch spark-cluster
  4. If your application needs to retain data after the instances shut down, attach an EBS volume to the cluster (for example, 10 GB of space):
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 --ebs-vol-size 10 -s 3 launch spark-cluster
  5. If you use Amazon's spot instances, here is the way to do it:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --spot-price=0.15 --hadoop-major-version 2 -s 3 launch spark-cluster
     Spot instances allow you to name your own price for Amazon EC2's computing capacity. You simply bid for spare Amazon EC2 instances and run them whenever your bid exceeds the current spot price, which varies in real time based on supply and demand (source: www.amazon.com).
  6. After the launch process completes, check the status of the cluster by visiting the web UI URL printed at the end of the output.
  7. Now, to access the Spark cluster on EC2, connect to the master node using secure shell (SSH); this is also shown in the sanity-check sketch after this list:
        $ spark-ec2 -k spark-kp1 -i /home/hduser/kp/spark-kp1.pem login spark-cluster
  8. Check the directories in the master node and see what they do:
      • ephemeral-hdfs: the Hadoop instance whose data is ephemeral and gets deleted when you stop or restart the machine
      • persistent-hdfs: each node has a very small amount of persistent storage (approximately 3 GB); if you use this instance, data is retained in that space
      • hadoop-native: the native libraries that support Hadoop, such as the snappy compression libraries
      • scala: the Scala installation
      • shark: the Shark installation (Shark is no longer supported and has been replaced by Spark SQL)
      • spark: the Spark installation
      • spark-ec2: the files that support this cluster deployment
      • tachyon: the Tachyon installation
  9. Check the HDFS version in the ephemeral instance:
        $ ephemeral-hdfs/bin/hadoop version
        Hadoop 2.0.0-cdh4.2.0
  10. Check the HDFS version in the persistent instance with the following command:
        $ persistent-hdfs/bin/hadoop version
        Hadoop 2.0.0-cdh4.2.0
  11. Change the log level in the configuration. The default log level, INFO, is too verbose, so let's change it to ERROR:
        $ cd spark/conf
      • Create the log4j.properties file by renaming the template:
        $ mv log4j.properties.template log4j.properties
      • Open log4j.properties in vi or your favorite editor:
        $ vi log4j.properties
      • Change the second line from log4j.rootCategory=INFO, console to log4j.rootCategory=ERROR, console.
  12. Copy the configuration to all the slave nodes after the change:
        $ spark-ec2/copy-dir spark/conf
  13. Destroy the Spark cluster when you no longer need it:
        $ spark-ec2 destroy spark-cluster
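
Before you destroy the cluster, a quick way to confirm that everything is wired together is to log in to the master and run one of the example jobs that ships with Spark. This is the sanity-check sketch referenced in step 7; it is a minimal sketch, not part of the original recipe, and it assumes the key pair and master-node directory layout described above:

        $ spark-ec2 -k spark-kp1 -i /home/hduser/kp/spark-kp1.pem login spark-cluster
        # on the master node:
        $ spark/bin/run-example SparkPi 10                # should end with a line like "Pi is roughly 3.14..."
        $ ephemeral-hdfs/bin/hadoop fs -ls /              # confirms that the ephemeral HDFS is reachable
        $ grep rootCategory spark/conf/log4j.properties   # verifies the log level change from step 11
        log4j.rootCategory=ERROR, console
        $ exit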

