Installing Spark from binaries

You can build Spark from the source code, or you can download precompiled binaries from http://spark.apache.org. For standard use cases, the binaries are good enough, and this recipe focuses on installing Spark using them.

Getting ready

At the time of writing, Spark's current version is 2.1. Check Spark's download page at http://spark.apache.org/downloads.html for the latest version. The binaries are built against the most recent stable version of Hadoop. If you need to use a specific version of Hadoop, the recommended approach is to build Spark from the sources, which we will cover in the next recipe.

All the recipes in this book are developed using Ubuntu Linux, but they should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable set.
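
If you need to check or set up the Java prerequisite, the following is a minimal sketch; the OpenJDK package name and the JAVA_HOME path are assumptions for a typical Ubuntu system, so adjust them for your distribution:

        $ java -version                        # verify that Java is installed
        $ sudo apt-get install openjdk-8-jdk   # if it is not, install OpenJDK 8 (assumed package name)
        $ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> ~/.bashrc  # assumed Ubuntu path
        $ source ~/.bashrc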

In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:

Directory Description
/bin Essential command binaries
/etc Host-specific system configuration
/opt Add-on application software packages
/var Variable data
/tmp Temporary files
/home User home directories
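
As a preview of how this recipe applies these conventions, the steps below lay out Spark's pieces as follows:

        /opt/infoobjects/spark   - the Spark binaries (an add-on software package)
        /etc/spark               - Spark's configuration
        /var/log/spark           - Spark's logs
        /tmp/spark               - Spark's worker (scratch) space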

How to do it...

Here are the installation steps:

  1. Open the terminal and download the binaries using the following command:
        $ wget http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz
  2. Unpack the binaries:
        $ tar -zxf spark-2.1.0-bin-hadoop2.7.tgz
  3. Rename the folder containing the binaries by stripping the version information:
        $ sudo mv spark-2.1.0-bin-hadoop2.7 spark
  4. Move the configuration folder to the /etc folder so that it can be turned into a symbolic link later:
        $ sudo mv spark/conf /etc/spark
  5. Create your company-specific installation directory under /opt. As the recipes in this book are tested on the infoobjects sandbox, use infoobjects as the directory name:
        $ sudo mkdir -p /opt/infoobjects
  6. Move the spark directory to /opt/infoobjects, as it's an add-on software package:
        $ sudo mv spark /opt/infoobjects/
  7. Change the permissions of the spark home directory to 0755, that is, user: read-write-execute, group: read-execute, and world: read-execute:
        $ sudo chmod -R 755 /opt/infoobjects/spark
  8. Move to the spark home directory:
        $ cd /opt/infoobjects/spark
  9. Create the symbolic link to the configuration folder:
        $ sudo ln -s /etc/spark conf
  10. Append the Spark binaries path to PATH in .bashrc:
        $ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
  11. Open a new terminal (or run source ~/.bashrc) so that the updated PATH takes effect.
  12. Create the log directory in /var:
        $ sudo mkdir -p /var/log/spark
  13. Make hduser the owner of Spark's log directory:
        $ sudo chown -R hduser:hduser /var/log/spark
  14. Create Spark's tmp directory:
        $ mkdir /tmp/spark
  15. Configure Spark with the help of the following command lines:
        $ cd /etc/spark
        $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
        $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
        $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
        $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
  16. Change the ownership of the spark home directory to root:
        $ sudo chown -R root:root /opt/infoobjects/spark
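
To confirm that the installation works, launch the Spark shell from the new terminal as a minimal smoke test (spark-shell comes from the binaries directory appended to PATH earlier):

        $ spark-shell
        scala> spark.version   # should report 2.1.0
        scala> :quit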