Installing Spark from binaries
Spark can either be built from source code, or precompiled binaries can be downloaded from http://spark.apache.org. For a standard use case, binaries are good enough, and this recipe focuses on installing Spark using binaries.
Getting ready
All the recipes in this book are developed using Ubuntu Linux, but they should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable to be set.
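If you are not sure whether Java is already set up, the following sketch shows one way to verify it and set JAVA_HOME in .bashrc; the JDK path and the hduser home directory used here are only examples, so substitute the values for your own installation:
$ java -version
$ echo "export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64" >> /home/hduser/.bashrc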
In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:
| Directory | Description |
|---|---|
| /bin | Essential command binaries |
| /etc | Host-specific system configuration |
| /opt | Add-on application software packages |
| /var | Variable data |
| /tmp | Temporary files |
| /home | User home directories |
How to do it...
At the time of writing this, Spark's current version is 1.4. Please check the latest version on Spark's download page at http://spark.apache.org/downloads.html. Binaries are built with the most recent and stable version of Hadoop. To use a specific version of Hadoop, the recommended approach is to build from source, which will be covered in the next recipe.
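If you already have a Hadoop installation and want the downloaded binaries to match it, you can check the installed version first (this assumes the hadoop command is on your PATH):
$ hadoop version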
The following are the installation steps:
- Open the terminal and download binaries using the following command:
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.4.tgz
- Unpack binaries:
$ tar -zxf spark-1.4.0-bin-hadoop2.4.tgz
- Rename the folder containing binaries by stripping the version information:
$ sudo mv spark-1.4.0-bin-hadoop2.4 spark
- Move the configuration folder to the /etc folder so that it can be made a symbolic link later:
$ sudo mv spark/conf /etc/spark
- Create your company-specific installation directory under /opt. As the recipes in this book are tested on the infoobjects sandbox, we are going to use infoobjects as the directory name. Create the /opt/infoobjects directory:
$ sudo mkdir -p /opt/infoobjects
- Move the spark directory to /opt/infoobjects as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/
- Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
- Change the permissions of the spark home directory to 0755 (user: read-write-execute, group: read-execute, world: read-execute):
$ sudo chmod -R 755 /opt/infoobjects/spark
- Move to the spark home directory:
$ cd /opt/infoobjects/spark
- Create the symbolic link:
$ sudo ln -s /etc/spark conf
- Append to PATH in .bashrc:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
- Open a new terminal.
- Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
- Make hduser the owner of the Spark log directory:
$ sudo chown -R hduser:hduser /var/log/spark
- Create the Spark tmp directory:
$ mkdir /tmp/spark
- Configure Spark with the help of the following command lines:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
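As a quick sanity check, you can launch the Spark shell from the new terminal once the PATH change is in effect; the startup banner should report the version you installed (1.4.0 here), and you can exit the shell with Ctrl + D:
$ spark-shell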