Installing Apache Spark
As mentioned in the earlier pages, while Spark can be deployed on a cluster, you can also run it in local mode on a single machine.
In this chapter, we are going to download and install Apache Spark on a Linux machine and run it in local mode. Before we do anything, we need to download Spark from the Apache Spark project's download page:
- Use your preferred browser to navigate to http://spark.apache.org/downloads.html.
- Choose a Spark release. You'll find all previous Spark releases listed here. We'll go with release 2.0.0 (at the time of writing, only the preview edition was available).
- You can download the Spark source code, which can be built against several versions of Hadoop, or a package pre-built for a specific Hadoop version. In this case, we are going to download the package that has been pre-built for Hadoop 2.7 or later.
- You can also choose to download directly or from one of a number of mirrors. For the purposes of this exercise, we'll use the direct download and save the file to our preferred location.
Note
If you are using Windows, please remember to use a pathname without any spaces.
- The file that you have downloaded is a compressed TAR archive, which you need to extract (see the command sketch after this list).
Note
The TAR utility is generally used to unpack TAR files. If you don't have TAR, you can install it from your distribution's package repository or use 7-Zip, which is also one of my favorite utilities.
- Once unpacked, you will see a number of directories/files. Here's what you would typically see when you list the contents of the unpacked directory:
The bin folder contains a number of executable shell scripts, such as pyspark, sparkR, spark-shell, spark-sql, and spark-submit. All of these executables are used to interact with Spark, and we will be using most, if not all, of them.
- If you look at my particular download of Spark, you will find a folder called yarn. That is because the example below is a Spark package built for Hadoop version 2.7, which comes with YARN as a cluster manager.
Figure 1.2: Spark folder contents
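To make the download and extraction steps concrete, here is a minimal sketch for a Linux shell. The URL and file name are examples that assume the Spark 2.0.0 package pre-built for Hadoop 2.7; copy the actual link shown on the downloads page for the release you selected:

```bash
# Example URL and file name; copy the real link from http://spark.apache.org/downloads.html
wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz

# Extract the compressed TAR archive into the current directory
tar -xzf spark-2.0.0-bin-hadoop2.7.tgz

# Step into the unpacked directory and list its contents
cd spark-2.0.0-bin-hadoop2.7
ls
```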
We'll start by running the Spark shell, which is a very simple way to get started with Spark and learn the API. The Spark shell is a Scala Read-Evaluate-Print Loop (REPL), one of the REPLs that ship with Spark; similar shells are available for Python and R.
You should change to the Spark download directory and run the Spark shell as follows: ./bin/spark-shell
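For example, assuming the archive was unpacked into your home directory under the example name used above (adjust the path to wherever you extracted it):

```bash
# Change into the unpacked Spark directory (example path)
cd ~/spark-2.0.0-bin-hadoop2.7

# Launch the Scala REPL; bin/pyspark and bin/sparkR start the Python and R shells
./bin/spark-shell
```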
Figure 1.3: Starting Spark shell
We now have Spark running in local mode. We'll discuss the details of the deployment architecture a bit later in this chapter, but for now let's kick-start some basic Spark programming to appreciate the power and simplicity of the Spark framework.