Running your first program using Apache Spark 2.0 with the IntelliJ IDE
The purpose of this recipe is to get you comfortable with compiling and running a program using the Spark 2.0 development environment you just set up. We will explore the components and steps in later chapters.
We are going to write our own version of a Spark 2.0.0 program and examine the output so we can understand how it works. To emphasize, this short recipe is only a simple RDD program with Scala syntactic sugar; its purpose is to make sure you have set up your environment correctly before starting to work with more complicated recipes.
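To give you a feel for the kind of program this recipe builds, here is a minimal sketch of a first Spark 2.0 application. The object name MyFirstSpark20 and the sample values are our own illustration; the actual myFirstSpark20.scala in the book's sample code may differ in detail:

```scala
import org.apache.spark.sql.SparkSession

object MyFirstSpark20 {
  def main(args: Array[String]): Unit = {
    // SparkSession is the single entry point introduced in Spark 2.0
    val spark = SparkSession.builder
      .master("local[*]")            // run locally, using all available cores
      .appName("myFirstSpark20")
      .getOrCreate()

    // Build a small RDD from an in-memory collection
    val numbers = spark.sparkContext.parallelize(Seq(1.0, 2.0, 3.0, 4.0, 5.0))

    // Scala syntactic sugar: underscore placeholders stand in for the operands
    val total = numbers.reduce(_ + _)
    val mean  = total / numbers.count()

    println(s"total = $total, mean = $mean")

    spark.stop()
  }
}
```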
How to do it...
- Start a new project in IntelliJ or in an IDE of your choice. Make sure that the necessary Spark JAR files are included; a sample build definition is sketched after these steps.
- Download the sample code for the book, find the myFirstSpark20.scala file, and place the code in the following directory. We installed Spark 2.0 in the C:\spark-2.0.0-bin-hadoop2.7\ directory on a Windows machine.
- Place the myFirstSpark20.scala file in the C:\spark-2.0.0-bin-hadoop2.7\ directory.
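For the first step, the simplest way to pull in the necessary JAR files is through a build tool rather than adding them by hand. The following build.sbt is a minimal sketch for an sbt-based IntelliJ project; the module list is an assumption covering only what a simple RDD recipe needs, and the Scala version must match the one your Spark binaries were built against (Spark 2.0.0 ships against Scala 2.11):

```scala
// build.sbt -- minimal sketch, not the book's official build file
name := "myFirstSpark20"

version := "1.0"

// Spark 2.0.0 is built against Scala 2.11
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0",
  "org.apache.spark" %% "spark-sql"  % "2.0.0"  // provides SparkSession
)
```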