Apache Spark Deep Learning Cookbook

Integrating Jupyter notebooks with Spark

When learning Python for the first time, it is useful to use Jupyter notebooks as an interactive development environment (IDE). This is one of the main reasons why Anaconda is so powerful: it fully integrates all of the dependencies between Python and Jupyter notebooks. The same can be done with PySpark and Jupyter notebooks. While Spark itself is written in Scala, PySpark allows the code to be written in Python instead.

Getting ready

Most of the work in this section only requires accessing the .bashrc script from the terminal.

How to do it...

PySpark is not configured to work within Jupyter notebooks by default, but a slight tweak of the .bashrc script can remedy this issue. We will walk through these steps in this section:

  1. Access the .bashrc script by executing the following command:
$ nano .bashrc
  2. Scroll to the end of the script; the last modification should be the PATH set by Anaconda during the installation in the previous section. The PATH should appear as follows:
# added by Anaconda3 4.4.0 installer
export PATH="/home/asherif844/anaconda3/bin:$PATH"
  3. Underneath the PATH added by the Anaconda installer, we can add a custom function that connects the Spark installation to the Jupyter notebook installation from Anaconda3. For the purposes of this chapter and the remaining chapters, we will name that function sparknotebook. The configuration for sparknotebook() should appear as follows:
function sparknotebook()
{
export SPARK_HOME=/home/asherif844/spark-2.2.0-bin-hadoop2.7
export PYSPARK_PYTHON=python3
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
$SPARK_HOME/bin/pyspark
}
  4. The updated .bashrc script should look like the following once saved:
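Putting the two additions together, the tail of the saved file should read roughly as follows; the installer line and home directory reflect the example machine used in this recipe, and the comments are added here purely for explanation:
# added by Anaconda3 4.4.0 installer
export PATH="/home/asherif844/anaconda3/bin:$PATH"

# launch PySpark inside a Jupyter notebook
function sparknotebook()
{
export SPARK_HOME=/home/asherif844/spark-2.2.0-bin-hadoop2.7   # local Spark installation
export PYSPARK_PYTHON=python3                                  # run PySpark with Python 3
export PYSPARK_DRIVER_PYTHON=jupyter                           # use Jupyter as the PySpark driver
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"                   # open it in notebook mode
$SPARK_HOME/bin/pyspark
}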
  5. Save and exit the .bashrc file. To make sure the shell picks up the change, execute the following command or restart the terminal application:
$ source .bashrc
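As a quick sanity check (our own addition, not part of the original recipe), the shell's type builtin should now report sparknotebook as a function and print its body:
$ type sparknotebook
sparknotebook is a function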

How it works...

Our goal in this section is to integrate Spark directly into a Jupyter notebook so that we do our development in a notebook rather than at the terminal. This section explains how that integration takes place.

  1. We create a command function, sparknotebook, that we can call from the terminal to open a Spark session through Jupyter notebooks from the Anaconda installation. This requires two settings in the .bashrc file:
    1. PYSPARK_PYTHON set to python3, so PySpark runs on Python 3
    2. PYSPARK_DRIVER_PYTHON set to jupyter (with the notebook option), so the PySpark driver launches a Jupyter notebook
  2. The sparknotebook function can now be accessed directly from the terminal by executing the following command:
$ sparknotebook
  3. The function should then initiate a brand-new Jupyter notebook session through the default web browser. A new Python script with the .ipynb extension can be created within Jupyter notebooks by clicking the New button on the right-hand side and selecting Python 3 under Notebook, as seen in the following screenshot:
  4. Once again, just as was done at the terminal level for Spark, a simple sc script will be executed within the notebook to confirm that Spark is up and running through Jupyter:
  5. Ideally, the Version, Master, and AppName should be identical to the earlier output when sc was executed at the terminal. If this is the case, then PySpark has been successfully installed and configured to work with Jupyter notebooks.
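For reference, a minimal verification cell might look like the following; these are standard attributes of the PySpark SparkContext, and the exact values depend on your Spark build and master setting:
# Run inside a Jupyter notebook launched via sparknotebook;
# the PySpark driver creates the SparkContext `sc` automatically.
print(sc.version)   # Spark version, e.g. 2.2.0 for the build configured above
print(sc.master)    # e.g. local[*] when running on a single machine
print(sc.appName)   # e.g. PySparkShell for a shell/notebook session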

There's more...

It is important to note that if we launch a Jupyter notebook from the terminal without using sparknotebook, a Spark session is never initiated, and we will receive an error when executing any SparkContext script.

We can access a traditional Jupyter notebook by executing the following at the terminal:

$ jupyter-notebook

Once we start the notebook, we can try to execute the same sc.master script as we did previously, but this time we will receive the following error:
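In a plain notebook session no SparkContext is created for us, so the failure is simply an undefined name, roughly like this:
sc.master
# NameError: name 'sc' is not defined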

See also
