Working with CaffeOnSpark
CaffeOnSpark brings deep learning to Hadoop and Spark clusters. By combining salient features from the deep learning framework Caffe with big-data frameworks such as Apache Spark and Apache Hadoop, CaffeOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. As a distributed extension of Caffe, CaffeOnSpark supports neural network model training, testing, and feature extraction.
CaffeOnSpark is packaged as a Spark deep learning library. Its API supports DataFrames, so an application can interface with a training dataset prepared by another Spark application, and can extract predictions from the model, or features from intermediate layers, for further analysis with MLlib or Spark SQL.
Getting ready
To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes (standalone, YARN, or Mesos), with CaffeOnSpark ready to run on the Spark/YARN cluster.
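As a rough sketch of what such a setup is driven toward, a CaffeOnSpark training and feature-extraction job is typically launched with `spark-submit` on YARN. The jar path, solver/net prototxt file names, executor counts, and HDFS locations below are illustrative placeholders, not values from this recipe; adjust them to your own cluster and model:

```shell
# Hypothetical submission of a combined CaffeOnSpark training and
# feature-extraction job on YARN. All paths, prototxt names, and HDFS
# locations are placeholders -- substitute your own.
export CAFFE_ON_SPARK=/path/to/CaffeOnSpark

spark-submit --master yarn --deploy-mode cluster \
  --num-executors 2 \
  --files lenet_memory_solver.prototxt,lenet_memory_train_test.prototxt \
  --class com.yahoo.ml.caffe.CaffeOnSpark \
  ${CAFFE_ON_SPARK}/caffe-grid/target/caffe-grid-0.1-SNAPSHOT-jar-with-dependencies.jar \
  -train \
  -features accuracy,loss \
  -label label \
  -conf lenet_memory_solver.prototxt \
  -devices 1 \
  -connection ethernet \
  -model hdfs:///mnist.model \
  -output hdfs:///mnist_features_result
```

The `-model` and `-output` options direct the trained model and the extracted-feature DataFrame to HDFS, where a downstream Spark application can pick them up.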
How to do it…
A deep learning...