Working with H2O on Spark
Sparkling Water is executed as a regular Spark application. It provides a way to initialize H2O services on each node in the Spark cluster and to access data stored in both Spark and H2O data structures. The Sparkling Water application is launched inside a Spark executor, which is created after the application is submitted. At this point, H2O starts its services, including the distributed key-value (KV) store and the memory manager.
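To make this concrete, the following is a minimal sketch of how such an application typically brings up the H2O services from a Spark session. It assumes a Sparkling Water 2.x-style dependency is on the classpath; the package name and the exact H2OContext.getOrCreate signature differ between Sparkling Water releases.

// A minimal sketch: starting H2O services from inside a Spark application.
// Assumes a Sparkling Water 2.x-style dependency; package and method
// signatures vary between releases.
import org.apache.spark.sql.SparkSession
import org.apache.spark.h2o.H2OContext

object SparklingWaterApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparklingWaterRecipe")
      .getOrCreate()

    // Starting the H2OContext launches an H2O node inside each Spark executor
    // and initializes the distributed key-value (KV) store and memory manager.
    val h2oContext = H2OContext.getOrCreate(spark)

    // Printing the context shows the H2O cluster topology and the Flow UI address.
    println(h2oContext)

    spark.stop()
  }
}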
Getting ready
To step through this recipe, you will need a running Spark cluster in any one of the following modes: local, standalone, YARN, or Mesos. You must also include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Optionally install Hadoop, and install Scala and Java.
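Since the project is built with sbt, the following build.sbt fragment illustrates how the dependencies mentioned above might be declared. The artifact coordinates are the standard Spark and Sparkling Water ones, but the version numbers are placeholders and must be chosen to match your own Spark and Sparkling Water installations.

// build.sbt -- illustrative dependency section only; the versions below are
// placeholders and should match your Spark and Sparkling Water builds.
name := "sparkling-water-recipe"

scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "2.3.0",
  "org.apache.spark" %% "spark-sql"             % "2.3.0",
  "org.apache.spark" %% "spark-mllib"           % "2.3.0",
  "ai.h2o"           %% "sparkling-water-core"  % "2.3.0"
)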
How to do it…
In this recipe, we'll learn how to download and install H2O services in a Spark cluster. We'll also use the H2O API in Spark.
The list of sub-recipes in this section is as follows:
Downloading and installing H2O...