As usual, the first step is to load the data into memory. At this point, we can decide whether to use Spark or H2O data-loading capabilities. Since the data is stored in CSV format, we will use the H2O parser, which gives us quick visual insight into the data:
val DATASET_DIR = sys.env.getOrElse("DATADIR", "data")
val DATASETS = Array("LoanStats3a.CSV", "LoanStats3b.CSV")
import java.net.URI
import water.fvec.H2OFrame
val loanDataHf = new H2OFrame(DATASETS.map(name => URI.create(s"${DATASET_DIR}/${name}")):_*)
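For comparison, the following is a minimal sketch of the Spark-based alternative mentioned above. It assumes a SparkSession is available as spark (the default in spark-shell) and that the CSV files start with a header row:
// Hypothetical sketch: loading the same files with Spark's CSV reader instead of the H2O parser
val loanDataDf = spark.read
  .option("header", "true")      // assumption: the files begin with a header row
  .option("inferSchema", "true") // let Spark guess the column types
  .csv(DATASETS.map(name => s"${DATASET_DIR}/${name}"): _*)
loanDataDf.printSchema()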
The loaded dataset can be explored directly in the H2O Flow UI, where we can verify the number of rows, the number of columns, and the size of the data stored in memory:
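If we prefer to confirm these numbers programmatically from the Scala shell rather than in Flow, a small sketch like the following works against the H2O Frame API (numRows, numCols, and names are standard Frame methods; the output formatting here is just for illustration):
// Illustrative check of the loaded frame's shape from the shell
println(s"Rows:    ${loanDataHf.numRows()}")
println(s"Columns: ${loanDataHf.numCols()}")
println(s"First column names: ${loanDataHf.names().take(5).mkString(", ")}, ...")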