Mastering Machine Learning with Spark 2.x
Harness the potential of machine learning, through Spark

Authors: Alex Tellez, Michal Malohlava, Max Pumperla
Publisher: Packt | Published: Aug 2017 | 1st Edition | ISBN-13: 9781785283451 | 340 pages | Paperback

What's the difference between H2O and Spark's MLlib?

As stated previously, MLlib is a library of popular machine learning algorithms built on top of Spark. Not surprisingly, H2O and MLlib share many of the same algorithms, but they differ in both implementation and functionality. One very handy feature of H2O is that it allows users to visualize their data and perform feature engineering tasks, which we will cover in depth in later chapters. Data visualization is handled by a web-based GUI that lets users switch seamlessly between a code shell and a notebook-style environment. You will soon become familiar with H2O's notebook, called Flow.
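Flow is served by a running H2O cluster and opens in a browser. As a minimal sketch, assuming the `h2o` Python package is installed, the following shows one way to start a local cluster so that Flow becomes reachable (the data file is purely illustrative):

```python
# A minimal sketch, assuming the `h2o` Python package is installed;
# the CSV path below is purely illustrative.
import h2o

h2o.init()   # start (or connect to) a local H2O cluster
# Flow, H2O's notebook-style web UI, is now reachable in a browser
# at the cluster's address (by default http://localhost:54321).

frame = h2o.import_file("data.csv")   # hypothetical data set
frame.describe()                      # summary statistics, also browsable in Flow
```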

Another nice addition is that H2O allows data scientists to grid search the many hyperparameters that ship with its algorithms. Grid search is a way of exploring the hyperparameters of an algorithm to simplify model configuration. It is often difficult to know which hyperparameters to change and how to change them; grid search lets us explore many hyperparameters simultaneously, measure the output, and select the best models based on our quality requirements. H2O's grid search can be combined with model cross-validation and various stopping criteria, resulting in advanced strategies such as picking 1,000 random parameter combinations from a huge hyperparameter space and finding the best model that can be trained in under two minutes and achieves an AUC greater than 0.7.
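As a sketch of that strategy using H2O's Python grid search API: the data set, column names, and thresholds below are illustrative assumptions, while the search criteria mirror the limits mentioned above (up to 1,000 randomly sampled models, a two-minute budget per model, cross-validated AUC as the quality bar).

```python
# A sketch of a random grid search with H2O's Python API; the data set,
# column name, and thresholds are illustrative assumptions.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch

h2o.init()
train = h2o.import_file("train.csv")                    # hypothetical training data
predictors = [c for c in train.columns if c != "label"]

# Hyperparameter space to sample from.
hyper_params = {
    "max_depth": [3, 5, 7, 9],
    "learn_rate": [0.01, 0.05, 0.1],
    "ntrees": [50, 100, 200, 500],
    "sample_rate": [0.7, 0.8, 1.0],
}

# Random-discrete search: sample up to 1,000 combinations and stop the
# whole search early once AUC stops improving.
search_criteria = {
    "strategy": "RandomDiscrete",
    "max_models": 1000,
    "stopping_metric": "AUC",
    "stopping_tolerance": 0.001,
    "stopping_rounds": 5,
    "seed": 42,
}

grid = H2OGridSearch(
    model=H2OGradientBoostingEstimator(
        nfolds=5,                # 5-fold cross-validation per model
        max_runtime_secs=120,    # drop any single model that takes over two minutes
        seed=42,
    ),
    hyper_params=hyper_params,
    search_criteria=search_criteria,
)
grid.train(x=predictors, y="label", training_frame=train)

# Rank the trained models by cross-validated AUC and keep the best one
# only if it clears the 0.7 quality bar.
best = grid.get_grid(sort_by="auc", decreasing=True).models[0]
if best.auc(xval=True) > 0.7:
    print("Best model:", best.model_id)
```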
