ML workflow examples

To further understand machine learning workflows, let us review some examples here.

In the later chapters of this book, we will work on risk modeling, fraud detection, customer views, churn prediction, and recommendations. For many projects of this type, the goal is often to identify the causes of certain problems, or to build a causal model. Below is one example of a workflow for developing a causal model.

  1. Check the data structure to ensure a good understanding of the data (steps 1 to 3 and 5 are sketched in Spark after this list):
    • Is the data cross-sectional? Is implicit timing incorporated?
    • Are categorical variables used?
  2. Check missing values:
    • A don't know or forgot answer may be recoded as neutral or treated as a special category
    • Some variables may have a lot of missing values
    • Recode some variables as needed
  3. Conduct some descriptive studies to begin telling stories:
    • Compare means and cross-tabulations
    • Check the variability of some key variables (standard deviation and variance)
  4. Select groups of independent variables (exogenous variables):
    • As candidates of causes
  5. Basic descriptive statistics:
    • Mean, standard deviation, and frequencies for all variables
  6. Measurement work:
    • Study the dimensions of some measurements (exploratory factor analysis, EFA, may be useful here)
    • May form measurement models
  7. Local models:
    • Identify sections from the whole picture to explore relationships
    • Use cross-tabulations
    • Use graphical plots
    • Use logistic regression
    • Use linear regression
  8. Conduct some partial correlation analysis to help model specification.
  9. Propose structural equation models by using the results of step 8:
    • Identify main structures and substructures
    • Connect measurements with structural models
  10. Initial fits:
    • Use SPSS to create datasets for LISREL or Mplus
    • Program in LISREL or Mplus
  11. Model modification:
    • Use SEM results (mainly model fit indices) as a guide
    • Re-analyze partial correlations
  12. Diagnostics:
    • Distribution
    • Residuals
    • Curves
  13. Final model estimation may be reached here:
    • If not, repeat steps 11 and 12
  14. Explain the model (causal effects identified and quantified).
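The early steps of this workflow can be carried out directly on a Spark DataFrame. The following is a minimal Scala sketch of steps 1 to 3 and 5 (schema check, missing-value counts, descriptive statistics, and a cross-tabulation). The file path and the column names age, income, churned, and income_band are hypothetical placeholders for illustration, not datasets from this book.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object WorkflowChecks {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CausalModelDataChecks")
      .master("local[*]")
      .getOrCreate()

    // Step 1: check the data structure (column names and types).
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/survey.csv")   // hypothetical path
    df.printSchema()

    // Step 2: count missing values per column.
    val missingCounts = df.columns.map { c =>
      count(when(col(c).isNull, c)).alias(c)
    }
    df.select(missingCounts: _*).show()

    // Steps 3 and 5: descriptive statistics (mean, stddev, min, max)
    // and a cross-tabulation of two candidate variables.
    df.describe("age", "income").show()
    df.stat.crosstab("churned", "income_band").show()

    spark.stop()
  }
}
```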

Note

Also refer to http://www.researchmethods.org/step-by-step1.pdf.

Spark Pipelines

The Apache Spark team has recognized the importance of machine learning workflows and has developed Spark Pipelines to enable good handling of them.

Spark ML represents an ML workflow as a Pipeline, which consists of a sequence of PipelineStages to be run in a specific order.

PipelineStages include Spark Transformers, Spark Estimators, and Spark Evaluators.

ML workflows can be very complicated, so creating and tuning them is very time consuming. The Spark ML Pipeline was created to make the construction and tuning of ML workflows easy, and especially to represent the following main stages:

  1. Loading data
  2. Extracting features
  3. Estimating models
  4. Evaluating models
  5. Explaining models

With regard to the above tasks, Spark Transformers can be used to extract features, Spark Estimators can be used to train and estimate models, and Spark Evaluators can be used to evaluate models.

Technically, in Spark, a Pipeline is specified as a sequence of stages, and each stage is either a Transformer, an Estimator, or an Evaluator. These stages are run in order, and the input dataset is modified as it passes through each stage. For Transformer stages, the transform() method is called on the dataset. For Estimator stages, the fit() method is called to produce a Transformer (which becomes part of the PipelineModel, or fitted Pipeline), and that Transformer's transform() method is called on the dataset.
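To make these stages concrete, here is a minimal Scala sketch of a linear Pipeline covering the main stages listed above (loading data, extracting features, estimating a model, and evaluating it). The toy dataset and the column names text and label are assumptions for illustration only.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("PipelineSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // 1. Loading data: a tiny labeled dataset.
    val training = Seq(
      (0L, "spark is fast", 1.0),
      (1L, "hadoop mapreduce", 0.0),
      (2L, "spark ml pipelines", 1.0),
      (3L, "plain old batch jobs", 0.0)
    ).toDF("id", "text", "label")

    // 2. Extracting features: Transformer stages (Tokenizer, HashingTF).
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features").setNumFeatures(1000)

    // 3. Estimating models: an Estimator stage (LogisticRegression).
    val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)

    // The Pipeline runs its stages in order; fit() returns a PipelineModel.
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
    val model = pipeline.fit(training)

    // 4. Evaluating models: an Evaluator applied to the fitted model's output.
    val predictions = model.transform(training)
    val auc = new BinaryClassificationEvaluator().setLabelCol("label").evaluate(predictions)
    println(s"Training AUC = $auc")

    spark.stop()
  }
}
```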

The specifications given above are all for linear Pipelines. It is possible to create non-linear Pipelines as long as the data flow graph forms a Directed Acyclic Graph (DAG).
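As a sketch of what such a non-linear data flow can look like, the following example (with hypothetical text and amount columns) has two feature branches that merge in a VectorAssembler. The stages are still listed in a topological order, but the column-level data flow forms a DAG rather than a single chain.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, Tokenizer, VectorAssembler}
import org.apache.spark.sql.SparkSession

object DagPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DagPipelineSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq((0L, "late night purchase", 950.0), (1L, "grocery run", 42.5))
      .toDF("id", "text", "amount")

    // Branch 1: text -> words -> textFeatures.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("textFeatures").setNumFeatures(256)

    // Branch 2: the numeric "amount" column is used as-is.
    // The two branches join here, so the column-level graph is a DAG.
    val assembler = new VectorAssembler()
      .setInputCols(Array("textFeatures", "amount"))
      .setOutputCol("features")

    // Stages are listed in a topological order of the DAG.
    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, assembler))
    pipeline.fit(df).transform(df).select("id", "features").show(truncate = false)

    spark.stop()
  }
}
```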

Note

For more information on Spark Pipelines, please visit:

http://spark.apache.org/docs/latest/ml-guide.html#pipeline
