ML workflow examples
To understand machine learning workflows further, let us review a few examples here.
In the later chapters of this book, we will work on risk modelling, fraud detection, customer view, churn prediction, and recommendation projects. For many projects of these types, the goal is often to identify the causes of certain problems, or to build a causal model. Below is one example of a workflow for developing a causal model.
1. Check data structure to ensure a good understanding of the data:
    - Is the data cross-sectional? Is implicit timing incorporated?
    - Are categorical variables used?
2. Check missing values:
    - *Don't know* or *forgot* answers may be recoded as neutral or treated as a special category
    - Some variables may have a lot of missing values
    - Recode some variables as needed
3. Conduct some descriptive studies to begin telling stories:
    - Compare means and use crosstabulations
    - Check the variability of some key variables (standard deviation and variance)
4. Select groups of independent variables (exogenous variables) as candidates for causes
5. Basic descriptive statistics:
    - Mean, standard deviation, and frequencies for all variables
6. Measurement work:
    - Study the dimensions of some measurements (exploratory factor analysis, EFA, may be useful here)
    - Measurement models may be formed here
7. Local models:
    - Identify sections of the whole picture to explore relationships
    - Use crosstabulations
    - Use graphical plots
    - Use logistic regression
    - Use linear regression
8. Conduct some partial correlation analysis to help with model specification (see the sketch after this list)
9. Propose structural equation models by using the results of step 8:
    - Identify main structures and substructures
    - Connect the measurement models with the structural models
10. Initial fits:
    - Use SPSS to create data sets for LISREL or Mplus
    - Program the models in LISREL or Mplus
11. Model modification:
    - Use the SEM results (mainly model fit indices) for guidance
    - Re-analyze partial correlations
12. Diagnostics:
    - Distribution
    - Residuals
    - Curves
13. The final model estimation may be reached here:
    - If not, repeat steps 11 and 12
14. Explain the model (causal effects identified and quantified).
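As a small illustration of step 8, here is a minimal Python sketch of partial correlation analysis on made-up survey-style data. The column names (`age`, `income`, `spending`) and the `partial_corr` helper are hypothetical, and the book's own toolchain for these steps is SPSS with LISREL or Mplus; the sketch simply uses the standard definition that the partial correlation of two variables given controls is the correlation of their residuals after regressing each on the controls.

```python
import numpy as np
import pandas as pd

def partial_corr(df, x, y, controls):
    """Partial correlation of x and y, controlling for the given columns:
    correlate the residuals left after regressing x and y on the controls."""
    Z = np.column_stack([np.ones(len(df))] + [df[c] for c in controls])
    # Least-squares residuals of x and y on the control variables
    res_x = df[x] - Z @ np.linalg.lstsq(Z, df[x], rcond=None)[0]
    res_y = df[y] - Z @ np.linalg.lstsq(Z, df[y], rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

# Hypothetical data: does income relate to spending once age is held fixed?
rng = np.random.default_rng(0)
age = rng.uniform(20, 70, 500)
income = 1000 + 80 * age + rng.normal(0, 500, 500)
spending = 200 + 0.3 * income + 5 * age + rng.normal(0, 300, 500)
df = pd.DataFrame({"age": age, "income": income, "spending": spending})

# Step 3/5-style descriptives could start from df.describe() and pd.crosstab(...)
print(partial_corr(df, "income", "spending", controls=["age"]))
```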
Note

Also refer to http://www.researchmethods.org/step-by-step1.pdf.

Spark Pipelines
The Apache Spark team has recognized the importance of machine learning workflows and has developed Spark Pipelines to enable good handling of them.
Spark ML represents an ML workflow as a pipeline, which consists of a sequence of PipelineStages to be run in a specific order.
PipelineStages include Spark Transformers, Spark Estimators, and Spark Evaluators.
ML workflows can be very complicated, and creating and tuning them is very time-consuming. The Spark ML Pipeline was created to make the construction and tuning of ML workflows easy, and especially to represent the following main stages:
- Loading data
- Extracting features
- Estimating models
- Evaluating models
- Explaining models
With regard to the above tasks, Spark Transformers can be used to extract features, Spark Estimators can be used to train and estimate models, and Spark Evaluators can be used to evaluate models.
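To make these roles concrete, below is a minimal PySpark sketch; the data and column names are made up for illustration. `Tokenizer` and `HashingTF` are Transformers that extract features, `LogisticRegression` is an Estimator that trains a model, and `BinaryClassificationEvaluator` evaluates the predictions. For brevity the sketch evaluates on the training data; a real workflow would hold out a test set.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

# Hypothetical training data: free-text documents with binary labels
train = spark.createDataFrame(
    [("spark is great", 1.0), ("hadoop map reduce", 0.0),
     ("spark ml pipelines", 1.0), ("legacy batch jobs", 0.0)],
    ["text", "label"],
)

# Transformers: extract features from the raw text
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashing_tf = HashingTF(inputCol="words", outputCol="features")
# Estimator: learns a model from the featurized data
lr = LogisticRegression(maxIter=10)

# The Pipeline chains the stages and runs them in order
pipeline = Pipeline(stages=[tokenizer, hashing_tf, lr])
model = pipeline.fit(train)  # returns a PipelineModel (fitted Pipeline)

# Evaluator: score the model's predictions
predictions = model.transform(train)
evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC")
print(evaluator.evaluate(predictions))
```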
Technically, in Spark, a Pipeline is specified as a sequence of stages, and each stage is either a Transformer, an Estimator, or an Evaluator. These stages are run in order, and the input dataset is modified as it passes through each stage. For Transformer stages, the transform() method is called on the dataset. For Estimator stages, the fit() method is called to produce a Transformer (which becomes part of the PipelineModel, or fitted Pipeline), and that Transformer's transform() method is called on the dataset.
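Continuing the hypothetical example above (reusing the `tokenizer`, `hashing_tf`, and `lr` objects from the previous sketch), `pipeline.fit(train)` behaves roughly as if the stages were unrolled by hand:

```python
# Roughly what Pipeline.fit() does internally for the stages above:
tokenized = tokenizer.transform(train)        # Transformer stage: transform()
featurized = hashing_tf.transform(tokenized)  # Transformer stage: transform()
lr_model = lr.fit(featurized)                 # Estimator stage: fit() -> a Transformer
# The fitted stages (tokenizer, hashing_tf, lr_model) make up the PipelineModel;
# PipelineModel.transform() then calls each stage's transform() in order.
```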
The specifications given above are all for linear Pipelines. It is possible to create non-linear Pipelines as long as the data flow graph forms a Directed Acyclic Graph (DAG).
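For example, two independent feature branches can feed a common merge stage; the column-level data flow then forms a DAG, and the stage sequence passed to the Pipeline is simply one topological ordering of it. A minimal sketch with made-up column names:

```python
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler

# Two independent branches, each reading its own raw column...
country_idx = StringIndexer(inputCol="country", outputCol="country_idx")
device_idx = StringIndexer(inputCol="device", outputCol="device_idx")
# ...merged by the assembler: the column-level data flow is a DAG,
# and the stage list is one topological ordering of that DAG.
assembler = VectorAssembler(
    inputCols=["country_idx", "device_idx", "age"], outputCol="features")

dag_pipeline = Pipeline(stages=[country_idx, device_idx, assembler])
# Swapping the two indexers would also be valid; only the DAG order matters.
```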
Note
For more information on Spark Pipelines, please visit: