The data analysis process
When you have a good understanding of a phenomenon, it becomes possible to make predictions about it. Data analysis helps us do this by exploring the past and building predictive models.
The data analysis process is composed of the following steps:
The statement of the problem
Obtain your data
Clean the data
Normalize the data
Transform the data
Exploratory statistics
Exploratory visualization
Predictive modeling
Validate your model
Visualize and interpret your results
Deploy your solution
All these activities can be grouped as shown in the following figure:
The problem
The problem definition starts with high-level questions, such as how to track differences in behavior between groups of customers, or what the gold price will be next month. Understanding the objectives and requirements from a domain perspective is the key to a successful data analysis project.
Types of data analysis questions are listed as follows:
Inferential
Predictive
Descriptive
Exploratory
Causal
Correlational
Data preparation
Data preparation is about how to obtain, clean, normalize, and transform the data into an optimal dataset, trying to avoid any possible data quality issues such as invalid, ambiguous, out-of-range, or missing values. This process can take a large share of your time. In Chapter 2, Working with Data, we go into more detail about working with data, using OpenRefine to address the most complicated tasks. Analyzing data that has not been carefully prepared can lead you to highly misleading results; a minimal code sketch of typical preparation steps follows the list of characteristics below.
The characteristics of good data are listed as follows:
Complete
Coherent
Unambiguous
Countable
Correct
Standardized
Non-redundant
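As a taste of what preparation looks like in practice, here is a minimal cleaning and normalization sketch using pandas. This is an illustrative assumption, not the book's tool (Chapter 2 uses OpenRefine), and the file and column names are hypothetical:

```python
import numpy as np
import pandas as pd

# Load a raw dataset (file and column names are hypothetical).
df = pd.read_csv("customers.csv")

# Clean: drop exact duplicates and rows missing the key field.
df = df.drop_duplicates()
df = df.dropna(subset=["customer_id"])

# Out-of-range values: treat ages outside 0-120 as missing,
# then fill the gaps with the median age.
df.loc[~df["age"].between(0, 120), "age"] = np.nan
df["age"] = df["age"].fillna(df["age"].median())

# Normalize: min-max scale a numeric column to the [0, 1] range.
spend = df["monthly_spend"]
df["spend_norm"] = (spend - spend.min()) / (spend.max() - spend.min())

# Standardize: strip whitespace and unify the case of a text column.
df["country"] = df["country"].str.strip().str.title()
```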
Data exploration
Data exploration is essentially looking at the data in graphical or statistical form, trying to find patterns, connections, and relations. Visualization is used to provide overviews in which meaningful patterns may be found.
In Chapter 3, Data Visualization, we present a visualization framework (D3.js) and implement some examples of how to use visualization as a data exploration tool.
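As a quick illustration, the following sketch combines exploratory statistics and exploratory visualization using pandas and matplotlib (an assumed toolset for this sketch; the dataset and column names are hypothetical, and D3.js examples come later in Chapter 3):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset for exploration.
df = pd.read_csv("customers.csv")

# Exploratory statistics: a summary of every numeric column.
print(df.describe())

# Pairwise linear correlations often reveal candidate relationships.
print(df.select_dtypes("number").corr())

# Exploratory visualization: a distribution and a pairwise relation.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
df["monthly_spend"].hist(ax=ax1, bins=30)
ax1.set_title("Distribution of monthly spend")
df.plot.scatter(x="age", y="monthly_spend", ax=ax2, alpha=0.5)
ax2.set_title("Age vs. monthly spend")
plt.tight_layout()
plt.show()
```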
Predictive modeling
Predictive modeling is a process used in data analysis to create or choose a statistical model that best predicts the probability of an outcome. In this book, we use a variety of such models, which we can group into three categories based on their outcome:
| Outcome | Chapter | Algorithm |
| --- | --- | --- |
| Categorical outcome (Classification) | 4 | Naïve Bayes Classifier |
| | 11 | Natural Language Toolkit + Naïve Bayes Classifier |
| Numerical outcome (Regression) | 6 | Random Walk |
| | 8 | Support Vector Machines |
| | 9 | Cellular Automata |
| | 8 | Distance Based Approach + k-nearest neighbor |
| Descriptive modeling (Clustering) | 5 | Fast Dynamic Time Warping (FDTW) + Distance Metrics |
| | 10 | Force Layout and Fruchterman-Reingold layout |
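To make the classification category concrete, here is a minimal sketch using scikit-learn's GaussianNB on its bundled iris dataset. This is an illustrative stand-in under assumed tooling, not the hand-built Naïve Bayes classifier developed in Chapter 4:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Categorical outcome: classify iris species from flower measurements.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = GaussianNB()
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```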
Another important task in this step is evaluating whether the model we chose is optimal for the particular problem.
The No Free Lunch theorem, proposed by Wolpert in 1996, states:
"No Free Lunch theorems have shown that learning algorithms cannot be universally good."
Model evaluation helps us ensure that our analysis is not over-optimistic or overfitted. In this book, we are going to present two different ways to validate the model, both illustrated in the sketch after this list:
Cross-validation: We divide the data into subsets of equal size and test the predictive model on each subset in order to estimate how it is going to perform in practice. We will implement cross-validation to validate the robustness of our models, as well as to evaluate multiple models and identify the best one based on their performance.
Hold-Out: Most often, a large dataset is randomly divided into three subsets: a training set, a validation set, and a test set.
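The following sketch shows both techniques side by side, again with scikit-learn as an assumed, illustrative library (the classifier and dataset are placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
model = GaussianNB()

# Cross-validation: 5 folds of equal size; each fold serves once as
# the test set while the model trains on the remaining four.
scores = cross_val_score(model, X, y, cv=5)
print("Accuracy per fold:", scores)
print("Mean accuracy:", scores.mean())

# Hold-out: split once into training (60%), validation (20%), and
# test (20%) subsets. Tune on validation; report on test only once.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, random_state=0
)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0
)
model.fit(X_train, y_train)
print("Validation accuracy:", model.score(X_val, y_val))
```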
Visualization of results
This is the final step in our analysis process, and we need to answer the following questions:
How are we going to present the results?
For example, as tabular reports, 2D plots, dashboards, or infographics.
Where are the results going to be deployed?
For example, in hard copy (printed reports or posters), on mobile devices, in a desktop interface, or on the web.
Each choice will depend on the kind of analysis and on the particular data. In the following chapters, we will learn how to use standalone plotting in Python with matplotlib and web visualization with D3.js; a first matplotlib sketch follows.
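As a first taste of standalone plotting, here is a minimal matplotlib sketch that presents hypothetical model results as a 2D plot (all numbers below are made up purely for illustration):

```python
import matplotlib.pyplot as plt

# Hypothetical result: monthly predictions next to observed values.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
observed = [120, 135, 128, 150, 161, 158]
predicted = [118, 132, 131, 147, 158, 162]

plt.figure(figsize=(8, 4))
plt.plot(months, observed, marker="o", label="Observed")
plt.plot(months, predicted, marker="s", linestyle="--", label="Predicted")
plt.ylabel("Sales (units)")
plt.title("Observed vs. predicted monthly sales")
plt.legend()
plt.tight_layout()
plt.show()
```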