Preface
The work that researchers do to prepare data for analysis – extraction, transformation, cleaning, and exploration – has not changed fundamentally with the increased popularity of machine learning tools. When we prepared data for multivariate analyses 30 years ago, we were every bit as concerned with missing values, outliers, the shape of the distribution of our variables, and how variables correlate, as we are when we use machine learning algorithms now. Although it is true that widespread use of the same libraries for machine learning (scikit-learn, TensorFlow, PyTorch, and others) does encourage greater uniformity in approach, good data cleaning and exploration practices are largely unchanged.
How we talk about machine learning is still very much algorithm-focused: just choose the right model, and organization-changing insights will follow. But we have to make room for the same kind of learning from data that we have engaged in over the last few decades, where our predictions, our modeling of relationships, and our cleaning and exploration of the data are all part of the conversation. Getting our models right depends as much on gleaning everything we can from a histogram or a confusion matrix as on carefully tuning hyperparameters.
Similarly, the work that data analysts and scientists do does not progress neatly from cleaning to exploration to preprocessing to modeling to evaluation. We have potential models in mind at each step of the process and revise them regularly as we learn more. For example, we may initially plan to model a particular binary target with logistic regression, but then recognize, once we see the distribution of the features, that we should at least try random forest classification. We will discuss implications for modeling throughout this text, even when explaining relatively routine data cleaning tasks. We will also explore the use of machine learning tools early in the process to help us identify anomalies, impute values, and select features.
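To make that early use of machine learning tools a little more concrete, here is a minimal sketch in scikit-learn, using made-up data rather than any dataset from this book: an isolation forest flags anomalies, a k-nearest neighbors imputer fills in missing values, and recursive feature elimination selects features. The specific estimators and settings are illustrative assumptions, not prescriptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import RFE
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # made-up feature matrix
X[rng.random(X.shape) < 0.05] = np.nan   # sprinkle in missing values
y = rng.integers(0, 2, size=200)         # made-up binary target

# Impute missing values from each observation's nearest neighbors
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X)

# Flag likely anomalies before they can distort later modeling
# (fit_predict returns 1 for inliers and -1 for outliers)
flags = IsolationForest(random_state=0).fit_predict(X_imputed)
X_clean, y_clean = X_imputed[flags == 1], y[flags == 1]

# Recursively eliminate the weakest features
selector = RFE(LogisticRegression(max_iter=1000),
               n_features_to_select=3).fit(X_clean, y_clean)
print(selector.support_)  # boolean mask of the selected features
```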
This points to another change in the workflow of data analysts and scientists over the last decade – less emphasis on finding the one right model and greater acceptance of model building as an iterative process. A project might require multiple machine learning algorithms – for example, principal component analysis to reduce dimensions (the number of features) and then logistic regression for classification.
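In scikit-learn, a multi-algorithm project of that kind can be chained into a single pipeline. The following is only a sketch with synthetic data; the component counts and settings are arbitrary assumptions made for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data standing in for a real project's features and target
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Chain dimension reduction and classification into one estimator
pipe = Pipeline([
    ("scale", StandardScaler()),   # PCA is sensitive to feature scale
    ("pca", PCA(n_components=5)),  # reduce 20 features to 5 components
    ("logreg", LogisticRegression()),
])

# Cross-validation fits the whole chain on each training fold,
# so the PCA step never sees the held-out data
print(cross_val_score(pipe, X, y, cv=5).mean())
```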
That being said, there is one key difference in our approach to data cleaning, exploration, and modeling as machine learning tools guide more of our work – an increased emphasis on prediction over an understanding of the underlying data. We are more concerned with how well our features (also known as independent variables, inputs, or predictors) predict our targets (dependent variables, outputs, responses) than with the relationships between features and the underlying structure of our data. Throughout the first two sections of this book, I point out how that shift in emphasis alters our focus, even when we are cleaning and exploring our data.