What this book covers
Chapter 1, Understanding Data Segmentation, looks at how classifying data into groups of similar values helps you plan a strategy based on the characteristics of each group's value range. This approach becomes even more important when the problem involves several variables, for example, finding the different groups of revenue for each season of the year and of quantities delivered for logistics demand planning.
Chapter 2, Applying Linear Regression, shows that the goal of linear regression is to use related variables to predict how values will behave and to build scenarios of what could happen in different situations, using the regression model as a framework for those forecasts.
Chapter 3, What is Time Series?, examines how a time series model can forecast data by taking into account seasonal trends derived from past values.
Chapter 4, An Introduction to Data Grouping, delves into the importance of finding a different approach for each group. In complex multivariable problems, we need the assistance of machine learning algorithms such as K-means to find the optimal number of segments and each group's value range.
Chapter 5, Finding the Optimal Number of Single Variable Groups, shows how running an Excel add-in that uses the K-means algorithm can help find the optimal number of groups for the data we are researching. In this case, we start with a single-variable problem to explain the concepts.
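To give a flavor of the idea behind this chapter, here is a minimal sketch in Python (the book itself works through an Excel add-in, so the library, data, and elbow-style comparison below are illustrative assumptions, not the book's procedure):

```python
# A minimal sketch: run K-means on a single-variable dataset for several
# candidate group counts and compare inertia; the point where inertia stops
# dropping sharply is one common way to pick the number of groups.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical single-variable data, e.g. monthly revenue figures
values = np.array([12, 14, 15, 40, 42, 43, 90, 95, 99], dtype=float).reshape(-1, 1)

for k in range(1, 6):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(values)
    print(k, round(model.inertia_, 2))
```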
Chapter 6, Finding the Optimal Number of Multi-Variable Groups, demonstrates how to use the Excel add-in to group problems involving several variables, for example, classifying inventory rotation by quantity, revenue, and season.
Chapter 7, Analyzing Outliers for Data Anomalies, delves into another approach to data segmentation: examining what happens with values that lie at a long distance from all the groups. These values are anomalies, such as very small expenses occurring at non-business hours, which could be evidence of possible fraud attempts.
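As a rough illustration of the idea, here is a simplified stand-in in Python (a z-score check rather than the chapter's group-distance procedure, and with made-up expense figures):

```python
# A minimal sketch: flag values that sit unusually far from the rest of the
# data using z-scores; such isolated values are candidate anomalies.
import numpy as np

expenses = np.array([20, 22, 25, 21, 23, 24, 250, 19, 26], dtype=float)
z = (expenses - expenses.mean()) / expenses.std()
print(expenses[np.abs(z) > 2])  # e.g. the 250 expense stands out
```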
Chapter 8, Finding the Relationship between Variables, shows how, before building a linear model, we have to run statistical tests on the relationship between the variables to check whether they are useful for designing a predictive model.
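One common test of this kind, sketched here in Python with hypothetical figures (the book performs its tests in Excel), is the Pearson correlation and its p-value:

```python
# A minimal sketch: measure the strength of the relationship between two
# variables before deciding whether a linear model is worth building.
from scipy.stats import pearsonr

advertising = [5, 7, 9, 11, 13, 15]
revenue = [22, 29, 34, 42, 47, 55]
r, p_value = pearsonr(advertising, revenue)
print(r, p_value)  # a strong r with a small p-value supports a linear model
```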
Chapter 9, Building, Training, and Validating a Linear Model, covers what happens after the relationship between the variables has been statistically confirmed as useful for building a predictive model; we use a portion of the data (typically 20%) to test the model and see whether it returns results consistent with the known data.
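The train-and-hold-out workflow looks roughly like this in Python (an assumed sketch with synthetic data, standing in for the Excel steps the chapter actually uses):

```python
# A minimal sketch: hold out roughly 20% of the data, fit a linear model on
# the other 80%, and score the predictions against the known held-out values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: advertising spend vs. revenue with a little noise
X = np.arange(1, 21, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 5.0 + np.random.default_rng(0).normal(0, 1.0, 20)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on the 20% held back for validation
```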
Chapter 10, Building, Training, and Validating a Multiple Regression Model, discusses multiple regression, which involves three or more variables. We will see how to apply statistical tests to identify the variables most useful for building the predictive model. Then, we will test the regression with 20% of the data and see whether it makes sense to use the model to build new scenarios with unknown data.
Chapter 11, Testing Data for Time Series Compliance, shows how a time series forecast relies on the relationship between present and past values. We will apply statistical methods to determine whether the data is suitable for a forecast model.
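One simple check of that present-to-past relationship, sketched in Python with hypothetical quarterly figures (not necessarily the exact test the chapter uses), is autocorrelation:

```python
# A minimal sketch: a usable time series model needs the present values to
# depend on past values, which autocorrelation can reveal.
import pandas as pd

sales = pd.Series([10, 14, 18, 12, 13, 17, 21, 15, 16, 20, 24, 18], dtype=float)
print(sales.autocorr(lag=1))  # correlation with the previous period
print(sales.autocorr(lag=4))  # correlation with the same quarter a year ago
```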
Chapter 12, Working with Time Series Using the Centered Moving Average and a Trending Component, explores the forecast model's dependence on two components: the centered moving average (which captures the seasonal ups and downs) and the linear regression (which gives the positive or negative direction of the trend). Once we have these calculations, we can test and use the model.
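A rough Python sketch of those two components, using made-up quarterly sales and a simplified centered moving average (the book carries out the equivalent calculations in Excel):

```python
# A minimal sketch: a centered moving average smooths out the seasonal ups
# and downs, and a simple linear fit over the period index gives the trend.
import numpy as np
import pandas as pd

# Hypothetical quarterly sales with a yearly seasonal pattern
sales = pd.Series([10, 14, 18, 12, 13, 17, 21, 15, 16, 20, 24, 18], dtype=float)

# Simplified centered moving average over the 4-quarter season
cma = sales.rolling(window=4, center=True).mean()

# Linear trend component: fit sales against the period index
periods = np.arange(len(sales))
slope, intercept = np.polyfit(periods, sales, 1)
trend = intercept + slope * periods

print(cma.round(2).tolist())
print(trend.round(2).tolist())
```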
Chapter 13, Training, Validating, and Running the Model, covers the statistical tests for the time series and then builds the model with 80% of the data. Then, we test the time series model with the remaining 20% and see whether it returns results that make sense based on our experience. Finally, we use the model to produce forecasts.