Summary
In this chapter, we learned why it is important to find the optimal number of groups before running K-means clustering. Once we have the groups, we analyze whether they approach the best-case scenario of segments with a small standard deviation. We also investigate outliers, whose unusual behavior can warrant further analysis, such as fraud detection.
We need a machine learning algorithm such as K-means clustering to segment data because classifying by simple inspection of a 2D or 3D chart is impractical and sometimes impossible. Segmentation with more than three variables is harder still, because the data can no longer be plotted directly.
K-means clustering helps us find the optimal segments, or groups, for our data. The best case is segments that are as compact as possible. Each segment has a mean, or centroid, and the points assigned to a segment should lie as close as possible to its centroid. In other words, the standard deviation of each segment should be as small as possible.
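The following is a minimal sketch of these ideas, assuming scikit-learn and NumPy (the chapter's own tooling and dataset may differ, so the synthetic data and parameter values here are purely illustrative). It compares the inertia for several values of k and then measures how compact each segment is around its centroid:

```python
# Minimal sketch (assumes scikit-learn and NumPy; the chapter may use other tools).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data standing in for the chapter's dataset.
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=42)

# Try several values of k and record the inertia (within-cluster sum of squares)
# to look for the "elbow" that suggests the optimal number of groups.
for k in range(2, 7):
    model = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    print(f"k={k}  inertia={model.inertia_:.1f}")

# Fit the chosen model and measure how compact each segment is: the standard
# deviation of the distances from each point to its own centroid.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
for c in range(4):
    members = X[kmeans.labels_ == c]
    distances = np.linalg.norm(members - kmeans.cluster_centers_[c], axis=1)
    print(f"cluster {c}: {len(members)} points, "
          f"mean distance {distances.mean():.2f}, std {distances.std():.2f}")
```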
You need to pay attention to segments with large standard deviations because they may contain outliers. Such values can be an early warning of future problems, because they behave in a random and irregular way, outside the normal pattern of the rest of the data.
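One simple way to surface such points for review is to flag anything that lies far from its segment's centroid. The sketch below assumes scikit-learn/NumPy, and the 2-sigma cut-off is an illustrative choice rather than a rule from the chapter:

```python
# Minimal outlier-flagging sketch (assumes scikit-learn/NumPy; the 2-sigma
# threshold below is an illustrative assumption, not the chapter's rule).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=42)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)

# Distance from every point to the centroid of its own segment.
distances = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)

# Flag points far outside the typical spread for manual review
# (for example, as candidates in a fraud investigation).
threshold = distances.mean() + 2 * distances.std()
suspects = np.where(distances > threshold)[0]
print(f"{len(suspects)} points flagged for review: {suspects}")
```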
In the next chapter, we will get an introduction to linear regression, a supervised machine learning algorithm. Linear regression requires statistical tests on the data to measure how strongly the variables are related and to check whether the data is useful for the model; otherwise, it is not worth building the model at all.