k-means versus Hierarchical Clustering
Now that we have a deeper understanding of how k-means clustering works, it is worth exploring where hierarchical clustering fits into the picture. As mentioned in the linkage criteria section, there is some direct overlap between the two methods: centroid linkage groups data points around centroids, much as k-means does. Common to all of the approaches discussed so far is the use of a distance function to determine similarity. Following the in-depth exploration in the previous chapter, we have continued to use the Euclidean distance, but any distance function can be substituted.
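To make that last point concrete, here is a minimal sketch, assuming SciPy is available and using a small randomly generated dataset purely for illustration, of running the same agglomerative procedure with two different distance functions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Toy stand-in data: 20 random 2-D points (illustrative only).
rng = np.random.default_rng(42)
points = rng.normal(size=(20, 2))

# The agglomerative procedure itself is unchanged; only the distance
# function passed to pdist differs between the two runs.
euclidean_merges = linkage(pdist(points, metric="euclidean"), method="average")
manhattan_merges = linkage(pdist(points, metric="cityblock"), method="average")
```

Everything downstream (the merge order, the dendrogram) simply reflects whichever notion of similarity was plugged in.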
In practice, here are some quick points to consider when choosing one clustering method over the other:
Hierarchical clustering benefits from not needing an explicit number of clusters, "k", a priori. Instead, you can build the full merge hierarchy once and then decide which clusters make the most sense after the algorithm has completed, for example by cutting the resulting dendrogram at different heights (see the sketch after this list).
k-means clustering benefits from its speed and scalability: each iteration is roughly linear in the number of data points, whereas agglomerative hierarchical clustering typically requires the full matrix of pairwise distances. The trade-off is that k must be chosen before fitting.
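As a sketch of this difference in practice, again assuming SciPy and scikit-learn with illustrative toy data, the hierarchical merge tree is built once and can be cut into any number of clusters afterwards, while k-means must be refit for each candidate k:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.normal(size=(30, 2))  # toy stand-in data (illustrative only)

# Hierarchical: build the full merge tree once, then extract as many
# candidate clusterings as you like without re-running the algorithm.
merges = linkage(points, method="average", metric="euclidean")
labels_3 = fcluster(merges, t=3, criterion="maxclust")
labels_5 = fcluster(merges, t=5, criterion="maxclust")

# k-means: k must be fixed before fitting; trying a different k
# means fitting the model again from scratch.
labels_kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
```

Here, criterion="maxclust" asks for at most that many flat clusters; cutting by dendrogram height with criterion="distance" is the other common choice.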