Agglomerative clustering

"The most populous city is but an agglomeration of wildernesses."
- Aldous Huxley

In the K-means clustering algorithm, we had our K clusters from day one. With each iteration, some samples may change their allegiances and some clusters may change their centroids, but the number of clusters is decided from the very beginning. Conversely, in agglomerative clustering, no clusters are defined at the start. Initially, each sample belongs to its own cluster, so we begin with as many clusters as there are data samples. Then, we find the two closest samples and aggregate them into one cluster. After that, we keep iterating, merging the next two closest clusters, whether that means two single samples, two multi-sample clusters, or a sample and a cluster (see the sketch below). As you can see, with each iteration, the number of clusters decreases by one until all our samples join a single cluster. Putting all the samples into one cluster sounds unintuitive. Thus, we have the option to...
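To make the merging loop concrete, here is a minimal sketch in plain NumPy. It assumes single linkage (the distance between two clusters is the distance between their closest pair of samples); the tiny dataset, the helper function, and the variable names are illustrative choices, not part of any particular library:

```python
# A minimal sketch of the agglomerative merging loop described above.
# Assumptions: single-linkage distance and a made-up 2D toy dataset.
import numpy as np

X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9], [9.0, 1.0]])

# Start with every sample in its own cluster.
clusters = [[i] for i in range(len(X))]

def cluster_distance(a, b):
    # Single linkage: distance between the closest pair of samples.
    return min(np.linalg.norm(X[i] - X[j]) for i in a for j in b)

while len(clusters) > 1:
    # Find the two closest clusters among all remaining pairs.
    pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: cluster_distance(clusters[p[0]], clusters[p[1]]))
    # Merge them; the number of clusters drops by one each iteration.
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]
    print(f"{len(clusters)} clusters: {clusters}")
```

Running this prints the shrinking list of clusters after every merge, ending with all five samples gathered into a single cluster, exactly as described above.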