Chapter 2: Hierarchical Clustering
Activity 2: Applying Linkage Criteria
Solution:
Visualize the X dataset that we created in Exercise 7, Building a Hierarchy:
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
%matplotlib inline

# Generate a random cluster dataset to experiment on.
# X = coordinate points, y = cluster labels (not needed)
X, y = make_blobs(n_samples=1000, centers=8, n_features=2, random_state=800)

# Visualize the data
plt.scatter(X[:,0], X[:,1])
plt.show()
The output is as follows:
Create a list with all the possible linkage method hyperparameters:
methods = ['centroid', 'single', 'complete', 'average', 'weighted']
Loop through each of the methods in the list that you just created and display the effect that they have on the same dataset:
for method in methods:
    # Build the hierarchy using the current linkage method
    distances = linkage(X, method=method, metric="euclidean")
    # Extract flat clusters from the hierarchy
    clusters = fcluster(distances, 3, criterion="distance")
    plt.title('linkage: ' + method)
    plt.scatter(X[:,0], X[:,1], c=clusters, cmap='tab20b')
    plt.show()
The output is as follows:
As you can see from the preceding plots, simply changing the linkage criterion can dramatically change the efficacy of your clustering. In this dataset, centroid and average linkage work best at finding discrete clusters that make sense. This is clear from the fact that we generated a dataset of eight clusters, and centroid and average linkage are the only methods whose plots show eight distinctly colored clusters. The other linkage types fall short – most noticeably, single linkage.
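If you want to go beyond visual inspection, you can also quantify this comparison with the silhouette metric. The following is a minimal sketch that reuses the X and methods variables from the preceding steps; note that it uses criterion="maxclust" (rather than the distance threshold used above) to force exactly eight flat clusters so that the methods are compared on equal footing:

from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

# Score each linkage method by how well it separates the eight generated clusters
for method in methods:
    distances = linkage(X, method=method, metric="euclidean")
    # Force exactly eight flat clusters for a fair comparison
    labels = fcluster(distances, 8, criterion="maxclust")
    print(method, "silhouette:", round(silhouette_score(X, labels), 3))

Higher silhouette scores indicate tighter, better-separated clusters, which should echo what the plots already suggest about centroid and average linkage.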
Activity 3: Comparing k-means with Hierarchical Clustering
Solution:
Import the necessary packages from scikit-learn (KMeans, AgglomerativeClustering, and silhouette_score), as follows:
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score
import pandas as pd
import matplotlib.pyplot as plt
Read the wine dataset into a pandas DataFrame and print a small sample:
wine_df = pd.read_csv("wine_data.csv")
print(wine_df.head())
The output is as follows:
Visualize the wine dataset to understand the data structure:
plt.scatter(wine_df.values[:,0], wine_df.values[:,1])
plt.title("Wine Dataset")
plt.xlabel("OD Reading")
plt.ylabel("Proline")
plt.show()
The output is as follows:
Use the sklearn implementation of k-means on the wine dataset, knowing that there are three wine types:
km = KMeans(3)
km_clusters = km.fit_predict(wine_df)
Use the sklearn implementation of hierarchical clustering on the wine dataset:
ac = AgglomerativeClustering(3, linkage='average')
ac_clusters = ac.fit_predict(wine_df)
Plot the predicted clusters from k-means, as follows:
plt.scatter(wine_df.values[:,0], wine_df.values[:,1], c=km_clusters)
plt.title("Wine Clusters from k-means Clustering")
plt.xlabel("OD Reading")
plt.ylabel("Proline")
plt.show()
The output is as follows:
Plot the predicted clusters from hierarchical clustering, as follows:
plt.scatter(wine_df.values[:,0], wine_df.values[:,1], c=ac_clusters)
plt.title("Wine Clusters from Agglomerative Clustering")
plt.xlabel("OD Reading")
plt.ylabel("Proline")
plt.show()
The output is as follows:
Compare the silhouette score of each clustering method:
print("Silhouette Scores for Wine Dataset:\n") print("k-means Clustering: ", silhouette_score(X[:,11:13], km_clusters)) print("Agg Clustering: ", silhouette_score(X[:,11:13], ac_clusters))
The output will be as follows:
As you can see from the preceding silhouette metric, agglomerative clustering narrowly beats k-means clustering when it comes to separating the clusters by mean intra-cluster distance. This is not the case for every variant of agglomerative clustering, however; try different linkage types and examine how the silhouette score and the clustering change between them!
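As a starting point for that experiment, the following is a minimal sketch that loops over a few of the linkage options supported by AgglomerativeClustering and prints the silhouette score for each. It assumes the wine_df DataFrame and the imports from the earlier steps are still in scope:

# Compare how the linkage choice affects agglomerative clustering on the wine data
for linkage_type in ['ward', 'complete', 'average', 'single']:
    ac = AgglomerativeClustering(n_clusters=3, linkage=linkage_type)
    labels = ac.fit_predict(wine_df)
    print(linkage_type, "silhouette:", round(silhouette_score(wine_df, labels), 3))

Plotting the clusters for each linkage type, as you did earlier, will show you visually why some scores come out higher than others.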