Learning Predictive Analytics with Python

Gain practical insights into predictive modelling by implementing predictive analytics algorithms on public datasets with Python

Paperback, published February 2016, ISBN-13 9781783983261, 354 pages, 1st Edition
Authors: Ashish Kumar, Gary Dougan
Table of Contents

Preface
1. Getting Started with Predictive Modelling
2. Data Cleaning
3. Data Wrangling
4. Statistical Concepts for Predictive Modelling
5. Linear Regression with Python
6. Logistic Regression with Python
7. Clustering with Python
8. Trees and Random Forests with Python
9. Best Practices for Predictive Modelling
A. A List of Links
Index

Fine-tuning the clustering

Deciding the optimum value of K is one of the difficult parts of performing k-means clustering. There are a few methods that can be used to do this.

The elbow method

We discussed earlier that a good cluster is defined by the compactness of the observations in that cluster. Compactness is quantified by the intra-cluster distance. The intra-cluster distance of a cluster is essentially the sum of pairwise distances between all possible pairs of points in that cluster.

If we denote the intra-cluster distance by W, then for a cluster k the intra-cluster distance can be written as:

$$W_k = \sum_{x_i \in C_k} \sum_{x_j \in C_k} \left\| x_i - x_j \right\|^2$$

Generally, the normalized intra-cluster distance is used, which is given by:

$$W_K' = \sum_{k=1}^{K} \frac{1}{2 N_k} W_k = \sum_{k=1}^{K} \sum_{x_i \in C_k} \left\| x_i - M_k \right\|^2$$

Here $x_i$ and $x_j$ are points in the cluster, $M_k$ is the centroid of cluster k, $N_k$ is the number of points in that cluster, and K is the number of clusters.
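As a quick sanity check of this relationship, here is a minimal sketch (the four 2-D points and their single-cluster assignment are assumed toy data, not taken from the book) showing that the pairwise sum divided by 2N_k equals the sum of squared distances to the centroid M_k:

import numpy as np

# Assumed toy data: four 2-D points that all belong to one cluster C_k.
cluster_points = np.array([[1.0, 2.0], [1.5, 1.8], [1.2, 2.4], [0.8, 2.1]])
N_k = len(cluster_points)

# Pairwise form: sum of squared distances over all ordered pairs (x_i, x_j).
diffs = cluster_points[:, None, :] - cluster_points[None, :, :]
W_k = np.sum(diffs ** 2)

# Centroid form: dividing by 2 * N_k gives the sum of squared distances
# of each point to the cluster centroid M_k.
M_k = cluster_points.mean(axis=0)
print(W_k / (2 * N_k))                      # 0.455
print(np.sum((cluster_points - M_k) ** 2))  # 0.455 -- the two forms agree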

$W_K'$ is actually a measure of the variance among the points in the same cluster. Since it is normalized, its value would range from 0 to...
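In practice, the elbow method fits k-means for a range of values of K, plots this quantity against K, and picks the value at which the curve bends (the "elbow"), since adding clusters beyond that point yields little improvement. The sketch below uses scikit-learn rather than the book's own code; the three-blob dataset is assumed synthetic, and KMeans.inertia_ is the sum of squared distances of points to their centroids, i.e. the $W_K'$ defined above:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Assumed synthetic data: three well-separated blobs of 2-D points.
rng = np.random.RandomState(42)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

ks = range(1, 10)
inertias = []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    # inertia_ is the total within-cluster sum of squared distances to centroids.
    inertias.append(km.inertia_)

plt.plot(list(ks), inertias, marker='o')
plt.xlabel('Number of clusters K')
plt.ylabel('Total intra-cluster distance')
plt.title('The elbow method')
plt.show()

For data like this, the curve typically drops sharply up to K = 3 and flattens afterwards, which is the "elbow" that suggests three clusters.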
