Data Cleaning and Exploration with Machine Learning

Feature scaling

Often, the features we want to use in our model are on very different scales. Put simply, the distance between the minimum and maximum values of a feature, that is, its range, varies substantially across features. For example, in the COVID-19 data, the total cases feature goes from 1 to almost 34 million, while the aged 65 or older feature goes from 9 to 27 (the value is the percentage of the population aged 65 or older).

Having features on very different scales impacts many machine learning algorithms. For example, KNN models often use Euclidean distance, and features with greater ranges will have a greater influence on the model. Scaling can address this problem.
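To make the point concrete, here is a minimal sketch (not from the book; the observation values and the assumed feature minimums and maximums are illustrative) that computes the Euclidean distance between two observations before and after rescaling. Unscaled, the total cases feature dominates the distance and the age feature barely registers.

import numpy as np

# two observations: (total cases, percentage aged 65 or older)
a = np.array([1_000_000, 10.0])
b = np.array([2_000_000, 25.0])

# raw Euclidean distance is driven almost entirely by total cases
raw_dist = np.linalg.norm(a - b)

# rescale both features to [0, 1] using assumed feature minimums and maximums
lo = np.array([1, 9.0])
hi = np.array([34_000_000, 27.0])
a_scaled = (a - lo) / (hi - lo)
b_scaled = (b - lo) / (hi - lo)
scaled_dist = np.linalg.norm(a_scaled - b_scaled)

print(f"raw distance:    {raw_dist:,.0f}")    # about 1,000,000; age barely matters
print(f"scaled distance: {scaled_dist:.3f}")  # both features now contribute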

In this section, we will go over two popular approaches to scaling: min-max scaling and standard (or z-score) scaling. Min-max scaling replaces each value with its location in the range. More precisely, the following happens:

$z_{ij} = \frac{x_{ij} - \min_{j}}{\max_{j} - \min_{j}}$

Here, $z_{ij}$ is the min-max score, $x_{ij}$ is the value of the $j$th feature for the $i$th observation, and $\min_{j}$ and $\max_{j}$ are the minimum and maximum values of the $j$th feature, respectively.
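As a quick illustration (a sketch, not the book's code), scikit-learn's MinMaxScaler and StandardScaler implement min-max and standard (z-score) scaling, respectively. The DataFrame below uses made-up values, and the column names simply echo the COVID-19 features mentioned above.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# illustrative values only
covid = pd.DataFrame({
    "total_cases": [1, 250_000, 5_000_000, 34_000_000],
    "aged_65_older": [9.0, 14.5, 20.0, 27.0],
})

# min-max scaling: each value becomes its position within the feature's range
minmax = pd.DataFrame(MinMaxScaler().fit_transform(covid),
                      columns=covid.columns)

# standard (z-score) scaling: each value becomes its distance from the mean
# in standard deviations
zscore = pd.DataFrame(StandardScaler().fit_transform(covid),
                      columns=covid.columns)

print(minmax)
print(zscore)

After fit_transform, every column of minmax lies between 0 and 1, while every column of zscore has a mean of 0 and unit variance.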
