Data Cleaning and Exploration with Machine Learning
Get to grips with machine learning techniques to achieve sparkling-clean data quickly

By Michael Walker
Packt, August 2022 | 1st Edition | 542 pages | ISBN-13 9781803241678
Table of Contents

Preface
Section 1 – Data Cleaning and Machine Learning Algorithms
    Chapter 1: Examining the Distribution of Features and Targets
    Chapter 2: Examining Bivariate and Multivariate Relationships between Features and Targets
    Chapter 3: Identifying and Fixing Missing Values
Section 2 – Preprocessing, Feature Selection, and Sampling
    Chapter 4: Encoding, Transforming, and Scaling Features
    Chapter 5: Feature Selection
    Chapter 6: Preparing for Model Evaluation
Section 3 – Modeling Continuous Targets with Supervised Learning
    Chapter 7: Linear Regression Models
    Chapter 8: Support Vector Regression
    Chapter 9: K-Nearest Neighbors, Decision Tree, Random Forest, and Gradient Boosted Regression
Section 4 – Modeling Dichotomous and Multiclass Targets with Supervised Learning
    Chapter 10: Logistic Regression
    Chapter 11: Decision Trees and Random Forest Classification
    Chapter 12: K-Nearest Neighbors for Classification
    Chapter 13: Support Vector Machine Classification
    Chapter 14: Naïve Bayes Classification
Section 5 – Clustering and Dimensionality Reduction with Unsupervised Learning
    Chapter 15: Principal Component Analysis
    Chapter 16: K-Means and DBSCAN Clustering
Other Books You May Enjoy

Summary

In this chapter, we went over a range of feature selection methods, from filter to wrapper to embedded methods, and saw how each can be used with categorical and continuous targets. For wrapper and embedded methods, we also considered how they behave with different algorithms.
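As a quick illustration of the filter approach, the sketch below scores each feature against the target one at a time with scikit-learn. The wine dataset and the choice of k=5 are illustrative stand-ins here, not examples from the book:

```python
# A minimal sketch of a filter method, using scikit-learn's bundled wine
# dataset (illustrative only; not one of the book's datasets).
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_wine(return_X_y=True, as_frame=True)

# Each feature is scored against the categorical target independently
# (ANOVA F-test); no other features and no model are involved.
selector = SelectKBest(score_func=f_classif, k=5).fit(X, y)
print(X.columns[selector.get_support()].tolist())

# For a continuous target, swap in f_regression or
# mutual_info_regression as the score_func.
```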

Filter methods are easy to run and interpret, and they are light on system resources. However, they do not take other features into account when evaluating each feature, nor do they tell us how that assessment might vary with the algorithm used. Wrapper methods do not have these limitations, but they are computationally expensive. Embedded methods are often a good compromise, selecting features based on multivariate relationships and a given algorithm without taxing system resources as much as wrapper methods do. We also explored how a dimensionality reduction method, PCA, could improve our feature selection.
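To make that trade-off concrete, here is a hedged sketch contrasting a wrapper method (recursive feature elimination) with an embedded one (an L1-penalized logistic regression behind SelectFromModel). Again, the wine data, the C=0.1 penalty strength, and the feature counts are illustrative assumptions rather than the book's own examples:

```python
# A minimal sketch contrasting a wrapper method (RFE) with an embedded
# method (L1-penalized logistic regression via SelectFromModel), again
# on the illustrative wine data rather than the book's datasets.
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True, as_frame=True)
Xs = StandardScaler().fit_transform(X)

# Wrapper: repeatedly refits the model, dropping the weakest feature
# each pass -- multivariate and model-aware, but expensive.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=5)
rfe.fit(Xs, y)
print("RFE:", X.columns[rfe.support_].tolist())

# Embedded: a single fit; the L1 penalty shrinks weak coefficients to
# zero, so selection falls out of the training itself.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
embedded = SelectFromModel(lasso_logit).fit(Xs, y)
print("Embedded:", X.columns[embedded.get_support()].tolist())
```

Note how the wrapper refits the model once per eliminated feature while the embedded approach needs only a single fit, which is exactly the resource trade-off described above.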

You probably also noticed that I slipped a little bit of model validation into this chapter. We will...
