Principles of Data Science

You're reading from Principles of Data Science: A beginner's guide to essential math and coding skills for data fluency and machine learning

Product type: Paperback
Published: January 2024
Publisher: Packt
ISBN-13: 9781837636303
Length: 326 pages
Edition: 3rd Edition
Author: Sinan Ozdemir
Table of Contents (18 chapters)

Preface
1. Chapter 1: Data Science Terminology
2. Chapter 2: Types of Data (free chapter)
3. Chapter 3: The Five Steps of Data Science
4. Chapter 4: Basic Mathematics
5. Chapter 5: Impossible or Improbable – A Gentle Introduction to Probability
6. Chapter 6: Advanced Probability
7. Chapter 7: What Are the Chances? An Introduction to Statistics
8. Chapter 8: Advanced Statistics
9. Chapter 9: Communicating Data
10. Chapter 10: How to Tell if Your Toaster is Learning – Machine Learning Essentials
11. Chapter 11: Predictions Don't Grow on Trees, or Do They?
12. Chapter 12: Introduction to Transfer Learning and Pre-Trained Models
13. Chapter 13: Mitigating Algorithmic Bias and Tackling Model and Data Drift
14. Chapter 14: AI Governance
15. Chapter 15: Navigating Real-World Data Science Case Studies in Action
16. Index
17. Other Books You May Enjoy

Feature extraction and PCA

A common problem when working with data, particularly in ML, is having far more columns (features) than your rows (data points) can support.

A great example of this came up in the "send cash now" case from our naïve Bayes example earlier. Recall that we had exactly zero instances of texts containing that precise phrase. In that case, we turned to a naïve independence assumption that allowed us to extrapolate a probability for each of our categories.
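The independence assumption can be sketched in a few lines: rather than looking up the whole phrase, we multiply per-word conditional probabilities, with Laplace smoothing so that a zero-count word doesn't zero out the product. The word counts and vocabulary size below are hypothetical toy numbers, not the book's actual data:

```python
# Hypothetical per-word counts in each category (not the book's data)
spam_counts = {"send": 3, "cash": 4, "now": 5}
ham_counts = {"send": 1, "cash": 0, "now": 2}
spam_total, ham_total = 20, 30

def phrase_prob(words, counts, total, alpha=1.0, vocab_size=6):
    # Naive assumption: P(phrase | class) = product of P(word | class).
    # Laplace smoothing (alpha) keeps unseen words from zeroing the product.
    p = 1.0
    for w in words:
        p *= (counts.get(w, 0) + alpha) / (total + alpha * vocab_size)
    return p

phrase = ["send", "cash", "now"]
p_spam = phrase_prob(phrase, spam_counts, spam_total)
p_ham = phrase_prob(phrase, ham_counts, ham_total)
print(p_spam > p_ham)  # True: spam wins despite zero exact-phrase matches
```

Even though neither category ever contained the exact phrase, the per-word evidence still separates the classes.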

The reason we had this problem in the first place is something called the curse of dimensionality (COD). The COD says that as we introduce new feature columns, we need exponentially more rows (data points) to cover the increased number of possible combinations.
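Two quick ways to see the COD numerically (an illustrative sketch; 10 bins per axis and the sample sizes are arbitrary choices, not from the book): the number of cells needed to cover the feature space at fixed resolution grows exponentially with dimension, and pairwise distances between random points concentrate, so "nearest" neighbors stop being meaningfully nearer than anything else:

```python
import numpy as np

# Cells needed to cover [0, 1]^d at 10 bins per axis grows as 10**d
bins_per_axis = 10
for d in (1, 2, 3, 10):
    print(d, bins_per_axis ** d)

# Distance concentration: relative spread of distances shrinks as d grows
rng = np.random.default_rng(0)
for d in (2, 1000):
    x = rng.random((200, d))
    dists = np.linalg.norm(x[1:] - x[0], axis=1)  # distances from one point
    print(d, round(dists.std() / dists.mean(), 3))  # ratio shrinks with d
```

The second loop is why distance-based learners degrade in very high dimensions, which is exactly the situation count-vectorized text puts us in.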

Consider an example where we attempt to use a learning model that relies on the distance between points on a corpus of 4,086 pieces of text, the whole of which has been count...
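As the section title suggests, PCA is one remedy for such a column explosion. A minimal sketch of the idea, using a tiny hypothetical count matrix (rows are documents, columns are word counts; real corpora have thousands of columns) and PCA implemented directly via the SVD:

```python
import numpy as np

# Hypothetical 4-document, 4-word count matrix (toy data, not the book's corpus)
X = np.array([
    [3, 1, 0, 0],
    [2, 2, 0, 1],
    [0, 0, 3, 2],
    [0, 1, 2, 3],
], dtype=float)

Xc = X - X.mean(axis=0)                  # PCA step 1: center each column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                    # keep the top-k variance directions
reduced = Xc @ Vt[:k].T                  # project onto k principal axes
print(X.shape, "->", reduced.shape)      # (4, 4) -> (4, 2)
```

The projected data keeps most of the variance in far fewer columns, which is precisely the trade the COD forces on us.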
