Principles of Data Science

You're reading from Principles of Data Science: A beginner's guide to essential math and coding skills for data fluency and machine learning

Product type: Paperback
Published: Jan 2024
Publisher: Packt
ISBN-13: 9781837636303
Length: 326 pages
Edition: 3rd Edition
Author: Sinan Ozdemir
Table of Contents (18)

Preface
Chapter 1: Data Science Terminology
Chapter 2: Types of Data
Chapter 3: The Five Steps of Data Science
Chapter 4: Basic Mathematics
Chapter 5: Impossible or Improbable – A Gentle Introduction to Probability
Chapter 6: Advanced Probability
Chapter 7: What Are the Chances? An Introduction to Statistics
Chapter 8: Advanced Statistics
Chapter 9: Communicating Data
Chapter 10: How to Tell if Your Toaster is Learning – Machine Learning Essentials
Chapter 11: Predictions Don’t Grow on Trees, or Do They?
Chapter 12: Introduction to Transfer Learning and Pre-Trained Models
Chapter 13: Mitigating Algorithmic Bias and Tackling Model and Data Drift
Chapter 14: AI Governance
Chapter 15: Navigating Real-World Data Science Case Studies in Action
Index
Other Books You May Enjoy

Summary

Our exploration into the world of machine learning (ML) has revealed a vast landscape that extends well beyond the foundational techniques of linear and logistic regression. We delved into decision trees, which provide intuitive insights into data through their hierarchical structure. Naïve Bayes classification offered us a probabilistic perspective, showing how to make predictions under the assumption of feature independence. We ventured into dimensionality reduction, encountering techniques such as feature extraction, which help overcome the curse of dimensionality (COD) and reduce computational complexity.
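The three supervised ideas recapped above can be sketched in a few lines. This is a minimal illustration, not the chapter's own code: scikit-learn and the toy iris dataset are assumptions standing in for whatever data and tooling the book actually used.

```python
# Sketch of a decision tree, Naive Bayes, and feature extraction via PCA,
# using scikit-learn and the iris dataset (both assumed for illustration).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Decision tree: a hierarchy of if/else splits learned from the data
tree = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)
print(f"Decision tree accuracy: {tree.score(X_test, y_test):.2f}")

# Naive Bayes: probabilistic predictions under the feature-independence assumption
nb = GaussianNB().fit(X_train, y_train)
print(f"Naive Bayes accuracy: {nb.score(X_test, y_test):.2f}")

# Feature extraction: PCA projects the 4 original features onto 2 components,
# one way to push back against the curse of dimensionality
pca = PCA(n_components=2).fit(X_train)
print(f"Variance kept by 2 components: {pca.explained_variance_ratio_.sum():.2f}")
```

Both classifiers learn from the same labeled data; PCA, by contrast, never looks at the labels, which is why it also appears in unsupervised pipelines.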

k-means clustering introduced us to the realm of unsupervised learning (UL), where we learned to find hidden patterns and groupings in data without pre-labeled outcomes. Across these methods, we’ve seen how ML can tackle a plethora of complex problems, from predicting categorical outcomes to uncovering latent structures in data.
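The unsupervised side can be sketched just as briefly. Again this is an illustrative assumption, not the book's code: scikit-learn's `KMeans` on synthetic blob data stands in for the chapter's examples.

```python
# Sketch of k-means clustering (scikit-learn assumed): no labels are given,
# yet the algorithm recovers groupings from the shape of the data alone.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with three natural groupings; labels are discarded
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(km.cluster_centers_.shape)  # one centroid per cluster, one column per feature
```

Note that k is chosen by the analyst, not learned; picking it (e.g. via the elbow method or silhouette scores) is part of the practical work of unsupervised learning.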

Through practical examples, we’ve compared and contrasted supervised learning (SL), which relies on labeled...
