Principles of Data Science
Mathematical techniques and theory to succeed in data-driven industries

Product type: Paperback
Published: Dec 2016
Publisher: Packt
ISBN-13: 9781785887918
Length: 388 pages
Edition: 1st Edition
Author: Sinan Ozdemir
Table of Contents (15 chapters)

Preface
1. How to Sound Like a Data Scientist
2. Types of Data
3. The Five Steps of Data Science
4. Basic Mathematics
5. Impossible or Improbable – A Gentle Introduction to Probability
6. Advanced Probability
7. Basic Statistics
8. Advanced Statistics
9. Communicating Data
10. How to Tell If Your Toaster Is Learning – Machine Learning Essentials
11. Predictions Don't Grow on Trees – or Do They?
12. Beyond the Essentials
13. Case Studies
Index

K folds cross-validation

K folds cross-validation is a much better estimator of our model's performance than a single train-test split. Here's how it works:

  1. We split our data into k equal-sized slices (k is usually 3, 5, or 10).
  2. For each "fold" of the cross-validation, we treat k-1 of the slices as the training set and the remaining slice as our test set.
  3. On each subsequent fold, a different combination of k-1 slices becomes the training set and a different slice becomes the test set, so every slice is used as the test set exactly once.
  4. We compute our chosen performance metric for each fold of the cross-validation.
  5. We average the scores at the end (a code sketch of this loop follows the list).
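
The following is a minimal sketch of that loop in Python, assuming scikit-learn is available; the data and the LinearRegression model here are hypothetical placeholders, not the book's mammal brain and body weight example:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    # Hypothetical data: 100 observations of one feature and a numeric target
    rng = np.random.RandomState(0)
    X = rng.rand(100, 1)
    y = 3 * X.ravel() + rng.randn(100) * 0.1

    kf = KFold(n_splits=5, shuffle=True, random_state=0)  # k = 5 equal slices
    fold_scores = []

    for train_index, test_index in kf.split(X):
        # k-1 slices form the training set; the held-out slice is the test set
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]

        model = LinearRegression().fit(X_train, y_train)
        fold_scores.append(mean_squared_error(y_test, model.predict(X_test)))

    # Step 5: average the per-fold scores to get the cross-validated estimate
    print("Mean MSE across the 5 folds:", np.mean(fold_scores))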

Cross-validation effectively performs multiple train-test splits on the same dataset. We do this for a few reasons, but mainly because cross-validation gives the most honest estimate of our model's out-of-sample error.
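
As a shortcut, scikit-learn also wraps the whole fold loop in a single helper; a brief sketch, reusing the hypothetical X and y from the sketch above:

    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # cross_val_score runs the fit/score loop once per fold and returns one
    # score per fold; scikit-learn reports MSE as a negative value, so we
    # flip the sign before averaging
    scores = cross_val_score(LinearRegression(), X, y, cv=5,
                             scoring='neg_mean_squared_error')
    print("Mean MSE across folds:", -scores.mean())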

To explain this visually, let's look at our mammal brain and body weight example...
