Essential Statistics for Non-STEM Data Analysts

You're reading from Essential Statistics for Non-STEM Data Analysts: Get to grips with the statistics and math knowledge needed to enter the world of data science with Python.

Product type: Paperback
Published: Nov 2020
Publisher: Packt
ISBN-13: 9781838984847
Length: 392 pages
Edition: 1st
Author: Rongpeng Li
Table of Contents

Preface
Section 1: Getting Started with Statistics for Data Science
Chapter 1: Fundamentals of Data Collection, Cleaning, and Preprocessing
Chapter 2: Essential Statistics for Data Assessment
Chapter 3: Visualization with Statistical Graphs
Section 2: Essentials of Statistical Analysis
Chapter 4: Sampling and Inferential Statistics
Chapter 5: Common Probability Distributions
Chapter 6: Parametric Estimation
Chapter 7: Statistical Hypothesis Testing
Section 3: Statistics for Machine Learning
Chapter 8: Statistics for Regression
Chapter 9: Statistics for Classification
Chapter 10: Statistics for Tree-Based Methods
Chapter 11: Statistics for Ensemble Methods
Section 4: Appendix
Chapter 12: A Collection of Best Practices
Chapter 13: Exercises and Projects
Other Books You May Enjoy

Understanding and using the boosting module

Unlike bagging, which focuses on reducing variance, boosting aims to reduce bias without increasing variance.

Bagging creates a set of base estimators that carry equal importance, or weight, in determining the final prediction. The data fed into each base estimator is also resampled uniformly, with replacement, from the training set.
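
As a minimal sketch of that idea in Python with scikit-learn (the dataset, base estimator, and ensemble size here are illustrative assumptions, not the book's own code), bagging can be written out by hand:

# A hand-rolled bagging sketch; dataset and hyperparameters are assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, random_state=0)

estimators = []
for seed in range(10):
    # Each base estimator sees a bootstrap sample: the training set
    # resampled uniformly, with replacement.
    X_boot, y_boot = resample(X, y, random_state=seed)
    estimators.append(DecisionTreeClassifier(random_state=seed).fit(X_boot, y_boot))

# Every base estimator casts an equally weighted vote; with 0/1 labels,
# the majority vote is simply the rounded mean of the predictions.
votes = np.array([est.predict(X) for est in estimators])
final_prediction = (votes.mean(axis=0) > 0.5).astype(int)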

Determining the possibility of parallel processing

From the description of bagging we provided, you can see that bagging algorithms are relatively easy to run in parallel: each process can independently perform sampling and model training, and aggregation is only performed at the last step, once all the base estimators have been trained. In the preceding code snippet, I set n_jobs = 20 to build the bagging classifier; while it is being trained, at most 20 cores on the host machine will be used.
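
The snippet in question is not reproduced in this excerpt; a hedged sketch of what such a call looks like (the base estimator and n_estimators are assumptions) is:

# A BaggingClassifier built with n_jobs=20; estimator choice and
# ensemble size are assumed, not taken from the book.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging_clf = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # scikit-learn < 1.2 names this base_estimator
    n_estimators=100,
    n_jobs=20,        # fit up to 20 base estimators in parallel
    random_state=42,
)
# bagging_clf.fit(X_train, y_train) then trains on up to 20 cores.

Because each bootstrap sample and each base model is independent of the others, the fits distribute cleanly across workers; raising n_jobs beyond n_estimators or the number of available cores brings no further benefit.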

Boosting solves a different problem. The primary goal is to create an estimator with low bias. In the world...
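
To make the contrast concrete, here is a minimal, assumed illustration of boosting using scikit-learn's AdaBoostClassifier (the data and hyperparameters are not from the book): depth-1 "stumps" are weak, high-bias learners, and boosting fits them sequentially, reweighting the training samples each round so that later learners focus on the examples earlier ones got wrong.

# A boosting sketch with AdaBoost over high-bias decision stumps;
# dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)  # a weak, high-bias learner
booster = AdaBoostClassifier(
    estimator=stump,   # scikit-learn < 1.2 names this base_estimator
    n_estimators=100,
    random_state=0,
)
booster.fit(X, y)
print(booster.score(X, y))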
