Principles of Data Science (2nd Edition)
Understand, analyze, and predict data using Machine Learning concepts and tools
Paperback · Published Dec 2018 by Packt · ISBN-13 9781789804546 · 424 pages
Authors (3): Sunil Kakade, Sinan Ozdemir, Marco Tibaldeschi
Table of Contents (17)
Preface
1. How to Sound Like a Data Scientist
2. Types of Data
3. The Five Steps of Data Science
4. Basic Mathematics
5. Impossible or Improbable - A Gentle Introduction to Probability
6. Advanced Probability
7. Basic Statistics
8. Advanced Statistics
9. Communicating Data
10. How to Tell If Your Toaster Is Learning – Machine Learning Essentials
11. Predictions Don't Grow on Trees - or Do They?
12. Beyond the Essentials
13. Case Studies
14. Building Machine Learning Models with Azure Databricks and Azure Machine Learning service
Other Books You May Enjoy
Index

Naive Bayes classification

Let's get right into it! We'll begin with Naive Bayes classification. This machine learning model relies heavily on results from the previous chapters, specifically Bayes' theorem:

P(H|D) = P(D|H) × P(H) / P(D)

Let's look a little closer at the specific features of this formula:

  • P(H) is the probability of the hypothesis before we observe the data, called the prior probability, or just prior
  • P(H|D) is what we want to compute, the probability of the hypothesis after we observe the data, called the posterior
  • P(D|H) is the probability of the data under the given hypothesis, called the likelihood
  • P(D) is the probability of the data under any hypothesis, called the normalizing constant
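
To make these terms concrete, here is a minimal sketch in Python that plugs numbers into Bayes' theorem. The hypothesis, data, and all probability values below are made up purely for illustration; they are not taken from the book:

```python
# A toy application of Bayes' theorem.
# Hypothesis H: an email is spam.  Data D: the email contains the word "free".
# All probabilities are illustrative assumptions, not real measurements.

p_h = 0.30              # P(H): prior probability that any email is spam
p_d_given_h = 0.60      # P(D|H): likelihood of seeing "free" in a spam email
p_d_given_not_h = 0.05  # P(D|not H): likelihood of "free" in a non-spam email

# P(D): the normalizing constant, computed over both hypotheses
# via the law of total probability.
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# P(H|D): the posterior probability that the email is spam given "free" appears.
p_h_given_d = p_d_given_h * p_h / p_d

print(f"P(H|D) = {p_h_given_d:.3f}")  # roughly 0.837
```

Swapping in different priors or likelihoods is a quick way to see how strongly each term pulls the posterior around.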

Naive Bayes classification is a classification model, and therefore a supervised model. Given this, what kind of data do we need?

  • Labeled data
  • Unlabeled data

(Insert Jeopardy music here)

If you answered labeled data, then you're well on your way to becoming a data scientist!
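
As a quick illustration of what labeled data looks like in practice, here is a small sketch using scikit-learn's MultinomialNB on a handful of made-up text snippets. The messages and labels are invented for illustration and are not the book's own example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labeled data: each text comes with a known class (1 = spam, 0 = not spam).
texts = [
    "win a free prize now",
    "limited time offer, click here",
    "meeting rescheduled to tuesday",
    "lunch tomorrow with the team",
]
labels = [1, 1, 0, 0]

# Turn the raw text into word counts, then fit the Naive Bayes classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB()
model.fit(X, labels)

# Predict the class of a new, unseen message.
new_message = vectorizer.transform(["free offer just for you"])
print(model.predict(new_message))        # likely [1], i.e. spam
print(model.predict_proba(new_message))  # posterior probabilities per class
```

Each row of X is paired with a label, and it is exactly that pairing that makes this a supervised problem.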

Suppose we have...
