Principles of Data Science
A beginner's guide to essential math and coding skills for data fluency and machine learning

By Sinan Ozdemir
Paperback, 3rd Edition, 326 pages
Published January 2024 by Packt
ISBN-13: 9781837636303
Table of Contents (18 chapters)

Preface
Chapter 1: Data Science Terminology
Chapter 2: Types of Data
Chapter 3: The Five Steps of Data Science
Chapter 4: Basic Mathematics
Chapter 5: Impossible or Improbable – A Gentle Introduction to Probability
Chapter 6: Advanced Probability
Chapter 7: What Are the Chances? An Introduction to Statistics
Chapter 8: Advanced Statistics
Chapter 9: Communicating Data
Chapter 10: How to Tell if Your Toaster is Learning – Machine Learning Essentials
Chapter 11: Predictions Don’t Grow on Trees, or Do They?
Chapter 12: Introduction to Transfer Learning and Pre-Trained Models
Chapter 13: Mitigating Algorithmic Bias and Tackling Model and Data Drift
Chapter 14: AI Governance
Chapter 15: Navigating Real-World Data Science Case Studies in Action
Index
Other Books You May Enjoy

Performing naïve Bayes classification

Let’s get right into it and begin with naïve Bayes classification. This ML model relies heavily on results from previous chapters, specifically Bayes’ theorem:

$$P(H|D) = \frac{P(D|H) \cdot P(H)}{P(D)}$$

Let’s look a little closer at each term in this formula:

  • P(H) is the probability of the hypothesis before we observe the data, called the prior probability, or just prior
  • P(H|D) is what we want to compute: the probability of the hypothesis after we observe the data, called the posterior
  • P(D|H) is the probability of the data under the given hypothesis, called the likelihood
  • P(D) is the probability of the data under any hypothesis, called the normalizing constant
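Here is a minimal sketch in Python of how these four quantities fit together, assuming a made-up spam-filtering scenario (every probability below is invented purely for illustration):

# Toy spam-filtering example with invented numbers.
# Hypothesis H: "the email is spam"; data D: "the email contains the word 'free'".

p_h = 0.3              # prior P(H): 30% of all emails are spam
p_d_given_h = 0.6      # likelihood P(D|H): 60% of spam emails contain "free"
p_d_given_not_h = 0.1  # P(D|not H): 10% of non-spam emails contain "free"

# Normalizing constant P(D), expanded over both hypotheses:
# P(D) = P(D|H) * P(H) + P(D|not H) * P(not H)
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Posterior P(H|D), via Bayes' theorem
p_h_given_d = (p_d_given_h * p_h) / p_d

print(f"P(spam | contains 'free') = {p_h_given_d:.2f}")  # prints 0.72

Seeing the word "free" raises the probability that the email is spam from the 30% prior to a 72% posterior. Naïve Bayes classification repeats this kind of update across every feature, under the "naïve" assumption that the features are independent given the class.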

Naïve Bayes classification is a classification model, and therefore a supervised model. Given this, what kind of data do we need – labeled or unlabeled data?

(Insert Jeopardy music here)

If you answered labeled data, then...
