Hands-On Machine Learning with scikit-learn and Scientific Python Toolkits
A practical guide to implementing supervised and unsupervised machine learning algorithms in Python
By Tarek Amr | Packt, July 2020 | 1st Edition | 384 pages | ISBN-13: 9781838826048

How do decision trees learn?

It's time to find out how decision trees actually learn, so that we can configure them effectively. In the internal structure we just printed, the tree decided to use a petal width of 0.8 as its initial splitting decision. It made this choice because decision trees try to build the smallest possible tree, using the technique described below.
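If you want to reproduce that printout yourself, here is a minimal sketch (not necessarily the chapter's exact listing) that fits a classifier on the Iris data, loaded via scikit-learn, and prints the learned structure:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a decision tree on the Iris data set.
iris = load_iris()
clf = DecisionTreeClassifier(random_state=42)
clf.fit(iris.data, iris.target)

# Print the tree's internal structure; the first line is the root split.
# Note: petal width <= 0.8 and petal length <= 2.45 are equally pure
# first splits, so depending on the random seed the root may show either.
print(export_text(clf, feature_names=iris.feature_names))
```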

The tree went through all the features, trying to find a feature (petal width, here) and a threshold within that feature (0.8, here) such that, if we split all our training data into two parts (one where petal width ≤ 0.8, and one where petal width > 0.8), we get the purest split possible. In other words, it tries to find a condition that separates our classes as much as possible. Then, for each side, it recursively tries to split the data further using the same technique.
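To make this exhaustive search concrete, here is a simplified sketch of the idea, not scikit-learn's actual implementation. It assumes Gini impurity as the purity measure (scikit-learn's default criterion) and, like scikit-learn, takes candidate thresholds at the midpoints between consecutive observed values:

```python
import numpy as np

def gini(y):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Try every feature and every candidate threshold; keep the split
    whose two parts have the lowest weighted impurity."""
    best_feature, best_threshold, best_score = None, None, np.inf
    for feature in range(X.shape[1]):
        values = np.unique(X[:, feature])
        for threshold in (values[:-1] + values[1:]) / 2:  # midpoints
            mask = X[:, feature] <= threshold
            left, right = y[mask], y[~mask]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best_feature, best_threshold, best_score = feature, threshold, score
    return best_feature, best_threshold
```

On the Iris data, this search finds a split that completely isolates one class: petal length ≤ 2.45 and petal width ≤ 0.8 both separate setosa from the other two classes perfectly and tie for the best score, which is why either can appear at the root.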

Splitting criteria

If we only had two classes, an ideal split would put members of one class on one side and members of the other on the other...
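As a quick illustration of what "ideal" means numerically, assuming Gini impurity (scikit-learn's default criterion) as the purity measure: a side containing a single class has an impurity of 0, while an even two-class mix scores the worst possible 0.5:

```python
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(['a'] * 10))             # 0.0 -> pure: each side of an ideal split
print(gini(['a'] * 5 + ['b'] * 5))  # 0.5 -> 50/50 mix: the worst two-class case
```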
