The Python Workshop
Learn to code in Python and kickstart your career in software development or data science

Product type: Paperback
Published: Nov 2019
Publisher: Packt
ISBN-13: 9781839218859
Length: 608 pages
Edition: 1st Edition
Authors (6): Andrew Bird, Graham Lee, Corey Wade, Dr. Lau Cher Han, Olivier Pons, Mario Corchero Jiménez
Table of Contents (13)

Preface
1. Vital Python – Math, Strings, Conditionals, and Loops
2. Python Structures
3. Executing Python – Programs, Algorithms, and Functions
4. Extending Python, Files, Errors, and Graphs
5. Constructing Python – Classes and Methods
6. The Standard Library
7. Becoming Pythonic
8. Software Development
9. Practical Python – Advanced Topics
10. Data Analytics with pandas and NumPy
11. Machine Learning
Appendix

K-Nearest Neighbors, Decision Trees, and Random Forests

Are there other machine learning algorithms, besides LinearRegression(), that are suitable for the Boston Housing dataset? Absolutely. The scikit-learn library provides many regressors, a class of machine learning algorithms designed for continuous target values. In addition to Linear Regression, Ridge, and Lasso, we can try K-Nearest Neighbors, Decision Trees, and Random Forests. These models perform well on a wide range of datasets. Let's try them out and analyze them individually.
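As a point of reference, here is a minimal sketch of cross-validating several of these regressors in one loop. It is an illustration rather than the chapter's own code: the California Housing dataset stands in for Boston Housing (the load_boston loader has been removed from recent scikit-learn releases), and the 5-fold RMSE scoring is an assumed setup.

# Compare several scikit-learn regressors with 5-fold cross-validation.
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = fetch_california_housing(return_X_y=True)

regressors = {
    'LinearRegression': LinearRegression(),
    'Ridge': Ridge(),
    'Lasso': Lasso(),
    'KNeighborsRegressor': KNeighborsRegressor(),
    'DecisionTreeRegressor': DecisionTreeRegressor(random_state=0),
    'RandomForestRegressor': RandomForestRegressor(random_state=0),
}

for name, reg in regressors.items():
    # Score with negative mean squared error, then convert to RMSE.
    scores = cross_val_score(reg, X, y, scoring='neg_mean_squared_error', cv=5)
    rmse = np.sqrt(-scores)
    print(f'{name}: mean RMSE = {rmse.mean():.3f}')

The same loop works for any (X, y) regression data; only the data-loading line needs to change.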

K-Nearest Neighbors

The idea behind K-Nearest Neighbors (KNN) is straightforward. To predict the output of a row with an unknown label, we look at its k nearest neighbors in the training data and base the prediction on their outputs (for regression, their average), where k may be any positive whole number.

For instance, let's say that k=3. Given an unknown label, we take n columns for this row and place them in n-dimensional space...
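As a rough illustration of this idea, here is a minimal sketch using scikit-learn's KNeighborsRegressor with n_neighbors=3; the dataset and the train/test split are assumptions made for the sake of a runnable example, not the chapter's own code.

# KNN regression with k=3 on a train/test split.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_neighbors=3 means each prediction averages the targets of the
# 3 closest training rows in n-dimensional feature space.
knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(X_train, y_train)
print('R^2 on test set:', knn.score(X_test, y_test))

Setting weights='distance' instead of the default uniform weighting would let closer neighbors count more heavily toward each prediction.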
