XGBoost for Regression Predictive Modeling and Time Series Analysis
Learn how to build, evaluate, and deploy predictive models with expert guidance

Product type: Paperback
Published in: Dec 2024
Publisher: Packt
ISBN-13: 9781805123057
Length: 308 pages
Edition: 1st Edition
Authors (2): Joyce Weiner, Partha Pritam Deka
Table of Contents (19)

Preface
Part 1: Introduction to Machine Learning and XGBoost with Case Studies
Chapter 1: An Overview of Machine Learning, Classification, and Regression
Chapter 2: XGBoost Quick Start Guide with an Iris Data Case Study
Chapter 3: Demystifying the XGBoost Paper
Chapter 4: Adding on to the Quick Start – Switching out the Dataset with a Housing Data Case Study
Part 2: Practical Applications – Data, Features, and Hyperparameters
Chapter 5: Classification and Regression Trees, Ensembles, and Deep Learning Models – What’s Best for Your Data?
Chapter 6: Data Cleaning, Imbalanced Data, and Other Data Problems
Chapter 7: Feature Engineering
Chapter 8: Encoding Techniques for Categorical Features
Chapter 9: Using XGBoost for Time Series Forecasting
Chapter 10: Model Interpretability, Explainability, and Feature Importance with XGBoost
Part 3: Model Evaluation Metrics and Putting Your Model into Production
Chapter 11: Metrics for Model Evaluations and Comparisons
Chapter 12: Managing a Feature Engineering Pipeline in Training and Inference
Chapter 13: Deploying Your XGBoost Model
Index
Other Books You May Enjoy

Classification and regression decision tree models

Classification and regression trees (CART) are a type of supervised learning algorithm that can be used for both classification and regression problems.

In a classification problem, the goal is to predict the class, label, or category of a data point. For example, a classification model might predict whether a customer will churn, or whether a customer will purchase a product, based on historical data.
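To make this concrete, here is a minimal sketch of a classification tree fitted on a tiny, invented churn-style dataset. The use of scikit-learn, the feature names, and the data are illustrative assumptions, not taken from the book.

```python
# A minimal sketch (not from the book): fitting a classification tree on a tiny,
# invented churn-style dataset using scikit-learn.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical customer data: the feature names and values are made up.
data = pd.DataFrame({
    "tenure_months": [1, 34, 2, 45, 8, 22, 3, 60],
    "monthly_charges": [70.0, 56.9, 53.8, 42.3, 99.6, 89.1, 74.4, 29.9],
    "churned": [1, 0, 1, 0, 1, 0, 1, 0],
})

X = data[["tenure_months", "monthly_charges"]]
y = data["churned"]

clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X, y)

# Predict the class for a new customer (1 = churn, 0 = no churn)
new_customer = pd.DataFrame([[5, 80.0]], columns=X.columns)
print(clf.predict(new_customer))
```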

In a regression problem, the goal is to predict a continuous numerical value. For example, a regression CART model could predict the price of a house from input features such as its size, location, and other relevant attributes.
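A regression tree can be sketched in the same way. Again, the library choice (scikit-learn), the feature names, and the prices below are invented for illustration.

```python
# A minimal sketch (not from the book): a regression tree predicting house prices.
# The feature names and prices are invented for illustration.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

houses = pd.DataFrame({
    "size_sqft": [850, 1200, 1500, 2200, 950, 1800],
    "distance_to_city_km": [12.0, 8.5, 5.0, 3.2, 15.0, 4.1],
    "price": [150_000, 210_000, 280_000, 400_000, 140_000, 330_000],
})

X = houses[["size_sqft", "distance_to_city_km"]]
y = houses["price"]

reg = DecisionTreeRegressor(max_depth=3, random_state=42)
reg.fit(X, y)

# Predict a continuous price for a new house
new_house = pd.DataFrame([[1600, 6.0]], columns=X.columns)
print(reg.predict(new_house))
```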

CART models are built by recursively splitting the data into subsets based on the value of the feature that best separates it. At each step, the algorithm chooses the split that maximizes the separation of the classes (for classification) or minimizes the variance of the target variable (for regression). The splitting process is repeated until the data can no longer be usefully split.
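The following sketch illustrates the splitting idea for the regression case on a single feature: every candidate threshold is scored by the weighted variance of the target in the two resulting subsets, and the threshold with the lowest score wins. This is a simplified illustration of the criterion described above, not the exact algorithm used by any particular library.

```python
# A simplified, single-feature illustration of the splitting criterion for
# regression: score every candidate threshold by the weighted variance of the
# target in the two resulting subsets and keep the lowest.
import numpy as np

def best_split(feature, target):
    best_threshold, best_score = None, np.inf
    # Exclude the largest value so the right-hand subset is never empty
    for threshold in np.unique(feature)[:-1]:
        left = target[feature <= threshold]
        right = target[feature > threshold]
        # Weighted variance of the two child nodes produced by this split
        score = (len(left) * left.var() + len(right) * right.var()) / len(target)
        if score < best_score:
            best_threshold, best_score = threshold, score
    return best_threshold, best_score

size = np.array([850, 1200, 1500, 2200, 950, 1800])                       # feature
price = np.array([150_000, 210_000, 280_000, 400_000, 140_000, 330_000])  # target
print(best_split(size, price))
```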

This process creates a tree-like structure where each internal node represents a feature or attribute, and each leaf node represents a predicted class label or a predicted continuous value. The tree can then be used to predict the class label or continuous value for new data points by following the path down the tree based on their features.

Figure 1.1 – A sample classification and regression tree
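One way to see the internal nodes and leaf nodes described above is to print a fitted tree's structure. The sketch below reuses the reg model and X features from the housing example and assumes scikit-learn's export_text helper.

```python
# Printing the fitted tree's structure: internal nodes show feature tests,
# leaf nodes show predicted values. Reuses `reg` and `X` from the housing
# sketch above.
from sklearn.tree import export_text

print(export_text(reg, feature_names=list(X.columns)))
```

A new data point is scored by following the branch whose conditions it satisfies until a leaf is reached; the value stored at that leaf is the prediction.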

CART models are easy to explain and can handle both categorical and numerical features. However, they are prone to overfitting. Overfitting is a phenomenon in machine learning where a model performs extremely well on the training data but fails to generalize to unseen data. Regularization techniques such as pruning can be used to prevent overfitting. In the context of decision trees, pruning means selectively removing branches (subtrees) that contribute little predictive power, which reduces the model's complexity, improves its efficiency, and helps prevent overfitting. The following table summarizes the advantages and disadvantages of CART models:

Advantages of CART models:
- Easy to understand and interpret
- Relatively fast to train
- Can be used for both classification and regression problems

Disadvantages of CART models:
- Prone to overfitting
- Sensitive to noise in the data
- Can be computationally expensive to train, especially for large datasets, because they need to search through all possible splits in the data to find the optimal tree structure

Table 1.1 – Advantages and disadvantages of CART models
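To illustrate the overfitting and pruning points above, the sketch below fits trees with increasing amounts of cost-complexity pruning (scikit-learn's ccp_alpha parameter) on a synthetic dataset and compares training and test accuracy. The dataset, library choice, and parameter values are arbitrary illustrations, not the book's example.

```python
# A minimal sketch (not from the book) of overfitting and pruning: fit trees with
# increasing cost-complexity pruning (scikit-learn's ccp_alpha) on a synthetic
# dataset and compare training accuracy with test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for alpha in [0.0, 0.01, 0.03]:  # arbitrary illustrative values
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    tree.fit(X_train, y_train)
    print(f"ccp_alpha={alpha}: "
          f"train accuracy={tree.score(X_train, y_train):.2f}, "
          f"test accuracy={tree.score(X_test, y_test):.2f}")
```

With ccp_alpha=0.0 the unpruned tree typically fits the training data perfectly while scoring noticeably lower on the test set; pruned trees trade a little training accuracy for a simpler model that generalizes better.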

As the preceding table shows, CART models are a powerful supervised learning tool that can be used for a variety of machine learning tasks. However, they have limitations, and steps must be taken to prevent overfitting.

You have been reading a chapter from
XGBoost for Regression Predictive Modeling and Time Series Analysis