Metrics for Model Evaluations and Comparisons
In this chapter, we’ll measure how well a model works for your dataset and learn how to adjust modeling parameters to improve its predictions. First, we will get hands-on experience with scikit-learn and the Python APIs for XGBoost. Then, we will learn about hyperparameter tuning and about using metrics to measure how well a model is working. We learned about some of these model-fitting metrics (such as R2 and RMSE) in Chapter 5; in this chapter, we will expand on that understanding to include metrics for evaluating classification models. Lastly, we will share cautions on over- and underfitting.
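As a quick preview of the kinds of metric calls we will use throughout the chapter, here is a minimal sketch of computing R2, RMSE, and classification accuracy with scikit-learn. The `y_true`/`y_pred` arrays are made-up illustrative values, not data from the book's examples:

```python
# Minimal sketch: regression and classification metrics with scikit-learn.
# The arrays below are hypothetical values used only for illustration.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, accuracy_score

# Hypothetical regression targets and predictions
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.1, 2.9, 6.4])

r2 = r2_score(y_true, y_pred)                       # coefficient of determination
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root mean squared error

print(f"R2:   {r2:.3f}")
print(f"RMSE: {rmse:.3f}")

# Hypothetical classification labels and predictions
labels_true = [0, 1, 1, 0, 1]
labels_pred = [0, 1, 0, 0, 1]
print(f"Accuracy: {accuracy_score(labels_true, labels_pred):.2f}")
```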
In this chapter, we will cover the following main topics:
- Working with the XGBoost API
- Evaluating model performance for classification and regression
- Tuning XGBoost hyperparameters to improve model fit and efficiency (see the sketch following this list)
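To preview the first and last of these topics, the following is a minimal sketch of fitting an XGBoost regressor through its scikit-learn-style API and tuning a few hyperparameters with a small grid search. The synthetic dataset from `make_regression` and the parameter grid are illustrative assumptions, not the chapter's exact workflow:

```python
# Minimal sketch: fit an XGBoost regressor via its scikit-learn API and tune
# hyperparameters with GridSearchCV. The synthetic data and the parameter grid
# are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# Synthetic regression data standing in for your own dataset
X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Small illustrative hyperparameter grid
param_grid = {
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [100, 200],
}

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=42),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=3,
)
search.fit(X_train, y_train)

# Evaluate the best model found by the grid search on held-out data
best_model = search.best_estimator_
rmse = np.sqrt(mean_squared_error(y_test, best_model.predict(X_test)))
print("Best params:", search.best_params_)
print(f"Test RMSE: {rmse:.3f}")
```

We will cover these hyperparameters, and what each one controls, in more detail later in the chapter.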