Interpretable Machine Learning with Python, Second Edition
Build explainable, fair, and robust high-performance models with hands-on, real-world examples

Product type: Paperback
Published: Oct 2023
Publisher: Packt
ISBN-13: 9781803235424
Length: 606 pages
Edition: 2nd
Author: Serg Masís
Table of Contents (17 chapters)

Preface
1. Interpretation, Interpretability, and Explainability; and Why Does It All Matter?
2. Key Concepts of Interpretability (free chapter)
3. Interpretation Challenges
4. Global Model-Agnostic Interpretation Methods
5. Local Model-Agnostic Interpretation Methods
6. Anchors and Counterfactual Explanations
7. Visualizing Convolutional Neural Networks
8. Interpreting NLP Transformers
9. Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis
10. Feature Selection and Engineering for Interpretability
11. Bias Mitigation and Causal Inference Methods
12. Monotonic Constraints and Model Tuning for Interpretability
13. Adversarial Robustness
14. What's Next for Machine Learning Interpretability?
Other Books You May Enjoy
Index

Mission accomplished

The first part of the mission was to understand the risk factors for cardiovascular disease. According to the logistic regression model, the top four risk factors are systolic blood pressure (ap_hi), age, cholesterol, and weight, of which only age is non-modifiable. However, you also realized that systolic blood pressure (ap_hi) is not very meaningful on its own, since it relies on diastolic blood pressure (ap_lo) for interpretation; the same goes for weight and height. We learned that feature interactions play a crucial role in interpretation, as do the features' relationships with each other and with the target variable, whether linear or monotonic. Furthermore, the data is only a representation of the truth, and that representation can be wrong. After all, we found anomalies that, left unchecked, could have biased our model.
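As a reminder of how such a ranking can be obtained, here is a minimal sketch: standardize the features so coefficient magnitudes are comparable, fit a logistic regression, and sort by absolute coefficient. The feature names follow the chapter's dataset, but the data below is synthetic, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the cardiovascular dataset: the column names
# follow the chapter (ap_hi, age, ...), but the values are made up.
rng = np.random.default_rng(0)
feature_names = ["ap_hi", "age", "cholesterol", "weight", "ap_lo", "height"]
X = rng.normal(size=(500, len(feature_names)))
# Target is driven mostly by the first four features, plus noise
y = (X[:, 0] * 1.5 + X[:, 1] * 1.2 + X[:, 2] * 0.9 + X[:, 3] * 0.6
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Standardize first so coefficient magnitudes are comparable across features
X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

# Rank features by absolute coefficient size
ranking = sorted(zip(feature_names, model.coef_[0]),
                 key=lambda t: abs(t[1]), reverse=True)
for name, coef in ranking:
    print(f"{name:12s} {coef:+.3f}")
```

Note that coefficient magnitude is only a crude proxy for importance; later chapters cover model-agnostic methods that handle interactions and nonlinearity properly.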

Another source of bias is how the data was collected. After all, one might wonder why the model's top features were all objective and examination features. Why isn't smoking or drinking a larger factor? To verify whether sample bias was involved, you would have to compare your dataset against other, more trustworthy datasets to check whether it underrepresents drinkers and smokers. Or perhaps the bias was introduced by the survey question itself, which asked whether respondents smoked now rather than whether they had ever smoked for an extended period.

Another type of bias that we could address is exclusion bias: our data might be missing information that explains the truth the model is trying to depict. For instance, we know from medical research that blood pressure issues such as isolated systolic hypertension, which increases CVD risk, are caused by underlying conditions such as diabetes, hyperthyroidism, arterial stiffness, and obesity, to name a few. Of these conditions, the only one we can derive from the data is obesity. If we want to interpret a model's predictions well, we need all relevant features; otherwise, there will be gaps we cannot explain. Perhaps once we add them, they won't make much of a difference, but that's what the methods we will learn in Chapter 10, Feature Selection and Engineering for Interpretability, are for.
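Deriving obesity from the existing columns is straightforward, since the dataset already records weight (kg) and height (cm). A minimal sketch, using the standard WHO cutoff of BMI ≥ 30 (the example rows are made up):

```python
import pandas as pd

# Hypothetical rows mirroring the dataset's weight (kg) and height (cm) columns
df = pd.DataFrame({"weight": [70.0, 95.0], "height": [175, 160]})

# BMI = weight (kg) / height (m)^2; WHO classifies BMI >= 30 as obese
df["bmi"] = df["weight"] / (df["height"] / 100) ** 2
df["obese"] = (df["bmi"] >= 30).astype(int)
print(df)
```

The other underlying conditions (diabetes, hyperthyroidism, arterial stiffness) have no counterpart in the data, which is exactly the exclusion-bias gap described above.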

The second part of the mission was to interpret individual model predictions. We can do this well enough by plotting decision regions. It's a simple method, but it has many limitations, especially when there are more than a handful of features and they interact heavily with each other. Chapter 5, Local Model-Agnostic Interpretation Methods, and Chapter 6, Anchors and Counterfactual Explanations, will cover local interpretation methods in more detail. However, the decision region plot helps illustrate many of the concepts surrounding decision boundaries that we will discuss in those chapters.
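The essence of a decision region plot is simple: evaluate the classifier on a dense grid over two features, and color each grid cell by its predicted class. The following sketch (not the book's code; it uses a toy two-feature dataset) computes the regions; the grid limits you to two features at a time, which is the scaling limitation noted above.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

# Toy two-feature dataset; the chapter's dataset has many more features,
# which is exactly why this method stops scaling
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = LogisticRegression().fit(X, y)

# Evaluate the classifier on a dense grid; each cell's predicted class
# defines the decision region it belongs to
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
regions = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

# regions can now be drawn with plt.contourf(xx, yy, regions)
# and the training points overlaid with plt.scatter
```

Libraries such as mlxtend (plot_decision_regions) or scikit-learn's DecisionBoundaryDisplay wrap this pattern, but the underlying grid-and-predict idea is the same.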
