Exploring PDPs for model interpretation
When building machine learning models, understanding how individual features impact the model’s predictions is helpful, especially for more complex models such as gradient boosting. Partial dependence plots (PDPs) are a valuable tool for visualizing the relationship between a feature and the target variable while averaging out the effects of all other features. PDPs let you explore how varying a single feature influences the predictions made by the model.
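As a minimal sketch of what this looks like in practice, the example below uses scikit-learn’s PartialDependenceDisplay with a gradient boosting regressor. The California housing dataset and the MedInc feature are assumptions chosen purely for illustration; any fitted estimator and feature of your own would work the same way.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Illustrative data and model -- substitute your own fitted estimator
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=42).fit(X, y)

# Plot the partial dependence of the prediction on a single feature,
# averaging out the effect of all other features
PartialDependenceDisplay.from_estimator(model, X, features=["MedInc"])
plt.show()
```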
Why use PDPs?
PDPs visualize the marginal effect of one or two features on the predicted outcome, making it easy to see how a specific feature drives the model’s predictions. They also reveal whether the relationship between a feature and the target is linear, monotonic, or more complex. Like SHAP, PDPs provide a global view of a feature’s influence on the model across all instances, unlike local interpretability methods such as LIME, which focus on explaining individual predictions.
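To illustrate the one- versus two-feature case, the following sketch plots the marginal effect of a single feature alongside the joint effect of a feature pair, which can hint at interactions. As before, the dataset, model, and feature names are assumptions chosen only for demonstration.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=42).fit(X, y)

# One single-feature PDP plus one two-feature PDP; passing a tuple of
# feature names tells scikit-learn to draw a 2D (interaction) plot
PartialDependenceDisplay.from_estimator(
    model, X, features=["AveOccup", ("MedInc", "AveOccup")]
)
plt.show()
```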