Implementing LIME for model interpretation
Local Interpretable Model-agnostic Explanations (LIME) takes a different approach from SHAP. While SHAP assigns each feature a contribution score to explain the model’s prediction, LIME builds a simple, interpretable model that is valid only locally, around the single prediction being explained. It perturbs the input features in small, controlled ways, observes how these perturbations change the model’s output, and fits a simple surrogate model to those observations to explain the prediction.
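To make the idea concrete, here is a toy sketch of the mechanism (not the library’s actual implementation): perturb an instance with small amounts of noise, weight each perturbed sample by its proximity to the original, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The names `model` and `x` are illustrative, and a binary classifier with a `predict_proba` method is assumed.

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(model, x, num_samples=500, kernel_width=0.75):
    # Perturb the instance with small Gaussian noise
    rng = np.random.default_rng(0)
    perturbed = x + rng.normal(scale=0.1, size=(num_samples, x.shape[0]))

    # Query the black-box model on the perturbed samples
    preds = model.predict_proba(perturbed)[:, 1]

    # Weight samples by their distance to the original instance
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # Fit a weighted linear model: its coefficients are the local explanation
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_
```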
Why use LIME?
Because it focuses on local explanations, LIME is particularly useful for debugging individual predictions or edge cases. It also works with any machine learning model, not just tree-based models such as XGBoost.
Using LIME to interpret XGBoost predictions
To use LIME effectively, you’ll need to pass unscaled data to LIME (so that the explanations stay in the original feature units) and use a wrapper function that scales the data before prediction, since the model still expects scaled inputs. Let’s try it out:
...
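A minimal sketch of this pattern is shown below. It assumes a fitted StandardScaler named `scaler`, a trained XGBoost classifier named `model`, and unscaled training data in a DataFrame `X_train`; these names are illustrative.

```python
from lime.lime_tabular import LimeTabularExplainer

def predict_fn(unscaled_rows):
    # LIME passes unscaled perturbed samples; scale them before predicting
    return model.predict_proba(scaler.transform(unscaled_rows))

explainer = LimeTabularExplainer(
    training_data=X_train.values,            # unscaled training data
    feature_names=X_train.columns.tolist(),
    mode="classification",
)

# Explain a single prediction (row 0 here)
explanation = explainer.explain_instance(
    X_train.values[0],
    predict_fn,
    num_features=10,
)
print(explanation.as_list())  # feature contributions from the local surrogate
```

The key point is that the explainer only ever sees unscaled values, so the resulting explanation is expressed in the original feature units, while `predict_fn` handles scaling internally so the model receives inputs in the form it was trained on.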