Summary
Throughout this chapter, you gained insight into how interpretability and explainability fit into a healthy model and a robust data science workflow. We saw that they matter not just for creating a great model, but also for business, moral, and legal reasons.
We checked back in on the algorithms from earlier chapters, such as decision trees, and saw that they have a great advantage not only in accuracy but also in how readily they can be interpreted by the data scientists creating them.
Later, we saw that despite the suggestion that simpler models should be considered first, black box models are quite common, so we still need to be able to interpret models such as random forests. With that in mind, you saw how LIME can be a great tool for turning that black box into a more transparent version of itself by assuming that approximately linear relationships can be found when zooming in on a small, local region of the feature space.
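As a quick refresher, the following is a minimal sketch of that idea using the lime package on a generic random forest; the diabetes dataset, model settings, and instance chosen here are illustrative placeholders, not the chapter's worked example.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and "black box" model for illustration only
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs points around a single instance and fits a simple linear
# surrogate in that local neighborhood, which is what makes it interpretable
explainer = LimeTabularExplainer(
    X, feature_names=data.feature_names, mode="regression"
)
explanation = explainer.explain_instance(X[0], model.predict, num_features=5)
print(explanation.as_list())  # top local feature contributions
```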
Finally, we checked out SHAP, which builds on Shapley values from cooperative game theory to attribute a share of each individual prediction to every feature, giving us local explanations that can also be aggregated into a global view of the model.
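A minimal sketch of that workflow with the shap package is shown below, again on a placeholder random forest and dataset rather than the chapter's own example.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model for illustration only
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature, per prediction

# The summary plot aggregates the local attributions into a global view
shap.summary_plot(shap_values, X, feature_names=data.feature_names)
```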