Summary
As organizations increasingly rely on AI to drive critical business decisions, it becomes essential to understand how and why these systems make the predictions they do. This is known as AI explainability. Explainability not only helps build trust in these systems but also plays a crucial role in debugging and improving AI models. When we can understand how an AI algorithm works, we can have confidence in its results; when we cannot explain how a system arrives at its predictions, we have no way to verify that those predictions are sound.
In enterprises and other business settings where explainability methods must be applied, there is a constant need to ask whether the explainability challenges that come with more complex ML models are justified, particularly when simpler models can perform nearly as well predictively; the short sketch below illustrates that trade-off.
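To make that trade-off concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast-cancer dataset purely for illustration: it compares an inherently interpretable model, whose coefficients serve directly as explanations, against a more complex model that would require post-hoc explainability tooling.

```python
# Minimal sketch (illustrative only): interpretable vs. complex model on the same task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simpler, directly interpretable model: each coefficient is itself an explanation.
simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
simple.fit(X_train, y_train)

# More complex model: typically higher capacity, but opaque without extra tooling.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(f"Logistic regression accuracy: {simple.score(X_test, y_test):.3f}")
print(f"Gradient boosting accuracy:   {complex_model.score(X_test, y_test):.3f}")

# The simpler model explains itself: inspect its largest (standardized) coefficients.
coefs = sorted(zip(X.columns, simple[-1].coef_[0]),
               key=lambda t: abs(t[1]), reverse=True)
for name, weight in coefs[:5]:
    print(f"{name}: {weight:+.2f}")
```

If the accuracy gap between the two models is small, the simpler model's built-in interpretability may outweigh the marginal predictive gain of the complex one; the dataset and model choices here are assumptions made only to illustrate that comparison.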
In this chapter, we reviewed various methods for explaining AI models, including visualizing data...