Getting started with interpretable methods
In the world of AI and ML, black-box models are those whose decisions cannot be easily interpreted or understood by humans. White-box models, by contrast, expose their inner logic, functionality, and processing steps, so the decisions they make can be traced and explained. The most common white-box models are decision trees, linear regression models, and Bayesian networks. Such models, in particular linear models and generalized linear models such as logistic regression, have been commonly used within enterprises for well over a decade. While black-box models such as neural networks and XGBoost typically improve on the predictive power of their logistic regression counterparts, this comes at the expense of transparency.
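To make the white-box idea concrete, here is a minimal sketch (assuming scikit-learn and its bundled breast cancer dataset, neither of which is named in the text) that fits a logistic regression and reads its learned coefficients directly, the kind of inspection a black-box model does not offer:

```python
# Sketch: a white-box model's parameters are directly human-readable.
# Assumes scikit-learn; the dataset choice is illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# Standardize features so coefficient magnitudes are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Each coefficient is the change in log-odds of the positive class
# per one standard deviation of the feature: a transparent rule.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: abs(t[1]), reverse=True)
for name, weight in top[:5]:
    print(f"{name:25s} {weight:+.2f}")
```

Because every prediction is just a weighted sum of the inputs passed through a sigmoid, the printed weights fully describe the model's reasoning, which is exactly the transparency that neural networks and gradient-boosted trees trade away.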
Black-box models are, by definition, hard to look into and interpret. When AI produces insights...