Interpretability and explainability
As graph learning models become increasingly complex and are applied to critical domains such as healthcare, finance, and the social sciences, the need for interpretable and explainable models has grown accordingly. Here, we explore two key aspects of interpretability and explainability in graph learning.
Explaining GNN decisions
GNNs often act as black boxes, making it challenging to understand why they make certain predictions. This lack of transparency can be problematic in high-stakes applications such as drug discovery or financial fraud detection. To address this, several approaches have been developed to explain GNN decisions:
- One prominent method is GNNExplainer, which identifies a compact subgraph and a small subset of node features that most influence a model's prediction. It does this by maximizing the mutual information between the GNN's prediction and the distribution of possible explanatory subgraphs, which in practice amounts to learning a soft mask over edges (and features) that preserves the original prediction while remaining sparse (see the sketch after this list).
- Another approach...
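To make the GNNExplainer idea concrete, the sketch below learns a sigmoid edge mask that explains a single node's prediction using plain PyTorch. It assumes a trained GNN `model` whose forward pass accepts optional per-edge weights (i.e. `model(x, edge_index, edge_weight=...)` returns per-node logits); the function name, that interface, and the regularization coefficients are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F


def explain_node(model, x, edge_index, node_idx, epochs=200, lr=0.01,
                 size_coeff=0.005, entropy_coeff=1.0):
    """Learn an edge mask that explains `model`'s prediction for one node.

    Assumes `model(x, edge_index, edge_weight=...)` returns per-node logits.
    """
    model.eval()
    # The label to preserve is the model's own prediction on the full graph.
    with torch.no_grad():
        target = model(x, edge_index).argmax(dim=-1)[node_idx]

    # One learnable logit per edge; a sigmoid squashes it into a soft mask in (0, 1).
    edge_mask = torch.nn.Parameter(0.1 * torch.randn(edge_index.size(1)))
    optimizer = torch.optim.Adam([edge_mask], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        mask = edge_mask.sigmoid()
        # Prediction on the softly masked graph.
        log_probs = F.log_softmax(model(x, edge_index, edge_weight=mask), dim=-1)
        # Maximizing MI(Y, G_S) = H(Y) - H(Y | G_S) reduces to minimizing the
        # cross-entropy of the masked prediction against the original label.
        loss = -log_probs[node_idx, target]
        # Size regularizer: prefer explanations that use few edges.
        loss = loss + size_coeff * mask.sum()
        # Entropy regularizer: push mask entries toward 0 or 1.
        ent = -mask * (mask + 1e-15).log() - (1 - mask) * (1 - mask + 1e-15).log()
        loss = loss + entropy_coeff * ent.mean()
        loss.backward()
        optimizer.step()

    # Per-edge importance scores; threshold them to extract the explanatory subgraph.
    return edge_mask.sigmoid().detach()
```

The returned per-edge scores can be thresholded to recover the explanatory subgraph: the entropy term drives mask entries toward binary values, while the size term keeps the selected subgraph small.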