Summary: | Graph Classification is a promising area of deep learning, but it has a significant drawback: to trust a model's predicted label for an input graph, we need to understand the reasons behind that prediction, and Graph Classification models do not supply these reasons. Graph Classification Interpretability Methods were conceived to fill this gap. To analyse a new interpretability method, GNNExplainer, on a comparative basis against the established methods from our main reference (saliency, also known as CG; Grad-CAM; and DeepLIFT), we develop a bridging algorithm that computes a node attribution score for each node in a test graph. The scores of all nodes across the test graph dataset are then used to produce quantitative metrics (fidelity, contrastivity and sparsity) for comparison.
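To make the metric computation concrete, below is a minimal Python sketch assuming the definitions common in the graph-interpretability literature: sparsity as the fraction of nodes not marked salient, contrastivity as the normalized Hamming distance between binarized attribution masks for two target classes, and fidelity as the accuracy drop after occluding salient nodes. The threshold value and function names are illustrative assumptions, not taken from the paper.

```python
# Sketch of the comparison metrics, assuming common literature definitions.
# The 0.01 threshold and all helper names here are illustrative assumptions.
import numpy as np

def binarize(scores, threshold=0.01):
    """Binarize node attribution scores: 1 = salient node, 0 = not."""
    return (np.asarray(scores) > threshold).astype(int)

def sparsity(scores, threshold=0.01):
    """Fraction of nodes NOT marked salient (higher = more localized)."""
    mask = binarize(scores, threshold)
    return 1.0 - mask.sum() / mask.size

def contrastivity(scores_class0, scores_class1, threshold=0.01):
    """Normalized Hamming distance between binarized masks for two classes."""
    m0 = binarize(scores_class0, threshold)
    m1 = binarize(scores_class1, threshold)
    union = np.logical_or(m0, m1).sum()
    return np.logical_xor(m0, m1).sum() / union if union else 0.0

def fidelity(acc_original, acc_occluded):
    """Drop in classification accuracy after occluding the salient nodes."""
    return acc_original - acc_occluded
```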