Abstract: We consider a general class of data transformations based on
Graph Neural Networks (GNNs), which can be used for a
wide variety of tasks. An important question in this setting is
characterising the expressive power of these transformations
in terms of a suitable logic-based language. From a practical
perspective, the correspondence of a GNN with a logical theory can be exploited for explaining the model’s predictions
symbolically. In this paper, we introduce a broad family of
GNN-based transformations which can be characterised using Datalog programs with negation-as-failure; these programs can be computed from the GNNs after training. This generalises existing approaches based on positive programs by enabling the
learning of nonmonotonic transformations. We show empirically that these GNNs offer good performance for knowledge
graph completion tasks, and that we can efficiently extract
programs for explaining individual predictions.
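To make the target formalism concrete, here is a minimal sketch of evaluating a tiny Datalog program with negation-as-failure under stratified semantics. The predicates, facts, and rule below are hypothetical illustrations, not the programs the paper extracts from trained GNNs.

```python
# Hypothetical example program (stratified Datalog with negation-as-failure):
#   Stratum 1: node/1 and blocked/1 are given as facts.
#   Stratum 2: candidate(X) :- node(X), not blocked(X).
facts = {("node", "a"), ("node", "b"), ("node", "c"),
         ("blocked", "b")}

# Negation-as-failure: "not blocked(X)" holds when blocked(X) is
# absent from the (fully evaluated) lower stratum.
derived = {("candidate", x)
           for (p, x) in facts
           if p == "node" and ("blocked", x) not in facts}

print(sorted(x for (_, x) in derived))  # ['a', 'c']
```

Adding the fact `blocked("a")` would retract `candidate("a")`, which is exactly the nonmonotonic behaviour that positive (negation-free) programs cannot express.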