Summary: Identifying similar network structures is key to capturing graph isomorphisms and learning representations that exploit structural information encoded in graph data. This work shows that ego networks can produce a structural encoding scheme for arbitrary graphs with greater expressivity than the Weisfeiler–Lehman (1-WL) test. We introduce IGEL, a preprocessing step that augments node representations by encoding ego networks into sparse vectors, enriching message-passing (MP) graph neural networks (GNNs) beyond 1-WL expressivity. We formally describe the relation between IGEL and 1-WL, and characterize its expressive power and limitations. Experiments show that IGEL matches the empirical expressivity of state-of-the-art methods on isomorphism detection while improving performance on nine GNN architectures and six graph machine learning tasks.