ON THE EXPRESSIVE POWER OF GEOMETRIC GRAPH NEURAL NETWORKS
Anonymous authors
Paper under double-blind review

Abstract

The expressive power of Graph Neural Networks (GNNs) has been studied extensively through the lens of the Weisfeiler-Leman (WL) graph isomorphism test. Yet, many graphs in scientific and engineering applications come embedded in Euclidean space with an additional notion of geometric isomorphism, which is not covered by the WL framework. In this work, we propose a geometric version of the WL test (GWL) for discriminating geometric graphs while respecting the underlying physical symmetries: permutations, rotations, reflections, and translations. We use GWL to characterise the expressive power of GNNs that are invariant or equivariant to physical symmetries in terms of the classes of geometric graphs they can distinguish. This allows us to formalise the advantages of equivariant GNNs over invariant GNNs: equivariant layers have greater expressive power as they enable propagating geometric information beyond local neighbourhoods, while invariant layers cannot distinguish graphs that are locally similar, highlighting their inability to compute global geometric quantities. Finally, we prove the equivalence between the universal approximation properties of geometric GNNs and our more granular discrimination-based perspective.

1. INTRODUCTION

Systems in biochemistry (Jamasb et al., 2022), material science (Chanussot et al., 2021), physical simulations (Sanchez-Gonzalez et al., 2020), and multiagent robotics (Li et al., 2020) contain both geometry and relational structure. Such systems can be modelled via geometric graphs embedded in Euclidean space. For example, molecules are represented as a set of nodes, each carrying information about an atom and its 3D spatial coordinates, as well as other geometric quantities such as velocity or acceleration. Notably, the geometric attributes transform along with Euclidean transformations of the system, i.e. they are equivariant to the symmetry groups of rotations, reflections, and translations. Standard Graph Neural Networks (GNNs), which do not take spatial symmetries into account, are ill-suited for geometric graphs, as the geometric attributes would no longer retain their physical meaning and transformation behaviour (Bogatskiy et al., 2022; Bronstein et al., 2021).

GNNs specialised for geometric graphs follow the message passing paradigm (Gilmer et al., 2017), where node features are updated in a permutation equivariant manner by aggregating features from local neighbourhoods. Crucially, in addition to permutations, the geometric attributes of the nodes transform along with Euclidean transformations of the system, i.e. they are equivariant to the Lie group of rotations (SO(d)) or rotations and reflections (O(d)). We use G as a generic symbol for these Lie groups.
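The distinction between invariant and equivariant attributes can be checked numerically. The following sketch (a toy setup, not an implementation from the paper) rotates a small 3D point cloud and confirms that pairwise distances are invariant under SO(3), while relative position vectors are equivariant, i.e. they rotate along with the system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy geometric graph: 5 nodes with scalar features and 3D positions.
scalars = rng.normal(size=(5, 4))   # e.g. atom-type features (invariant)
coords = rng.normal(size=(5, 3))    # 3D positions (equivariant)

# A random rotation R in SO(3), built via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))   # flip sign if needed so det(R) = +1

rotated = coords @ R.T              # rotate the whole system

# Pairwise distances are invariant under the rotation...
d_before = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
d_after = np.linalg.norm(rotated[:, None] - rotated[None, :], axis=-1)
assert np.allclose(d_before, d_after)

# ...while relative position vectors are equivariant: they rotate by R.
rel_before = coords[1] - coords[0]
rel_after = rotated[1] - rotated[0]
assert np.allclose(rel_after, R @ rel_before)
```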
We consider two classes of GNNs for geometric graphs: (1) G-equivariant models, where the intermediate features and propagated messages are equivariant geometric quantities such as vectors or tensors (Thomas et al., 2018; Anderson et al., 2019; Jing et al., 2020; Satorras et al., 2021; Brandstetter et al., 2022); and (2) G-invariant models, which only propagate local invariant scalar features such as distances and angles (Schütt et al., 2018; Xie & Grossman, 2018; Gasteiger et al., 2020). Despite promising empirical results for both classes of architectures, key theoretical questions remain unanswered: (1) How can we characterise the expressive power of geometric GNNs? And (2) what is the tradeoff between G-equivariant and G-invariant GNNs?

The graph isomorphism problem (Read & Corneil, 1977) and the Weisfeiler-Leman (WL) test (Weisfeiler & Leman, 1968) for distinguishing non-isomorphic graphs have become a powerful tool for analysing the expressive power of non-geometric GNNs (Xu et al., 2019; Morris et al., 2019).
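The two model classes can be illustrated with a minimal message passing sketch. This is a toy construction under our own simplifying assumptions (the layer names and the scalar-weighted position update, loosely in the style of EGNN (Satorras et al., 2021), are illustrative, not the definitions used in this paper): the invariant layer conditions messages only on pairwise distances, so its outputs are unchanged by rotations, while the equivariant layer additionally updates positions along relative direction vectors, so its vector outputs rotate with the input.

```python
import numpy as np

def invariant_layer(h, x, edges):
    """G-invariant message passing: messages depend only on scalar
    features and pairwise distances, so the output is rotation-invariant."""
    h_new = h.copy()
    for i, j in edges:
        dist = np.linalg.norm(x[i] - x[j])
        h_new[i] = h_new[i] + np.tanh(h[j] * dist)  # toy invariant message
    return h_new

def equivariant_layer(h, x, edges):
    """G-equivariant message passing sketch: positions are updated along
    relative direction vectors, so updated positions rotate with the input."""
    h_new, x_new = h.copy(), x.copy()
    for i, j in edges:
        rel = x[i] - x[j]                 # equivariant relative vector
        dist = np.linalg.norm(rel)        # invariant scalar
        msg = np.tanh(h[j] * dist)
        h_new[i] = h_new[i] + msg
        x_new[i] = x_new[i] + msg.mean() * rel  # vector-valued update
    return h_new, x_new
```

Rotating the input coordinates by some R in SO(3) leaves both layers' scalar outputs unchanged, while the equivariant layer's updated coordinates come out rotated by the same R, matching the behaviour described above.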

