ON THE EXPRESSIVE POWER OF GEOMETRIC GRAPH NEURAL NETWORKS
Anonymous authors
Paper under double-blind review

Abstract

The expressive power of Graph Neural Networks (GNNs) has been studied extensively through the lens of the Weisfeiler-Leman (WL) graph isomorphism test. Yet, many graphs in scientific and engineering applications come embedded in Euclidean space with an additional notion of geometric isomorphism, which is not covered by the WL framework. In this work, we propose a geometric version of the WL test (GWL) for discriminating geometric graphs while respecting the underlying physical symmetries: permutations, rotations, reflections, and translations. We use GWL to characterise the expressive power of GNNs that are invariant or equivariant to physical symmetries in terms of the classes of geometric graphs they can distinguish. This allows us to formalise the advantages of equivariant GNNs over invariant GNNs: equivariant layers have greater expressive power as they enable propagating geometric information beyond local neighbourhoods, while invariant layers cannot distinguish graphs that are locally similar, highlighting their inability to compute global geometric quantities. Finally, we prove the equivalence between the universal approximation properties of geometric GNNs and our more granular discrimination-based perspective.

1. INTRODUCTION

Systems in biochemistry (Jamasb et al., 2022), material science (Chanussot et al., 2021), physical simulations (Sanchez-Gonzalez et al., 2020), and multi-agent robotics (Li et al., 2020) contain both geometry and relational structure. Such systems can be modelled via geometric graphs embedded in Euclidean space. For example, molecules are represented as a set of nodes, each carrying information about an atom and its 3D spatial coordinates, as well as other geometric quantities such as velocity or acceleration. Notably, the geometric attributes transform along with Euclidean transformations of the system, i.e. they are equivariant to symmetry groups of rotations, reflections, and translations. Standard Graph Neural Networks (GNNs), which do not take spatial symmetries into account, are ill-suited for geometric graphs, as the geometric attributes would no longer retain their physical meaning and transformation behaviour (Bogatskiy et al., 2022; Bronstein et al., 2021). GNNs specialised for geometric graphs follow the message passing paradigm (Gilmer et al., 2017), where node features are updated in a permutation-equivariant manner by aggregating features from local neighbourhoods. Crucially, in addition to permutations, the geometric attributes of the nodes are equivariant to the Lie group of rotations (SO(d)) or rotations and reflections (O(d)). We use G as a generic symbol for these Lie groups.
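This transformation behaviour can be checked numerically. The sketch below (illustrative, not from the paper; assumes numpy) rotates a toy 3D point cloud and confirms that node coordinates transform equivariantly with the system, while scalar features built from pairwise distances remain invariant.

```python
import numpy as np

# Toy 3D point cloud: node coordinates of a geometric graph.
x = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.5]])

# A rotation about the z-axis, i.e. an element of SO(3).
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Coordinates are equivariant: rotating the system rotates them too.
x_rot = x @ Q.T

def pairwise_distances(pts):
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Pairwise distances are invariant under the rotation.
assert np.allclose(pairwise_distances(x), pairwise_distances(x_rot))
```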
We consider two classes of GNNs for geometric graphs: (1) G-equivariant models, where the intermediate features and propagated messages are equivariant geometric quantities such as vectors or tensors (Thomas et al., 2018; Anderson et al., 2019; Jing et al., 2020; Satorras et al., 2021; Brandstetter et al., 2022); and (2) G-invariant models, which only propagate local invariant scalar features such as distances and angles (Schütt et al., 2018; Xie & Grossman, 2018; Gasteiger et al., 2020). Despite promising empirical results for both classes of architectures, key theoretical questions remain unanswered: (1) How can we characterise the expressive power of geometric GNNs? And (2) what is the tradeoff between G-equivariant and G-invariant GNNs? The graph isomorphism problem (Read & Corneil, 1977) and the Weisfeiler-Leman (WL) test (Weisfeiler & Leman, 1968) for distinguishing non-isomorphic graphs have become a powerful tool for analysing the expressive power of non-geometric GNNs (Xu et al., 2019; Morris et al., 2019). The WL framework has been a major driver of progress in graph representation learning (Chen et al., 2019; Maron et al., 2019; Dwivedi et al., 2020; Bodnar et al., 2021b;a). However, the WL framework does not directly apply to geometric graphs, as they exhibit a stronger notion of isomorphism that also takes spatial symmetries into account.
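To make the distinction between the two classes concrete, the toy sketch below (hypothetical function names, numpy only) contrasts a single message-passing aggregation of each kind: the invariant layer aggregates scalar distances, which are unchanged by a rotation of the input, while the equivariant layer aggregates relative position vectors, which rotate along with the input.

```python
import numpy as np

def invariant_layer(x, h, edges):
    """Class (2): aggregate G-invariant scalar messages (distances)."""
    out = h.copy()
    for i, j in edges:
        out[i] += np.linalg.norm(x[i] - x[j])  # rotation-invariant scalar
    return out

def equivariant_layer(x, v, edges):
    """Class (1): aggregate G-equivariant vector messages (relative positions)."""
    out = v.copy()
    for i, j in edges:
        out[i] += x[i] - x[j]  # rotates with the system
    return out
```

Under any rotation Q, the invariant layer's output is unchanged, whereas the equivariant layer's output is rotated by the same Q, i.e. it commutes with the group action.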

Contributions.

In this work, we study the expressive power of geometric GNNs from the perspective of discriminating non-isomorphic geometric graphs:
• In Section 3, we propose a geometric version of the Weisfeiler-Leman graph isomorphism test, termed GWL. We use GWL to formally characterise the classes of graphs that can and cannot be distinguished by G-invariant and G-equivariant GNNs. We show that invariant models have limited expressive power, as they cannot distinguish graphs whose one-hop local neighbourhoods are similar, while equivariant models distinguish a larger class of graphs by propagating geometric vector quantities beyond local neighbourhoods.
• In Section 4, we study the design space of geometric GNNs using GWL, highlighting their theoretical limitations in terms of depth and body order, and discussing practical implications. We show that G-invariant models cannot compute global geometric properties such as volume, area, and centroid. Synthetic experiments in Appendix C supplement our theory and highlight practical challenges in building geometric GNNs.
• In Section 5, we follow Chen et al. (2019) and prove an equivalence between a model's ability to discriminate geometric graphs and its ability to universally approximate G-invariant functions. While universality is binary, GWL's discrimination-based perspective provides a more granular and practically insightful lens to study geometric GNNs.

2. BACKGROUND

Graph Isomorphism and Weisfeiler-Leman. An attributed graph G = (A, S) with a node set V of size n consists of an n × n adjacency matrix A and a matrix of scalar features S ∈ R^(n×f). Two attributed graphs G, H are isomorphic if there exists an edge-preserving bijection b : V(G) → V(H) such that s_i^(G) = s_(b(i))^(H), where the subscripts index rows in the corresponding feature matrices. The Weisfeiler-Leman test (WL) is an algorithm for testing whether two (attributed) graphs are isomorphic (Weisfeiler & Leman, 1968). At iteration zero, the algorithm assigns a colour c_i^(0) ∈ C from a countable space of colours C to each node i. Nodes are coloured the same if their features are the same; otherwise, they are coloured differently. In subsequent iterations t, WL iteratively updates the node colouring by injectively mapping each node's colour together with the multiset of its neighbours' colours to a new colour:

c_i^(t) = HASH( c_i^(t-1), {{ c_j^(t-1) | j ∈ N_i }} ),

where HASH denotes an injective map and {{·}} denotes a multiset.
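The colour-refinement procedure above can be sketched in a few lines. The snippet below is a minimal illustration, assuming graphs are given as adjacency lists and using Python's built-in `hash` as a stand-in for an injective colour map; the names `wl_iteration` and `wl_test` are illustrative, not from the paper. If the colour histograms of two graphs ever differ, the graphs are certifiably non-isomorphic; if they never differ, the test is inconclusive.

```python
def wl_iteration(colours, neighbours):
    """One WL refinement: map (own colour, multiset of neighbour colours) to a new colour."""
    new_colours = {}
    for i in colours:
        # A sorted tuple serves as a canonical form of the multiset {{c_j}}.
        multiset = tuple(sorted(colours[j] for j in neighbours[i]))
        new_colours[i] = hash((colours[i], multiset))  # injective stand-in for HASH
    return new_colours

def wl_test(neighbours_G, colours_G, neighbours_H, colours_H, iters=3):
    """Return False if WL certifies G and H non-isomorphic; True if inconclusive."""
    for _ in range(iters):
        colours_G = wl_iteration(colours_G, neighbours_G)
        colours_H = wl_iteration(colours_H, neighbours_H)
        # Compare colour histograms; a mismatch proves non-isomorphism.
        if sorted(colours_G.values()) != sorted(colours_H.values()):
            return False
    return True
```

For instance, a triangle and a 3-node path with identical initial node colours are separated after a single refinement step, since the path's endpoints see only one neighbour.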



Figure 1: Geometric Weisfeiler-Leman Test. GWL distinguishes non-isomorphic geometric graphs G 1 and G 2 by injectively assigning colours to distinct neighbourhood patterns, up to global symmetries (here G = O(d)). Each iteration expands the neighbourhood from which geometric information can be gathered (shaded for node i). Example inspired by Schütt et al. (2021).

