GRAPH NEURAL NETWORKS ARE MORE POWERFUL THAN WE THINK

Abstract

Graph Neural Networks (GNNs) are powerful convolutional architectures that have shown remarkable performance in various node-level and graph-level tasks. Despite their success, the common belief is that the expressive power of standard GNNs is limited and that they are at most as discriminative as the Weisfeiler-Lehman (WL) algorithm. In this paper we argue the opposite and show that standard GNNs, with anonymous inputs, produce more discriminative representations than the WL algorithm. To this end, we derive an alternative analysis that employs linear-algebraic tools and characterizes the representational power of GNNs with respect to the eigenvalue decomposition of the graph operators. We prove that GNNs can generate distinctive outputs from white, uninformative inputs, at least for all pairs of graphs with different eigenvalues. We also show that simple convolutional architectures with white inputs produce features that count the closed paths in the graph and are provably more expressive than the WL representations. A thorough experimental analysis on graph isomorphism and graph classification datasets corroborates our theoretical results and demonstrates the effectiveness of the proposed approach.

1. INTRODUCTION

Graph Neural Networks (GNNs) have emerged in the field of machine learning and artificial intelligence as powerful tools that process network structures and network data. Their convolutional architecture allows them to inherit the favorable properties of convolutional neural networks (CNNs), while also exploiting the graph structure. Despite their remarkable performance, the success of GNNs has yet to be fully explained. A lot of research has been conducted to theoretically support the experimental developments, focusing on understanding the functionality of GNNs and analyzing their properties. In particular, permutation invariance-equivariance (Maron et al., 2018), stability to perturbations (Gama et al., 2020) and transferability (Ruiz et al., 2020a; Levie et al., 2021) are properties central to the success of GNNs. Lately, the research focus has shifted toward analyzing the expressive power of GNNs, since their universality depends on their ability to produce different outputs for different graphs. The common belief is that standard anonymous GNNs have limited expressive power (Xu et al., 2019) and that it is upper bounded by the expressive power of the Weisfeiler-Lehman (WL) algorithm (Weisfeiler & Leman, 1968). This has spurred research on building more expressive GNNs, either by increasing their complexity or by employing independent graph algorithms to design expressive inputs. In this work we argue the opposite. We prove that standard anonymous graph convolutional structures are able to generate more expressive representations than the WL algorithm. Therefore, resorting to handcrafted features or complex GNNs to break the WL limits is not necessary.
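The gap between WL and spectral features can be seen on a classical pair of graphs: the 6-cycle C6 and two disjoint triangles 2C3 are both 2-regular, so 1-WL color refinement assigns every node the same color and cannot separate them, yet their closed-path counts (traces of adjacency powers, i.e., power sums of the eigenvalues) differ. The following sketch, which is our own illustration rather than code from the paper, checks this with NumPy:

```python
import numpy as np

def cycle_adj(n):
    """Adjacency matrix of the n-node cycle graph C_n."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

# C6 (one 6-cycle) vs. 2xC3 (two disjoint triangles): both graphs are
# 2-regular on 6 nodes, so 1-WL refinement gives identical colorings.
A1 = cycle_adj(6)
A2 = np.block([[cycle_adj(3), np.zeros((3, 3), dtype=int)],
               [np.zeros((3, 3), dtype=int), cycle_adj(3)]])

# Closed walks of length k are counted by trace(A^k) = sum_i lambda_i^k.
for k in (2, 3):
    t1 = np.trace(np.linalg.matrix_power(A1, k))
    t2 = np.trace(np.linalg.matrix_power(A2, k))
    print(f"k={k}: C6 -> {t1}, 2xC3 -> {t2}")
# k=2 agrees (both have 6 edges), but k=3 differs: C6 has no
# triangles (trace 0) while 2xC3 has trace 12, so a feature that
# counts closed paths separates graphs that WL cannot.
```

Note that the length-3 counts, not the length-2 counts, do the separating here: any two graphs with the same number of edges share trace(A^2) = 2|E|.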

Our work is motivated by the following research problem:

Problem definition: Given a pair of different graphs G, Ĝ and anonymous inputs X, X̂, is there a GNN ϕ with parameter tensor H such that ϕ(X; G, H) and ϕ(X̂; Ĝ, H) are nonisomorphic?

As anonymous inputs, we define inputs that are identity- and structure-agnostic, i.e., they cannot distinguish graphs or nodes of the graph before processing. Why anonymous? Because if the inputs are discriminative prior to processing, concrete conclusions on the discriminative power of GNNs cannot be drawn.
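To make the problem concrete, one can take the anonymous inputs to be white Gaussian vectors x ~ N(0, I): they carry no identity or structure information, yet the statistics of a convolutional output are graph-dependent. In particular, for the quadratic readout xᵀAᵏx one has E[xᵀAᵏx] = trace(Aᵏ), the number of closed walks of length k. The sketch below, our own illustration with a plain k-hop filter Aᵏ (the function names and the specific readout are our choices, not the paper's), estimates this readout for the WL-indistinguishable pair C6 vs. 2C3:

```python
import numpy as np

def cycle_adj(n):
    """Adjacency matrix of the n-node cycle graph C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

def anonymous_readout(A, k=3, n_samples=20000, seed=0):
    """Feed white (anonymous) inputs x ~ N(0, I) through a k-hop
    graph convolution A^k x and average the readout x^T (A^k x).
    In expectation this equals trace(A^k), the closed-walk count,
    so the output statistic is graph-aware although the input is not."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.standard_normal((n, n_samples))     # white anonymous inputs
    Y = np.linalg.matrix_power(A, k) @ X        # k-hop convolution
    return (X * Y).sum(axis=0).mean()           # Monte-Carlo E[x^T A^k x]

A_c6 = cycle_adj(6)                                    # one 6-cycle
A_2c3 = np.block([[cycle_adj(3), np.zeros((3, 3))],
                  [np.zeros((3, 3)), cycle_adj(3)]])   # two triangles

print(anonymous_readout(A_c6))    # close to trace(A^3) = 0  (no triangles)
print(anonymous_readout(A_2c3))   # close to trace(A^3) = 12 (two triangles)
```

Both graphs receive identically distributed inputs, so any gap between the two readouts is attributable to the graph operator alone, which is exactly the discriminative behavior the problem definition asks for.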

