LIMITLESS STABILITY FOR GRAPH CONVOLUTIONAL NETWORKS

Abstract

This work establishes rigorous, novel and widely applicable stability guarantees and transferability bounds for graph convolutional networks, without reference to any underlying limit object or statistical distribution. Crucially, the utilized graph shift operators (GSOs) are not necessarily assumed to be normal, allowing for the treatment of networks on both undirected and, for the first time, also directed graphs. Stability to node-level perturbations is related to an 'adequate (spectral) covering' property of the filters in each layer. Stability to edge-level perturbations is related to Lipschitz constants and newly introduced semi-norms of filters. Results on stability to topological perturbations are obtained through recently developed mathematical-physics-based tools. As an important and novel example, it is showcased that graph convolutional networks are stable under graph coarse-graining procedures (replacing strongly connected sub-graphs by single nodes) precisely if the GSO is the graph Laplacian and filters are regular at infinity. These new theoretical results are supported by corresponding numerical investigations.

1. INTRODUCTION

Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017; Hammond et al., 2011; Defferrard et al., 2016) generalize Euclidean convolutional networks to the graph setting by replacing convolutional filters with functional calculus filters; i.e. scalar functions applied to a suitably chosen graph shift operator capturing the geometry of the underlying graph. A key concept in trying to understand the underlying reasons for the superior numerical performance of such networks on graph learning tasks (as well as a guiding principle for the design of new architectures) is the concept of stability. In the Euclidean setting, investigating stability essentially amounts to exploring the variation of the output of a network under non-trivial changes of its input (Mallat, 2012; Wiatowski & Bölcskei, 2018). In the graph setting, additional complications arise: not only input signals, but now also the graph shift operators facilitating the convolutions on the graphs may vary. Even worse, changes in the topology or vertex sets of the investigated graphs may also occur (e.g. when two dissimilar graphs describe the same underlying phenomenon), under which graph convolutional networks should also remain stable. This last stability property is often also referred to as transferability (Levie et al., 2019a). Previous works investigated stability under changes in graph shift operators for specific filters (Levie et al., 2019b; Gama et al., 2020) or the effect of graph rewiring when choosing a specific graph shift operator (Kenlay et al., 2021). Stability to topological perturbations has been established for (large) graphs discretising the same underlying topological space (Levie et al., 2019a), the same graphon (Ruiz et al., 2020; Maskey et al., 2021) or for graphs drawn from the same statistical distribution (Keriven et al., 2020; Gao et al., 2021).
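The functional calculus filters described above can be sketched as follows. This is a minimal illustration, not the paper's own construction: it applies a polynomial scalar function g to the graph Laplacian of a toy undirected three-node path graph, so g(L) acts on a node signal. The graph, coefficients, and variable names are illustrative assumptions; a polynomial is used because it is defined for any (also non-normal) graph shift operator via matrix powers.

```python
import numpy as np

# Toy undirected path graph on 3 nodes (illustrative choice).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian used as the GSO

# Scalar function g(t) = a_0 + a_1 t + a_2 t^2, applied to L as a matrix:
# g(L) = a_0 I + a_1 L + a_2 L^2. Coefficients are arbitrary for illustration.
coeffs = [0.5, -0.25, 0.1]
g_of_L = sum(a * np.linalg.matrix_power(L, k) for k, a in enumerate(coeffs))

# Filtering a node signal x amounts to one matrix-vector product.
x = np.array([1.0, 0.0, 0.0])
y = g_of_L @ x
```

For normal operators this agrees with applying g to each eigenvalue in an eigendecomposition; for non-normal operators (directed graphs), the polynomial (or, more generally, holomorphic) functional calculus remains well defined, which is the setting the present work targets.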
Common among all these previous works are two themes limiting practical applicability: First and foremost, the class of filters to which results are applicable is often severely restricted. The same is true for the class of considered graph shift operators, with non-normal operators (describing directed graphs) either explicitly or implicitly excluded. Furthermore, when investigating transferability properties, results are almost exclusively available under the assumption that graphs are large and either discretize the same underlying 'continuous' limit object sufficiently well, or are drawn from the same statistical distributions. While these are of course relevant regimes, they do not allow one to draw conclusions beyond such asymptotic settings, and are for example unable to deal with certain spatial graphs, inapplicable to small-to-medium sized social networks and incapable of capturing the inherent multi-scale nature of molecular graphs (as further discussed below). Finally, hardly any

