SIGN AND BASIS INVARIANT NETWORKS FOR SPECTRAL GRAPH REPRESENTATION LEARNING

Abstract

We introduce SignNet and BasisNet, new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if v is an eigenvector then so is -v; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors. We prove that under certain conditions our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. When used with Laplacian eigenvectors, our networks are provably more expressive than existing spectral methods on graphs; for instance, they subsume all spectral graph convolutions, certain spectral graph invariants, and previously proposed graph positional encodings as special cases. Experiments show that our networks significantly outperform existing baselines on molecular graph regression, learning expressive graph representations, and learning neural fields on triangle meshes. Our code is available at https://github.com/cptq/SignNet-BasisNet.

1. INTRODUCTION

Numerous machine learning models process eigenvectors, which arise in various settings including principal component analysis, matrix factorizations, and operators associated to graphs or manifolds. An important example is the use of Laplacian eigenvectors to encode information about the structure of a graph or manifold (Belkin & Niyogi, 2003; Von Luxburg, 2007; Lévy, 2006). Positional encodings that involve Laplacian eigenvectors have recently been used to generalize Transformers to graphs (Kreuzer et al., 2021; Dwivedi & Bresson, 2021), and to improve the expressive power and empirical performance of graph neural networks (GNNs) (Dwivedi et al., 2022). Furthermore, these eigenvectors are crucial for defining spectral operations on graphs that are foundational to graph signal processing and spectral GNNs (Ortega et al., 2018; Bruna et al., 2014). However, there are nontrivial symmetries that should be accounted for when processing eigenvectors, as has been noted in many fields (Eastment & Krzanowski, 1982; Rustamov et al., 2007; Bro et al., 2008; Ovsjanikov et al., 2008). For instance, if v is an eigenvector, then so is -v, with the same eigenvalue. More generally, if an eigenvalue has higher multiplicity, then there are infinitely many unit-norm eigenvectors that can be chosen. Indeed, a full set of linearly independent eigenvectors is only defined up to a change of basis in each eigenspace. In the case of sign invariance, for any k eigenvectors there are 2^k possible choices of sign. Accordingly, prior works on graph positional encodings randomly flip eigenvector signs during training in order to approximately learn sign invariance (Kreuzer et al., 2021; Dwivedi et al., 2020; Kim et al., 2022). However, learning all 2^k invariances is challenging and limits the effectiveness of Laplacian eigenvectors for encoding positional information.
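The sign ambiguity above is easy to see numerically. The following sketch (a hypothetical illustration, not code from the paper; the path graph used is our own choice) shows that a standard eigensolver returns eigenvectors whose signs are arbitrary, so a model consuming them must be invariant to all 2^k sign configurations:

```python
import numpy as np

# Hypothetical example: Laplacian of a path graph on 4 nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Symmetric eigendecomposition; the sign of each returned column is arbitrary.
eigvals, eigvecs = np.linalg.eigh(L)
v, lam = eigvecs[:, 1], eigvals[1]

# Both v and -v are unit-norm eigenvectors for the same eigenvalue,
# so a model should produce the same output for either choice.
assert np.allclose(L @ v, lam * v)
assert np.allclose(L @ (-v), lam * (-v))

# With k eigenvectors there are 2**k equally valid sign configurations.
k = eigvecs.shape[1]
print(2**k)  # -> 16 for this 4-node graph
```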
Sign invariance is a special case of basis invariance when all eigenvalues are distinct, but general basis invariance is even more difficult to deal with. In Appendix C.2, we show that higher dimensional eigenspaces are abundant in real datasets; for instance, 64% of molecule graphs in the ZINC dataset have a higher dimensional eigenspace.
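Basis ambiguity in a repeated eigenspace can likewise be demonstrated directly. In this sketch (again a hypothetical illustration, with the 4-cycle graph chosen by us), the eigenvalue 2 of the cycle's Laplacian has multiplicity 2, and rotating the returned eigenvector pair by any angle yields another equally valid orthonormal eigenbasis:

```python
import numpy as np

# Hypothetical example: Laplacian of the 4-cycle, whose eigenvalue 2
# has multiplicity 2, so its eigenspace has no canonical basis.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues: 0, 2, 2, 4

# Take the two eigenvectors spanning the eigenvalue-2 eigenspace ...
V = eigvecs[:, 1:3]
# ... and rotate them by an arbitrary angle.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
W = V @ Q

assert np.allclose(L @ W, 2 * W)        # every column is still an eigenvector
assert np.allclose(W.T @ W, np.eye(2))  # and the columns remain orthonormal
```

Since any rotation angle works, there are infinitely many valid eigenbases, which is why random sign flipping alone cannot capture general basis invariance.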

