SPECFORMER: SPECTRAL GRAPH NEURAL NETWORKS MEET TRANSFORMERS

Abstract

Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions. However, most existing spectral graph filters are scalar-to-scalar functions, i.e., they map a single eigenvalue to a single filtered value, thus ignoring the global pattern of the spectrum. Furthermore, these filters are often constructed from fixed-order polynomials, which limits their expressiveness and flexibility. To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter. We also design a decoder with learnable bases to enable non-local graph convolution. Importantly, Specformer is permutation equivariant. By stacking multiple Specformer layers, one can build a powerful spectral GNN. On synthetic datasets, we show that Specformer recovers ground-truth spectral filters better than other spectral GNNs. Extensive experiments on both node-level and graph-level tasks on real-world graph datasets show that Specformer outperforms state-of-the-art GNNs and learns meaningful spectral patterns.
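To make the set-to-set filtering idea concrete, below is a minimal PyTorch sketch, not the released Specformer implementation: the module names, dimensions, linear eigenvalue encoder, and single attention layer are illustrative assumptions based on the description above. Each eigenvalue is lifted to a vector, attends to all other eigenvalues, and is decoded back to a scalar filtered value, which then defines a graph convolution of the form U diag(g(lambda)) U^T x.

```python
import torch
import torch.nn as nn

class SetToSetSpectralFilter(nn.Module):
    """Illustrative set-to-set spectral filter: self-attention over the
    whole eigenvalue set, instead of a scalar function applied to each
    eigenvalue independently. (A sketch, not the authors' code.)"""

    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.encode = nn.Linear(1, d_model)   # lift each eigenvalue to a vector
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.decode = nn.Linear(d_model, 1)   # map back to a filtered scalar

    def forward(self, eigvals):                              # eigvals: (n,)
        z = self.encode(eigvals.unsqueeze(-1)).unsqueeze(0)  # (1, n, d_model)
        z, _ = self.attn(z, z, z)        # each eigenvalue attends to all others
        return self.decode(z).squeeze()  # (n,) filtered eigenvalues

def spectral_conv(L, x, filt):
    """Graph convolution with a learned filter: U diag(g(lambda)) U^T x."""
    lam, U = torch.linalg.eigh(L)        # spectral decomposition of the Laplacian
    return U @ torch.diag(filt(lam)) @ U.T @ x

# Toy usage: a path graph on 4 nodes.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
L = torch.diag(A.sum(1)) - A
y = spectral_conv(L, torch.randn(4), SetToSetSpectralFilter())
```

Because the attention layer uses no positional encoding, permuting the input eigenvalues permutes the outputs accordingly, consistent with the permutation-equivariance property stated above.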

1. INTRODUCTION

Graph neural networks (GNNs), first proposed by Scarselli et al. (2008), have become increasingly popular in machine learning due to their empirical successes. Depending on how graph signals (or features) are leveraged, GNNs can be roughly categorized into two classes: spatial GNNs and spectral GNNs. Spatial GNNs typically adopt a message passing framework (Gilmer et al., 2017; Battaglia et al., 2018), which learns useful graph representations by propagating local information on graphs. Spectral GNNs (Bruna et al., 2013; Defferrard et al., 2016) instead perform graph convolutions via spectral filters, i.e., filters applied to the spectrum of the graph Laplacian, which can learn to capture non-local dependencies in graph signals.

Although spatial GNNs have achieved impressive performance in many domains, spectral GNNs remain somewhat under-explored, for a few reasons. First, most existing spectral filters are essentially scalar-to-scalar functions: they take a single eigenvalue as input and apply the same filter to every eigenvalue. This filtering mechanism may ignore the rich information embedded in the spectrum, i.e., the set of eigenvalues. For example, spectral graph theory tells us that the algebraic multiplicity of the eigenvalue 0 equals the number of connected components in the graph, but such information cannot be captured by scalar-to-scalar filters. Second, spectral filters are often approximated via fixed-order (or truncated) orthonormal bases, e.g., Chebyshev polynomials (Defferrard et al., 2016; He et al., 2022) and graph wavelets (Hammond et al., 2011; Xu et al., 2019), in order to avoid the costly spectral decomposition of the graph Laplacian. Although orthonormality is a nice property, this truncated approximation is less expressive and may severely limit graph representation learning.

Therefore, in order to improve spectral GNNs, it is natural to ask: how can we build expressive spectral filters that effectively leverage the spectrum of the graph Laplacian? To answer this question, we introduce Specformer, which encodes the full set of eigenvalues and learns a set-to-set spectral filter via self-attention.
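To make the first limitation concrete, the short NumPy/NetworkX sketch below builds a graph with two connected components, verifies that the eigenvalue 0 has multiplicity two, and applies a scalar-to-scalar filter; the low-pass filter g is an arbitrary illustrative choice, not one used in this paper.

```python
import numpy as np
import networkx as nx

# Two disjoint triangles: a graph with 2 connected components.
G = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(3))
L = nx.laplacian_matrix(G).toarray().astype(float)
lam, U = np.linalg.eigh(L)

# The multiplicity of eigenvalue 0 equals the number of connected components.
print(np.sum(np.isclose(lam, 0.0)))   # -> 2

# A scalar-to-scalar filter maps each eigenvalue independently,
# e.g., the low-pass filter g(lam) = 1 / (1 + lam).
g = lambda l: 1.0 / (1.0 + l)
x = np.random.randn(L.shape[0])       # a random graph signal
x_filtered = U @ np.diag(g(lam)) @ U.T @ x
```

Since g sees one eigenvalue at a time, it cannot react to the fact that 0 appears twice in the spectrum; a set-to-set filter, which processes all eigenvalues jointly, can.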

Code Availability

Code and data are available at https://github.com/bdy9527/Specformer.

