LEARNING PARAMETRISED GRAPH SHIFT OPERATORS

Abstract

In many domains, data is naturally represented as graphs, and the graph representation of this data is therefore of increasing importance in machine learning. Network data is, implicitly or explicitly, always represented using a graph shift operator (GSO), the most common choices being the adjacency and Laplacian matrices and their normalisations. In this paper, a novel parametrised GSO (PGSO) is proposed, where specific parameter values result in the most commonly used GSOs and message-passing operators in graph neural network (GNN) frameworks. The PGSO is suggested as a replacement for the standard GSOs used in state-of-the-art GNN architectures, and the optimisation of the PGSO parameters is seamlessly included in the model training. It is proved that the PGSO has real eigenvalues and a set of real eigenvectors independent of the parameter values, and spectral bounds on the PGSO are derived. PGSO parameters are shown to adapt to the sparsity of the graph structure in a study on stochastic blockmodel networks, where they are found to automatically replicate the GSO regularisation found in the literature. On several real-world datasets, the accuracy of state-of-the-art GNN architectures is improved by the inclusion of the PGSO in both node- and graph-classification tasks.

1. INTRODUCTION

Real-world data and applications often involve significant structural complexity, and as a consequence graph representation learning attracts great research interest (Hamilton et al., 2017b; Wu et al., 2020). The topology of the observations plays a central role when performing machine learning tasks on graph-structured data. A variety of supervised, semi-supervised or unsupervised graph learning algorithms employ different forms of operators that encode the topology of these observations. The most commonly used operators are the adjacency matrix, the Laplacian matrix and their normalised variants. All of these matrices belong to a general set of linear operators, the Graph Shift Operators (GSOs) (Sandryhaila & Moura, 2013; Mateos et al., 2019). Graph Neural Networks (GNNs), the main application domain in this paper, are representative cases of algorithms that use chosen GSOs to encode the graph structure, i.e., to encode the neighbourhoods used in the aggregation operators. Several GNN models (Kipf & Welling, 2017; Hamilton et al., 2017a; Xu et al., 2019) choose different variants of normalised adjacency matrices as GSOs. Interestingly, in a variety of tasks and datasets, the incorporation of explicit structural information of neighbourhoods into the model is found to improve results (Pei et al., 2020; Zhang & Chen, 2018; You et al., 2019), leading us to conclude that the chosen GSO does not entirely capture the information of the data topology. In most of these approaches, the GSO is chosen without an analysis of the impact of this choice of representation.

From this observation arise our two research questions.

Question 1: Is there a single optimal representation to encode graph structures, or is the optimal representation task- and data-dependent? On different tasks and datasets, the choice between the different representations encoded by the different graph shift operator matrices has been shown to be a consequential decision. Due to the past
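To make the operators discussed above concrete, the following NumPy sketch constructs the standard GSOs (adjacency, Laplacian, and their symmetric normalisations) and an illustrative parametrised operator that recovers each of them for specific parameter values. The function `parametrised_gso` and its arguments `(m1, m2, m3, e1, e2, e3, a)` are an assumed parametrisation for illustration only; the exact form of the PGSO proposed in this paper is defined later in the text and may differ.

```python
import numpy as np

def common_gsos(A):
    """Standard GSOs for an undirected graph with adjacency matrix A."""
    d = A.sum(axis=1)
    D = np.diag(d)
    L = D - A                                   # combinatorial Laplacian
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt        # symmetric normalised adjacency
    L_norm = np.eye(len(A)) - A_norm            # symmetric normalised Laplacian
    return {"A": A, "L": L, "A_norm": A_norm, "L_norm": L_norm}

def parametrised_gso(A, m1, m2, m3, e1, e2, e3, a=0.0):
    """Hypothetical parametrised operator (an assumption, not the paper's PGSO):
        S = m1 * D_a^e1 + m2 * D_a^e2 (A + a I) D_a^e3 + m3 * I,
    where D_a is the degree matrix of A + a I.  Example settings:
        (m1, m2, m3, e1, e2, e3) = (0, 1, 0, 0,  0,    0)   -> adjacency A
        (1, -1, 0, 1, 0, 0)                                  -> Laplacian D - A
        (0, 1, 0, 0, -0.5, -0.5)                             -> normalised adjacency
    """
    n = len(A)
    A_a = A + a * np.eye(n)
    d = A_a.sum(axis=1)

    def D_pow(e):
        # Degree matrix raised to a (possibly negative) power; zero degrees map to 0.
        with np.errstate(divide="ignore"):
            return np.diag(np.where(d > 0, d ** e, 0.0))

    return m1 * D_pow(e1) + m2 * D_pow(e2) @ A_a @ D_pow(e3) + m3 * np.eye(n)
```

Treating the choice of GSO as a continuous parametrisation, as sketched here, is what allows the operator itself to be optimised jointly with the model weights.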

