TELEPORT GRAPH CONVOLUTIONAL NETWORKS

Abstract

We consider the limitations of message-passing graph neural networks. In message-passing operations, each node aggregates information from its neighboring nodes. To enlarge the receptive field, graph neural networks need to stack multiple message-passing graph convolution layers, which leads to over-fitting and over-smoothing issues. To address these limitations, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to enable each node to aggregate information from a much larger neighborhood. For each node, teleport functions select relevant nodes beyond the local neighborhood, thereby resulting in a larger receptive field. To enable our structure-aware teleport function, we propose a novel method to construct structural features for nodes in the graph. Based on our TeleGCL, we build a family of teleport graph convolutional networks. The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods.

1. INTRODUCTION

Graph neural networks (GNNs) have shown great capability in solving challenging tasks on graph data such as node classification (Grover & Leskovec, 2016; Kipf & Welling, 2017; Veličković et al., 2017; Gao et al., 2018), graph classification (Xu et al., 2018; Gao & Ji, 2019; You et al., 2019), and link prediction (Zhang & Chen, 2018; Chen et al., 2019; Zhou et al., 2019). Most graph convolutional networks are based on message-passing operations, in which each node aggregates information from its neighboring nodes. To enable a larger receptive field (Chen et al., 2016), GNNs need to stack multiple layers, which is straightforward but can result in several issues. Firstly, stacking multiple layers introduces a massive number of trainable parameters, which consequently increases the risk of over-fitting. Secondly, message-passing operations mostly use averaging to combine the aggregated features, which significantly reduces the distinguishability of node embeddings. As a result, GNNs based on message-passing operations cannot use deep network architectures. Some works such as Geom-GCN (Pei et al., 2020) attempt to solve these issues by involving more nodes in the feature aggregation process. However, Geom-GCN does not consider the original graph topology when generating the additional set of nodes for aggregation, and can therefore neglect nodes that are relevant from a structural perspective. To address the above limitations and increase the receptive field effectively, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to select highly relevant nodes at the global scope. A teleport function computes relevances between the center node and other nodes beyond the local neighborhood. The nodes with the highest relevance scores are teleported to the center node. Here, the selection of teleported nodes is not restricted by the graph topology.
This enables the center node to gather information from a larger neighborhood without going deep, which helps to avoid over-fitting and over-smoothing issues. In particular, we propose two teleport functions, namely structure-aware and feature-aware teleport functions. They compute node relevances from the graph-structure and node-feature perspectives, respectively. Based on our TeleGCL, we build a family of teleport graph convolutional networks. The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods.
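To make the idea concrete, a feature-aware teleport function can be viewed as a top-k selection of non-neighbor nodes by feature relevance. The sketch below is a minimal NumPy version under assumed inputs (a node feature matrix X and an adjacency matrix A); the function name and the choice of cosine similarity as the relevance measure are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def feature_aware_teleport(X, A, center, k=3):
    """Select the k nodes most relevant to `center` by cosine similarity
    of node features, excluding `center` and its 1-hop neighbors.
    X: (n, d) node feature matrix; A: (n, n) adjacency matrix.
    (Illustrative sketch; relevance measure assumed, not from the paper.)"""
    x = X[center]
    norms = np.linalg.norm(X, axis=1) * np.linalg.norm(x)
    sims = X @ x / np.maximum(norms, 1e-12)  # cosine similarity to center
    # mask out the center node and its direct neighbors, since those
    # are already covered by the ordinary message-passing aggregation
    mask = A[center].astype(bool).copy()
    mask[center] = True
    sims[mask] = -np.inf
    # teleport the top-k most relevant non-neighbor nodes
    return np.argsort(-sims)[:k]
```

A structure-aware variant would apply the same selection to constructed structural features instead of the raw node features, so that nodes playing similar structural roles are teleported even when their input features differ.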

2. BACKGROUND AND RELATED WORK

In this section, we describe message-passing operations on graph data and geometric graph convolutional networks. Graph neural networks (Fan et al., 2019; Wu et al., 2019; Morris et al., 2019; Wu

