AFFINITY-AWARE GRAPH NETWORKS

Abstract

Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message passing steps they perform, and hence their smaller receptive field, there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting times, and commute times. We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, and our features are invariant to permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks on a wide variety of tasks, often with significantly fewer message passing steps. On one of the largest publicly available graph regression datasets, OGB-LSC-PCQM4Mv1, we obtain the best known single-model validation MAE at the time of writing.
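The random-walk affinity measures named above are closely related through the graph Laplacian: the effective resistance R(u, v) can be read off the Moore-Penrose pseudoinverse of the Laplacian, and the commute time satisfies C(u, v) = 2m R(u, v) for a graph with m edges (commute time being the sum of the hitting times H(u, v) + H(v, u)). As an illustrative sketch (not the paper's implementation; `affinity_features` is a hypothetical helper, and the dense pseudoinverse is only practical for small graphs), these quantities can be computed as follows:

```python
import numpy as np

def affinity_features(adj):
    """Effective resistance and commute times for all node pairs.

    adj: symmetric 0/1 adjacency matrix of a connected graph.
    Returns (resistance, commute) as n x n matrices.
    Dense sketch for illustration; real pipelines would use
    sparse or approximate solvers.
    """
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                       # graph Laplacian L = D - A
    lp = np.linalg.pinv(lap)                       # pseudoinverse L^+
    d = np.diag(lp)
    # R(u, v) = L^+[u,u] + L^+[v,v] - 2 L^+[u,v]
    resistance = d[:, None] + d[None, :] - 2 * lp
    m = adj.sum() / 2                              # number of edges
    commute = 2 * m * resistance                   # C(u, v) = 2m R(u, v)
    return resistance, commute

# Example: path graph 0 - 1 - 2 (two unit-resistance edges in series)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
R, C = affinity_features(A)
# R[0, 2] = 2 (two edges in series); C[0, 2] = 2 * 2 * 2 = 8
```

Because these quantities depend only on the graph's connectivity, the resulting node-pair features are invariant to node permutations, consistent with the claim in the abstract.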



Despite the predictive power of GNNs, it is known that the expressive power of standard GNNs is limited by the 1-Weisfeiler-Lehman (1-WL) test Xu et al. (2018). Intuitively, GNNs possess at most the same power in terms of distinguishing between non-isomorphic (sub-)graphs, while having the added benefit of adapting to the given data distribution. For some architectures, two nodes with different local structures have the same computational graph, thus thwarting distinguishability in a standard GNN. Even though some attempts have been made to address this limitation with higher-order GNNs Morris et al. (2019), most traditional GNN architectures fail to distinguish between such nodes. A common approach to improving the expressive power of GNNs involves encoding richer structural/positional properties. For example, distance-based approaches form the basis for works such as Position-aware Graph Neural Networks You et al. (2019), which capture positions/locations of nodes with respect to a set of anchor nodes, as well as Distance Encoding Networks Li et al. (2020), which use the first few powers of the normalized adjacency matrix as node features associated with a set of target nodes. Here, we take an approach that is inspired by this line of work but departs from it in some crucial ways: we seek to capture both distance and connectivity information using general-purpose node and edge features without the need to specify any anchor or target nodes.
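To make the distance-encoding idea concrete, a minimal sketch of such features (in the spirit of Li et al. (2020), though not their exact construction; `distance_encoding` and the choice of the row-normalized random-walk matrix are illustrative assumptions) stacks the landing probabilities (W^k)[v, target] of a k-step random walk for k = 1, ..., K:

```python
import numpy as np

def distance_encoding(adj, target, K=3):
    """Distance-encoding-style node features for one target node.

    adj: symmetric 0/1 adjacency matrix (no isolated nodes).
    Returns an (n, K) matrix whose column k-1 holds (W^k)[v, target],
    the probability that a k-step random walk from v lands on target,
    where W = D^{-1} A is the row-normalized adjacency matrix.
    Illustrative sketch only.
    """
    deg = adj.sum(axis=1)
    W = adj / deg[:, None]            # random-walk matrix D^{-1} A
    feats = []
    Wk = np.eye(len(adj))
    for _ in range(K):
        Wk = Wk @ W                   # accumulate W^k
        feats.append(Wk[:, target])
    return np.stack(feats, axis=1)

# Example: path graph 0 - 1 - 2, features with respect to target node 0
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
F = distance_encoding(A, target=0, K=2)
```

Note that these features are defined relative to a chosen target node; the affinity-based features proposed in this paper avoid that dependence by using pairwise, anchor-free measures.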



Graph Neural Networks (GNNs) constitute a powerful tool for learning meaningful representations in non-Euclidean domains. GNN models have achieved significant successes in a wide variety of node prediction Hamilton et al. (2017); Luan et al. (2019), link prediction Zhang & Chen (2018); You et al. (2019), and graph prediction Duvenaud et al. (2015); Ying et al. (2019) tasks. These tasks naturally emerge in a wide range of applications, including autonomous driving Chen et al. (2019), neuroimaging Parisot et al. (2018), combinatorial optimization Gasse et al. (2019); Nair et al. (2020), and recommender systems Ying et al. (2018), and they have enabled significant scientific advances in the fields of biomedicine Wang et al. (2021a), structural biology Jumper et al. (2021), molecular chemistry Stokes et al. (2020), and physics Bapst et al. (2020).

