GRAPH NEURAL NETWORKS ARE INHERENTLY GOOD GENERALIZERS: INSIGHTS BY BRIDGING GNNS AND MLPS

Abstract

Graph neural networks (GNNs), as the de facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers that allow features to flow across nodes. While conventional wisdom commonly attributes the success of GNNs to their advanced expressivity, we conjecture that this is not the main cause of GNNs' superiority in node-level prediction tasks. This paper pinpoints the major source of GNNs' performance gain as their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP, which is identical to a standard MLP in training but adopts GNN architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. This finding provides a new perspective for understanding the learning behavior of GNNs, and can be used as an analytic tool for dissecting various GNN-related research problems, including expressivity, generalization, over-smoothing, and heterophily. As an initial step toward analyzing PMLP, we show that its essential difference from MLP in the infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, through extrapolation analysis (i.e., generalization under distribution shifts), we find that though most GNNs and their PMLP counterparts cannot extrapolate non-linear functions for extreme out-of-distribution data, they have greater potential to generalize to testing data near the training data support, a natural advantage conferred by the GNN architecture used for inference. Code is available at https://github.com/chr26195/PMLP.

1. INTRODUCTION

In the past decades, neural networks (NNs) have achieved great success in many areas. As a classic NN architecture, Multi-Layer Perceptrons (MLPs) (Rumelhart et al., 1986) stack multiple Feed-Forward (FF) layers with nonlinearities to universally approximate functions. Later, Graph Neural Networks (GNNs) (Scarselli et al., 2008b; Bruna et al., 2014; Gilmer et al., 2017; Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Klicpera et al., 2019; Wu et al., 2019) build themselves upon the MLP architecture, e.g., by inserting additional Message Passing (MP) operations amid FF layers (Kipf & Welling, 2017) to accommodate the interdependence between instance pairs. Two cornerstone concepts at the basis of deep learning research are a model's representation and generalization power. While the former is concerned with what function class NNs can approximate and to what extent they can minimize the empirical risk R̂(·), the latter focuses on the inductive bias of the learning procedure, asking how well the learned function generalizes to unseen in- and out-of-distribution samples, as reflected by the generalization gap R(·) − R̂(·) between the population risk R(·) and the empirical risk. A number of works have tried to dissect GNNs' representational power (e.g., Scarselli et al. (2008a); Xu et al. (2018a); Maron et al. (2019); Oono & Suzuki (2019)), while their generalizability and connection with MLPs are far less well understood.
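To make the relationship concrete, the following is a minimal numpy sketch (not the authors' implementation, which is available at the repository linked above) of how a GNN layer wraps an FF layer with message passing, and how the PMLP idea uses the same trained weights in two inference modes. The two-layer sizes, the toy 3-node graph, and the GCN-style symmetric normalized adjacency are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def mlp_forward(X, W1, W2):
    # Standard MLP: feed-forward layers only (the mode PMLP uses in training).
    return relu(X @ W1) @ W2

def pmlp_forward(X, A_hat, W1, W2):
    # PMLP at test time: the *same* trained weights, but message passing
    # (multiplication by the normalized adjacency A_hat) is inserted
    # amid the feed-forward layers, as in a GCN.
    h = relu(A_hat @ (X @ W1))
    return A_hat @ (h @ W2)

# Toy 3-node graph with self-loops; GCN-style symmetric normalization.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

# Illustrative node features and (pretend-trained) weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))   # 3 nodes, 4 input features
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))  # 2 output classes

# Same weights, two inference modes: MLP-style vs. GNN-style.
y_mlp = mlp_forward(X, W1, W2)
y_pmlp = pmlp_forward(X, A_hat, W1, W2)
```

Training would optimize `W1` and `W2` against `mlp_forward` only (no graph needed), while predictions come from `pmlp_forward`; the performance gap between the two modes at test time is what the paper attributes to message passing as an inference-time architecture.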


* Corresponding author is Junchi Yan, who is also affiliated with Shanghai AI Laboratory. The work was in part supported by the National Key Research and Development Program of China (2020AAA0107600), NSFC (62222607), and STCSM (22511105100).

