GRAPH NEURAL NETWORKS ARE INHERENTLY GOOD GENERALIZERS: INSIGHTS BY BRIDGING GNNS AND MLPS

Abstract

Graph neural networks (GNNs), as the de facto model class for representation learning on graphs, are built upon the multi-layer perceptron (MLP) architecture with additional message passing layers that allow features to flow across nodes. While conventional wisdom commonly attributes the success of GNNs to their advanced expressivity, we conjecture that this is not the main cause of GNNs' superiority in node-level prediction tasks. This paper pinpoints the major source of GNNs' performance gain as their intrinsic generalization capability, by introducing an intermediate model class dubbed P(ropagational)MLP, which is identical to a standard MLP in training but adopts a GNN's architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. This finding provides a new perspective for understanding the learning behavior of GNNs, and can be used as an analytic tool for dissecting various GNN-related research problems including expressivity, generalization, over-smoothing and heterophily. As an initial step to analyze PMLP, we show that its essential difference from MLP in the infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, through extrapolation analysis (i.e., generalization under distribution shifts), we find that although most GNNs and their PMLP counterparts cannot extrapolate non-linear functions for extreme out-of-distribution data, they have greater potential to generalize to testing data near the training data support, as a natural advantage of the GNN architecture used for inference. Codes are available at https://github.com/chr26195/PMLP.

1. INTRODUCTION

In the past decades, Neural Networks (NNs) have achieved great success in many areas. As a classic NN architecture, Multi-Layer Perceptrons (MLPs) (Rumelhart et al., 1986) stack multiple Feed-Forward (FF) layers with nonlinearities to universally approximate functions. Later, Graph Neural Networks (GNNs) (Scarselli et al., 2008b; Bruna et al., 2014; Gilmer et al., 2017; Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Klicpera et al., 2019; Wu et al., 2019) build themselves upon the MLP architecture, e.g., by inserting additional Message Passing (MP) operations amid FF layers (Kipf & Welling, 2017) to accommodate the interdependence between instance pairs.

Two cornerstone concepts at the basis of deep learning research are a model's representation power and generalization power. While the former is concerned with what function class NNs can approximate and to what extent they can minimize the empirical risk R̂(•), the latter focuses on the inductive bias of the learning procedure, asking how well the learned function generalizes to unseen in- and out-of-distribution samples, as reflected by the generalization gap R(•) − R̂(•). There exist a number of works dissecting GNNs' representational power (e.g., Scarselli et al. (2008a); Xu et al. (2018a); Maron et al. (2019); Oono & Suzuki (2019)), while their generalizability and connections with MLPs are far less well understood.

Figure 1: (a) Model illustrations for MLP, GNN (in GCN style) and PMLP. (b) Learning curves for node classification on Cora, depicting a typical empirical phenomenon. (c) Intrinsic generalizability of GNNs, reflected by the close generalization performance of GNN and PMLP. (d) Extrapolation illustration: both MLP and PMLP linearize outside the training data support (•: training sample, •: testing sample), while PMLP transits more smoothly and exhibits larger tolerance for OoD testing samples.

In this work, we bridge GNNs and MLPs by introducing an intermediate model class called P(ropagational)MLPs. During training, a PMLP is exactly the same as a standard MLP (same architecture, training data, initialization, loss function, and optimization algorithm). In the testing phase, a PMLP additionally inserts non-parametric MP layers amid its FF layers, as shown in Fig. 1(a), to align with various GNN architectures including (but not limited to) GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019) and APPNP (Klicpera et al., 2019).

(Empirical Results and Implications) Based on experiments across sixteen node classification benchmarks and additional discussions on different architectural choices (i.e., layer number, hidden size), model instantiations (i.e., FF/MP layer implementations) and data characteristics (i.e., data split, amount of structural information), we identify two intriguing empirical phenomena:

• Phenomenon 1: PMLP significantly outperforms MLP. Although PMLP shares the same weights (i.e., trained model parameters) with a vanilla MLP, it tends to yield a lower generalization gap and thereby outperforms MLP by a large margin in testing, as illustrated in Fig. 1(c) and (b), respectively. This observation suggests that the message passing / graph convolution modules in GNNs can inherently improve a model's generalization capability for handling unseen samples. The word "inherently" underlines that such generalization effects are implicit in the GNN architecture (with its message passing mechanism) used in inference, and are isolated from factors in the training process, such as a larger hypothesis space for representing a rich set of "graph-aware" functions (Scarselli et al., 2008a; Xu et al., 2018a), or more suitable inductive biases in model selection that prioritize functions capable of relational reasoning (Battaglia et al., 2018).

• Phenomenon 2: PMLP performs on par with or even exceeds GNNs. PMLP achieves testing performance close to its GNN counterpart in inductive node classification tasks, and can even outperform the GNN by a large margin in some cases (e.g., after removing self-loops or adding noisy edges). Given that the only difference between a GNN and its PMLP counterpart is the model architecture used in training, and that the representation power of PMLP before testing is exactly the same as that of MLP, this observation suggests that the major (but not the only) source of GNNs' performance improvement over MLPs in node classification stems from their aforementioned inherent generalization capability.
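To make the PMLP construction concrete, here is a minimal toy sketch (our own illustration, not the authors' released code; the GCN-style symmetric normalization and the tiny 4-node graph are assumptions for demonstration): a single forward function that behaves as a plain MLP when no graph is supplied, and inserts non-parametric message passing after each FF layer when a normalized adjacency is given at test time.

```python
import numpy as np

def gcn_norm_adj(A):
    """GCN-style propagation matrix with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def forward(X, weights, A_norm=None):
    """MLP forward pass; if A_norm is given, apply non-parametric message
    passing (MP) after each feed-forward (FF) layer, i.e., PMLP inference."""
    H = X
    for i, W in enumerate(weights):
        H = H @ W                       # FF layer (the only trained parameters)
        if A_norm is not None:
            H = A_norm @ H              # MP layer: parameter-free propagation
        if i < len(weights) - 1:
            H = np.maximum(H, 0.0)      # ReLU between layers
    return H

# Toy 4-node path graph, 2 input features, 2 classes; weights stand in for
# parameters trained exactly as a standard MLP (graph never seen in training).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 2))
weights = [rng.normal(size=(2, 8)), rng.normal(size=(8, 2))]

mlp_out  = forward(X, weights)                    # MLP: how PMLP is trained
pmlp_out = forward(X, weights, gcn_norm_adj(A))   # PMLP: same weights, MP at test time
```

Note that the two calls share the exact same `weights`; the only change is the parameter-free propagation inserted at inference, mirroring how PMLP aligns with GCN-style architectures.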

(Significance) We also highlight that PMLP, as a novel class of models (using the MLP architecture in training and the GNN architecture in inference), can be used for broader analysis purposes or applied as a simple, flexible and very efficient graph encoder for scalable training.

⋄ PMLP as an analytic tool. PMLPs can be used for dissecting various GNN-related problems such as over-smoothing and heterophily (see Sec. 3.3 for preliminary explorations), and in a broader sense can potentially bridge theoretical research in the two areas by enabling us to conveniently leverage well-established theoretical frameworks for MLPs to enrich those for GNNs.

⋄ PMLP as an efficient graph encoder. While being as effective as GNNs in many cases, PMLPs are significantly more efficient in training (5∼17× faster on large datasets, and 65× faster for very deep GNNs with more than 100 MP layers). In fact, PMLPs are equivalent to GNNs with all edges dropped in training, which itself is a widely recognized technique (i.e., DropEdge (Rong et al., 2020)) for accelerating GNN training. Moreover, PMLPs are more robust against noisy edges, can be trivially
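The stated equivalence between training a PMLP and training a GNN with all edges dropped can be checked with a small numpy sketch (again a hypothetical illustration, not from the paper's codebase): with an empty edge set, the GCN-style propagation matrix with self-loops reduces to the identity, so every MP layer is a no-op and the GNN forward pass collapses to the plain MLP forward pass.

```python
import numpy as np

def norm_adj(A):
    """D^{-1/2} (A + I) D^{-1/2} with self-loops added."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def gnn_forward(X, weights, A_norm):
    """GCN-style forward: propagate after each FF layer."""
    H = X
    for i, W in enumerate(weights):
        H = A_norm @ (H @ W)
        if i < len(weights) - 1:
            H = np.maximum(H, 0.0)
    return H

def mlp_forward(X, weights):
    """Plain MLP forward: no propagation at all."""
    H = X
    for i, W in enumerate(weights):
        H = H @ W
        if i < len(weights) - 1:
            H = np.maximum(H, 0.0)
    return H

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]

no_edges = np.zeros((5, 5))         # all edges dropped during training
P = norm_adj(no_edges)              # reduces exactly to the identity matrix

gnn_out = gnn_forward(X, weights, P)
mlp_out = mlp_forward(X, weights)   # identical outputs, hence identical gradients
```

Since the two forward passes agree exactly, their training trajectories under the same loss, initialization and optimizer also coincide, which is why PMLP training can be read as an extreme case of DropEdge.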


* Corresponding author is Junchi Yan, who is also affiliated with Shanghai AI Laboratory. The work was in part supported by the National Key Research and Development Program of China (2020AAA0107600), NSFC (62222607), and STCSM (22511105100).

