NEURAL POOLING FOR GRAPH NEURAL NETWORKS

Abstract

Tasks such as graph classification require graph pooling to learn graph-level representations from constituent node representations. In this work, we propose two novel methods that use fully connected neural network layers for graph pooling, namely Neural Pooling Method 1 and 2. Our proposed methods can handle a variable number of nodes across graphs and are invariant to the isomorphic structures of graphs. In addition, compared to existing graph pooling methods, our proposed methods capture information from all nodes, collect second-order statistics, and leverage the ability of neural networks to learn relationships among node representations, making them more powerful. We perform experiments on graph classification tasks in the bioinformatics and social network domains to determine the effectiveness of our proposed methods. Experimental results show that our methods lead to an absolute increase of up to 1.2% in classification accuracy over previous works and a general decrease in standard deviation across multiple runs, indicating greater reliability. Experimental results also indicate that this improvement in performance is consistent across several datasets.

1. INTRODUCTION

Over the past several years, a growing number of applications have generated data from non-Euclidean domains, represented as graphs with complex relationships and interdependencies between entities. Generalising deep learning from grid-like data to the graph domain has led to the development of the remarkably successful Graph Neural Networks (GNNs) (Fan et al., 2019; Gao et al., 2019; Ma et al., 2019a; Wang et al., 2019b) and their numerous variants, such as the Graph Convolutional Network (GCN) (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), the graph attention network (GAT) (Veličković et al., 2018), the jumping knowledge network (JK) (Xu et al., 2018), and the graph isomorphism network (GIN) (Xu et al., 2019).

Pooling is a common operation in deep learning on grid-like data such as images. Pooling layers downsample feature maps by summarizing the presence of features in patches of the feature map, which reduces dimensionality and provides local translational invariance. In the case of graph data, pooling is used to obtain a representation of a graph from its constituent node representations. However, developing graph pooling methods is challenging due to some special properties of graph data: the variable number of nodes in different graphs and the isomorphic structures of graphs. Firstly, the number of nodes varies across graphs, while graph representations are usually required to have the same fixed size to fit into downstream machine learning models, where they are used for tasks such as classification. Therefore, graph pooling should be capable of taking a variable number of node representations as input and producing fixed-size graph representations. Secondly, unlike images and texts, where pixels and words can be ordered according to spatial structural information, there is no inherent ordering relationship among the nodes of a graph.
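These two requirements, fixed-size output and invariance to node ordering, are already satisfied by simple readouts such as mean pooling. The following minimal sketch (NumPy, for illustration only; the function name `mean_pool` and feature dimension are our own choices, not from the paper) demonstrates both properties:

```python
import numpy as np

def mean_pool(node_reprs: np.ndarray) -> np.ndarray:
    """Average an (n_nodes, d) matrix of node representations
    into a fixed-size (d,) graph representation."""
    return node_reprs.mean(axis=0)

rng = np.random.default_rng(0)

# Two graphs with different node counts but the same feature dimension d = 4.
g1 = rng.normal(size=(3, 4))   # graph with 3 nodes
g2 = rng.normal(size=(7, 4))   # graph with 7 nodes
assert mean_pool(g1).shape == mean_pool(g2).shape == (4,)

# Permutation invariance: shuffling the node order leaves the output unchanged.
perm = rng.permutation(len(g2))
assert np.allclose(mean_pool(g2), mean_pool(g2[perm]))
```

Such first-order readouts meet the structural requirements, but they discard interactions among feature dimensions and have no trainable parameters, which motivates the richer pooling methods proposed in this work.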
Therefore, isomorphic graphs should have the same graph representation, and hence graph pooling should produce the same output regardless of the order in which node representations are given as input. Our main contributions in this work are two novel graph pooling methods, Neural Pooling Method 1 and 2. These new pooling methods allow us to do the following: i) produce graph representations of the same dimension for graphs with a variable number of nodes, ii) remain invariant to the isomorphic structures of graphs, iii) collect second-order statistics, and iv) leverage trainable parameters in the form of fully connected neural network layers.
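To make properties i)-iv) concrete, the sketch below shows one hypothetical way a trainable, second-order pooling layer could be built. This is an illustration under our own assumptions, not the paper's actual Method 1 or 2: the function `neural_second_order_pool`, the tanh nonlinearity, and the weights `W`, `b` are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_second_order_pool(node_reprs, W, b):
    """Hypothetical sketch: a fully connected layer applied per node,
    followed by second-order (covariance-like) statistics pooled over nodes."""
    h = np.tanh(node_reprs @ W + b)        # (n, k): trainable per-node transform
    cov = (h.T @ h) / h.shape[0]           # (k, k): second-order statistics;
                                           # summing over nodes makes this
                                           # invariant to node ordering
    # Flatten the symmetric matrix's upper triangle into a fixed-size vector.
    return cov[np.triu_indices_from(cov)]  # shape (k * (k + 1) // 2,)

d, k = 4, 3
W, b = rng.normal(size=(d, k)), np.zeros(k)
g = rng.normal(size=(5, d))                # a graph with 5 nodes

out = neural_second_order_pool(g, W, b)
assert out.shape == (k * (k + 1) // 2,)    # same size for any node count
perm = rng.permutation(len(g))
assert np.allclose(out, neural_second_order_pool(g[perm], W, b))
```

Because the node dimension is summed out in the matrix product, the output size depends only on `k`, and any permutation of the rows of `g` yields the same result, matching requirements i) and ii), while the outer-product statistics and the weights `W`, `b` illustrate iii) and iv).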

