WEIGHTED LINE GRAPH CONVOLUTIONAL NETWORKS

Abstract

Line graphs have been shown to be effective in improving feature learning in graph neural networks. A line graph encodes the topology of its original graph and provides a complementary representational perspective. In this work, we show that the information encoded in line graphs is biased. To overcome this issue, we propose a weighted line graph that corrects this bias by assigning normalized weights to edges. Based on our weighted line graphs, we develop a weighted line graph convolution layer that takes advantage of line graph structures for better feature learning. In particular, it performs message passing operations on both the original graph and its corresponding weighted line graph. To address efficiency issues in line graph neural networks, we propose to use an incidence matrix to compute the adjacency matrix of the weighted line graph exactly, leading to dramatic reductions in computational resource usage. Experimental results on both real and simulated datasets demonstrate the effectiveness and efficiency of our proposed methods.

1. INTRODUCTION

Graph neural networks (Gori et al., 2005; Scarselli et al., 2009; Hamilton et al., 2017) have been shown to be competent at solving challenging tasks in the field of network embedding. Many tasks have been significantly advanced by graph deep learning methods, such as node classification (Kipf & Welling, 2017; Veličković et al., 2017; Gao et al., 2018), graph classification (Ying et al., 2018; Zhang et al., 2018), link prediction (Zhang & Chen, 2018; Zhou et al., 2019), and community detection (Chen et al., 2019). Currently, most graph neural networks capture the relationships among nodes through message passing operations. Recently, some works (Chen et al., 2019) have used extra graph structures, such as line graphs, to enhance message passing operations in graph neural networks from a different graph perspective. A line graph is a graph derived from an original graph to represent the connectivity between edges in the original graph. Since line graphs can encode topology information, message passing operations on line graphs can enhance network embeddings in graph neural networks. However, graph neural networks that leverage line graph structures face two challenging issues: bias and inefficiency. Topology information in original graphs is encoded in line graphs, but in a biased way; in particular, node features are either overstated or understated depending on node degrees. In addition, a line graph can be much larger than its original graph, depending on the graph density, so message passing operations on line graphs consume significant computational resources. In this work, we propose to construct a weighted line graph that corrects the biases in the topology information encoded by line graphs. To this end, we assign each edge in a line graph a normalized weight such that each node in the line graph has a weighted degree of 2.
In this weighted line graph, the dynamics of node features are the same as those in the original graph. Based on our weighted line graph, we propose a weighted line graph convolution layer (WLGCL) that performs message passing operations on both the original graph structure and the weighted line graph structure. To address the inefficiency of graph neural networks that use line graph structures, we further propose to implement our WLGCL via an incidence matrix, which dramatically reduces the usage of computational resources. Based on our WLGCL, we build a family of weighted line graph convolutional networks (WLGCNs). We evaluate our methods on graph classification tasks and show that WLGCNs consistently outperform previous state-of-the-art models. Experiments on simulated data demonstrate the efficiency advantage of our implementation.
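For concreteness, the message passing operations discussed above can be sketched as a standard GCN-style layer (Kipf & Welling, 2017): neighbor features are aggregated with a degree-normalized adjacency matrix and then linearly transformed. This is a minimal illustration of a generic message passing step, not the exact WLGCL layer.

```python
import numpy as np

def gcn_message_passing(A, X, W):
    """One GCN-style message-passing step: aggregate neighbor features
    with the symmetrically normalized adjacency (with self-loops added),
    then apply a linear transformation and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)       # ReLU activation

# Path graph 0-1-2 with 2-dimensional node features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.random.randn(3, 2)
H = gcn_message_passing(A, X, np.eye(2))         # H has shape (3, 2)
```

A line graph neural network applies such a step on both the original graph and its line graph, using the appropriate adjacency matrix for each.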

2. BACKGROUND AND RELATED WORK

In graph theory, a line graph is a graph derived from an undirected graph that represents the connectivity among edges in the original graph. Given a graph G, the corresponding line graph L(G) is constructed by using the edges of G as the vertices of L(G). Two nodes in L(G) are adjacent if and only if the corresponding edges share a common end node in G (Golumbic, 2004). Note that the edges (a, b) and (b, a) in an undirected graph G correspond to the same vertex in the line graph L(G). The Whitney graph isomorphism theorem (Thatte, 2005) states that a line graph has a one-to-one correspondence to its original graph, which guarantees that the line graph can encode the topology information of the original graph. Recently, several works (Monti et al., 2018; Chen et al., 2019; Bandyopadhyay et al., 2019; Jiang et al., 2019) have proposed to use line graph structures to enhance message passing operations in graph neural networks. Since the line graph encodes topology information, message passing on the line graph can enhance network embeddings. In graph neural networks that use line graph structures, features are passed and transformed on both the original graph and the line graph, leading to better feature learning and performance.
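The construction above can be sketched directly from an edge list. Edges of G (canonicalized so that (a, b) and (b, a) coincide) become nodes of L(G), and two such nodes are joined whenever the corresponding edges share an endpoint:

```python
from itertools import combinations

def line_graph(edges):
    """Construct the line graph L(G) of an undirected graph G.

    Each undirected edge of G becomes a node of L(G); two such nodes
    are adjacent iff the corresponding edges share a common end node.
    """
    # Canonicalize undirected edges so (a, b) and (b, a) map to one node.
    nodes = sorted({tuple(sorted(e)) for e in edges})
    lg_edges = set()
    for e1, e2 in combinations(nodes, 2):
        if set(e1) & set(e2):  # the two edges share an endpoint in G
            lg_edges.add((e1, e2))
    return nodes, sorted(lg_edges)

# Path graph 0-1-2: edges (0, 1) and (1, 2) share node 1,
# so L(G) has two nodes joined by a single edge.
nodes, lg = line_graph([(0, 1), (1, 2)])
```

Libraries such as NetworkX provide the same construction (`nx.line_graph`); the explicit version here only serves to make the definition concrete.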

3. WEIGHTED LINE GRAPH CONVOLUTIONAL NETWORKS

In this work, we propose the weighted line graph to address the bias that arises when a line graph encodes graph topology information. Based on our weighted line graph, we propose the weighted line graph convolution layer (WLGCL) for better feature learning by leveraging line graph structures. In addition, graph neural networks that use line graphs consume excessive computational resources. To address this inefficiency, we propose to implement the WLGCL using the incidence matrix, which dramatically reduces the usage of computational resources.
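A minimal sketch of the incidence-matrix computation follows. It assumes one natural normalization consistent with the weighted-degree-2 property stated above: the connection between two edges meeting at node v is weighted by 1/d_v, with self-loops retained, which corresponds to the weighted line graph adjacency B^T D^{-1} B, where B is the node-edge incidence matrix and D the node degree matrix. The exact weighting used in the paper may differ; this illustrates the principle.

```python
import numpy as np

def weighted_line_graph_adjacency(edges, n):
    """Weighted line-graph adjacency via the incidence matrix.

    B is the n x m node-edge incidence matrix and D the node degree
    matrix. A_w = B^T D^{-1} B weights the connection between two edges
    meeting at node v by 1/d_v (keeping self-loop terms), so every row
    sums to 2, i.e., every line-graph node has weighted degree 2.
    """
    m = len(edges)
    B = np.zeros((n, m))
    for j, (a, b) in enumerate(edges):
        B[a, j] = 1.0
        B[b, j] = 1.0
    deg = B.sum(axis=1)                    # node degrees d_v
    A_w = B.T @ np.diag(1.0 / deg) @ B     # m x m weighted adjacency
    return A_w

# Star graph with center 0 and leaves 1..3: every row of A_w sums to 2.
A_w = weighted_line_graph_adjacency([(0, 1), (0, 2), (0, 3)], n=4)
```

Because B is n x m and typically sparse, forming A_w this way avoids materializing and traversing the (potentially much larger) unweighted line graph explicitly, which is the source of the efficiency gain.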

3.1. BENEFIT AND BIAS OF LINE GRAPH REPRESENTATIONS

In this section, we describe the benefit and bias of using line graph representations.

Benefit. In message-passing operations, edges are usually given equal importance, and edge features are not well explored. This can constrain the capacity of GNNs, especially on graphs with edge features. In the chemistry domain, for example, a compound can be converted into a graph where atoms are nodes and chemical bonds are edges. On such graphs, edges have different properties and thus different importance, yet message-passing operations underestimate the importance of edges. To address this issue, the line graph structure can be used to leverage edge features and varying edge importance (Jiang et al., 2019; Chen et al., 2019; Zhu et al., 2019). The line graph, by its nature, enables graph neural networks to encode and propagate edge features in the graph. Line graph neural networks that take advantage of line graph structures have been shown to be promising on graph-related tasks (Chen et al., 2019; Xiong et al., 2019; Yao et al., 2019). By encoding node and edge features simultaneously, line graph neural networks enhance feature learning on graphs.

Bias. According to the Whitney graph isomorphism theorem, the line graph L(G) encodes the topology information of the original graph G, but the dynamics and topology of G are not correctly represented in L(G) (Evans & Lambiotte, 2009). As described in the previous section, each edge in the graph G corresponds to a vertex in the line graph L(G), and the features of each edge contain the features of its two end nodes. A vertex with degree d in the original graph G will generate a clique of d vertices in L(G); its features thus appear in d line-graph nodes, so the features of high-degree nodes are overstated relative to those of low-degree nodes.
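This bias can be quantified directly. In the unweighted line graph, the node corresponding to edge (a, b) has degree d_a + d_b - 2 (it neighbors the other d_a - 1 edges at a and the other d_b - 1 edges at b), so edges incident to high-degree nodes receive far more messages than edges on sparse paths:

```python
def line_graph_degrees(edges):
    """Unweighted degree of each line-graph node (i.e., each edge of G).

    An edge (a, b) is adjacent in L(G) to the other d_a - 1 edges at a
    and the other d_b - 1 edges at b, so its degree is d_a + d_b - 2.
    """
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return [deg[a] + deg[b] - 2 for a, b in edges]

# Star graph: the 5 edges around the hub form a clique K_5 in L(G),
# so each line-graph node has degree 4, whereas every edge of a path
# graph would have line-graph degree at most 2. Features of the
# high-degree hub are replicated across all 5 line-graph nodes and
# thus overstated during message passing.
degs = line_graph_degrees([(0, k) for k in range(1, 6)])  # [4, 4, 4, 4, 4]
```

Normalizing each such connection so that every line-graph node has weighted degree 2, as the weighted line graph does, removes this degree-dependent amplification.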



Figure 1: Illustrations of a graph (a), its corresponding line graph (b), and its incidence graph (c).

