Vectorial Graph Convolutional Network

Abstract

Graph Convolutional Networks (GCNs) have drawn considerable attention recently due to their outstanding performance in processing graph-structured data. However, GCNs are still limited to undirected graphs because they theoretically require a symmetric matrix as the basis of the Laplacian transform. This makes the convolution operator isotropic and reduces its sensitivity to different kinds of information. To solve this problem, we generalize the spectral convolution operator to directed graphs via field extension, which lifts edge representations from scalars to vectors and thereby introduces the concept of direction: even homogeneous information becomes distinguishable through its differences in direction. In this paper, we propose the Vectorial Graph Convolutional Network (VecGCN) and present experimental evidence of its advantages on a variety of directed-graph node classification and link prediction tasks.



However, the vast majority of these studies are based on undirected graphs, even when the original graphs are naturally directed. This practice risks discarding potentially important information Kawamoto et al. (2018); Zhang et al. (2021). For example, you may have heard of a celebrity, but he or she does not know you. From the GATs' perspective, it is easy to see that the attention values from node i to node j and from node j to node i are not necessarily equal, which means the information carried by an edge is not symmetric. In spectral methods, the object matrix of the convolutional kernels needs to be positive semi-definite and symmetric, because the decomposition of such a matrix is orthogonal and can therefore be taken as a Fourier transform basis. This in turn requires the graph to be undirected to satisfy the above two conditions; otherwise the eigenvalues of A cannot be solved in the real number field. Thus, extending spectral methods to directed graphs is not straightforward Zhang et al. (2021), and one of the key challenges lies in defining a symmetric adjacency matrix on a directed graph. Recent studies have proposed different approaches to this problem. However, the original purpose of constructing directed GCNs is to keep more of the information in the graph. Whereas these previous studies view the direction as a one-dimensional scalar, it should arguably be a vector sharing the same dimension as the node vector. From a Principal Component Analysis (PCA) perspective Abdi & Williams (2010), if the n-dimensional node vectors are decomposed into k principal components (k ≤ n), GCNs preserve only the first component on each edge, and directed GCNs preserve the first and second components; the remaining information is lost. To address these issues, we propose VecGCN. We overcome the symmetry problem and the information-loss problem at the same time by using field extension, the main research object of field theory in abstract algebra.
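The obstacle described above can be demonstrated numerically. The following is a minimal sketch (our own illustration, not part of the paper's method) showing that a directed graph's asymmetric adjacency matrix can have complex eigenvalues, so no real orthogonal Fourier basis exists, while its symmetrized counterpart always has a real spectrum:

```python
import numpy as np

# Toy directed graph on 3 nodes: 0 -> 1 -> 2 -> 0 (a directed cycle).
# The adjacency matrix is not symmetric, so its eigendecomposition is
# not orthogonal and its eigenvalues need not be real.
A_directed = np.array([[0., 1., 0.],
                       [0., 0., 1.],
                       [1., 0., 0.]])
eigvals = np.linalg.eigvals(A_directed)
print(np.iscomplex(eigvals).any())  # True: no real Fourier basis exists

# Symmetrizing (an undirected version) restores real eigenvalues and an
# orthogonal eigenbasis, which is what classical spectral GCNs rely on.
A_undirected = np.maximum(A_directed, A_directed.T)
eigvals_sym = np.linalg.eigvalsh(A_undirected)
print(np.isreal(eigvals_sym).all())  # True
```

Symmetrization restores a real spectrum, but it does so precisely by discarding the edge directions, which is the information loss the paper aims to avoid.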
The basic idea is to start from a base field and construct a "larger" field that contains it; here, we construct a high-dimensional field from the distances between nodes. Firstly, the distance matrix is symmetric. Secondly, a high-dimensional field does not cause information loss. The main contributions are summarized as follows:

1. Replace the adjacency matrix with the distance matrix. This has two main advantages. On the one hand, the distance matrix is symmetric. On the other hand, the topology of the distance matrix is the same as that of the adjacency matrix: if the distance between every pair of adjacent nodes is 1, the two matrices are equal. The distance matrix can therefore be regarded as a generalization of the adjacency matrix. Not only that, but the distance matrix also satisfies the theoretical requirements of GCNs.

2. Introduce the concept of direction. Since the adjacency matrix is binary, where 0 indicates that there is no edge between two nodes and 1 the opposite, it carries no notion of direction. This leaves the model unable to distinguish the importance of neighboring nodes, which manifests as isotropy. The above problem can be solved by lifting the elements of the adjacency matrix from scalars to vectors using the field extension method.

3. Propose VecGCN. Our extensive experiments on a series of datasets clearly show that VecGCN's performance exceeds that of most other methods.
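The first contribution can be illustrated with a small sketch (our own example; the `hop_distance_matrix` helper is hypothetical, not from the paper). For an undirected graph, the BFS hop-distance matrix is symmetric like the adjacency matrix, and its distance-1 entries recover the adjacency matrix exactly, supporting the claim that the distance matrix generalizes the adjacency matrix:

```python
import numpy as np
from collections import deque

def hop_distance_matrix(adj):
    """BFS hop distances between all node pairs (np.inf if unreachable)."""
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and dist[s, v] == np.inf:
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    return dist

# Undirected path graph 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = hop_distance_matrix(A)
print(np.allclose(D, D.T))             # True: symmetric, like A
print(np.array_equal(D == 1, A == 1))  # True: distance-1 entries recover A
```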

2. Related Work

Most graph neural network architectures can be categorized as either spectral or spatial. Neighborhoods in spatial networks such as Veličković et al. (2017); Hamilton et al. (2017a); Atwood & Towsley (2016); Duvenaud et al. (2015) are well-defined even when their adjacency matrices are not symmetric. Although spatial methods typically have natural extensions to directed graphs, they may ignore important information in the directed graph, as we discussed before. Spectral approaches also suffer from this problem. In this section, we review related work on constructing neural networks for directed graphs, and describe the development of the problem as well as the various solutions in detail. We refer the reader to Wu et al. (2020); Zhang et al. (2020) for more background information.

2.1. Notations and Preliminaries.

Given a simple and connected undirected graph G = (V, E) with n nodes and m edges, let A denote the adjacency matrix and D the diagonal degree matrix. Spectral-based GCNs are built on the graph Laplacian matrix, defined as $L = D - A$. The normalized form of the Laplacian matrix is defined as $L_{\mathrm{sym}} = D^{-1/2} L D^{-1/2} = I - D^{-1/2} A D^{-1/2}$, where $I$ is an identity matrix of the same shape as $A$. $L_{\mathrm{sym}}$ is a matrix representation
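The definitions above can be checked numerically. Below is a minimal sketch (our own illustration under the standard definitions, not code from the paper) computing $L_{\mathrm{sym}}$ for a small undirected graph and verifying its well-known spectral properties:

```python
import numpy as np

# Symmetric normalized Laplacian L_sym = I - D^{-1/2} A D^{-1/2}
# for a small undirected graph (a triangle).
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
deg = A.sum(axis=1)                      # diagonal of the degree matrix D
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

# L_sym is symmetric positive semi-definite: real eigenvalues in [0, 2],
# and for a connected graph the smallest eigenvalue is 0.
w = np.linalg.eigvalsh(L_sym)
print(np.isclose(w[0], 0.0))                 # True
print(np.all((w > -1e-9) & (w < 2 + 1e-9)))  # True
```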



Graphs are a ubiquitous data structure in which entities are vertices and edges are their pairwise relationships. Most Graph Neural Networks (GNNs) fall into one of two categories: spectral networks Defferrard et al. (2016); Kipf & Welling (2016) or spatial networks Hamilton et al. (2017a); Veličković et al. (2017); Backstrom & Leskovec (2011). Spatial approaches are based on a localized averaging operator with learnable weights that iteratively traverses the entire graph. Spectral approaches are based on the eigen-decomposition of the graph Laplacian and smooth graph signals through the Fourier transform Zhou et al. (2020); Wu et al. (2020). Application domains range from social networks Chen et al. (2012) to quantum chemistry Liao et al. (2019) and text classification Yao et al. (2019), etc. One of the key techniques is Graph Convolutional Networks (GCNs) Defferrard et al. (2016); Kipf & Welling (2016); Xu et al. (2018a), a variant of Convolutional Neural Networks (CNNs) Mallat (2016) on graphs that learns representations from both vertices and edges. Applying these representations to downstream tasks Hamilton et al. (2017b), e.g., node classification and link prediction Hu et al. (2020), is particularly important.
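To make the localized averaging operator concrete, here is a minimal sketch of a single GCN layer following the renormalization trick of Kipf & Welling (2016), $H' = \mathrm{ReLU}(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2} X W)$ with $\hat{A} = A + I$; the graph, features, and weights below are random toy data of our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Undirected path graph on 4 nodes: 0 - 1 - 2 - 3.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = rng.standard_normal((4, 8))   # 4 nodes, 8 input features
W = rng.standard_normal((8, 3))   # learnable weights, 3 output features

A_hat = A + np.eye(4)                           # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H = np.maximum(A_norm @ X @ W, 0.0)             # localized averaging + ReLU
print(H.shape)  # (4, 3)
```

Note that `A_norm` is symmetric only because `A` is; this is exactly the assumption that breaks down on directed graphs.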

The adjacency matrix A is the topological edge set. Unless graph G is undirected, A is not symmetric. Unfortunately, GCNs are developed from spectral theory Kipf & Welling (2016); Xu et al. (2018a); Gilmer et al. (2017) and limited to symmetric convolutional kernels Beaini et al. (2021).

Recently, there has been a surge of interest in directed GCNs Tong et al. (2020b;a); Monti et al. (2018); Beaini et al. (2021); Zhang et al. (2021).

