SRBGCN: TANGENT SPACE-FREE LORENTZ TRANSFORMATIONS FOR GRAPH FEATURE LEARNING

Abstract

Hyperbolic graph convolutional networks have been successfully applied to represent complex graph data structures. However, optimization on Riemannian manifolds is nontrivial; most existing hyperbolic networks therefore build their operations in the tangent space of the manifold, which is only a Euclidean local approximation. This distorts the learnt features, limits the representation capacity of the network, and makes the network hard to optimize. In this work, we introduce a fully hyperbolic graph convolutional network (GCN), referred to as SRBGCN, which performs neural computations such as feature transformation and aggregation directly on the manifold, using manifold-preserving Lorentz transformations that comprise spatial rotation (SR) and boost (B) operations. Experiments conducted on static graph datasets for node classification and link prediction tasks validate the performance of the proposed method.

1. INTRODUCTION

Graph convolutional networks (GCNs) were proposed to make use of the graph topology and model the spatial relationships between graph nodes, hence generalizing the convolution operation to graph data [Kipf & Welling (2017); Defferrard et al. (2016)]. Initially, the proposed models were built in the Euclidean space [Hamilton et al. (2017); Zhang et al. (2018); Velickovic et al. (2019)], which is not the natural space for embedding graph data and produces distorted feature representations [Nickel & Kiela (2018); Chami et al. (2019)]. Hyperbolic spaces are more suitable for representing graph data: their volume grows exponentially with the radius, which matches tree-like structures whose size grows exponentially with depth, whereas the volume of Euclidean space grows only polynomially. Motivated by this, recent works built GCNs in the hyperbolic space to take advantage of the properties of hyperbolic geometry [Chami et al. (2019); Liu et al. (2019)]. These hyperbolic graph convolutional networks (HGCNs) achieved better performance than their Euclidean counterparts, which shows the effectiveness of hyperbolic space for modelling hierarchical structures and graph data. However, these works performed the network operations in the tangent space of the manifold, which is only a Euclidean local approximation of the manifold at a point. Euclidean operations such as feature transformation and feature aggregation are not manifold-preserving and cannot be applied directly on the manifold, which is why these methods resort to the tangent space. Working in a tangent space, however, introduces distortion that may limit the representation capacity of hyperbolic networks, especially as most of these works use the tangent space at the origin. In this work, we propose fully manifold-preserving Lorentz feature transformations, using both boost and spatial rotation operations, to build SRBGCN entirely in the hyperbolic space without resorting to the tangent space. Experiments conducted on node classification and link prediction tasks on static graph datasets show the effectiveness of the proposed method.

SRBGCN has a good physical interpretation and can be used to build deep networks with more representation capacity and less distorted features. Chami et al. (2019) proposed HGCNs in which network operations are performed in the tangent space of the manifold; these achieved better performance than their Euclidean analogues on node classification and link prediction tasks.
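To make the notion of a manifold-preserving Lorentz transformation concrete, the following sketch (an illustrative example, not the paper's implementation) checks numerically that a boost composed with a spatial rotation maps points on the hyperboloid model back onto the hyperboloid, i.e. it preserves the Lorentz inner product. The axis and parameter choices here are arbitrary assumptions for illustration.

```python
import numpy as np

# Hyperboloid model H^2 = {x in R^3 : <x, x>_L = -1, x_0 > 0}, where
# <x, y>_L = -x_0*y_0 + x_1*y_1 + x_2*y_2 is the Lorentz inner product.

def lorentz_inner(x, y):
    """Lorentz (Minkowski) inner product with signature (-, +, +)."""
    return -x[0] * y[0] + x[1:] @ y[1:]

def boost(s):
    """Lorentz boost with rapidity s along the first spatial axis."""
    B = np.eye(3)
    B[0, 0] = B[1, 1] = np.cosh(s)
    B[0, 1] = B[1, 0] = np.sinh(s)
    return B

def spatial_rotation(theta):
    """Rotation of the spatial coordinates; leaves x_0 unchanged."""
    R = np.eye(3)
    R[1, 1] = R[2, 2] = np.cos(theta)
    R[1, 2] = -np.sin(theta)
    R[2, 1] = np.sin(theta)
    return R

# Lift a Euclidean point onto the hyperboloid: x_0 = sqrt(1 + |v|^2).
v = np.array([0.3, -0.8])
x = np.concatenate(([np.sqrt(1.0 + v @ v)], v))

T = spatial_rotation(0.7) @ boost(1.2)  # an SR + B transformation
y = T @ x

print(lorentz_inner(x, x))  # approx. -1: x lies on the hyperboloid
print(lorentz_inner(y, y))  # approx. -1: T keeps y on the hyperboloid
```

Because the transformed point stays exactly on the manifold (up to floating-point error), no logarithmic/exponential maps to a tangent space are needed, which is the property the proposed transformations exploit.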

