RANDOM LAPLACIAN FEATURES FOR LEARNING WITH HYPERBOLIC SPACE

Abstract

Due to its geometric properties, hyperbolic space can support high-fidelity embeddings of tree- and graph-structured data, upon which various hyperbolic networks have been developed. Existing hyperbolic networks encode geometric priors not only for the input, but also at every layer of the network. This approach involves repeatedly mapping to and from hyperbolic space, which makes these networks complicated to implement, computationally expensive to scale, and numerically unstable to train. In this paper, we propose a simpler approach: learn a hyperbolic embedding of the input, map it once to Euclidean space using a mapping that encodes geometric priors by respecting the isometries of hyperbolic space, and finish with a standard Euclidean network. The key insight is to use a random feature mapping via the eigenfunctions of the Laplace operator, which we show can approximate any isometry-invariant kernel on hyperbolic space. Our method can be used together with any graph neural network: even a linear graph model yields significant improvements in both efficiency and performance over other hyperbolic baselines, on both transductive and inductive tasks.
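For intuition, the Euclidean counterpart of this construction is the classical random Fourier features of Rahimi and Rechht (2007), which approximate any shift-invariant kernel using the eigenfunctions of the Euclidean Laplacian (complex exponentials). The sketch below illustrates that Euclidean analogue only; all function and parameter names are ours, not from this paper, whose contribution is the corresponding construction on hyperbolic space:

```python
import numpy as np

def random_fourier_features(X, num_features=5000, lengthscale=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 l^2))
    with random features built from Laplacian eigenfunctions e^{i w.x}."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (Gaussian here).
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# phi(x).phi(y) concentrates around k(x, y); the paper's random Laplacian
# features play the analogous role using eigenfunctions of the hyperbolic
# Laplacian, so that the resulting kernel is isometry-invariant.
X = np.random.default_rng(1).normal(size=(5, 3))
Phi = random_fourier_features(X)
K_approx = Phi @ Phi.T
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
```

The approximation error decays as O(1/sqrt(num_features)), which is what makes a single feature map followed by a standard Euclidean network a viable substitute for layer-by-layer hyperbolic operations.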

1. INTRODUCTION

Real-world data contains various structures that resemble non-Euclidean spaces: for example, data with tree- or graph-structure such as citation networks (Sen et al., 2008), social networks (Hoff et al., 2002), biological networks (Rossi & Ahmed, 2015), and natural language (e.g., taxonomies and lexical entailment), where latent hierarchies exist (Nickel & Kiela, 2017). Graph-structured data features in a range of problems, including node classification, link prediction, relation extraction, and text classification. It has been shown both theoretically and empirically (Bowditch, 2006; Nickel & Kiela, 2017; 2018; Chien et al., 2022) that hyperbolic space, the geometry of constant negative curvature, is naturally suited for representing (i.e., embedding) such data and capturing implicit hierarchies, outperforming Euclidean baselines. For example, Sala et al. (2018) show that hyperbolic space can embed trees without loss of information (arbitrarily low distortion), which cannot be achieved by Euclidean space of any dimension (Chen et al., 2013; Ravasz & Barabási, 2003).

Presently, most well-known and well-established deep neural networks are built in Euclidean space. The standard approach is to pass the input to a Euclidean network and hope the model can learn the features and embeddings. But this flat-space approach can encode the wrong prior in tasks for which we know the underlying data has a different geometric structure, such as the hyperbolic-space structure implicit in tree-like graphs. Motivated by this, there is an active line of research on developing ML models in hyperbolic space H n . Starting from the hyperbolic neural networks (HNN) of Ganea et al. (2018), a variety of hyperbolic networks were proposed for different applications, including HNN++ (Shimizu et al., 2020), hyperbolic variational auto-encoders (HVAE; Mathieu et al., 2019), hyperbolic attention networks (HATN; Gulcehre et al., 2018), hyperbolic graph convolutional networks (HGCN; Chami et al., 2019), hyperbolic graph neural networks (HGNN; Liu et al., 2019), and hyperbolic graph attention networks (HGAT; Zhang et al., 2021a). The strong empirical results of HGCN and HGNN in particular on node classification, link prediction, and molecular- and chemical-property prediction show the power of hyperbolic geometry for graph learning.

These hyperbolic networks adopt hyperbolic geometry at every layer of the model. Since hyperbolic space is not a vector space, operations such as addition and multiplication are not well-defined; neither are matrix-vector multiplication and convolution, which are key components of a deep model.
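To make such operations meaningful, hyperbolic networks replace them with manifold analogues; for instance, HNN (Ganea et al., 2018) substitutes Möbius addition on the Poincaré ball for ordinary vector addition. A minimal sketch of that operation for the unit-curvature ball follows (the helper name and numerical-stability constant are ours):

```python
import numpy as np

def mobius_add(x, y, eps=1e-7):
    """Mobius addition on the Poincare ball of curvature -1: the
    gyrovector-space analogue of Euclidean vector addition."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return num / (den + eps)  # eps guards against division by ~0 near the boundary

x = np.array([0.1, 0.2])
y = np.array([-0.3, 0.05])
z = mobius_add(x, y)
# The result stays inside the unit ball, but unlike Euclidean addition
# the operation is neither commutative nor associative, which is part of
# why layer-by-layer hyperbolic arithmetic is delicate to implement.
```

Every layer of a hyperbolic network must route its arithmetic through operations of this kind (or through exponential/logarithmic maps between the manifold and its tangent spaces), which is the source of the implementation complexity and numerical instability discussed above.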

