DYNAMIC GRAPH REPRESENTATION LEARNING WITH FOURIER TEMPORAL STATE EMBEDDING

Abstract

Static graph representation learning has been applied to many tasks over the years thanks to the invention of unsupervised graph embedding methods and, more recently, graph neural networks (GNNs). However, in many cases we must handle dynamic graphs, in which the graph structure and node labels evolve steadily over time. This poses a great challenge to existing methods in time and memory efficiency. In this work, we present a new method named Fourier Temporal State Embedding (FTSE) to address temporal information in dynamic graph representation learning. FTSE offers a time- and memory-efficient solution by applying signal processing techniques to temporal graph signals. We pair the Fourier Transform with an efficient edge network, providing a new prototype for modeling dynamic graph evolution with high precision. FTSE also prevents the 'history explosion' that arises in sequential models. Our empirical study shows that the proposed approach achieves significantly better performance than previous approaches on public datasets across multiple tasks.

1. INTRODUCTION

Graph representation learning encodes graphs as low-dimensional vectors at the node and graph levels (Perozzi et al., 2014; Tang et al., 2015; Wang et al., 2016; Cao et al., 2015; Ou et al., 2016). In recent years, deep neural networks (DNNs) have been extended to process graph data and have been utilized in a plethora of real-world applications (Estrach et al., 2014; Duvenaud et al., 2015; Defferrard et al., 2016; Li et al., 2015; Gilmer et al., 2017; Kipf & Welling, 2016; Hamilton et al., 2017; Jin et al., 2017; Chen et al., 2018; Veličković et al., 2018; Gao & Ji, 2019). One of the most popular families is Graph Convolutional Neural Networks (GCNs), which originated in spectral graph theory but have developed into spatial-based varieties. GCNs are natural extensions of Convolutional Neural Networks (CNNs), which have been widely studied and applied across many fields of research. Traditional GCNs have achieved commendable performance on static graphs. However, many applications involve dynamic graphs, where nodes and edges evolve steadily over time. For example, a social network is updated on a day-to-day basis as people make new friends; the dynamic graph represents users' evolving social relationships. In financial networks, transactions between nodes naturally carry time-stamps as temporal features, and transactions of different natures may behave differently in a network where the main focus is finding malicious parties. Learning the evolving nature of graphs is thus an important task, in which we predict future graph features and classify nodes based on their past behaviors. Evolving graphs pose great challenges to traditional GCNs, as temporal features cannot easily be incorporated into the learning algorithms. Simply concatenating GCNs with RNNs is a straightforward way to handle dynamic graphs, but it suffers from several drawbacks.
We can summarize them in three respects. First, the embedding vector of each node is not static and evolves with time; models must be capable of capturing this evolving nature. Second, the memory and computation cost of batch training is huge, since graphs from multiple timesteps must be kept in memory at the same time. Finally, the large number of timesteps within a single batch makes high-precision modeling difficult. Recently, there has also been a focus on using deep neural networks to generate graph embeddings (Trivedi et al., 2018; Pareja et al., 2020; Xu et al., 2020), as another direction compared with traditional unsupervised dynamic graph embedding approaches (Nguyen et al., 2018; Li et al., 2018;
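To make the memory argument concrete, the core idea behind a Fourier temporal state can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `fourier_state_embedding` and the choice of keeping the k lowest-frequency coefficients are assumptions made for exposition. The point is that a node's growing history of features compresses to a fixed-size spectral summary, so memory does not grow with the number of timesteps.

```python
# Hypothetical sketch (not the paper's actual FTSE architecture): compress a
# node's temporal feature history into a fixed-size embedding by keeping only
# its k lowest-frequency real-FFT coefficients.
import numpy as np

def fourier_state_embedding(history: np.ndarray, k: int = 4) -> np.ndarray:
    """history: (T, d) array of one node's features over T timesteps.
    Returns a fixed-size (2*k*d,) real vector, independent of T."""
    spectrum = np.fft.rfft(history, axis=0)   # (T//2 + 1, d) complex coefficients
    coeffs = spectrum[:k]                     # keep the k lowest frequencies
    if coeffs.shape[0] < k:                   # zero-pad very short histories
        pad = np.zeros((k - coeffs.shape[0], history.shape[1]), dtype=complex)
        coeffs = np.vstack([coeffs, pad])
    # Stack real and imaginary parts into a single real-valued embedding.
    return np.concatenate([coeffs.real.ravel(), coeffs.imag.ravel()])

# Histories of very different lengths map to embeddings of the same size.
short = fourier_state_embedding(np.random.randn(5, 8))
long_ = fourier_state_embedding(np.random.randn(500, 8))
assert short.shape == long_.shape == (2 * 4 * 8,)
```

A downstream model (e.g. an edge network, as the abstract suggests) could consume this fixed-size state instead of unrolling an RNN over the whole history, avoiding the 'history explosion' described above.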

