GRAPH AUTOENCODERS WITH DECONVOLUTIONAL NETWORKS

Abstract

Recent studies have indicated that Graph Convolutional Networks (GCNs) act as a low-pass filter in the spectral domain and encode smoothed node representations. In this paper, we consider their opposite, namely Graph Deconvolutional Networks (GDNs), which reconstruct graph signals from smoothed node representations. We motivate the design of GDNs via a combination of inverse filters in the spectral domain and de-noising layers in the wavelet domain, as the inverse operation results in a high-pass filter and may amplify the noise. Based on the proposed GDN, we further propose a graph autoencoder framework that first encodes smoothed graph representations with a GCN and then decodes accurate graph signals with a GDN. We demonstrate the effectiveness of the proposed method on several tasks, including unsupervised graph-level representation learning, social recommendation, and graph generation.

1. INTRODUCTION

Autoencoders have demonstrated excellent performance on tasks such as unsupervised representation learning (Bengio, 2009) and de-noising (Vincent et al., 2010). Recently, several studies (Zeiler & Fergus, 2014; Long et al., 2015) have demonstrated that the performance of autoencoders can be further improved by encoding with Convolutional Networks and decoding with Deconvolutional Networks (Zeiler et al., 2010). Notably, Noh et al. (2015) present a symmetric architecture that provides a bottom-up mapping from input signals to a latent hierarchical feature space with {convolution, pooling} operations and then maps the latent representation back to the input space with {deconvolution, unpooling} operations. While this architecture has been successful on features with Euclidean structure (e.g., images), there has recently been a surge of interest in applying such a framework to non-Euclidean data like graphs. However, extending this autoencoder framework to graph-structured data requires graph deconvolutional operations, which remain largely unexplored, in contrast to the large body of work on Graph Convolutional Networks (Defferrard et al., 2016; Kipf & Welling, 2017).

In this paper, we study the characteristics of Graph Deconvolutional Networks (GDNs) and observe de-noising to be the key to effective deconvolutional operations. We therefore propose a wavelet-based module (Hammond et al., 2011) that serves as a de-noising mechanism after signals are reconstructed in the spectral domain (Shuman et al., 2013). Most GCNs proposed by prior art, e.g., Cheby-GCN (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017), exploit spectral graph convolutions (Shuman et al., 2013) and Chebyshev polynomials (Hammond et al., 2011) to retain coarse-grained information and avoid explicit eigendecomposition of the graph Laplacian. Recently, Wu et al. (2019) and Donnat et al. (2018) observed that GCN acts as a low-pass filter in the spectral domain and retains smoothed representations. Inspired by prior work on signal deconvolution (Kundur & Hatzinakos, 1996), we propose to design a GDN using high-pass filters as the counterpart of the low-pass filters embodied in GCNs. Since signal deconvolution is ill-posed, several prior works (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003) rely on transforming the signals into another domain (e.g., the spectral domain) where the problem is better posed. Furthermore, Neelamani et al. (2004) observe that inverse filters in the spectral domain may amplify the noise, and we observe the same phenomenon for GDNs. Therefore, inspired by their hybrid spectral-wavelet method, i.e., inverse signal reconstruction in the spectral domain followed by a de-noising step in the wavelet domain, we introduce a spectral-wavelet GDN to decode the smoothed representations into the input graph signals. The proposed spectral-wavelet GDN employs spectral graph convolutions with a high-pass filter to obtain the inverse-filtered signals and then de-noises them in the wavelet domain. In addition, we apply the Maclaurin series as a fast approximation technique to compute both the high-pass filters and the wavelet kernels (Donnat et al., 2018).

With the proposed spectral-wavelet GDN, we further propose a graph autoencoder (GAE) framework that resembles the symmetric architectures of Noh et al. (2015). We evaluate the effectiveness of the proposed GAE framework on three popular and important tasks: unsupervised graph-level representation learning (Sun et al., 2020), social recommendation (Jamali & Ester, 2010), and graph generation. In the first task, the proposed GAE outperforms the state of the art on unsupervised graph classification, along with a significant improvement in running time. In the second task, the proposed GAE is on par with the state of the art in recommendation accuracy; meanwhile, it demonstrates strong robustness against rating noise and achieves the best recommendation diversification (Ziegler et al., 2005). In the third task, the proposed GDN enhances the generation performance of popular variational autoencoder frameworks, including VGAE (Kipf & Welling, 2016) and Graphite (Grover et al., 2019).
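The low-pass/high-pass relationship above can be made concrete with a small numerical sketch. This is our illustration, not the paper's implementation: the 7-node ring graph and the filter response g(λ) = 1 - λ/2 are illustrative assumptions, and the explicit eigendecomposition is exactly what the paper avoids via Maclaurin-series approximations.

```python
# Minimal NumPy sketch (illustrative, not the paper's implementation) of why
# inverting a GCN-style low-pass filter yields a high-pass filter that
# amplifies noise. Graph and filter response are illustrative assumptions.
import numpy as np

N = 7                                  # odd ring: no Laplacian eigenvalue at 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

# symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
d_inv_sqrt = np.diag(A.sum(1) ** -0.5)
L = np.eye(N) - d_inv_sqrt @ A @ d_inv_sqrt
lam, U = np.linalg.eigh(L)             # graph Fourier basis, lam in [0, 2)

rng = np.random.RandomState(0)
x = rng.randn(N)                       # clean graph signal

# GCN-like smoothing: low-pass spectral response, small at high frequencies
g = 1.0 - lam / 2.0
x_smooth = U @ np.diag(g) @ U.T @ x

# exact deconvolution: the inverse response 1/g is a high-pass filter
x_rec = U @ np.diag(1.0 / g) @ U.T @ x_smooth      # recovers x exactly

# ...but the same inverse amplifies even tiny noise on the smoothed signal
noise = 0.01 * rng.randn(N)
x_noisy_rec = U @ np.diag(1.0 / g) @ U.T @ (x_smooth + noise)
```

In this sketch, `x_rec` matches `x` up to floating-point error, while the reconstruction error of `x_noisy_rec` exceeds the injected noise, since 1/g is large at high frequencies. This is the noise amplification that the proposed GDN counters with a de-noising step in the wavelet domain.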

2. RELATED WORK

Deconvolutional networks

Signal deconvolution (Kundur & Hatzinakos, 1996) has a long history in the signal processing community and concerns estimating the true signals from degraded or smoothed observations (Banham & Katsaggelos, 1997). Later deep learning studies (Zeiler et al., 2010; Noh et al., 2015) consider deconvolutional networks as the opposite operation of Convolutional Neural Networks (CNNs) and mainly focus on Euclidean structures, e.g., images. Dumoulin & Visin (2016) note that the operation of Zeiler et al. (2010) is in essence a transposed convolution, as it differs from deconvolution as defined in the signal processing community. For deconvolutional networks on non-Euclidean structures like graphs, studies are still sparse. Feizi et al. (2013) propose network deconvolution to infer the true network given a partially observed structure; it relies on explicit eigendecomposition and cannot be used as the counterpart of GCN. Yang & Segarra (2018) formulate deconvolution as a pre-processing step on the observed signals, in order to improve classification accuracy. Zhang et al. (2020) consider recovering graph signals from the latent representation; however, they simply adopt the filter design used in GCN and shed little insight into the internal operation of GDNs.

Graph autoencoders

Since the introduction of Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Defferrard et al., 2016) and autoencoders (AEs), many studies (Kipf & Welling, 2016; Grover et al., 2019) have used GNNs and AEs to encode graphs into, and decode graphs from, latent representations. Recently, graph pooling has emerged as a research topic that also contributes to the development of graph autoencoders; common practices include DIFFPOOL (Ying et al., 2018), SAGPool (Lee et al., 2019), and MinCutPool (Bianchi et al., 2020). Although some encouraging progress has been achieved, there is still no graph deconvolution operation that can up-sample latent feature maps to restore their original resolutions (Gao & Ji, 2019). In this regard, current graph autoencoders bypass the difficulty via (1) non-parameterized decoders (Kipf & Welling, 2016; Deng et al., 2020; Li et al., 2020), (2) GCN decoders (Grover et al., 2019; Gao & Ji, 2019), and (3) multilayer perceptron (MLP) decoders (Simonovsky & Komodakis, 2018).

3. GRAPH AUTOENCODER FRAMEWORK

Formally, we are given an undirected, unweighted graph G = (V, A, X), where V is the node set and N = |V| denotes the number of nodes. The adjacency matrix A ∈ R^{N×N} represents the graph structure, and the feature matrix X ∈ R^{N×d} represents the node attributes. Our goal is to learn an encoder and a decoder that map between the space of graphs G and their latent factors G_pool = (V_pool, A_pool, Z). We show a schematic diagram of the proposed framework in Figure 1.
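The symmetric encode/decode pairing between G and its latent factors can be sketched schematically as follows. This is our illustration under simplifying assumptions, not the paper's method: pooling and unpooling (G to G_pool) are omitted, the weights are not learned, the explicit eigendecomposition stands in for the paper's Maclaurin-series approximation, and elementwise soft-thresholding stands in for the wavelet-domain de-noising; all function names are ours.

```python
# Schematic NumPy sketch of a GCN-style low-pass encoder paired with a
# GDN-style decoder that inverts it in the spectral domain. Illustrative
# assumptions only: no pooling/unpooling, no learned weights, explicit
# eigendecomposition, and soft-thresholding in place of wavelet de-noising.
import numpy as np

def normalized_adj(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}, the GCN propagation matrix."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = np.diag(A_tilde.sum(1) ** -0.5)
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def gcn_encode(A, X, W):
    """One GCN layer (nonlinearity omitted): low-pass smoothing, then projection."""
    return normalized_adj(A) @ X @ W

def gdn_decode(A, Z, W, tau=0.0):
    """GDN-style decoder sketch: apply the inverse (high-pass) filter of the
    encoder's smoother, project, then de-noise by soft-thresholding."""
    lam, U = np.linalg.eigh(normalized_adj(A))     # A_hat = U diag(lam) U^T
    X_rec = U @ np.diag(1.0 / lam) @ U.T @ Z @ W   # inverse filtering
    return np.sign(X_rec) * np.maximum(np.abs(X_rec) - tau, 0.0)
```

With identity weights and tau = 0, the round trip `gdn_decode(A, gcn_encode(A, X, np.eye(d)), np.eye(d))` recovers X exactly on any graph whose propagation matrix A_hat is invertible; in the actual framework both weight matrices are learned and the decoder additionally restores the pooled graph structure.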

