GRAPH AUTOENCODERS WITH DECONVOLUTIONAL NETWORKS

Abstract

Recent studies have indicated that Graph Convolutional Networks (GCNs) act as low-pass filters in the spectral domain and encode smoothed node representations. In this paper, we consider their opposite, namely Graph Deconvolutional Networks (GDNs), which reconstruct graph signals from smoothed node representations. We motivate the design of GDNs via a combination of inverse filters in the spectral domain and de-noising layers in the wavelet domain, since the inverse operation results in a high-pass filter and may amplify the noise. Based on the proposed GDN, we further propose a graph autoencoder framework that first encodes smoothed graph representations with a GCN and then decodes accurate graph signals with a GDN. We demonstrate the effectiveness of the proposed method on several tasks including unsupervised graph-level representation learning, social recommendation, and graph generation.

1. INTRODUCTION

Autoencoders have demonstrated excellent performance on tasks such as unsupervised representation learning (Bengio, 2009) and de-noising (Vincent et al., 2010). Recently, several studies (Zeiler & Fergus, 2014; Long et al., 2015) have demonstrated that the performance of autoencoders can be further improved by encoding with Convolutional Networks and decoding with Deconvolutional Networks (Zeiler et al., 2010). Notably, Noh et al. (2015) present a symmetric architecture that provides a bottom-up mapping from input signals to a latent hierarchical feature space with {convolution, pooling} operations and then maps the latent representation back to the input space with {deconvolution, unpooling} operations. While this architecture has been successful on data with structure in Euclidean space (e.g., images), there has recently been surging interest in applying such a framework to non-Euclidean data like graphs. However, extending this autoencoder framework to graph-structured data requires graph deconvolutional operations, which remain open-ended and under-studied, in contrast to the large body of work on Graph Convolutional Networks (Defferrard et al., 2016; Kipf & Welling, 2017).

In this paper, we study the characteristics of Graph Deconvolutional Networks (GDNs) and observe that de-noising is the key to effective deconvolutional operations. We therefore propose a wavelet-based module (Hammond et al., 2011) that serves as a de-noising mechanism after signals are reconstructed in the spectral domain (Shuman et al., 2013).

Most GCNs proposed by prior art, e.g., Cheby-GCN (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017), exploit spectral graph convolutions (Shuman et al., 2013) and Chebyshev polynomials (Hammond et al., 2011) to retain coarse-grained information while avoiding explicit eigendecomposition of the graph Laplacian. Recently, Wu et al. (2019) and Donnat et al. (2018) observed that GCN acts as a low-pass filter in the spectral domain and retains smoothed representations. Inspired by prior work on signal deconvolution (Kundur & Hatzinakos, 1996), we propose to design a GDN using high-pass filters as the counterpart of the low-pass filters embodied in GCNs. Because signal deconvolution is ill-posed, several prior works (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003) transform the signals into another domain (e.g., the spectral domain) where the problem can be better posed and resolved. Furthermore, Neelamani et al. (2004) observe that inverse filters in the spectral domain may amplify the noise, and we observe the same phenomenon for GDNs. Therefore, inspired by their hybrid spectral-wavelet method, our GDN performs inverse signal reconstruction in the spectral domain followed by a de-noising step in the wavelet domain.
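To make the motivation concrete, the following NumPy sketch illustrates the phenomenon on a toy graph: a low-pass graph filter smooths a signal, its inverse acts as a high-pass filter that recovers the signal exactly in the noiseless case, and the same inverse amplifies any noise added to the smoothed signal. The path graph, the filter response h(λ) = 1/(1 + λ), and all numeric values are illustrative assumptions, not the paper's actual GDN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: a 6-node path.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# Low-pass graph filter H = (I + L)^{-1}: its frequency response
# 1 / (1 + lambda) attenuates high graph frequencies, analogous to
# the smoothing behavior of GCN propagation.
H = np.linalg.inv(np.eye(n) + L)

# The inverse filter H^{-1} = I + L has response 1 + lambda, which grows
# with frequency, i.e., it is a high-pass filter.
x = np.linspace(0.0, 1.0, n)            # smooth ground-truth signal
y = H @ x                               # smoothed "encoding"
y_noisy = y + 0.01 * rng.standard_normal(n)

x_rec_clean = (np.eye(n) + L) @ y       # exact recovery without noise
x_rec_noisy = (np.eye(n) + L) @ y_noisy # inverse filtering amplifies noise

err_clean = np.linalg.norm(x_rec_clean - x)
err_noisy = np.linalg.norm(x_rec_noisy - x)
noise_norm = np.linalg.norm(y_noisy - y)
print("noiseless reconstruction error:", round(err_clean, 8))
print("injected noise norm           :", round(noise_norm, 4))
print("noisy reconstruction error    :", round(err_noisy, 4))
# The reconstruction error under noise exceeds the injected noise norm,
# which is why a de-noising step (e.g., wavelet-domain thresholding)
# is applied after inversion.
```

Since every eigenvalue of I + L is at least 1 and strictly larger on non-constant components, the inverse filter never shrinks noise and generally magnifies it, matching the observation attributed to Neelamani et al. (2004).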

