A FRAMEWORK FOR DIFFERENTIABLE DISCOVERY OF GRAPH ALGORITHMS

Anonymous authors
Paper under double-blind review

Abstract

Recently there has been a surge of interest in using graph neural networks (GNNs) to learn algorithms. However, existing works focus mostly on imitating existing algorithms and are limited in two important aspects: the search space for algorithms is too small, and the learned GNN models are not interpretable. To address these issues, we propose a novel framework which enlarges the search space using cheap global information from a tree decomposition of the graph, and which can explain the graph structures that lead to the decisions of the learned algorithm. We apply our framework to three NP-complete problems on graphs and show that the framework is able to discover effective and explainable algorithms.

1. INTRODUCTION

Many graph problems, such as maximum cut and minimum vertex cover, are NP-hard. The classical algorithm design paradigm often requires significant effort from domain experts to understand and exploit problem structure in order to come up with effective procedures. However, for more complex problems, and in the presence of a family of problem instances, it is becoming increasingly challenging for humans to identify the problem structure and tailor algorithms. Thus, there has been a surge of interest in recent years in using learning and differentiable search to discover graph algorithms. In this context, GNNs have been widely used for representing and learning graph algorithms (Dai et al., 2018; Li et al., 2018). However, directly using a GNN model to define the algorithm search space may not be enough for discovering an algorithm better than existing greedy ones. Hella et al. (2015) and Sato et al. (2019) have theoretically discussed the limitations of GNNs for expressing more powerful algorithms, by drawing a connection between GNNs and distributed local algorithms. In particular, Sato et al. (2019) derived the approximation ratios of the algorithms that can be learned by GNNs, which are much worse than those of some simple algorithms (Johnson, 1974; Chlebík & Chlebíková, 2008). Intuitively, GNNs can only capture local graph patterns and miss global information, which fundamentally restricts their expressive power. To enhance the capacity of GNNs and allow for a larger search space, we incorporate global information about the graph as additional features and augment them with other node/edge features. The idea of incorporating additional features to improve the expressiveness of GNNs has been deployed in several existing models, by adding unique node identifiers (Donnat et al., 2018; Seo et al., 2019; Zhang et al., 2020), port numbering information (Sato et al., 2019), or randomness (Sato et al., 2020). However, these features are added mainly to break the local symmetry of similar graph patterns, and do not add much information about global graph properties.

Another important aspect, which has largely been ignored in previous work, is explaining the learned algorithm encoded in the GNN. Many previous works focus on the ability of GNNs to imitate existing graph algorithms, without showing that new algorithms are being learned. One exception is Khalil et al. (2017), who experimentally showed that a GNN discovered a new algorithm for the minimum vertex cover problem, in which the node selection policy balances node degree against the connectivity of the graph. However, this phenomenon was only mentioned in passing, and a systematic explanation of the graph patterns leading to the algorithm's decisions is missing. Therefore, there is a pressing need for explainable graph models that help us understand the learned algorithm.

In this paper, we propose a new framework for differentiable graph algorithm discovery (DAD), focusing on two important aspects of the discovery process: designing a larger search space, and an effective explainer model. More specifically, we design a search space for graph algorithms by
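To make the idea of tree-decomposition-based global features concrete, the sketch below derives simple per-node features from a heuristic tree decomposition and could be concatenated with the usual node/edge features before a GNN. The specific feature choice (bag membership count and largest containing bag) and the use of networkx's min-degree treewidth heuristic are our illustrative assumptions, not the exact design used in this paper:

```python
import networkx as nx
from networkx.algorithms import approximation as approx

def tree_decomposition_features(G):
    """Per-node global features from a heuristic tree decomposition of G.

    Illustrative feature choice: for each node, record (i) the number of
    bags containing it and (ii) the size of the largest such bag.
    """
    # Min-degree heuristic: cheap, returns (width, decomposition tree)
    # where the tree's nodes are bags (frozensets of G's nodes).
    width, td = approx.treewidth_min_degree(G)
    feats = {v: [0, 0] for v in G.nodes}
    for bag in td.nodes:
        for v in bag:
            feats[v][0] += 1                      # bag membership count
            feats[v][1] = max(feats[v][1], len(bag))  # largest containing bag
    return width, feats

G = nx.cycle_graph(5)
width, feats = tree_decomposition_features(G)
print(width)   # the min-degree heuristic gives width 2 on a cycle
print(feats)
```

Such features are "cheap" in the sense that heuristic decompositions run in low polynomial time, yet they encode non-local structure (e.g., how entangled a node is in the graph's separator hierarchy) that message passing alone cannot recover.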

