ONE VERTEX ATTACK ON GRAPH NEURAL NETWORKS-BASED SPATIOTEMPORAL FORECASTING

Anonymous authors
Paper under double-blind review

Abstract

Spatiotemporal forecasting plays an essential role in intelligent transportation systems (ITS) and numerous applications such as route planning, navigation, and automated driving. Deep spatiotemporal Graph Neural Networks (GNNs), which capture both spatial and temporal patterns, have achieved great success in traffic forecasting applications. Although Deep Neural Networks (DNNs) have been proven vulnerable to carefully designed perturbations in multiple domains such as object classification and graph classification, these adversarial methods cannot be directly applied to spatiotemporal GNNs because of their causality and spatiotemporal mechanisms. There is still a lack of studies on the vulnerability and robustness of spatiotemporal GNNs. In particular, if spatiotemporal GNNs are vulnerable in real-world traffic applications, a hacker can easily cause serious traffic congestion and even a city-scale breakdown. To fill this gap, we design One Vertex Attack to break deep spatiotemporal GNNs by attacking a single vertex. To achieve this, we apply a genetic algorithm, with a universal attack method as the evaluation function, to locate the weakest vertex; perturbations are then generated by solving an optimization problem with inverse estimation. Empirical studies show that, under One Vertex Attack, perturbations in one vertex diffuse into most of the graph.

1. INTRODUCTION

Spatiotemporal traffic forecasting has been a long-standing research topic and a fundamental application in intelligent transportation systems (ITS). For instance, with better prediction of future traffic states, navigation apps can help drivers avoid traffic congestion, and traffic signals can manage traffic flows to increase network capacity. Essentially, traffic forecasting can be modeled as a multivariate time series prediction problem for a network of connected sensors based on the topology of road networks. Given the complex spatial and temporal patterns governed by traffic dynamics and road network structure (Roddick & Spiliopoulou, 1999), recent studies have developed various Graph Neural Network-based traffic forecasting models (Yu et al., 2018; Wu et al., 2019; Li et al., 2017; Guo et al., 2019). These deep learning models have achieved superior performance compared with traditional multivariate time series forecasting models such as vector autoregression (VAR). However, recent research has shown that deep learning frameworks are very vulnerable to carefully designed attacks (Kurakin et al., 2016b; Goodfellow et al., 2014; Papernot et al., 2016a; Tramèr et al., 2017; Kurakin et al., 2016a). This raises a critical concern about applying spatiotemporal GNN-based models to real-world traffic forecasting, where robustness and reliability are of ultimate importance. For example, with a vulnerable forecasting model, a hacker can manipulate the predicted traffic states. Feeding these manipulated values into downstream applications can cause severe problems such as traffic congestion and even city-scale breakdown. However, it remains unclear how vulnerable these GNN-based spatiotemporal forecasting models are. In particular, previous adversarial methods cannot be directly applied to fool GNN-based spatiotemporal forecasting models because of their causality and spatiotemporal mechanisms, as detailed in Section 2.
The goal of this paper is to understand and examine the vulnerability and robustness of GNN-based spatiotemporal forecasting models. To this end, we design a One Vertex Attack (OVA) framework that breaks these forecasting models by manipulating only one vertex in the graph. We first propose a universal attack method against spatiotemporal GNNs that applies inverse estimation to avoid using future ground truth. Then, we utilize a genetic algorithm, whose evaluation function is the proposed universal attack method, to locate the "weakest" vertex. Here the weakest vertex refers to the vertex whose attack causes maximum damage to the forecasting models. Finally, we generate perturbations by solving an optimization problem. It should be noted that poisoning all vertices, or even multiple vertices, is impossible in real-world applications because of the large scale of the graph. For instance, the graph of a traffic forecasting application generally covers 1,000 square kilometers, and it is unrealistic to organize hacker vehicles to poison all vertices in such a large-scale road network. Hence, the proposed one-vertex attack is a realistic way to evaluate the robustness and vulnerability of spatiotemporal forecasting models deployed in real-world applications. To demonstrate the effectiveness of the proposed OVA method, we test it on two spatiotemporal traffic datasets with three different spatiotemporal GNNs. The proposed method causes at least a 15% accuracy drop, and about 10% of vertices are severely impacted even when the perturbation is bounded to a speed variation of 15 km/h. The contributions of this paper can be summarized as follows.
• First, to the best of our knowledge, this is the first study on attacking spatiotemporal GNNs by poisoning only one vertex.
• Second, we propose a novel OVA method that is able to find the weakest vertex and generate optimal adversarial perturbations.
• Third, we empirically study the effectiveness of the proposed method with multiple experiments on real-world datasets.
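The weakest-vertex search described above can be illustrated with a minimal genetic algorithm over vertex indices. This is a sketch under stated assumptions, not the paper's implementation: the function name `genetic_vertex_search` and all hyperparameters are ours, and `fitness` is a hypothetical placeholder for the paper's evaluation function (the forecasting-error increase obtained by running the universal attack on a candidate vertex).

```python
import random

def genetic_vertex_search(num_vertices, fitness, pop_size=20,
                          generations=30, mutation_rate=0.3, seed=0):
    """Search for the single vertex whose attack causes maximum damage.

    `fitness(v)` stands in for the evaluation function: higher values mean
    attacking vertex `v` degrades the forecasting model more.
    """
    rng = random.Random(seed)
    # Each individual is simply a candidate vertex index.
    population = [rng.randrange(num_vertices) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: pop_size // 2]           # selection: keep the top half
        children = []
        while len(elite) + len(children) < pop_size:
            child = rng.choice(elite)             # inherit from a surviving parent
            if rng.random() < mutation_rate:      # mutation: jump to a random vertex
                child = rng.randrange(num_vertices)
            children.append(child)
        population = elite + children             # elitism preserves the best vertex
    return max(population, key=fitness)
```

Because the elite set always carries the best vertex found so far into the next generation, the search only needs to sample the weakest vertex once; mutation supplies the exploration that makes that likely.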

2. RELATED WORK

Adversarial Attacks against Time Series Analysis. Several previous works (Chen et al., 2019; Zhou et al., 2019; Alfeld et al., 2016; Karim et al., 2019) proposed adversarial attack methods against autoregressive models or time series classification models. These works only consider univariate time series. In contrast, we focus on complex spatiotemporal domains: the input of spatiotemporal GNNs is a temporally dynamic graph rather than regular matrices or sequences, and we take spatial correlation into consideration while the above works did not.

Adversarial Attacks against Graph Neural Networks. Many studies (Dai et al., 2018; Zugner & Gunnemann, 2019; Chang et al., 2020; Tang et al., 2020) utilized Reinforcement Learning (RL), meta learning, or genetic algorithms to fool GNNs in node, edge, and graph classification tasks by tuning the graph topology. All of these studies involve no temporal variation in their graphs and mainly focus on spatial patterns, so they cannot be applied to fool spatiotemporal forecasting models, which require temporal correlation. In particular, attacking spatiotemporal forecasting models deployed in real-world applications with graph topology-based attack methods (Zugner & Gunnemann, 2019; Chang et al., 2020) is unrealistic: tuning the graph topology means tuning the sensor network that continuously collects spatiotemporal data, and any modification of the sensors can be easily detected by the sensor network manager.

Adversarial Attacks against Recurrent Neural Networks. Recent studies (Rosenberg et al., 2019; Papernot et al., 2016b; Hu & Tan, 2017) demonstrated that RNN classifiers are vulnerable to adversarial sequences. These works require the ground truth to compute adversarial sequences, but because of the causality of forecasting applications, the future ground truth is unavailable. Besides, these works focus on regular vectors or matrices rather than irregular graphs. Hence, these adversarial sequence generation models cannot be directly applied to attack spatiotemporal GNN-based forecasting models.



One Pixel Attack for Fooling Deep Neural Networks. Su et al. (2019) utilized Differential Evolution (DE) to generate a perturbation that poisons one pixel in an image and thereby fools CNNs. Similar to the one pixel attack, we only poison one vertex in the graph. However, images are regular-structured,

