ROBUST MULTIVARIATE TIME-SERIES FORECASTING: ADVERSARIAL ATTACKS AND DEFENSE MECHANISMS

Abstract

This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our study discovers a new attack pattern that negatively impacts the forecast of a target time series by making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we develop two defense strategies. First, we extend a randomized smoothing technique, previously developed for classification, to multivariate forecasting scenarios. Second, we develop an adversarial training algorithm that learns to create adversarial examples while simultaneously optimizing the forecasting model to improve its robustness against such adversarial simulation. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and that our defense algorithms are more effective than baseline defense mechanisms.

1. INTRODUCTION

Understanding the robustness of time-series models has been a long-standing issue with applications across many disciplines such as climate change (Mudelsee, 2019), financial market analysis (Andersen et al., 2005; Hallac et al., 2017), downstream decision systems in retail (Böse et al., 2017), resource planning for cloud computing (Park et al., 2019; 2020), and optimal control of vehicles (Kim et al., 2020). In particular, the notion of robustness captures how sensitive the model output is when authentic data is (potentially) perturbed with noise. In practice, as observation data are often corrupted by measurement noise, it is important to develop statistical forecasting models that are less sensitive to such noise (Brown, 1957; Brockwell & Davis, 2009; Taylor & Letham, 2018) or more stable against outliers that might arise from such corruption (Connor et al., 1994; Gelper et al., 2010; Liu & Zhang, 2021; Wang & Tsay, 2021). However, these approaches have not considered the possibility of adversarial noise, which is strategically crafted to mislead the model rather than being sampled from a known distribution. In fact, vulnerabilities to such adversarial noise have previously been pointed out in classification (Szegedy et al., 2013; Goodfellow et al., 2014b). In practice, it has been shown that human-imperceptible adversarial perturbations can alter the classification outcomes of a deep learning (DL) model, revealing a severe threat to many safety-critical systems. As such a risk is associated with DL's high capacity to fit complex data patterns, we postulate that similar threats might also occur in forecasting, where modern DL-based forecasting models (Rangapuram et al., 2018; Salinas et al., 2020; Lim et al., 2020; Wang et al., 2019; Park et al., 2022) have become the dominant approach.
For example, to mislead the forecast of a particular stock, adversaries might attempt to alter some features external to the stock's financial valuation so as to maximize the gap between the predictions of its value on authentic and on altered features. The feasibility of such an adversarial attack has recently been demonstrated with tweet messages (Xie et al., 2022) against a text-based stock forecasting model. Motivated by these real scenarios, we propose to investigate such adversarial threats on more practical forecasting models whose predictions are based on more precise features, e.g., the valuations of other stock indices. Intuitively, rather than releasing adverse information on social media to alter sentiment about the target stock, the adversaries can instead invest in those other stocks and hence change their valuations adversely.
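To make the scenario concrete, the following is a minimal sketch of this attack idea on a toy linear multivariate forecaster: the adversary perturbs only a small number of non-target series, within an L-infinity budget, to push the target series' forecast away from its clean value. The model, variable names, and hyperparameters (`eps`, `steps`, `lr`, `k`) are illustrative assumptions, not the actual attack algorithm or forecasting architecture studied in this paper.

```python
import numpy as np

# Toy linear forecaster: next value of each series is a weighted
# combination of all series' past observations (illustrative only).
rng = np.random.default_rng(0)
n_series, context = 4, 8
W = rng.normal(size=(n_series, n_series * context))  # toy model weights
X = rng.normal(size=(n_series, context))             # past observations

def forecast(X):
    """Next-step prediction for every series from the past window."""
    return W @ X.ravel()

def sparse_attack(X, target=0, eps=0.1, steps=50, lr=0.01, k=1):
    """Perturb only k non-target series to push the target series'
    forecast away from its clean value, via projected gradient ascent
    under an L-infinity budget eps."""
    clean = forecast(X)[target]
    g = W[target].reshape(n_series, context)  # d forecast[target] / d X
    # attack the k non-target series the target forecast is most sensitive to
    candidates = [i for i in range(n_series) if i != target]
    chosen = sorted(candidates, key=lambda i: -np.abs(g[i]).sum())[:k]
    delta = np.zeros_like(X)
    for _ in range(steps):
        delta[chosen] += lr * g[chosen]   # ascend the forecast deviation
        delta = np.clip(delta, -eps, eps)  # project onto the budget
    return delta, clean, forecast(X + delta)[target]

delta, clean, adv = sparse_attack(X)
```

For a linear model this gradient ascent saturates at the sign-aligned perturbation, so the forecast deviation `adv - clean` is guaranteed to grow; for the deep probabilistic forecasters considered later, the gradient must instead be obtained by backpropagation through the model.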

