TARGETED ATTACKS ON TIMESERIES FORECASTING

Abstract

Real-world deep learning models developed for time series forecasting are used in several critical applications, ranging from medical devices to the security domain. Many previous works have shown that deep learning models are prone to adversarial attacks and have studied their vulnerabilities. However, the vulnerabilities of time series forecasting models to adversarial inputs have not been extensively explored. While an attack on a forecasting model might aim simply to deteriorate the model's performance, an attack is more effective if it is focused on producing a specific impact on the model's output. In this paper, we propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models. These targeted attacks create a specific impact on the amplitude and direction of the output prediction. We adapt existing adversarial attack techniques from the computer vision domain for time series, and additionally propose a modified version of the Auto Projected Gradient Descent attack for targeted attacks. We examine the impact of the proposed targeted attacks versus untargeted attacks, and use KS-tests to statistically demonstrate the impact of the attacks. Our experimental results demonstrate that targeted attacks on time series models are viable and more powerful in terms of statistical similarity to unperturbed data, and are hence difficult to detect through statistical methods. We believe this work opens a new paradigm in the time series forecasting domain and represents an important consideration for developing better defenses.

1. INTRODUCTION

Time Series Forecasting (TSF) tasks arise in many real-world problems across several domains. The wide range of applications includes demand forecasting Carbonneau et al. (2008), anomaly detection Laptev et al. (2017), stock price prediction jae Kim (2003), electricity pricing Crespo Cuaresma et al. (2004), and weather forecasting Grover et al. (2015). Improved availability of data and computational resources is reflected in recent efforts (Rasul et al. (2020), Wen et al. (2017), Oreshkin et al. (2019)) to apply deep learning techniques to forecasting tasks. The wide deployment of such deep learning models has exposed them to adversarial threats, and has hence also motivated work on the exploration and prevention of adversarial attacks (Rathore et al. (2021), Cao & Gong (2017), Li & Li (2020)).

For a given classification model, the goal of the adversary can be either targeted or untargeted. In a targeted attack, the adversary tries to misguide the model toward a particular class other than the true class; in an untargeted attack, the adversary tries to misguide the model to predict any incorrect class. These definitions of targeted and untargeted attacks are well established for classification tasks and have been used in several previous works (Goodfellow et al. (2014a), Kurakin et al. (2016), Croce & Hein (2020)), but they are not applicable to regression tasks such as TSF. In the adversarial machine learning domain, time series tasks have received significantly less attention than those of computer vision. Moreover, the adversarial attacks and defenses studied in computer vision are not always directly useful for time series, requiring specific adaptations and re-definitions. In this paper, we address these shortcomings by providing a formulation of targeted attacks on TSF. To do so, we extend well-known adversarial attacks from the computer vision domain to time series forecasting. Alongside popular attacks such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), we also propose a modified variant of the Auto PGD attack. We perform KS-tests on the loss attributes of the output forecasts (predictions) to study the statistical properties of the proposed targeted attacks. Our contributions are as follows:

1. We define and formalize targeted attacks on deep learning time series forecasting.
2. We propose a modified Auto PGD attack for Time Series Forecasting (mAPGD-TSF), an extension of the Auto PGD algorithm to targeted time series attacks, which can be extended to any regression task.
3. Through statistical tests on the Google Stock and Household Electric Power Consumption datasets, we show that inputs with targeted perturbations are much harder to distinguish from clean inputs than those produced by untargeted attacks.

Most approaches to adversarial attacks were first developed for image classification in the deep learning domain. Szegedy et al. (2013) proposed adversarial examples for image recognition, which initiated a direction of investigating adversarial attacks in various domains. Goodfellow et al. (2014b) proposed the Fast Gradient Sign Method (FGSM), a single-step attack. In a similar line, Kurakin et al. (2018) presented an iterative version of FGSM called the Basic Iterative Method. Pialla et al. (2022) introduced the Smooth Gradient Method (SGM) attack, a gradient-based attack, and showed that adversarial training is a good way to improve a time series classifier's (TSC) robustness against smoothed perturbations, by enforcing a smoothness condition on generated perturbations that contain spike and sawtooth patterns. Karim et al. (2019) take into consideration that time series models are sensitive to abnormal perturbations in the input and that there are stringent requirements on perturbations; to address this, they craft time series adversarial examples based on an importance measure. The adversarial inputs are applied to models performing time series prediction tasks, such as LSTNet and CNN-, RNN-, and MHANET-based models. Their importance-based adversarial attack needs much smaller perturbations than other existing adversarial attacks; the work, however, does not formulate or address targeted attacks in time series forecasting. Another work on time series forecasting, Mode & Hoque (2020), explores the vulnerabilities of deep learning multi-time-series regression models to adversarial samples, focusing on gradient-based white-box attacks on models such as CNNs, Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) models. The vulnerabilities are shown to be transferable and to have serious consequences. This work also focuses on untargeted adversarial attacks that aim to increase the error of the deep learning model's output. Dang-Nhu et al. (2020) consider an adversarial setting in a probabilistic framework on auto-regressive forecasting models. This work uses Monte-Carlo estimation to approximate the gradient of an expectation, and addresses the challenge of effectively differentiating through the Monte-Carlo estimate using reparametrization and score-function estimators. It also proposes an under-estimation attack and an over-estimation attack on electricity consumption prediction.
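To make the targeted-attack setting concrete, the following sketch (not from the paper; a minimal numpy illustration with a hypothetical linear one-step forecaster standing in for a deep model) shows a targeted FGSM-style perturbation. Unlike an untargeted attack, which steps along the gradient sign to increase the loss, the targeted variant steps against the gradient of the loss with respect to an adversary-chosen target value, pushing the forecast toward that target.

```python
import numpy as np

def forecast(x, w):
    """Toy linear one-step forecaster: prediction = w . x (stand-in for a DL model)."""
    return w @ x

def targeted_fgsm(x, w, y_target, eps):
    """One-step targeted attack: perturb x to DECREASE the loss between the
    forecast and the adversary's chosen target y_target.
    For L(x) = 0.5 * (forecast(x) - y_target)^2, grad_x L = (f(x) - y_target) * w.
    The targeted variant steps AGAINST the gradient sign (toward the target)."""
    grad = (forecast(x, w) - y_target) * w
    return x - eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)           # fixed "model" weights (hypothetical)
x = rng.normal(size=8)           # clean input window
y_target = forecast(x, w) + 5.0  # adversary wants the forecast pushed upward by 5

x_adv = targeted_fgsm(x, w, y_target, eps=0.1)
# the adversarial forecast lands strictly closer to the target than the clean one
assert abs(forecast(x_adv, w) - y_target) < abs(forecast(x, w) - y_target)
```

An iterative version of this step, with a projection back onto an epsilon-ball after each update, gives the PGD-style attacks the paper builds on; the mAPGD-TSF variant proposed here additionally adapts the step size, as in Auto PGD.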


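The KS-test methodology used to compare attacks can also be sketched briefly. The example below (synthetic data, not the paper's results) implements the two-sample Kolmogorov-Smirnov statistic directly in numpy and shows the intuition behind the paper's claim: a subtle, targeted-style shift in the loss distribution yields a smaller KS statistic against the clean-loss distribution than a crude, untargeted-style shift, and is therefore harder to detect statistically.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    pts = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pts, side="right") / len(a)
    cdf_b = np.searchsorted(b, pts, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(1)
clean_loss = rng.normal(0.0, 1.0, 500)   # losses on clean inputs (synthetic)
subtle_loss = rng.normal(0.2, 1.0, 500)  # targeted-style attack: small shift
crude_loss = rng.normal(2.0, 1.0, 500)   # untargeted-style attack: large shift

# the subtler shift is statistically closer to the clean distribution
assert ks_statistic(clean_loss, subtle_loss) < ks_statistic(clean_loss, crude_loss)
```

In practice one would use an off-the-shelf implementation such as `scipy.stats.ks_2samp`, which also reports a p-value; the manual version above is only meant to expose what the statistic measures.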