TARGETED ATTACKS ON TIMESERIES FORECASTING

Abstract

Real-world deep learning models developed for Time Series Forecasting are used in several critical applications, ranging from medical devices to the security domain. Many previous works have shown that deep learning models are prone to adversarial attacks and have studied their vulnerabilities. However, the vulnerabilities of time series forecasting models to adversarial inputs are not extensively explored. While an attack on a forecasting model might aim to simply deteriorate the model's performance, it is more effective if the attack produces a specific impact on the model's output. In this paper, we propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models. These targeted attacks create a specific impact on the amplitude and direction of the output prediction. We adapt existing adversarial attack techniques from the computer vision domain for time series. Additionally, we propose a modified version of the Auto Projected Gradient Descent attack for targeted attacks. We examine the impact of the proposed targeted attacks versus untargeted attacks, and we use KS-tests to statistically demonstrate the impact of the attacks. Our experimental results demonstrate that targeted attacks on time series models are viable and are more powerful in terms of statistical similarity to clean outputs; they are hence difficult to detect through statistical methods. We believe that this work opens a new paradigm in the time series forecasting domain and represents an important consideration for developing better defenses.

1. INTRODUCTION

Time Series Forecasting (TSF) tasks are seen in many real-world problems across several domains. The wide range of application domains includes demand forecasting Carbonneau et al. (2008), anomaly detection Laptev et al. (2017), stock price prediction jae Kim (2003), electricity pricing Crespo Cuaresma et al. (2004), and weather forecasting Grover et al. (2015). Improved availability of data and computation resources is reflected in recent efforts (Rasul et al. (2020), Wen et al. (2017), Oreshkin et al. (2019)) to apply deep learning techniques to forecasting tasks. The wide application of such deep learning models has exposed them to threats from adversaries, motivating work on the exploration and prevention (Rathore et al. (2021), Cao & Gong (2017), Li & Li (2020)) of such adversarial attacks.

For a given classification model, the goal of the adversary can be either targeted or untargeted. In a targeted attack, the adversary tries to misguide the model toward a particular class other than the true class; in an untargeted attack, the adversary tries to misguide the model to predict any incorrect class. These definitions of targeted and untargeted attacks are well established for classification tasks and have been used in several previous works (Goodfellow et al. (2014a), Kurakin et al. (2016), Croce & Hein (2020)), but they are not applicable to regression tasks such as TSF. In the adversarial machine learning domain, time series tasks have received significantly less attention than computer vision. Moreover, the adversarial attacks and defenses studied in the computer vision domain are not directly useful for time series and require specific adaptations and re-definitions. In this paper, we address the above-mentioned shortcomings by providing a formulation of targeted attacks on TSF. To do this, we extend well-known adversarial attacks from the computer vision domain to time series forecasting. Together with popular attacks such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), we also propose a modified variant of the Auto PGD attack. We perform KS-tests on the loss attributes of the output forecasts (predictions) to study the statistical properties of the proposed targeted attacks.
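To make the classification-to-forecasting transfer concrete, the sketch below applies a single untargeted FGSM step to a toy linear forecaster under MSE loss. This is a minimal illustration under stated assumptions: the linear model, the dimensions, and the step size `eps` are hypothetical choices for this example, not the paper's experimental setup, and a real deep forecasting model would obtain the input gradient via automatic differentiation rather than the closed-form expression used here.

```python
import numpy as np

def fgsm_forecast(x, y_target, W, eps):
    """One untargeted FGSM step for a linear forecaster y_hat = W @ x
    under MSE loss. A minimal sketch: for L = ||W x - y||^2, the input
    gradient is dL/dx = 2 * W^T (W x - y)."""
    y_hat = W @ x
    grad = 2.0 * W.T @ (y_hat - y_target)
    # Perturb each input step by +/- eps in the direction that increases the loss.
    return x + eps * np.sign(grad)

# Toy example: a 4-step history forecasting a 2-step horizon.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))      # illustrative "trained" forecaster weights
x = rng.normal(size=4)           # clean input window
y_clean = W @ x                  # clean forecast, taken as ground truth here
x_adv = fgsm_forecast(x, y_clean, W, eps=0.1)
```

By construction, every element of `x_adv` differs from `x` by at most `eps`, which is the L-infinity budget that FGSM-style attacks enforce; the targeted variants discussed in this paper replace the plain MSE loss with objectives that steer the forecast's direction or amplitude.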

