MIA: A Framework for Certifiably Robust Time-Series Classification and Forecasting Against Temporally-Localized Perturbations

Abstract

Recent literature demonstrates that time-series forecasting and classification models are sensitive to input perturbations. However, defenses for time-series models remain relatively under-explored. In this paper, we propose Masking Imputing Aggregation (MIA), a plug-and-play framework that equips an arbitrary deterministic time-series model with certified robustness against temporally-localized perturbations (also known as ℓ0-norm localized perturbations); to our knowledge, MIA is the first ℓ0-norm defense for time-series models. Our main insight is to slide an occluding mask across the input series, guaranteeing that for any localized perturbation there exists at least one mask that completely occludes the perturbed region, so the prediction on that masked series is certifiably unaffected. MIA is flexible: it works even when we have only query access to the pretrained model. To validate the effectiveness of MIA, we compare it against two baselines extended from prior randomized smoothing approaches. Extensive experiments show that MIA yields stronger robustness.
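The coverage guarantee behind MIA can be illustrated with a minimal sketch. The function names below (`masked_copies`, `some_mask_covers`) are hypothetical, not the authors' reference implementation; the sketch assumes a univariate series, a stride-1 sliding mask, and zero as the imputed mask value. With stride 1 and mask width m, every contiguous perturbation of length k ≤ m is fully occluded by at least one mask position:

```python
import numpy as np

def masked_copies(series: np.ndarray, mask_width: int, mask_value: float = 0.0):
    """Yield (start, masked_series) for every stride-1 mask position."""
    T = len(series)
    for start in range(T - mask_width + 1):
        masked = series.copy()
        masked[start:start + mask_width] = mask_value  # occlude one window
        yield start, masked

def some_mask_covers(T: int, mask_width: int, p_start: int, p_len: int) -> bool:
    """Coverage guarantee: does some mask fully occlude [p_start, p_start + p_len)?"""
    return any(
        start <= p_start and start + mask_width >= p_start + p_len
        for start in range(T - mask_width + 1)
    )

# Every length-3 localized perturbation in a length-10 series is fully
# occluded by at least one width-4 mask, so the base model's prediction
# on that masked copy cannot depend on the perturbation.
assert all(some_mask_covers(10, 4, p, 3) for p in range(10 - 3 + 1))
```

In the full framework, the predictions on all masked copies would then be aggregated (the "Aggregation" step) to produce the certified output; this sketch only demonstrates why at least one masked copy is guaranteed clean.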

1. Introduction

Time-series forecasting and classification (TSF/TSC) have been widely applied to help businesses make informed decisions and plans (Miyato et al., 2017; Zhou et al., 2019; Schlegl et al., 2019; Park et al., 2018). However, a wide range of literature demonstrates that time-series models are vulnerable to adversarial input perturbations (Connor et al., 1994; Gelper et al., 2010; Ding et al., 2022; Yang et al., 2020; Dang-Nhu et al., 2020; Oregi et al., 2018; Han et al., 2020); e.g., an elaborately designed imperceptible perturbation can control the prediction (Karim et al., 2020; Fawaz et al., 2019). So far, related work has mainly focused on detecting outliers (Ruff et al., 2018; Yairi et al., 2017), while the adversarial robustness of time-series models remains relatively under-explored, especially ℓ0-norm robustness; e.g., Yoon et al. (2022) explore only the ℓ2-norm adversarial robustness of probabilistic forecasting models. In the present work, we focus on robustness against temporally-localized perturbations, since powerful attacks of this type already exist (Yang et al., 2022). Generally, defenses can be divided into two types: heuristic defenses and certified defenses. Heuristic defenses can yield better empirical robustness but lack robustness guarantees. As the experience in image classification shows (Athalye et al., 2018; Carlini & Wagner, 2017; Athalye & Carlini, 2018), heuristic defenses often become ineffective when confronted with newly designed adaptive attacks; e.g., Athalye et al. (2018) leverage the Backward Pass Differentiable Approximation technique to successfully circumvent almost all the heuristic defenses of the time. To end such a "cat and mouse" game between adaptive attacks and heuristic defenses, the concept of certified defense was proposed, offering unbreakable robustness certificates.
Current certified defenses can produce robustness certificates but often require the user to retrain the base model from scratch; e.g., Yoon et al. (2022), Li et al. (2020), and Cohen et al. (2019) retrain the base model because these defenses perform poorly on naturally-trained models. This retraining requirement poses additional challenges for real-world deployment. In addition, certified defenses for sequence-based data are quite under-explored, since almost all existing certified defenses focus on matrix-based data (e.g., images).

