LOCALLY INVARIANT EXPLANATIONS: TOWARDS STABLE AND UNIDIRECTIONAL EXPLANATIONS THROUGH LOCAL INVARIANT LEARNING

Abstract

The locally interpretable model-agnostic explanations (LIME) method is one of the most popular approaches for explaining black-box models at a per-example level. Although many variants have been proposed, few provide a simple way to produce high-fidelity explanations that are also stable and intuitive. In this work, we provide a novel perspective by proposing a model-agnostic local explanation method, inspired by the invariant risk minimization (IRM) principle originally proposed for (global) out-of-distribution generalization, that yields such high-fidelity explanations that are also stable and unidirectional across nearby examples. Our method is based on a game-theoretic formulation in which we theoretically show that our approach has a strong tendency to eliminate features for which the gradient of the black-box function abruptly changes sign in the locality of the example we want to explain, while in other cases it is more careful and chooses a more conservative (feature) attribution, a behavior which can be highly desirable for recourse. Empirically, we show on tabular, image, and text data that the quality of our explanations with neighborhoods formed using random perturbations is much better than LIME's, and in some cases even comparable to that of methods which use realistic neighbors sampled from the data manifold. This is desirable given that learning a manifold, either to create realistic neighbors or to project explanations, is typically expensive or may even be impossible. Moreover, our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box model without access to side information such as a (partial) causal graph, as has been required in some recent works.

1. INTRODUCTION

Deployment and usage of neural black-box models has grown significantly in industry over the last few years, creating the need for new tools to help users understand and trust models (Gunning, 2017). Even well-studied application domains such as image recognition require some form of prediction understanding in order for the user to incorporate the model into any important decisions (Simonyan et al., 2013; Lapuschkin et al., 2016). An example of this could be a doctor who is given a cancer diagnosis based on an image scan. Since the doctor holds responsibility for the final diagnosis, the model must provide sufficient reason for its prediction. New text categorization tasks (Feng et al., 2018) are also becoming important with the growing need for social media companies to better monitor public content. Twitter recently began monitoring tweets related to COVID-19 in order to label tweets containing misleading information, disputed claims, or unverified claims (Roth & Pickles, 2020). Laws will likely emerge requiring explanations for why red flags were or were not raised in such cases. In fact, the General Data Protection Regulation (GDPR) (Yannella & Kagan, 2018) passed in Europe already requires automated systems that make decisions affecting humans to be able to explain them. Given this acute need, a number of methods have been proposed to explain local decisions (i.e., example-specific decisions) of classifiers (Ribeiro et al., 2016; Lundberg & Lee, 2017; Simonyan et al., 2013; Lapuschkin et al., 2016; Dhurandhar et al., 2018a). Locally interpretable model-agnostic explanations (LIME) is arguably the most well-known local explanation method that requires only query (or black-box) access to the model.
Although LIME is a popular method, it is known to be sensitive to certain design choices, such as i) the (random) sampling used to create the (perturbation) neighborhood [1], ii) the size of this neighborhood (number of samples), and iii) the (local) fitting procedure used to learn the explanation model (Molnar, 2019; Zhang et al., 2019b). The first and most serious issue can lead to nearby examples having drastically different explanations, making effective recourse a challenge. One possible mitigation is to increase the neighborhood size; however, one cannot do so arbitrarily, as it not only leads to higher computational cost but, in today's cloud-computing-driven world, may also have direct monetary implications, since every query to a black-box model can carry an associated cost (Dhurandhar et al., 2019).
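To illustrate the sensitivity to random neighborhood sampling described above, the following is a minimal LIME-style sketch (not the official lime package): Gaussian perturbations around the example form the neighborhood, a proximity kernel weights them, and a weighted least-squares fit yields the attributions. The black-box function, kernel width, and sample counts are all illustrative assumptions; with small neighborhoods, different sampling seeds can give noticeably different attributions.

```python
import numpy as np

def black_box(X):
    # toy black-box: a nonlinear scoring function (illustrative only)
    return np.tanh(3 * X[:, 0]) - 0.5 * X[:, 1] ** 2

def lime_style_explanation(x, n_samples, seed, sigma=0.5):
    """Fit a locally weighted linear surrogate around x (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # i) random perturbation neighborhood of ii) a chosen size
    Z = x + sigma * rng.standard_normal((n_samples, x.shape[0]))
    y = black_box(Z)
    # proximity kernel: closer perturbations get larger weight
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # iii) weighted least-squares fit of a linear explanation model
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # feature attributions (drop intercept)

x = np.array([0.2, 1.0])
e1 = lime_style_explanation(x, n_samples=50, seed=0)
e2 = lime_style_explanation(x, n_samples=50, seed=1)
print(e1, e2)  # attributions differ across sampling seeds
```

Re-running the fit with a different seed re-draws the neighborhood, which is exactly the instability that motivates the method proposed in this paper.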

Coefficient inconsistency for LINEX

Figure 1 : Above we visualize for the IRIS dataset the Coefficient Inconsistency (CI) (see Section 5 for exact definition and setup details) between the explanation (top two features) for an example and its nearest neighbor in the dataset. Each circle denotes an example and a rainbow colormap depicts the degree of inconsistency w.r.t. its nearest neighbor where red implies least inconsistency, while violet implies the most. As can be seen LINEX explanations are much more consistent than LIME's. There have been variants suggested to overcome some of these limitations (Botari et al., 2020; Shrotri et al., 2021; Plumb et al., 2018) primarily through mechanisms that create realistic neighborhoods or through adversarial training (Lakkaraju et al., 2020), however, their efficacy is restricted to certain settings and modalities based on their assumptions and training strategies. In this paper we introduce a new method called Locally INvariant EXplanations (LINEX) inspired by the invariant risk minimization (IRM) principle (Arjovsky et al., 2019) , that produces explanations in the form of feature attributions that are robust to neighborhood sampling and can recover faithful (i.e. mimic black-box behavior), stable (i.e. similar for closeby examples) and unidirectional (i.e. same sign attributions a.k.a. feature importances) for closeby examples, see section 4.1) explanations across tabular, image, and text modalities. In particular, we show that our method performs better than the competitors for random as well as realistic neighborhood generation, where in some cases even with the prior strategy our explanation quality is close to methods that employ the latter. Qualitatively, our method highlights (local) features as important that in the particular locality i) have consistently high gradient with respect to (w.r.t.) the black-box function and ii) where the gradient does not change significantly, especially in sign. 
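The invariance intuition can be pictured with a conceptual sketch (this is not the paper's actual game-theoretic algorithm; every detail below is an illustrative assumption): treat several random perturbation neighborhoods of the same example as IRM-style "environments", fit a local linear surrogate in each, and retain only features whose attribution sign agrees across all environments, zeroing out features whose local gradient flips sign.

```python
import numpy as np

def black_box(X):
    # toy function: the gradient w.r.t. x0 changes sign locally,
    # while the gradient w.r.t. x1 is constant (= 2)
    return np.sin(4 * X[:, 0]) + 2 * X[:, 1]

def local_surrogate(x, rng, n=100, sigma=0.4):
    # fit an ordinary least-squares linear surrogate on one
    # randomly drawn perturbation neighborhood ("environment")
    Z = x + sigma * rng.standard_normal((n, x.shape[0]))
    y = black_box(Z)
    A = np.hstack([Z - x, np.ones((n, 1))])  # centered features + intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]

def invariant_attribution(x, n_envs=5, seed=0):
    rng = np.random.default_rng(seed)
    coefs = np.stack([local_surrogate(x, rng) for _ in range(n_envs)])
    # keep a feature only if its attribution sign is invariant
    # across all environments; otherwise be conservative (zero it)
    same_sign = np.all(np.sign(coefs) == np.sign(coefs[0]), axis=0)
    return np.where(same_sign, coefs.mean(axis=0), 0.0)

attr = invariant_attribution(np.array([0.4, 0.0]))
print(attr)  # x1's stable attribution survives; x0's may be suppressed
```

The sketch mirrors the qualitative behavior described above: the stable coordinate keeps a consistent attribution, while the coordinate whose gradient changes sign in the locality tends to be eliminated or attenuated.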
Such stable behavior of LINEX is visualized in Figure 1, where we obtain similar explanations for nearby examples in the IRIS dataset. The (in)fidelity of LINEX remains similar to LIME's (see Table 2), but our explanations are much more stable.
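As a rough picture of what Figure 1 measures, one can sketch a proxy for Coefficient Inconsistency between an example's explanation and its nearest neighbor's. The exact definition is deferred to Section 5; the proxy below, which counts top-k feature mismatches and sign flips, is purely an illustrative assumption.

```python
import numpy as np

def coefficient_inconsistency(expl_a, expl_b, k=2):
    """Illustrative proxy for CI (the paper's exact definition is in
    Section 5): penalize top-k features that differ between the two
    explanations, plus shared top-k features whose sign flips."""
    top_a = set(np.argsort(-np.abs(expl_a))[:k])
    top_b = set(np.argsort(-np.abs(expl_b))[:k])
    mismatched = top_a.symmetric_difference(top_b)
    sign_flips = sum(1 for i in top_a & top_b
                     if np.sign(expl_a[i]) != np.sign(expl_b[i]))
    return len(mismatched) / 2 + sign_flips

# explanations for a hypothetical example and its nearest neighbor:
# they agree on the top-2 features but one attribution flips sign
e_x  = np.array([0.9, -0.4, 0.1, 0.0])
e_nn = np.array([0.8,  0.5, 0.0, 0.1])
print(coefficient_inconsistency(e_x, e_nn))  # 1.0 (one sign flip)
```

A stable, unidirectional explainer should drive this quantity toward zero for nearest-neighbor pairs, which is what the red circles in Figure 1 depict for LINEX.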

2. RELATED WORK

Post-hoc explanations can typically be partitioned into two broad categories: global and local. Global explainability refers to understanding a black-box model at a holistic level, where the typical tactic is knowledge transfer (Hinton et al., 2015; Dhurandhar et al., 2018b; 2020), in which (soft/hard) labels of the black-box model are used to train an interpretable model such as a decision tree or rule list (Rudin, 2019). Local explanations, on the other hand, aim to elucidate individual decisions. These explanations typically come in two forms, either exemplar based or feature based. For exemplar-based explanations, as the name suggests, similar but diverse examples (Kim et al., 2016; Gurumoorthy et al., 2019) are provided as explanations for the input in question. For feature-based explanations (Ribeiro et al., 2016; Lundberg & Lee, 2017; Dhurandhar et al., 2018a; Lapuschkin et al., 2016; Zhao et al., 2021), which are the focus of this work, the features deemed important for the decision made on the input are returned. Some methods do both (Plumb et al., 2018). Moreover, there are methods which provide explanations at local, global, as well as group levels (Ramamurthy et al., 2020). All of these methods, though, may still not provide stable and robust local feature-based explanations, which can be desirable in practice (Ghorbani et al., 2019).



[1] By perturbation neighborhood, referred to as simply neighborhood, we mean neighborhoods generated for producing local explanations. By exemplar neighborhood, we mean the nearest examples in a dataset to a given example.




