LEARNING "WHAT-IF" EXPLANATIONS FOR SEQUENTIAL DECISION-MAKING

Abstract

Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior (i.e. trajectories of observations and actions made by an expert maximizing some unknown reward function) is essential for introspecting and auditing policies in different institutions. In this paper, we propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what-if" outcomes: Given the current history of observations, what would happen if we took a particular action? To learn these cost-benefit tradeoffs associated with the expert's actions, we integrate counterfactual reasoning into batch inverse reinforcement learning. This offers a principled way of defining reward functions and explaining expert behavior, and also satisfies the constraints of real-world decision-making, where active experimentation is often impossible (e.g. in healthcare). Additionally, by estimating the effects of different actions, counterfactuals readily tackle the off-policy nature of policy evaluation in the batch setting, and can naturally accommodate settings where expert policies depend on histories of observations rather than just current states. Through illustrative experiments in both real and simulated medical environments, we highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.

1. INTRODUCTION

Consider the problem of explaining sequential decision-making on the basis of demonstrated behavior. In healthcare, an important goal lies in being able to obtain an interpretable parameterization of experts' behavior (e.g. in terms of how they assign treatments), such that we can quantify and inspect policies in different institutions and uncover the trade-offs and preferences associated with expert actions (James & Hammond, 2000; Westert et al., 2018; Van Parys & Skinner, 2016; Jarrett & van der Schaar, 2020). Moreover, modeling the reward function of different clinical practitioners can be revealing as to their tendencies to treat various diseases more or less aggressively (Rysavy et al., 2015), which, in combination with patient outcomes, has the potential to inform and update clinical guidelines. In many settings, such as medicine, decision-makers can be modeled as reasoning about "what-if" patient outcomes: Given the available information about the patient, what would happen if we took a particular action? (Djulbegovic et al., 2018; McGrath, 2009). As treatments often affect several patient covariates, by having both benefits and side effects, decision-makers often make choices based on their preferences over these counterfactual outcomes. Thus, in our case, an interpretable explanation of a policy is one where the reward signal for (sequential) actions is parameterized on the basis of preferences over (sequential) counterfactuals (i.e. "what-if" patient outcomes).

[Figure 1: Our goal: recover preferences over counterfactual outcomes. The figure depicts the preference magnitudes (reward weights w_u and w_z) attached to the counterfactual outcomes of the tumour volume U and the side effects Z.]

Given the observations and actions made by an expert, inverse reinforcement learning (IRL) offers a principled way of modeling their behavior by recovering the (unknown) reward function being maximized (Ng et al., 2000; Abbeel & Ng, 2004; Choi & Kim, 2011). Standard solutions operate by iterating on candidate reward functions, solving the associated (forward) reinforcement learning problem at each step.
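For concreteness, this iterate-and-solve template can be sketched as follows. This is a minimal illustration of the projection variant of max-margin apprenticeship learning (Abbeel & Ng, 2004), not the method proposed in this paper; `phi`, `solve_mdp`, and `rollout` are hypothetical stand-ins for the feature map, the forward RL solver, and trajectory sampling.

```python
# Minimal sketch of the classic apprenticeship loop: iterate on candidate
# reward weights, solve the forward RL problem for each candidate, and stop
# once the learner's feature expectations match the expert's.
import numpy as np

def feature_expectations(trajectories, phi, gamma=0.99):
    """Empirical discounted feature expectations: E[sum_t gamma^t phi(s_t)]."""
    mus = [sum((gamma ** t) * phi(s) for t, s in enumerate(traj))
           for traj in trajectories]
    return np.mean(mus, axis=0)

def apprenticeship_irl(expert_trajs, phi, solve_mdp, rollout,
                       n_iters=20, eps=1e-3):
    mu_expert = feature_expectations(expert_trajs, phi)
    # Feature expectations of an arbitrary initial policy.
    policy = solve_mdp(weights=np.zeros_like(mu_expert))
    mu_bar = feature_expectations(rollout(policy), phi)
    w = mu_expert - mu_bar
    for _ in range(n_iters):
        w = mu_expert - mu_bar          # candidate reward: R(s) = w . phi(s)
        if np.linalg.norm(w) < eps:     # margin small enough: done
            break
        policy = solve_mdp(weights=w)   # forward RL step on candidate reward
        mu = feature_expectations(rollout(policy), phi)
        d = mu - mu_bar
        if d @ d == 0:
            break
        # Projection step (Abbeel & Ng, 2004): move mu_bar toward mu.
        mu_bar = mu_bar + (d @ (mu_expert - mu_bar)) / (d @ d) * d
    return w, policy
```

Note that every iteration calls `solve_mdp` and `rollout`, i.e. it requires planning in, or interacting with, the environment; this is exactly what is unavailable in the batch setting discussed next.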
In many real-world problems, however, we are specifically interested in the challenge of offline learning, that is, where further experimentation is not possible, such as in medicine. In this batch setting, we only have access to trajectories sampled from the expert policy in the form of an observational dataset, such as electronic health records.

Batch IRL. By their nature, classic IRL algorithms require interactive access to the environment, or full knowledge of the environment's dynamics (Ng et al., 2000; Abbeel & Ng, 2004; Choi & Kim, 2011). While batch IRL solutions have been proposed by way of off-policy evaluation (Klein et al., 2011; 2012; Lee et al., 2019), they suffer from two disadvantages. First, they are limited by the assumption that state dynamics are fully observable and Markovian. This is hardly true in medicine: treatment assignments generally depend on how patient covariates have evolved over time (Futoma et al., 2020). Second, rewards are often parameterized as uninterpretable representations of neural network hidden states and consequently cannot be used to explain sequential decision-making.

"What-if" Explanations. To address these shortcomings and to obtain a parameterizable interpretation of the expert's behavior, we propose explicitly incorporating counterfactual reasoning into batch IRL. In particular, we focus on "what-if" explanations for modeling decision-making, while simultaneously accounting for the partially observable nature of patient histories. Under the max-margin apprenticeship framework (Abbeel & Ng, 2004; Klein et al., 2011; Lee et al., 2019), we learn a parameterized reward function R(h_t, a_t) that is defined as a weighted sum over potential outcomes (Rubin, 2005) for taking action a_t given history h_t. As highlighted in Figure 1, consider the decision-making process of assigning a binary action given the tumour volume (U) and the side effects (Z). Let U_{t+1}[a_t] and Z_{t+1}[a_t] be the counterfactual outcomes for the two covariates when action a_t is taken given the history h_t of covariates and previous actions. We define the reward as the weighted sum of these counterfactuals, R(h_t, a_t) = w_u U_{t+1}[a_t] + w_z Z_{t+1}[a_t], to take into account the effect of actions and to directly model the preferences of the expert (a minimal code sketch is given at the end of this section). The ideal scenario is one in which both the tumour volume and the side effects are zero, so the reward weights of a doctor aiming for this are both negative. However, recovering that |w_u| > |w_z| means that the doctor is treating more aggressively, as they are focusing more on reducing the tumour volume than on the side effects of treatments. Conversely, |w_u| < |w_z| indicates that the side effects are more important and the expert is treating less aggressively. Our motivation for using counterfactuals to define the reward comes from the idea that rational decision-making considers the potential effects of actions (Djulbegovic et al., 2018).

Contributions. Exploring the synergy between counterfactual reasoning and batch IRL for understanding sequential decision-making confers multiple advantages. First, it offers a principled approach for parameterizing reward functions in terms of preferences over what-if patient outcomes, which enables us to explain the cost-benefit tradeoffs associated with an expert's actions. Second, by estimating the effects of different actions, counterfactuals readily tackle the off-policy nature of
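To make the parameterization above concrete, the following is a minimal sketch of the reward R(h_t, a_t) = w_u U_{t+1}[a_t] + w_z Z_{t+1}[a_t] and of reading treatment aggressiveness off the recovered weights. `predict_counterfactuals` is a hypothetical stand-in for any model that estimates potential outcomes given the patient history; it is not the paper's API.

```python
# Minimal sketch of a reward defined as a preference-weighted sum over
# "what-if" (counterfactual) outcomes of an action.

def counterfactual_reward(history, action, weights, predict_counterfactuals):
    """R(h_t, a_t) = w_u * U_{t+1}[a_t] + w_z * Z_{t+1}[a_t]."""
    # Estimated potential outcomes (tumour volume, side effects) had
    # `action` been taken given `history`.
    u_next, z_next = predict_counterfactuals(history, action)
    w_u, w_z = weights
    return w_u * u_next + w_z * z_next

def describe_preferences(weights):
    """Interpret recovered weights as treatment aggressiveness."""
    w_u, w_z = weights
    if abs(w_u) > abs(w_z):
        return "more aggressive: prioritizes reducing the tumour volume"
    return "less aggressive: prioritizes limiting side effects"

# Toy usage: both weights are negative, since both a large tumour volume
# and severe side effects are undesirable. The outcome model is made up.
toy_model = lambda history, action: (0.8 - 0.3 * action, 0.1 + 0.2 * action)
print(counterfactual_reward([], 1, (-1.0, -0.4), toy_model))  # -0.62
print(describe_preferences((-1.0, -0.4)))  # "more aggressive: ..."
```

Recovering the weights (w_u, w_z) from demonstrations, rather than fixing them by hand as in this toy usage, is precisely the inverse problem the paper addresses.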

