ACTIONABLE RECOURSE GUIDED BY USER PREFERENCE

Abstract

The growing popularity of machine learning models has led to their increased application in domains directly impacting human lives. In critical fields such as healthcare, banking, and criminal justice, tools that ensure trust and transparency are vital for the responsible adoption of these models. One such tool is actionable recourse (AR) for negatively impacted users. AR describes recommendations of cost-efficient changes to a user's actionable features to help them obtain favorable outcomes. Existing approaches for providing recourse optimize for properties such as proximity, sparsity, validity, and distance-based costs. However, an often-overlooked but crucial requirement for actionability is a consideration of user preference to guide the recourse generation process. Moreover, existing works that consider a user's preferences require users to precisely specify their costs for taking actions. This requirement raises questions about the practicality of the corresponding solutions due to the high cognitive load imposed. In this work, we attempt to capture user preferences via soft constraints in three simple forms: i) scoring continuous features, ii) bounding feature values, and iii) ranking categorical features. We propose an optimization framework that is sensitive to user preference and a gradient-based approach to identify User Preferred Actionable Recourse (UP-AR). With extensive experiments, we empirically demonstrate the proposed approach's superiority in adhering to user preference while maintaining competitive performance on traditional metrics.

1. INTRODUCTION

Actionable Recourse (AR) (Ustun et al., 2019) is the ability of an individual to obtain the desired outcome from a fixed Machine Learning (ML) model. Several domains, such as lending (Siddiqi, 2012), insurance (Scism, 2019), resource allocation (Chouldechova et al., 2018; Shroff, 2017), and hiring decisions (Ajunwa et al., 2016), are required to suggest recourses to ensure trust in the decision system in place; in such scenarios, it is critical to ensure actionability in recourse (otherwise the suggestions are pointless). Consider an individual named Alice who applies for a loan, and the bank, which uses an ML-based classifier, denies it. Naturally, Alice asks: What can I do to get the loan? The inherent question is what action she must take to obtain the loan in the future. Counterfactual explanation, introduced by Wachter et al. (2017), provides a what-if scenario to alter the model's decision. AR further aims to provide Alice with a feasible action. A feasible action is both actionable by Alice (meaning she can reasonably execute the directed plan) and suggests modifications at as low a cost as possible. While some features (such as age or sex) are inherently non-actionable, Alice's personalized constraints may also limit her ability to act on the suggested recourse (such as a strong reluctance to secure a co-applicant). We call these localized constraints User Preferences, synonymous with the user-level constraints introduced as local feasibility by Mahajan et al. (2019). Figure 1 illustrates the motivation behind UP-AR. Notice how similar individuals can prefer contrasting recourses. Actionability, as we consider it, is centered explicitly around individual preferences, and identical recourses provided to two individuals (Alice and Bob) with identical feature vectors may not be equally actionable.
Most existing methods of finding actionable recourse are restricted to omitting features that Alice does not wish to act upon from the actionable feature set, and to box constraints (Mothilal et al., 2020) in the form of bounds on feature actions.
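To make the contrast concrete, the three preference forms described in the abstract (scoring continuous features, bounding feature values, and ranking categorical features) can be encoded as soft penalties on a candidate recourse action. The following is a minimal illustrative sketch, not the paper's actual formulation; all function and parameter names are hypothetical.

```python
def preference_cost(action, weights, bounds, cat_rank):
    """Score a candidate recourse action against user preferences (illustrative).

    action   : dict feature -> proposed change (numeric delta for continuous
               features, new category for categorical features)
    weights  : dict continuous feature -> user willingness score in (0, 1];
               higher means the user is more willing to change this feature
    bounds   : dict continuous feature -> (lo, hi) acceptable range of change
    cat_rank : dict categorical feature -> list of categories ordered from
               most to least preferred
    """
    cost = 0.0
    for feat, change in action.items():
        if feat in weights:
            # Continuous scoring: penalize change inversely to willingness.
            cost += abs(change) / max(weights[feat], 1e-6)
        if feat in bounds:
            # Soft bound: large penalty when the change leaves the range.
            lo, hi = bounds[feat]
            if not (lo <= change <= hi):
                cost += 100.0
        if feat in cat_rank:
            # Categorical ranking: penalty grows with the category's rank.
            cost += cat_rank[feat].index(change)
    return cost


# A user willing to raise income (score 0.8, up to 10,000) but strongly
# preferring not to add a co-applicant:
action = {"income": 5000, "has_coapplicant": "no"}
cost = preference_cost(
    action,
    weights={"income": 0.8},
    bounds={"income": (0, 10000)},
    cat_rank={"has_coapplicant": ["no", "yes"]},
)
# cost = 5000 / 0.8 + 0 (in bounds) + 0 (most-preferred category) = 6250.0
```

In contrast to hard box constraints, such soft penalties let a recourse-search procedure trade off preference violations against validity instead of discarding otherwise useful actions outright.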

