COVARIANCE-ROBUST MINIMAX PROBABILITY MACHINES FOR ALGORITHMIC RECOURSE

Abstract

Algorithmic recourse is emerging as a prominent technique to promote the explainability and transparency of predictive models in ethical machine learning. Existing approaches to algorithmic recourse often assume an invariant predictive model; in reality, however, the model is usually updated over time as new data arrive. Thus, a recourse that is valid with respect to the present model may become invalid for the future model. To resolve this issue, we propose a pipeline to generate a model-agnostic recourse that is robust to model shifts. Our pipeline first estimates a linear surrogate of the nonlinear (black-box) model using covariance-robust minimax probability machines (MPM); the recourse is then generated with respect to this robust linear surrogate. We show that the covariance-robust MPM recovers popular regularization schemes, including ℓ2-regularization and class-reweighting. We also show that our covariance-robust MPM pushes the decision boundary in an intuitive manner, which facilitates an interpretable generation of a robust recourse. Numerical results demonstrate the usefulness and robustness of our pipeline.

1. INTRODUCTION

The recent prevalence of machine learning (ML) in supporting consequential decisions involving humans, such as loan approval (Moscato et al., 2021), job hiring (Cohen et al., 2019; Schumann et al., 2020), and criminal justice (Brayne & Christin, 2021), urges the need for transparent ML systems that provide explanations and feedback to users (Doshi-Velez & Kim, 2017; Miller, 2019). One popular and emerging approach to providing feedback is algorithmic recourse (Ustun et al., 2019). A recourse suggests how the input instance should be modified to alter the outcome of a predictive model. Consider a specific scenario in which an individual is rejected for a loan by a financial institution's ML model. It has recently become a legal necessity to provide explanations and recommendations to the individual so that they can improve their situation and obtain a loan in the future (GDPR, Voigt & Von dem Bussche (2017)). For example, an explanation can be "increase the income to $5000" or "reduce the debt/asset ratio to below 20%". Leveraging recourses, financial institutions can assess the reliability of their ML predictive models and increase user engagement through actionable feedback and a guarantee of acceptance if users fulfill the requirements. To construct plausible and meaningful recourses, one must assess and strike a balance between conflicting criteria: (1) validity, a recourse should effectively reverse the unfavorable prediction of the model into a favorable one; (2) proximity, a recourse should be close to the original input instance to alleviate the effort required, and thus to encourage its adoption; (3) actionability, prescribed modifications should follow the causal laws of our society (Ustun et al., 2019; Karimi et al., 2021); for example, one cannot modify their race or decrease their age.
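For a linear classifier, the trade-off between validity and proximity admits a closed form: the nearest valid recourse (in Euclidean distance) is the orthogonal projection of the rejected instance onto the favorable side of the decision hyperplane. The sketch below is purely illustrative of this idea and is not the paper's method; the weights `w`, bias `b`, and `margin` parameter are hypothetical placeholders.

```python
import numpy as np

def nearest_recourse(x, w, b, margin=1e-3):
    """Nearest point (Euclidean) on the favorable side of the linear
    decision boundary w.x + b >= 0. 'margin' pushes the recourse
    slightly past the boundary so it is strictly accepted."""
    score = w @ x + b
    if score >= 0:                       # already favorable
        return x.copy()
    # Orthogonal projection onto the hyperplane, plus a small margin.
    step = (-score + margin) / (w @ w)
    return x + step * w

# Toy example: a two-feature instance rejected by a linear classifier.
w = np.array([1.0, -2.0])                # hypothetical surrogate weights
b = -1.0
x = np.array([0.5, 1.0])                 # score = -2.5 < 0: rejected
x_cf = nearest_recourse(x, w, b)
print(w @ x_cf + b)                      # nonnegative: recourse is valid
```

Proximity follows from the projection: the modification `x_cf - x` is the shortest vector that crosses the boundary, with length `(|score| + margin) / ||w||`.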
Various techniques have been proposed to devise algorithmic recourses for a given predictive model; extensive surveys are provided in (Karimi et al., 2020a; Stepin et al., 2021; Pawelczyk et al., 2021; Verma et al., 2020). Wachter et al. (2017) introduced the definition of counterfactual explanations and proposed a gradient-based approach to find the nearest instance that yields a favorable outcome. Ustun et al. (2019) proposed a mixed integer programming formulation (AR) that can find recourses for a linear classifier with a flexible design of the actionability constraints. Alternatively, Karimi et al. (2021; 2020b) investigated the nearest recourse through the lens of minimal intervention to take causal relationships between features into account. Recent works including Russell (2019)

