Title: SafeAR: Enhancing Algorithmic Recourse with Risk-Aware Policies
Date: 23 Aug 2023
Authors: Haochen Wu and colleagues
Abstract:
As the use of machine learning (ML) models in critical domains like finance and healthcare continues to grow, it is crucial to provide recourse for individuals who are negatively affected by the decisions made by these models. The concept of algorithmic recourse, which suggests a series of actions to improve one’s situation, has been explored in previous research. However, existing approaches fail to consider the uncertainties and risks associated with these actions. This paper introduces SafeAR (Safer Algorithmic Recourse), a method that incorporates risk considerations into the computation and evaluation of recourse. By empowering individuals to choose recourse based on their risk tolerance, SafeAR aims to prevent scenarios where recourse leads to a worse situation or requires a high cost for recovery. The paper discusses the limitations of current recourse approaches, presents a method to compute risk-aware recourse policies, and connects the algorithmic recourse literature with risk-sensitive reinforcement learning. The paper also adopts risk measures from the financial literature, such as “Value at Risk” and “Conditional Value at Risk,” to summarize risk concisely. The proposed method is applied to two real-world datasets, and different recourse policies with varying levels of risk aversion are compared using risk measures and other recourse desiderata like sparsity and proximity.
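For readers unfamiliar with the risk measures named in the abstract, the sketch below illustrates how “Value at Risk” (VaR) and “Conditional Value at Risk” (CVaR) summarize a distribution of recourse costs: VaR is the alpha-quantile of the cost distribution, and CVaR is the expected cost over the tail at or beyond that quantile. This is a minimal, standalone illustration; the sampled cost distribution and function names are hypothetical and are not taken from the paper or its datasets.

```python
import numpy as np

def value_at_risk(costs, alpha=0.9):
    """Empirical VaR: the cost level that an alpha fraction of
    outcomes do not exceed (the alpha-quantile of the cost sample)."""
    return np.quantile(costs, alpha)

def conditional_value_at_risk(costs, alpha=0.9):
    """Empirical CVaR: the mean cost over the worst (1 - alpha) tail,
    i.e., the expected cost given that it is at or above the VaR."""
    var = value_at_risk(costs, alpha)
    return costs[costs >= var].mean()

# Hypothetical recourse-cost outcomes for one policy (e.g., total effort
# accumulated under stochastic action outcomes); illustrative data only.
rng = np.random.default_rng(0)
costs = rng.gamma(shape=2.0, scale=3.0, size=10_000)

print(f"VaR(0.9)  = {value_at_risk(costs):.2f}")
print(f"CVaR(0.9) = {conditional_value_at_risk(costs):.2f}")
```

Comparing these two numbers across recourse policies captures the distinction the paper draws: two policies can have similar expected cost while differing sharply in tail risk, which is what a risk-averse individual would want to see before committing to a sequence of actions.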
Submission history:
– Wed, 23 Aug 2023 18:12:11 UTC (7,515 KB) – Submitted by Sriram Gopalakrishnan.