Optimal Strategies for Reject Option Classifiers

Vojtěch Franc, Daniel Průša, Václav Voráček; 24(11):1−49, 2023.

Abstract

In the context of classification with a reject option, classifiers can abstain from making predictions on uncertain inputs. Traditional cost-based models of reject option classifiers require an explicit rejection cost to be specified. Alternative models, such as the bounded-improvement model and the bounded-abstention model, do not rely on the notion of a rejection cost: the bounded-improvement model aims to find a classifier with a guaranteed selective risk and maximal coverage, while the bounded-abstention model seeks a classifier with guaranteed coverage and minimal selective risk. We prove that, despite their different formulations, all three rejection models lead to the same prediction strategy: the Bayes classifier with a randomized Bayes selection function. To construct the randomized Bayes selection function, we introduce the concept of a proper uncertainty score, a scalar summary of the prediction uncertainty. We further propose two algorithms that learn the proper uncertainty score from examples for any black-box classifier. We establish the Fisher consistency of both algorithms as estimators of the proper uncertainty score and demonstrate their efficiency on various prediction problems, including classification, ordinal regression, and structured output classification.
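As a rough, illustrative sketch of the strategy described above (not the paper's implementation): under 0/1 loss the Bayes classifier predicts the most probable class, and a natural scalar uncertainty score is the conditional risk 1 − max_y p(y|x). An empirical version of the bounded-abstention model can then threshold this score at a quantile and randomize exactly on the threshold to hit the target coverage. The function name reject_option_predict, the quantile-based threshold, and the encoding of rejection as −1 are illustrative choices, not the paper's notation.

```python
import numpy as np

def reject_option_predict(posteriors, target_coverage, rng=None):
    """Illustrative Bayes classifier with a randomized selection function.

    posteriors      -- (n, K) array of class posteriors p(y|x)
    target_coverage -- fraction of inputs on which a prediction must be made
    Returns an (n,) array of class indices, with -1 encoding "reject".
    """
    rng = np.random.default_rng() if rng is None else rng
    preds = posteriors.argmax(axis=1)          # Bayes classifier
    score = 1.0 - posteriors.max(axis=1)       # uncertainty score: conditional risk under 0/1 loss

    tau = np.quantile(score, target_coverage)  # empirical threshold on the score
    accept = score < tau                       # accept strictly below the threshold
    boundary = np.isclose(score, tau)          # inputs exactly at the threshold

    # Randomize on the boundary so the realized coverage matches the target.
    deficit = target_coverage * len(score) - accept.sum()
    if boundary.any() and deficit > 0:
        p = min(1.0, deficit / boundary.sum())
        accept |= boundary & (rng.random(len(score)) < p)

    preds = preds.copy()
    preds[~accept] = -1                        # abstain on the remaining inputs
    return preds
```

For example, with target_coverage=0.8 the classifier abstains on roughly the 20% of inputs with the highest uncertainty score; the randomization on the boundary matters whenever many inputs share the threshold score, which is precisely the situation the randomized Bayes selection function is designed to handle.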
