Minimax Risk Classifiers with 0-1 Loss

Authors: Santiago Mazuelas, Mauricio Romero, Peter Grünwald; 24(208):1–48, 2023.

Abstract

Supervised classification techniques utilize training samples to learn a classification rule that minimizes the expected 0-1 loss (error probability). Traditional methods achieve tractable learning and out-of-sample generalization by employing surrogate losses instead of the 0-1 loss and considering specific families of rules (hypothesis classes). This article introduces minimax risk classifiers (MRCs) that minimize the worst-case 0-1 loss with respect to uncertainty sets of distributions that can include the underlying distribution, with an adjustable confidence level. We demonstrate that MRCs can offer tight performance guarantees during the learning process and are strongly universally consistent when using feature mappings provided by characteristic kernels. The article also proposes efficient optimization techniques for MRC learning and demonstrates that the presented methods can yield accurate classifications along with tight performance guarantees in practical scenarios.
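To make the minimax principle concrete, the following is a minimal toy sketch (not the paper's actual MRC formulation, which optimizes over randomized rules via convex duality). It assumes a binary feature and label, an uncertainty set of distributions defined by hypothetical feature-expectation estimates `tau` within tolerance `eps`, and brute-forces the deterministic rule whose worst-case 0-1 loss over that set, computed by a linear program, is smallest:

```python
import numpy as np
from scipy.optimize import linprog

# States: (x, y) pairs for x, y in {0, 1}; a distribution is a vector p in R^4.
states = [(x, y) for x in (0, 1) for y in (0, 1)]

# Hypothetical empirical estimates (assumed, for illustration only):
# feature map phi(x, y) = [x, y, x*y]; eps sets the uncertainty-set size.
tau = np.array([0.5, 0.6, 0.4])   # estimated E[phi]
eps = 0.05

Phi = np.array([[x, y, x * y] for (x, y) in states])  # 4 x 3

def worst_case_error(rule):
    """Max over the uncertainty set of P(rule(x) != y), via a linear program."""
    # 0-1 loss of the rule at each state
    loss = np.array([1.0 if rule[x] != y else 0.0 for (x, y) in states])
    # maximize loss @ p  <=>  minimize -loss @ p, subject to:
    # p >= 0, sum(p) = 1, and |Phi^T p - tau| <= eps (the uncertainty set)
    A_ub = np.vstack([Phi.T, -Phi.T])
    b_ub = np.concatenate([tau + eps, -(tau - eps)])
    res = linprog(-loss, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, 4)), b_eq=[1.0], bounds=[(0, 1)] * 4)
    return -res.fun

# Deterministic rules h: {0,1} -> {0,1}, encoded as the pair (h(0), h(1))
rules = [(a, b) for a in (0, 1) for b in (0, 1)]
risks = {r: worst_case_error(r) for r in rules}
mrc_rule = min(risks, key=risks.get)
print("worst-case risks:", {r: round(v, 3) for r, v in risks.items()})
print("minimax rule:", mrc_rule, "with guarantee", round(risks[mrc_rule], 3))
```

The minimized worst-case value is itself the kind of performance guarantee the abstract refers to: no distribution consistent with the expectation constraints can make the chosen rule err more often than that bound.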
