Regression: Enhanced Explanations with Calibration. (arXiv:2308.16245v1 [cs.LG])

by instadatahelp | Sep 1, 2023 | AI Blogs

Artificial Intelligence (AI) plays a crucial role in modern decision support systems (DSSs). However, the lack of transparency in the best-performing AI models used in DSSs is a significant challenge. Explainable Artificial Intelligence (XAI) addresses this challenge by aiming to develop AI systems that can explain their reasoning to human users. In XAI, local explanations provide information about the factors that contribute to individual predictions. One limitation of existing local explanation methods is their inability to quantify the uncertainty associated with the importance of each factor. This paper introduces an extension of a feature importance explanation method called Calibrated Explanations (CE). Initially designed for classification, CE now also supports standard regression and probabilistic regression, i.e., determining the probability that the target value exceeds a given threshold. The extension for regression retains the benefits of CE, such as calibrated prediction intervals, uncertainty quantification of feature importance, and the ability to provide both factual and counterfactual explanations. CE for standard regression offers fast, reliable, stable, and robust explanations. CE for probabilistic regression introduces a novel approach to generating probabilistic explanations from any ordinary regression model, with the flexibility to dynamically select thresholds. In terms of stability and speed, CE for probabilistic regression is comparable to LIME. The method is model-agnostic and employs easily understandable conditional rules. A Python implementation of CE is freely available on GitHub and can be easily installed using pip, ensuring replicability of the results presented in this paper.
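To make the probabilistic-regression idea concrete: the paper describes estimating the probability that the target exceeds a chosen threshold from an ordinary regression model. Below is a minimal sketch of that general idea, assuming a split-conformal-style setup in which residuals on a held-out calibration set turn a point prediction into a predictive distribution. This is illustrative only, not the CE library's actual API; all names (`prob_exceeds`, the demo data) are hypothetical, and the real package on GitHub should be used to reproduce the paper's results.

```python
# Illustrative sketch (NOT the authors' implementation): estimate
# P(y > threshold | x) for any regression model by re-using residuals
# from a held-out calibration set as an empirical error distribution.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical demo data; any regression dataset would do.
X, y = make_regression(n_samples=600, n_features=5, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, random_state=0
)
X_cal, X_test, y_cal, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0
)

# The underlying model can be any ordinary regressor (model-agnostic).
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Calibration residuals: how far point predictions tend to miss.
residuals = y_cal - model.predict(X_cal)

def prob_exceeds(model, x, threshold, residuals):
    """Estimate P(y > threshold | x) as the fraction of calibration
    residuals that would push the point prediction past the threshold.
    The threshold can be chosen dynamically per query."""
    y_hat = model.predict(x.reshape(1, -1))[0]
    return float(np.mean(y_hat + residuals > threshold))

# Probability that the first test instance's target exceeds 0.0.
p = prob_exceeds(model, X_test[0], threshold=0.0, residuals=residuals)
```

Because the threshold is just an argument, the same calibrated model answers "P(y > t)" for any t at query time, which is the flexibility the abstract refers to as dynamically selecting thresholds.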