Quantus: A Toolkit for Evaluating Neural Network Explanations and Beyond
Anna Hedström, Leander Weber, Daniel Krakowczyk, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne; 24(34):1−11, 2023.
Abstract
The evaluation of explanation methods is a research topic that has not yet been explored in depth. However, since explainability is intended to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, there has been no comprehensive, explainability-focused tool that allows researchers to efficiently evaluate the performance of explanations of neural network predictions. To increase transparency and reproducibility in the field, we have developed Quantus, a comprehensive evaluation toolkit in Python that includes a well-organized collection of evaluation metrics and tutorials for evaluating explanation methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPI (or at https://github.com/understandable-machine-intelligence-lab/Quantus/).
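To give a sense of the intended workflow, the sketch below scores Saliency explanations for a toy PyTorch classifier with one of the toolkit's robustness metrics. This is a minimal illustration, not the paper's own example: the metric and helper names (quantus.MaxSensitivity, quantus.explain) follow the project README at the time of writing and may differ between versions, and the stand-in model and random data are assumptions made purely for demonstration.

```python
# Minimal usage sketch for Quantus (installable via `pip install quantus`).
# Argument names follow the project README and may vary across versions.
import numpy as np
import torch.nn as nn
import quantus

# Toy stand-in classifier and random MNIST-shaped data, purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Evaluate the robustness of Saliency explanations with the MaxSensitivity metric;
# quantus.explain generates the attributions on the fly (requires captum for torch models).
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Saliency"},
)
print(scores)  # one sensitivity score per input in the batch
```

In the same way, other metric classes from the toolkit's categories (faithfulness, robustness, localisation, complexity, randomisation, axiomatic) can be instantiated and called on the same batch to compare explanation methods along several quality criteria.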