Title: Unsupervised discovery of Interpretable Visual Concepts

Author: Caroline Mazini Rodrigues

Abstract: This paper proposes two methods, Maximum Activation Groups Extraction (MAGE) and Multiscale Interpretable Visualization (Ms-IV), to enhance the interpretability of deep-learning models for non-experts. Together, these methods explain a model's decision and provide global interpretability. MAGE identifies combinations of features in a Convolutional Neural Network (CNN) that form semantic meanings, called concepts. These concepts are then visualized through Ms-IV, which incorporates occlusion and sensitivity analysis to evaluate the most important image regions according to the model's decision space. The proposed approach is compared to other explainable AI (xAI) methods, such as LIME and Integrated Gradients, and experimental results show higher localization and faithfulness values for Ms-IV. Additionally, a qualitative evaluation demonstrates that humans can agree, based on the visualization, on the clustering of concepts, and can detect bias among a given set of networks.
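The occlusion analysis mentioned in the abstract is a standard perturbation technique: mask a region of the input, re-run the model, and measure how much the score for the predicted class drops. The sketch below is a generic illustration of that idea under stated assumptions, not the authors' Ms-IV implementation; the model, image tensor shape, and the patch/stride/baseline parameters are all assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

def occlusion_sensitivity(model: nn.Module, image: torch.Tensor, target_class: int,
                          patch: int = 16, stride: int = 16,
                          baseline: float = 0.0) -> torch.Tensor:
    """Slide a baseline-valued patch over `image` (C, H, W) and record the
    drop in the target-class probability; larger drops mark regions the
    model relies on for its decision."""
    model.eval()
    with torch.no_grad():
        base_prob = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, H, W = image.shape
        rows = (H - patch) // stride + 1
        cols = (W - patch) // stride + 1
        heatmap = torch.zeros(rows, cols)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = baseline  # mask one region
                prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heatmap[i, j] = base_prob - prob  # importance = probability drop
    return heatmap

# Hypothetical usage: `cnn` is any image classifier, `img` a normalized
# (3, 224, 224) tensor, and class index 0 is the class to explain.
# heatmap = occlusion_sensitivity(cnn, img, target_class=0)
```

The resulting heatmap can be upsampled and overlaid on the input image; coarser patches trade spatial resolution for fewer forward passes.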

Submission history: The paper was submitted by Caroline Mazini Rodrigues on August 31, 2023. The initial version of the paper can be accessed via the provided link.