Expected goal models have become popular in recent years, but their interpretability is often limited, particularly when they are trained with black-box methods. To address this, explainable artificial intelligence (XAI) tools have been developed to enhance model transparency and to extract descriptive knowledge either for a single observation (local explanations) or for all observations collectively (global explanations). In certain domains, however, it can be more useful to explain a black-box model for a specific group of observations.

To bridge the gap between the local and global levels of explanation, this paper introduces glocal explanations for expected goal models. Using aggregated versions of SHAP values and partial dependence profiles, these explanations enable performance analysis at both the team and player levels: extracting knowledge from the expected goal model for a player or team, rather than for a single shot, yields a more comprehensive picture of performance.
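
To illustrate the idea, the following is a minimal sketch in Python of how shot-level SHAP values can be aggregated to the player level. It assumes a tree-based classifier as the expected goal model; the synthetic shot features, player labels, and the choice of the mean SHAP contribution per player are illustrative assumptions, not the exact pipeline used in the paper.

```python
# Minimal sketch: aggregating shot-level SHAP values into a "glocal",
# player-level explanation of an expected goal model.
# The features, player labels, and aggregation choice are illustrative.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
shots = pd.DataFrame({
    "distance": rng.uniform(5, 35, n),   # distance to goal (illustrative, in meters)
    "angle": rng.uniform(10, 90, n),     # shot angle (illustrative, in degrees)
    "header": rng.integers(0, 2, n),     # 1 if the shot was a header
})
players = rng.choice(["Player A", "Player B", "Player C"], n)
# Synthetic goal outcomes: scoring probability decays with distance.
goal = (rng.random(n) < 1 / (1 + np.exp(0.15 * shots["distance"] - 2))).astype(int)

# A tree-based stand-in for the expected goal model.
model = GradientBoostingClassifier().fit(shots, goal)

# Local explanations: one SHAP value per feature for every single shot.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(shots)

# Glocal explanation: aggregate the shot-level SHAP values over each
# player's shots, here as the mean contribution of each feature.
glocal = (pd.DataFrame(shap_values, columns=shots.columns)
          .assign(player=players)
          .groupby("player")
          .mean())
print(glocal)
```

The resulting table summarizes, per player, how much each feature drives the model's predictions across all of that player's shots; aggregating over a team's shots instead works the same way, and aggregated partial dependence profiles follow the analogous grouping logic.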

Applications to real soccer data demonstrate the usefulness of aggregated SHAP values and aggregated profiles, and the results highlight the potential of these explanations for performance analysis in soccer analytics.

In summary, the glocal explanations proposed in this paper are a valuable tool for making expected goal models more interpretable, enabling more comprehensive performance analysis at both the team and player levels in soccer analytics.