06 October 2023

Publication: A Protocol for Continual Explanation of SHAP

Continual Learning (CL) studies the challenges of training models in dynamic environments, where data distribution drifts over time. When trained on such non-stationary data, neural networks have been shown to forget previous knowledge.

Explaining neural network predictions is an inherently difficult task at the centre of eXplainable Artificial Intelligence (XAI), and it can be even harder when models are updated on drifting distributions. How will the explanations change in a CL scenario? How can the effect of non-stationarity be shown in an explanation?

Because CL trains models on a stream of data, with the aim of learning new information without forgetting previous knowledge, explaining the predictions of these models can be particularly challenging.

In this paper, EMERGE partners from the University of Pisa tackle XAI for Continual Learning by quantitatively and qualitatively assessing the behavior of SHAP (SHapley Additive exPlanations), one of the most widely used XAI techniques, in a supervised Class-Incremental CL scenario where new classes are encountered over time.
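As background, SHAP attributes a prediction to input features using Shapley values from cooperative game theory: each feature's value is its weighted average marginal contribution over all coalitions of the other features. The sketch below is an illustrative, exact (exponential-time) implementation on a toy model, with "absent" features replaced by a baseline value; it is not the paper's setup, and the linear model used to demonstrate it is an assumption.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.

    Features outside a coalition are set to their baseline value.
    Illustrative only: cost grows exponentially with len(x), which is
    why practical SHAP implementations rely on approximations.
    """
    n = len(x)
    phi = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight of coalition S for a game with n players
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in features]
                without_i = [x[j] if j in S else baseline[j] for j in features]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Sanity check on a toy linear model, where Shapley values reduce to
# w_j * (x_j - baseline_j):
linear = lambda z: 2.0 * z[0] + 3.0 * z[1]
print(shapley_values(linear, [1.0, 1.0], [0.0, 0.0]))  # → [2.0, 3.0]
```

The linear-model check makes the additivity property concrete: the attributions sum exactly to the difference between the prediction at `x` and at the baseline.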

The authors observe that, while Replay strategies keep SHAP values stable in feedforward/convolutional models, they fail to do the same for fully trained recurrent models. The group shows that alternative recurrent approaches, such as randomized recurrent models, are more effective at keeping explanations stable over time.
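Stability of explanations over a stream can be quantified by comparing the attribution vector an explainer assigns to the same input at successive checkpoints. The snippet below is a minimal sketch of one such comparison (L2 drift between SHAP vectors); the metric and the example vectors are illustrative assumptions, not the protocol defined in the paper.

```python
import math

def attribution_drift(phi_prev, phi_curr):
    """L2 distance between attribution vectors for the same input
    computed at two training checkpoints. For inputs whose semantics
    did not change between tasks, a large drift suggests the
    explanation itself is being forgotten."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(phi_prev, phi_curr)))

# Hypothetical SHAP vectors for one input, before and after training
# on a new class:
stable = attribution_drift([0.5, 0.2, 0.1], [0.5, 0.2, 0.1])   # → 0.0
drifted = attribution_drift([0.5, 0.2, 0.1], [0.1, 0.6, 0.0])
print(stable, drifted)
```

Averaging such a drift score over a held-out set of earlier-task inputs gives one scalar per checkpoint, which can then be tracked across the stream to compare strategies such as Replay against alternatives.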

Read the paper: https://doi.org/10.14428/esann/2023.ES2023-41

Access EMERGE publications via the link below.