11 November 2024


Deep Neural Networks (DNNs) have become widespread in fields such as medical diagnosis and financial forecasting, yet their adoption is limited by their opacity, which undermines trust in their outputs and understanding of their decision processes. To address these issues, Explainable Artificial Intelligence (XAI) aims to make the workings of complex DNNs transparent and understandable. XAI methods help clarify how DNNs operate, fostering transparency, accountability, and regulatory compliance, which is vital in sensitive applications. While XAI has traditionally focused on improving model transparency and interpretability, recent studies suggest it can also enhance the learning process of these models.
Recurrent Neural Networks (RNNs) are effective for analyzing temporal data, such as time series, but their training is often costly and time-intensive. Echo State Networks (ESNs) simplify training by using a fixed recurrent layer, the reservoir, and a trainable output layer, the readout. In sequence classification problems, the readout typically receives only the final state of the reservoir; however, averaging all states can sometimes be beneficial, as the sketch below illustrates.
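To make the distinction concrete, here is a minimal NumPy sketch of an ESN. It is purely illustrative and unrelated to the paper's actual code: the reservoir is random and fixed, and only a linear readout on top of either the final state or the mean of all states would be trained. All names and hyperparameters below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100

# Fixed (untrained) weights: input map and recurrent reservoir.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1 (echo state property)

def run_reservoir(u):
    """Drive the reservoir with a sequence u of shape (T, n_inputs); return all states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)  # recurrent update; nothing here is trained
        states.append(x)
    return np.stack(states)  # shape (T, n_reservoir)

u = rng.standard_normal((50, n_inputs))  # a toy input sequence
states = run_reservoir(u)

final_state = states[-1]          # classic readout input for sequence classification
mean_state = states.mean(axis=0)  # naive average of all hidden states
# Only a linear readout on one of these vectors is trained, e.g. by ridge regression.
```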
In this work, EMERGE partners from the University of Pisa assess whether a weighted average of hidden states can enhance the performance of Echo State Networks. To this end, the authors propose a gradient-based, explainable technique that guides the contribution of each hidden state to the final prediction. They show that the approach outperforms the naive average, as well as other baselines, on time series classification, particularly on noisy data.
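The paper should be consulted for the exact weighting scheme; the snippet below, which continues the toy example above, shows only one plausible gradient-based variant. With a linear readout y = w · (Σ_t a_t x_t) + b, the gradient of the score with respect to each mixing coefficient a_t is simply w · x_t, so these per-step saliencies can be softmax-normalized into weights that emphasize informative time steps.

```python
def saliency_weighted_average(states, w_out):
    """states: (T, n_reservoir) reservoir states; w_out: a trained linear readout.

    Hypothetical gradient-based weighting, not the authors' exact procedure:
    d(score)/d(a_t) = w_out @ states[t] measures how strongly each state
    pushes the prediction, and a softmax over time turns that into weights.
    """
    saliency = states @ w_out              # per-time-step gradient of the score
    a = np.exp(saliency - saliency.max())  # numerically stable softmax
    a /= a.sum()
    return a @ states                      # weighted average of hidden states
```

In such a scheme, one would first fit w_out on the naive mean states, then re-read out from the saliency-weighted representation.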
Read the paper at the link below.