23 September 2025
Reservoir Computing (RC) enables efficiently trained deep Recurrent Neural Networks (RNNs) by removing the need to train the recurrent layers that build the hierarchy of representations of the input sequences.
In this work, EMERGE partners from the University of Pisa analyse the performance and the dynamical behaviour of RC models, specifically Deep Bidirectional Echo State Networks (Deep-BiESNs), applied to Natural Language Processing (NLP) tasks. They compare Deep-BiESNs against fully trained baseline models on six common NLP tasks. They then adapt the class activation mapping explainability technique to analyse the dynamical properties of these deep RC models, highlighting how the hierarchy of representations across Deep-BiESN layers contributes to forming the class prediction in the different NLP tasks. Investigating time scales in deep RNN layers is highly relevant for NLP because language inherently involves dependencies that unfold over various temporal horizons.
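To illustrate the core RC idea the work builds on, here is a minimal sketch of a single-layer Echo State Network: the input and recurrent weights are fixed at random (only rescaled to a chosen spectral radius), and only a linear readout is trained, here by ridge regression. This is a generic illustration, not the authors' Deep-BiESN implementation; the hyperparameters (100 units, spectral radius 0.9, sine-prediction toy task) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Fixed (untrained) input and recurrent weights; the recurrent
    matrix is rescaled to the requested spectral radius."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)  # untrained recurrent dynamics
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """The only trained component: a linear readout fitted in closed form."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy task (illustrative): one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]
W_in, W = make_reservoir(n_in=1, n_res=100)
states = run_reservoir(W_in, W, u[:-1])
W_out = train_readout(states[100:], u[101:])  # discard a washout period
pred = states[100:] @ W_out
mse = float(np.mean((pred - u[101:]) ** 2))
```

A deep ESN stacks several such reservoirs, feeding each layer's states to the next, and a bidirectional variant additionally runs the sequence in reverse; only the readout on top is ever trained.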
Read the paper at the link below.

