HEBBIAN AND GRADIENT-BASED PLASTICITY ENABLES ROBUST MEMORY AND RAPID LEARNING IN RNNS

Abstract

Rapidly learning from ongoing experiences and remembering past events with a flexible memory system are two core capacities of biological intelligence. While the underlying neural mechanisms are not fully understood, converging evidence supports the view that synaptic plasticity plays a critical role in memory formation and fast learning. Inspired by these results, we equip Recurrent Neural Networks (RNNs) with plasticity rules that enable them to adapt their parameters according to ongoing experiences. In addition to the traditional local Hebbian plasticity, we propose a global, gradient-based plasticity rule, which allows the model to evolve towards a self-determined target. Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories. At the same time, these models can cope with many challenging few-shot learning problems. Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks, which require the model to infer the underlying mapping.

1. INTRODUCTION

Biological neural networks can dynamically adjust their synaptic weights when faced with various real-world tasks. The ability of synapses to change their strength over time is called synaptic plasticity, a critical mechanism that underlies animals' memory and learning (Abbott & Regehr, 2004; Stuchlik, 2014; Abraham et al., 2019; Magee & Grienberger, 2020). For example, synaptic plasticity is essential for memory formation and retrieval in the hippocampus (Martin et al., 2000; Neves et al., 2008; Rioult-Pedotti et al., 2000; Kim & Cho, 2017; Nabavi et al., 2014; Nakazawa et al., 2004). Furthermore, recent results show that some forms of synaptic plasticity can be induced within seconds, enabling animals to form memories quickly and perform one-shot learning (Bittner et al., 2017; Magee & Grienberger, 2020; Milstein et al., 2021).

To test whether plasticity rules could also aid memory performance and few-shot learning ability in artificial models, we incorporate plasticity rules into Recurrent Neural Networks (RNNs). These plastic RNNs work like vanilla RNNs, except that a learned plasticity rule updates the network weights according to ongoing experiences at each time step. Historically, Hebb's rule is a classic model of long-term synaptic plasticity; it states that a synapse is strengthened when there is a positive correlation between the pre- and post-synaptic activity (Hebb, 1949). Several recent papers utilize generalized versions of Hebb's rule and apply them to Artificial Neural Networks (ANNs) in different settings (Miconi et al., 2018; Najarro & Risi, 2020; Limbacher & Legenstein, 2020; Tyulmankov et al., 2022; Rodriguez et al., 2022). With a redesigned framework, we apply RNNs with neuromodulated Hebbian plasticity to a range of memory and few-shot learning tasks. Consistent with the understanding in neuroscience (Magee & Grienberger, 2020; Martin et al., 2000; Neves et al., 2008), we find that these plastic RNNs excel in memory and few-shot learning tasks.
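To make the idea concrete, the following is a minimal sketch of one time step of an RNN with a generalized Hebbian plasticity term. The decomposition of the effective weights into a fixed part `W`, a per-synapse plasticity coefficient `A`, and a running Hebbian trace `Hebb`, as well as the learning rate `eta` and decay factor `decay`, are illustrative assumptions in the spirit of prior plastic-RNN work, not the paper's exact equations:

```python
import numpy as np

def plastic_rnn_step(x, h, W, A, Hebb, eta=0.1, decay=0.9):
    """One step of an RNN with a generalized Hebbian plasticity term.

    W: fixed recurrent weights (learned by backprop across episodes).
    A: learned per-synapse plasticity coefficients.
    Hebb: plastic trace, updated online at every time step.
    All names and the exact update form are illustrative assumptions.
    """
    # Effective weights combine the fixed and the plastic components.
    h_new = np.tanh(x + (W + A * Hebb) @ h)
    # Generalized Hebb's rule: strengthen synapses whose pre- and
    # post-synaptic activities are positively correlated; the decay
    # term keeps the trace bounded over long sequences.
    Hebb = decay * Hebb + eta * np.outer(h_new, h)
    return h_new, Hebb
```

In this sketch the trace update is purely local (each entry of `Hebb` depends only on its own pre- and post-synaptic activities), which is the key contrast with the global, gradient-based rule compared against it in this work.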

Code availability: https://github.com/yuvenduan/PlasticRNNs

