TOWARDS BIOLOGICALLY PLAUSIBLE DREAMING AND PLANNING IN RECURRENT SPIKING NETWORKS

Anonymous

Abstract

Humans and animals can learn new skills after practicing for a few hours, while current reinforcement learning algorithms require a large amount of data to achieve good performance. Recent model-based approaches show promising results by reducing the number of necessary interactions with the environment to learn a desirable policy. However, these methods require biologically implausible ingredients, such as the detailed storage of past experiences and long periods of offline learning. The optimal way to learn and exploit world models is still an open question. Taking inspiration from biology, we suggest that dreaming might be an efficient way to exploit an inner model. We propose a two-module (agent and model) spiking neural network in which "dreaming" (living new experiences in a model-based simulated environment) significantly boosts learning. We also explore "planning", an online alternative to dreaming, which shows comparable performance. Importantly, our model does not require the detailed storage of experiences, and learns both the world model and the policy online. Moreover, we stress that our network is composed of spiking neurons, further increasing its biological plausibility and its implementability in neuromorphic hardware.

1. INTRODUCTION

Humans can learn a new ability after practicing for a few hours (e.g., driving or playing a game), while artificial neural networks require millions of reinforcement learning trials in virtual environments to solve the same task. And even then, their performance might not be comparable to human ability. Humans and animals have developed an understanding of the world that allows them to optimize learning. This relies on building an inner model of the world. Model-based reinforcement learning approaches Ye et al. (2021); Abbeel et al. (2006); Schrittwieser et al. (2020); Ha & Schmidhuber (2018); Kaiser et al. (2019); Hafner et al. (2020) have been shown to reduce the amount of data required for learning. However, these approaches do not provide insights into biological intelligence, since they require biologically implausible ingredients (storing detailed information about experiences to train models, long offline learning periods, expensive Monte Carlo tree searches to correct the policy). Moreover, the storage of long sequences is highly problematic on neuromorphic and FPGA platforms, where memory resources are scarce and the use of an external memory would imply large latencies. The optimal way to learn and exploit the inner model of the world is still an open question.

Taking inspiration from biology, we explore the intriguing idea that a learned model can be used when the neural network is offline, in particular during deep sleep, dreaming, and day-dreaming. Sleep is known to be essential for awake performance, but the mechanisms underlying its cognitive functions are still to be clarified. A few computational models have started to investigate the interaction between sleep (both REM and NREM) and plasticity González-Rueda et al. (2018); Wei et al. (2016; 2018); Korcsak-Gorzo et al. (2020); Golosio et al. (2021), showing improved performance and reorganized memories in the post-sleep network. A wake-sleep learning algorithm has demonstrated the possibility of extending acquired knowledge with new symbolic abstractions and of training the neural network on imagined and replayed problems Ellis et al. (2020). However, a clear and coherent understanding of the mechanisms that induce these generalized beneficial effects is missing. The idea that dreams might be useful to refine learned skills is fascinating and deserves to be explored experimentally, as well as in theoretical and computational models.
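To make the alternation between awake learning and dreaming concrete, the following is a minimal, non-spiking sketch of the scheme discussed above: an agent learns a world model online and then trains its policy on rollouts imagined by that model, without storing past experiences. All names, dimensions, and learning rules here (the tabular softmax policy, the running transition and reward estimates, the REINFORCE-style update) are illustrative placeholders and not the paper's recurrent spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the paper's tasks and network sizes differ.
N_STATES, N_ACTIONS = 16, 4

policy_logits = np.zeros((N_STATES, N_ACTIONS))  # agent module
model_T = np.full((N_STATES, N_ACTIONS, N_STATES), 1.0 / N_STATES)  # model: transitions
model_R = np.zeros((N_STATES, N_ACTIONS))  # model: expected reward

def act(state):
    """Sample an action from a softmax policy; returns (action, probs)."""
    logits = policy_logits[state]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(N_ACTIONS, p=probs), probs

def policy_update(state, action, reward, lr=0.1):
    """REINFORCE-style update (placeholder for a spiking plasticity rule)."""
    _, probs = act(state)
    grad = -probs
    grad[action] += 1.0
    policy_logits[state] += lr * reward * grad

def model_update(state, action, next_state, reward, lr=0.05):
    """Online world-model learning: no stored sequences, only running estimates."""
    target = np.zeros(N_STATES)
    target[next_state] = 1.0
    model_T[state, action] += lr * (target - model_T[state, action])
    model_R[state, action] += lr * (reward - model_R[state, action])

def dream(start_state, n_steps=20):
    """'Dreaming': roll out the learned model offline and keep updating the
    policy on imagined experience, with no replay buffer of real transitions."""
    state = start_state
    for _ in range(n_steps):
        action, _ = act(state)
        p = model_T[state, action]
        next_state = rng.choice(N_STATES, p=p / p.sum())
        policy_update(state, action, model_R[state, action])
        state = next_state
```

During awake interaction, `model_update` and `policy_update` would be called on real transitions; interleaving short `dream()` phases is what, in the spiking setting studied here, is meant to reduce the number of real environment interactions needed.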

