HETEROGENEOUS NEURONAL AND SYNAPTIC DYNAMICS FOR SPIKE-EFFICIENT UNSUPERVISED LEARNING: THEORY AND DESIGN PRINCIPLES

Abstract

This paper shows that heterogeneity in neuronal and synaptic dynamics reduces the spiking activity of a Recurrent Spiking Neural Network (RSNN) while improving prediction performance, enabling spike-efficient (unsupervised) learning. We analytically show that diversity in neurons' integration/relaxation dynamics improves an RSNN's ability to learn more distinct input patterns (higher memory capacity), leading to improved classification and prediction performance. We further prove that heterogeneous Spike-Timing-Dependent Plasticity (STDP) dynamics of synapses reduce spiking activity while preserving memory capacity. These analytical results motivate the design of a Heterogeneous RSNN (HRSNN), using Bayesian optimization to determine the heterogeneity in neurons and synapses that optimizes E, defined as the ratio of spiking activity to memory capacity. Empirical results on time-series classification and prediction tasks show that the optimized HRSNN improves performance and reduces spiking activity compared to a homogeneous RSNN.

1. INTRODUCTION

Spiking neural networks (SNNs) (Ponulak & Kasinski, 2011) use unsupervised bio-inspired neurons and synaptic connections, trainable either with biological learning rules such as spike-timing-dependent plasticity (STDP) (Gerstner & Kistler, 2002) or with supervised statistical learning algorithms such as surrogate gradient descent (Neftci et al., 2019). Empirical results on standard SNNs show good performance on a variety of tasks, including spatiotemporal data classification (Lee et al., 2017; Khoei et al., 2020), sequence-to-sequence mapping (Zhang & Li, 2020), object detection (Chakraborty et al., 2021; Kim et al., 2020), and universal function approximation (Gelenbe et al., 1999; Iannella & Back, 2001). An important motivation for applying SNNs in machine learning (ML) is the sparsity of neuron firing (activation), which reduces energy dissipation during inference (Wu et al., 2019). Many prior works have empirically shown that SNNs have lower firing activity than artificial neural networks and can improve energy efficiency (Kim et al., 2022; Srinivasan & Roy, 2019). However, there are very few analytical studies of how to reduce the spiking activity of an SNN while maintaining its learning performance. Understanding and optimizing the relationship between spiking activity and performance will be key to designing energy-efficient SNNs for complex ML tasks. In this paper, we derive analytical results and present design principles for optimizing the spiking activity of a recurrent SNN (RSNN) while maintaining prediction performance.

Most SNN research in ML considers a simplified network model with a homogeneous population of neurons and synapses (homogeneous RSNN (MRSNN)), in which all neurons have uniform integration/relaxation dynamics and all synapses use the same long-term potentiation (LTP) and long-term depression (LTD) dynamics in the STDP learning rule.
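To make the homogeneous-vs-heterogeneous STDP distinction concrete, the following is a minimal, hypothetical sketch of a pair-based STDP update. The window shape (exponential LTP/LTD branches) is a standard textbook form, not the exact formulation analyzed in this paper, and all amplitudes and time constants below are illustrative: a homogeneous network shares one parameter tuple across all synapses, while a heterogeneous one draws a separate tuple per synapse.

```python
import math
import random

def stdp_dw(dt_pre_post, a_plus, a_minus, tau_plus, tau_minus):
    """Pair-based STDP weight change: potentiate (LTP) when the presynaptic
    spike precedes the postsynaptic one (dt > 0), depress (LTD) otherwise.
    The per-synapse parameters set the amplitude and decay of each branch."""
    if dt_pre_post > 0:
        return a_plus * math.exp(-dt_pre_post / tau_plus)    # LTP branch
    return -a_minus * math.exp(dt_pre_post / tau_minus)      # LTD branch

random.seed(1)
# Homogeneous STDP: every synapse would share one (A+, A-, tau+, tau-) tuple.
# Heterogeneous STDP: each synapse gets its own LTP/LTD amplitudes and time
# constants, drawn here from hypothetical ranges (for illustration only).
synapses = [(random.uniform(0.005, 0.02),   # a_plus
             random.uniform(0.005, 0.02),   # a_minus
             random.uniform(0.01, 0.04),    # tau_plus (s)
             random.uniform(0.01, 0.04))    # tau_minus (s)
            for _ in range(5)]

# The same +/-10 ms spike-time difference produces synapse-specific updates.
for params in synapses:
    print(round(stdp_dw(0.01, *params), 5), round(stdp_dw(-0.01, *params), 5))
```

Under this sketch, heterogeneity enters purely through the per-synapse parameter tuples; the update rule itself is unchanged.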
In contrast, neurobiological studies have shown that the brain contains a wide variety of neurons and synapses with varying firing and plasticity dynamics, respectively (Destexhe & Marder, 2004; Gouwens et al., 2019; Hansel et al., 1995; Prescott et al., 2008). We show that optimizing neuronal and synaptic heterogeneity is key to simultaneously reducing spiking activity and improving performance.
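The kind of neuronal diversity discussed above can be sketched with a toy leaky integrate-and-fire (LIF) population: neurons with heterogeneous membrane time constants respond to the same constant input with distinct spike timings and rates, whereas a homogeneous population responds identically. This is a minimal illustration of the response diversity that the paper's analysis connects to memory capacity; the LIF model, the log-uniform spread of time constants, and all numeric parameters are assumptions for illustration, not the paper's experimental setup.

```python
import random

def lif_spike_count(tau, current=1.5, v_th=1.0, dt=1e-3, steps=500):
    """Euler-integrate a leaky integrate-and-fire neuron, dv/dt = (I - v)/tau,
    with reset to zero on threshold crossing; return its total spike count."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (current - v) / tau
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes

random.seed(0)
n = 100
homogeneous = [0.02] * n  # every neuron shares one membrane time constant (s)
heterogeneous = [0.02 * 10 ** random.uniform(-0.5, 0.5) for _ in range(n)]

homo_counts = [lif_spike_count(tau) for tau in homogeneous]
hetero_counts = [lif_spike_count(tau) for tau in heterogeneous]

# Homogeneous neurons all fire identically; heterogeneous neurons span a
# wide range of firing rates for the same input.
print(len(set(homo_counts)), len(set(hetero_counts)))
```

The homogeneous population collapses to a single firing rate, while the heterogeneous one spreads over many; it is this richer set of temporal responses that the analytical results relate to learning more distinct input patterns.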

