ROBUST AND ACCELERATED SINGLE-SPIKE SPIKING NEURAL NETWORK TRAINING WITH APPLICABILITY TO CHALLENGING TEMPORAL TASKS

Abstract

Spiking neural networks (SNNs), particularly the single-spike variant in which neurons spike at most once, are considerably more energy efficient than standard artificial neural networks (ANNs). However, single-spike SNNs are difficult to train due to their dynamic and non-differentiable nature, and current solutions are either slow or suffer from training instabilities. These networks have also been critiqued for their limited computational applicability, such as being unsuitable for time-series datasets. We propose a new model for training single-spike SNNs which mitigates the aforementioned training issues and obtains competitive results across various image and neuromorphic datasets, with up to a 13.98× training speedup and up to an 81% reduction in spikes compared to the multi-spike SNN. Notably, our model performs on par with multi-spike SNNs in challenging tasks involving neuromorphic time-series datasets, demonstrating a broader computational role for single-spike SNNs than previously believed.

1. INTRODUCTION

Artificial neural networks (ANNs) have achieved impressive feats over recent years, obtaining human-level performance on visual and auditory tasks (Hinton et al., 2012; He et al., 2016), natural language processing (Brown et al., 2020) and challenging games (Mnih et al., 2015; Silver et al., 2017; Vinyals et al., 2019). However, as the difficulty and complexity of these tasks have increased, so has the size of the networks required to solve them, demanding a substantial and unsustainable amount of energy (Strubell et al., 2019; Schwartz et al., 2020). Inspired by the extreme energy efficiency of the brain (Sokoloff, 1960), spiking neural networks (SNNs) emulated on neuromorphic computers attempt to solve this dilemma, requiring significantly less energy than ANNs (Wunderlich et al., 2019). These networks are of growing interest, obtaining noteworthy results on visual (Fang et al., 2021; Zhou & Li, 2021), auditory (Yin et al., 2020; Yao et al., 2021) and reinforcement learning problems (Patel et al., 2019; Tang et al., 2020; Bellec et al., 2020). A particular class of SNNs, in which individual neurons respond with at most one spike, aims to further amplify the energy and scaling advantages over standard SNNs and ANNs. Inspired by the sparse spike processing shown to exist at least for certain stimuli in the auditory and visual systems (Heil, 2004; Gollisch & Meister, 2008), and forming a class of universal function approximators (Comsa et al., 2020), these networks obtain extreme energy efficiency due to their single-spike nature (Oh et al., 2021; Liang et al., 2021). Although they provide a promising path toward building very large and energy-efficient networks, we have yet to understand how to properly train these SNNs. The success of the backpropagation training algorithm in ANNs does not naturally transfer to single- and multi-spike SNNs due to their non-differentiable activation function.
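To see why backpropagation fails here, consider a minimal sketch (not the model proposed in this work): a neuron emits a spike when its membrane potential crosses a firing threshold, i.e. the activation is a Heaviside step. The derivative of this step is zero everywhere except at the threshold itself, so gradient-based learning receives no signal through the spiking nonlinearity.

```python
import numpy as np

def heaviside_spike(v, threshold=1.0):
    # Emit a spike (1.0) when the membrane potential reaches the threshold.
    return (v >= threshold).astype(float)

def numerical_grad(f, x, eps=1e-4):
    # Central-difference estimate of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Sub- and supra-threshold membrane potentials (illustrative values).
v = np.array([0.2, 0.8, 1.5])

# The gradient of the spike function is 0 away from the threshold,
# so standard backprop cannot propagate error through the activation.
print(numerical_grad(heaviside_spike, v))  # [0. 0. 0.]
```

This zero-gradient behaviour is what motivates the surrogate-gradient and timing-based training approaches discussed in the literature cited above.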
Current attempts at training are either slow (as time is simulated sequentially) or suffer from training instabilities (e.g. the dead neuron problem) and idiosyncrasies (e.g. requiring particular regularisation) (Eshraghian et al., 2021). Additionally, it has been argued that single-spike networks have limited applicability and are not suited for temporal problems, as recently pointed out by Eshraghian et al. (2021): "[...] it enforces stringent priors upon the network (e.g., each neuron must fire only once) that are incompatible with dynamically changing input data" and Zenke

