OPTIMAL CONVERSION OF CONVENTIONAL ARTIFICIAL NEURAL NETWORKS TO SPIKING NEURAL NETWORKS

Abstract

Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous discrete signals. While more efficient in power consumption and inference speed on neuromorphic hardware, SNNs are usually difficult to train directly from scratch with spikes because of their discreteness. As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs. Researchers have designed new SNN architectures and conversion algorithms to diminish the conversion error. However, an effective conversion should address the difference between the SNN and ANN architectures with an efficient approximation of the loss function, which is missing in the field. In this work, we analyze the conversion error by recursive reduction to layer-wise summation and propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms. This pipeline enables almost no accuracy loss between the converted SNNs and conventional ANNs with only ∼1/10 of the typical SNN simulation time. Our method is promising for deployment on embedded platforms with limited energy and memory that better support SNNs. Code is available at https://github.com/Jackn0/snn_optimal_conversion_pipeline.

1. INTRODUCTION

Spiking neural networks (SNNs) are proposed to imitate biological neural networks (Hodgkin & Huxley, 1952a; McCulloch & Pitts, 1943) with artificial neuron models that simulate biological neuron activity, such as the Hodgkin-Huxley (Hodgkin & Huxley, 1952b), Izhikevich (Izhikevich, 2003), and Resonate-and-Fire (Izhikevich, 2001) models. The most widely used neuron model for SNNs is the Integrate-and-Fire (IF) model (Barbi et al., 2003; Liu & Wang, 2001), in which a neuron emits a spike only when its accumulated input exceeds the threshold voltage. This setting makes SNNs more similar to biological neural networks. The past two decades have witnessed the success of conventional artificial neural networks (referred to as ANNs for ease of comparison with SNNs), especially with the development of convolutional neural networks including AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2014), and ResNet (He et al., 2016). However, this success depends heavily on the high-precision digital transmission of information and requires large amounts of energy and memory, so traditional ANNs are infeasible to deploy on embedded platforms where both are limited.

Distinct from conventional ANNs, SNNs are event-driven with spiking signals and thus more efficient in energy and memory consumption on embedded platforms (Roy et al., 2019). So far, SNNs have been implemented for image (Acciarito et al., 2017; Diehl & Cook, 2014; Yousefzadeh et al., 2017) and voice (Pei et al., 2019) recognition.
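To make the IF dynamics concrete, the following is a minimal sketch of a single Integrate-and-Fire neuron, including the soft-reset variant mentioned in the abstract. The function name, threshold value, and inputs are illustrative assumptions, not the paper's actual implementation.

```python
def if_neuron(inputs, threshold=1.0, soft_reset=True):
    """Simulate one IF neuron over a sequence of input currents.

    Returns the binary spike train: 1 whenever the accumulated
    membrane potential crosses the threshold, else 0.
    """
    v = 0.0  # membrane potential
    spikes = []
    for x in inputs:
        v += x  # integrate the input
        if v >= threshold:
            spikes.append(1)
            if soft_reset:
                v -= threshold  # soft reset: subtract threshold, keep the surplus
            else:
                v = 0.0         # hard reset: discard the surplus potential
        else:
            spikes.append(0)
    return spikes


# Soft reset preserves residual potential across time steps, so with a
# constant sub-threshold input of 0.7 the neuron keeps spiking once primed:
print(if_neuron([0.7, 0.7, 0.7], soft_reset=True))   # [0, 1, 1]
print(if_neuron([0.7, 0.7, 0.7], soft_reset=False))  # [0, 1, 0]
```

The soft reset is what makes near-lossless ANN-to-SNN conversion possible: residual potential is carried over rather than discarded, so the neuron's long-run firing rate tracks its input more faithfully.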

