A UNIFIED OPTIMIZATION FRAMEWORK OF ANN-SNN CONVERSION: TOWARDS OPTIMAL MAPPING FROM ACTIVATION VALUES TO FIRING RATES

Anonymous authors
Paper under double-blind review

Abstract

Spiking Neural Networks (SNNs) have attracted great attention as a primary candidate for running large-scale deep artificial neural networks (ANNs) in real-time due to their distinctive properties of energy-efficient, event-driven fast computation. Training an SNN directly from scratch is usually difficult because of the discreteness of spikes. Converting an ANN to an SNN, i.e., ANN-SNN conversion, is an alternative method to obtain deep SNNs. The performance of the converted SNN is determined by both the ANN performance and the conversion error. Existing ANN-SNN conversion methods usually redesign the ANN with a new activation function in place of the regular ReLU, train the tailored ANN, and convert it to an SNN. The performance loss between the regular ANN with ReLU and the tailored ANN has never been considered, yet it is inherited by the converted SNN. In this work, we formulate ANN-SNN conversion as a unified optimization problem that simultaneously considers the performance loss between the regular ANN and the tailored ANN, as well as the conversion error. Following the unified optimization framework, we propose the SlipReLU activation function to replace the regular ReLU activation function in the tailored ANN. The SlipReLU is a weighted sum of the threshold-ReLU and the step function, and improves the performance of either as an activation function alone. The SlipReLU method covers a family of activation functions mapping from activation values in source ANNs to firing rates in target SNNs; most of the state-of-the-art optimal ANN-SNN conversion methods are special cases of our proposed SlipReLU method. We demonstrate through two theorems that the expected conversion error between SNNs and ANNs can theoretically be zero over a range of shift values δ ∈ [-1/2, 1/2] rather than only at a fixed shift term 1/2, enabling us to achieve converted SNNs with high accuracy and ultra-low latency.
We evaluate our proposed SlipReLU method on the CIFAR-10/100 and Tiny-ImageNet datasets, and the results show that SlipReLU outperforms state-of-the-art ANN-SNN conversion methods and directly trained SNNs in both accuracy and latency. To our knowledge, this is the first work to explore a high-performance ANN-SNN conversion method that considers the ANN performance and the conversion error simultaneously, with ultra-low latency, especially for a single time-step (T = 1).
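To make the construction concrete, the following is a minimal sketch of a SlipReLU-style activation as described above: a weighted sum of a threshold-ReLU (clipped ReLU) and a step function. The parameter names `alpha` (the mixing weight), `threshold`, and `delta` (the shift) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def slip_relu(x, alpha=0.5, threshold=1.0, delta=0.0):
    """Sketch of a SlipReLU-style activation (illustrative, not the
    paper's exact definition): a convex combination of a clipped ReLU
    ('threshold-ReLU') and a shifted step function."""
    clip_part = np.clip(x, 0.0, threshold)                 # threshold-ReLU branch
    step_part = threshold * (x >= delta).astype(x.dtype)   # step-function branch
    return alpha * step_part + (1.0 - alpha) * clip_part

x = np.array([-0.5, 0.25, 0.5, 1.5])
print(slip_relu(x, alpha=0.0))  # pure threshold-ReLU
print(slip_relu(x, alpha=1.0))  # pure step function
```

At `alpha = 0` the function reduces to the clipped ReLU used by many conversion methods, and at `alpha = 1` to a step function, illustrating how a single family can cover both extremes as special cases.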

1. INTRODUCTION

Spiking neural networks (SNNs) are biologically-inspired neural networks based on biologically plausible spiking neuron models that process real-time signals (Hodgkin & Huxley, 1952; Izhikevich, 2003). With the significant advantages of low power consumption and fast inference on neuromorphic hardware (Roy et al., 2019), SNNs are becoming a primary candidate for running large-scale deep artificial neural networks (ANNs) in real-time. The most commonly used neuron model in SNNs is the Integrate-and-Fire (IF) neuron model (Liu & Wang, 2001). Each neuron in an SNN emits a spike only when its accumulated membrane potential exceeds the threshold voltage; otherwise, it remains inactive in the current time-step. This setting makes SNNs more similar to biological neural networks. Compared to ANNs, event-driven SNNs have binarized/spiking activation values, resulting in low energy consumption when implemented on specialized neuromorphic hardware. Another significant property of SNNs is the pseudo-simultaneity of their inputs and outputs.
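The IF neuron dynamics described above can be sketched in a few lines. This is a minimal simulation assuming a soft reset (reset by subtraction), the variant commonly used in ANN-SNN conversion; the function name and threshold value are illustrative.

```python
import numpy as np

def if_neuron(inputs, v_threshold=1.0):
    """Simulate a single Integrate-and-Fire (IF) neuron over T time-steps.

    `inputs` holds one input current per time-step. The neuron
    accumulates membrane potential, emits a spike (1) whenever the
    potential reaches the threshold, and resets by subtracting the
    threshold ('soft reset'); otherwise it stays inactive (0).
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v += current              # integrate input into membrane potential
        if v >= v_threshold:      # fire when the threshold is reached
            spikes.append(1)
            v -= v_threshold      # soft reset: subtract the threshold
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant input of 0.5 against a threshold of 1.0 yields a firing
# rate of 0.5, mirroring how firing rates approximate ANN activations.
spikes = if_neuron(np.full(8, 0.5))
print(spikes.mean())  # 0.5
```

The soft reset preserves any residual potential above the threshold, which is why firing rates over time can track continuous activation values more faithfully than a hard reset to zero.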

