SYNERGISTIC NEUROMORPHIC FEDERATED LEARNING WITH ANN-SNN CONVERSION FOR PRIVACY PROTECTION

Abstract

Federated Learning (FL) has been widely studied in response to growing public concern over data privacy: only model parameters, rather than private data, are communicated. However, recent studies have challenged the privacy guarantees of FL, showing that private data can be leaked from the communicated gradients or parameter updates. In this paper, we propose a framework called Synergistic Neuromorphic Federated Learning (SNFL) that enhances privacy during FL. Before uploading client model updates, SNFL first converts each client's Artificial Neural Network (ANN) to a Spiking Neural Network (SNN) via calibration algorithms. Because this conversion loses almost no accuracy while obfuscating the client model's parameters, SNFL obtains a more performant model with strong privacy. After aggregating the various SNN parameters, the server distributes them back to the clients; this design offers a smooth transition to continue model training under the ANN architecture. The proposed framework is demonstrated to be private, introduces only a lightweight overhead, and yields prominent performance gains. Extensive experiments on diverse datasets demonstrate the efficacy and practicality of our method. In most of our experimental IID and moderately Non-IID scenarios, SNFL significantly enhances model performance. For instance, SNFL improves the accuracy of FedAvg on Tiny-ImageNet by 13.79%. Moreover, under SNFL the original image cannot be reconstructed even after 280 attack iterations, whereas under FedAvg it can be reconstructed after just 70 iterations.

1. INTRODUCTION

Recent advancements in machine learning, particularly deep learning, rely heavily on large datasets to obtain decent inference performance. Owing to this growing demand for data, it is now common to feed models with information from multiple entities. However, such transfer, exchange, and trade of data among entities may violate the General Data Protection Regulation (GDPR) and incur penalties under the Act (Wachter, 2018), posing an unprecedented challenge to the field of machine learning. Federated learning (McMahan et al., 2017) has since emerged and flourished as a privacy-preserving approach that trains a shared model collaboratively while keeping data local. Although the data are stored locally, clients that join federated learning must transmit their local gradients to the server to update the shared model. Recent studies (Zhu & Han, 2020; Zhao et al., 2020; Huang et al., 2021) have revealed that sensitive local data can be leaked from these transmitted gradients via model inversion attacks (Zhu & Han, 2020). To defend against such attacks and prevent privacy leakage, strategies including differential privacy (Geyer et al., 2017), secure multi-party computation (Byrd & Polychroniadou, 2020), and MixUp (Zhang et al., 2017) have been developed. In exchange for privacy, the cost is either a severe computational overhead (Hardy et al., 2017) or an unavoidable accuracy loss (Kim et al., 2021). What is the intrinsic source of privacy in these defense strategies? Viewed from an information-theoretic perspective, it is the asymmetry of entropy between the encryption and decryption steps for clients and servers, which arises when part of the encryption information is kept only locally. From this standpoint, as long as an encryption method remains irreversible from the server's side while still allowing effective aggregation, it is feasible to improve the privacy of federated learning.
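The gradient-leakage threat described above can be illustrated on a toy linear model. The sketch below is a drastic simplification of the gradient inversion attack of Zhu & Han (2020): the attacker observes a client's gradient and optimizes a randomized dummy input and label so that the dummy gradient matches the leaked one. All function names and the closed-form gradients are illustrative, not part of the paper.

```python
import numpy as np

def leaked_gradient(w, x, y):
    # Gradient of the loss 0.5*(w.x - y)^2 w.r.t. the weights w: (w.x - y) * x
    return (w @ x - y) * x

def invert_gradient(w, g_true, dim, steps=20000, lr=0.01, seed=0):
    """Recover a (dummy input, dummy label) pair whose gradient matches g_true
    by gradient descent on D(x, y) = ||g'(x, y) - g_true||^2."""
    rng = np.random.default_rng(seed)
    x, y = rng.normal(size=dim), 0.0
    for _ in range(steps):
        r = w @ x - y                      # residual of the dummy pair
        diff = leaked_gradient(w, x, y) - g_true
        # Analytic gradients of D w.r.t. the dummy input and label
        grad_x = 2.0 * (r * diff + (diff @ x) * w)
        grad_y = -2.0 * (diff @ x)
        x -= lr * grad_x
        y -= lr * grad_y
    return x, y
```

For this one-neuron model the gradient determines the input only up to a scalar, so a successful attack is measured by how closely the dummy gradient matches the leaked one; for deep networks with batch normalization statistics, the match pins down the data far more tightly.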
Recent progress in neuromorphic computing, especially the conversion from traditional artificial neural networks (ANNs) to spiking neural networks (SNNs) (Deng & Gu, 2020), provides a pair of models, a source ANN and a target SNN, that both achieve high accuracy, with the source ANN not recoverable from the resulting SNN (Li et al., 2021b). This property fits naturally with the demand for privacy protection in federated learning. Indeed, if we train ANNs on clients and send only the converted SNNs with partial parameters to the server for aggregation, we can expect to obtain a feasible privacy-protected FL algorithm with an effective parameter-sharing paradigm. In addition, with careful design, such an ANN-SNN conversion is lightweight and performance-preserving (or even performance-improving). Fig. 1 illustrates the pipeline of our proposed method. Beyond the natural feasibility of SNNs (Esser et al., 2016; Kim et al., 2019), this synergistic framework brings two additional benefits specific to federated learning. First, in contrast to existing noise-injection methods (e.g., differential privacy (Geyer et al., 2017)), our ANN-SNN conversion process is optimized to improve performance by fine-tuning the SNN's weights rather than trading performance for noise level. As a result, our method can achieve even better performance than standard federated learning. Second, an SNN emits discrete spikes and is not differentiable, so the induced synergistic FL can be more robust to small perturbations and adversarial attacks such as white-box attacks (Liang et al., 2021). Our contributions are summarized as follows:

• Innovation/Privacy: We design a federated learning framework in which the server and clients run two different models in a privacy-preserving manner. To the best of our knowledge, our work is among the first to train different types of neural network models on the server and clients.
• Accuracy: Extensive experiments validate that SNFL delivers similar or superior accuracy relative to the conventional approach and other common methods.
• Effectiveness: Based on the SNFL framework, we analyze the backdoor attack and develop a simple method to detect it through abnormal SNN thresholds.
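To make the conversion step concrete, the sketch below shows the standard rate-based ANN-SNN conversion via threshold balancing (Deng & Gu, 2020): weights are copied unchanged, ReLU units become integrate-and-fire neurons, and each hidden layer's firing threshold is calibrated to its maximum ANN activation. This is a minimal stand-in, not the paper's calibration algorithm, and the calibration here uses a single input in place of a calibration set.

```python
import numpy as np

def ann_forward(x, W1, W2):
    # Source ANN: one ReLU hidden layer, linear output
    return W2 @ np.maximum(W1 @ x, 0.0)

def snn_forward(x, W1, W2, T=2000):
    # Threshold balancing: set the hidden IF neurons' firing threshold
    # to the maximum ANN activation observed on the calibration input.
    theta = np.maximum(W1 @ x, 0.0).max() + 1e-9
    v = np.zeros(W1.shape[0])           # hidden membrane potentials
    out = np.zeros(W2.shape[0])         # accumulated output charge
    for _ in range(T):
        v += W1 @ x                     # constant input current each step
        s = (v >= theta).astype(float)  # spike where threshold is crossed
        v -= s * theta                  # "soft reset": subtract threshold
        out += W2 @ (s * theta)         # each spike carries charge theta
    return out / T                      # firing-rate-decoded output
```

Because the soft reset loses at most one threshold's worth of charge per neuron, the SNN output approaches the ANN output with error O(theta/T), which is why the conversion can be nearly accuracy-lossless at sufficient time steps while the spike-based model exposes a different parameterization than the source ANN.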

2. RELATED WORK

2.1 FEDERATED LEARNING (FL)

In federated learning, each client computes a model update, i.e., a gradient, on its local data. While sharing gradients was long assumed to leak little information about a client's private data, recent papers (Zhu & Han, 2020; Zhao et al., 2020; Huang et al., 2021) devised the "gradient inversion attack," in which an attacker listening to one client's communications with the server can begin to reconstruct the client's private data. To defend against this, methods such as gradient clipping (Sun et al., 2019),



Figure 1: Right: workflow of Synergistic Neuromorphic Federated Learning with ANN-SNN Conversion (SNFL). Left: each client communicates parameters ∇W_A generated by the model trained on its private local data. The attacker updates a randomized dummy input and label to minimize the gradient distance ||∇W_A − ∇W′||; once the optimization converges, the attacker can recover the client's training data. In SNFL, however, the client's model has been converted and calibrated to an SNN before communication.
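For context, the server-side aggregation step in the workflow above follows the FedAvg pattern: a sample-size-weighted average of the parameters uploaded by the clients (in SNFL, those of the converted SNNs). The sketch below is a generic FedAvg average; the function name and dict-of-arrays layout are illustrative.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter sets, as in FedAvg.

    client_params: list of dicts mapping layer name -> ndarray
    client_sizes:  number of local training samples per client
    """
    total = sum(client_sizes)
    agg = {}
    for name in client_params[0]:
        # Each client's contribution is weighted by its share of the data
        agg[name] = sum((n / total) * p[name]
                        for p, n in zip(client_params, client_sizes))
    return agg
```

The server then broadcasts the aggregated parameters back to the clients, which in SNFL resume local training under the ANN architecture.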

