SYNERGISTIC NEUROMORPHIC FEDERATED LEARNING WITH ANN-SNN CONVERSION FOR PRIVACY PROTECTION

Abstract

Federated Learning (FL) has been widely studied in response to growing public concerns over data privacy, since only model parameters, rather than private data, are communicated. However, recent studies have challenged the privacy guarantees of FL, showing that private data can be leaked from the communicated gradients or parameter updates. In this paper, we propose Synergistic Neuromorphic Federated Learning (SNFL), a framework that enhances privacy during FL. Before uploading the client model's updates, SNFL first converts each client's Artificial Neural Network (ANN) into a Spiking Neural Network (SNN) via calibration algorithms. Because this conversion loses almost no accuracy while effectively encrypting the client model's parameters, SNFL obtains a more performant model with strong privacy. After aggregating the clients' SNN parameters, the server distributes the aggregated parameters back to the clients; this design allows training to resume smoothly under the ANN architecture. The proposed framework is shown to be private, to introduce only lightweight overhead, and to yield notable performance gains. Extensive experiments on diverse datasets demonstrate the efficacy and practicality of our method. In most of our IID and moderately Non-IID scenarios, SNFL significantly improves model performance; for instance, it improves the accuracy of FedAvg on Tiny-ImageNet by 13.79%. Moreover, with SNFL the original image cannot be reconstructed even after 280 attack iterations, whereas with FedAvg it can be reconstructed after just 70 iterations.
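The round structure summarized above can be sketched minimally as follows. This is not the paper's implementation: the `ann_to_snn` calibration here is a hypothetical stand-in (simple scaling by a per-client maximum activation, a common ANN-SNN conversion heuristic), and `server_aggregate` is plain FedAvg-style averaging of the converted parameters.

```python
# Minimal sketch of the SNFL round structure (illustrative assumptions only):
# clients convert ANN weights to an SNN-side representation via a hypothetical
# calibration, the server averages the converted parameters, and each client
# maps the aggregate back to resume training under the ANN architecture.
import numpy as np

def ann_to_snn(weights, max_activation):
    """Hypothetical calibration: scale weights by the layer's peak activation."""
    return weights / max_activation

def snn_to_ann(weights, max_activation):
    """Inverse mapping so a client can resume ANN training."""
    return weights * max_activation

def server_aggregate(client_weights):
    """FedAvg-style averaging of the uploaded (SNN-side) parameters."""
    return np.mean(client_weights, axis=0)

# two clients with toy single-layer weights; each keeps its own scale locally
rng = np.random.default_rng(0)
ann_weights = [rng.normal(size=(4, 4)) for _ in range(2)]
scales = [2.0, 4.0]  # calibration information never sent to the server

uploaded = [ann_to_snn(w, s) for w, s in zip(ann_weights, scales)]
global_snn = server_aggregate(np.stack(uploaded))

# each client converts the aggregate back with its private scale
local_ann = [snn_to_ann(global_snn, s) for s in scales]
print(global_snn.shape)  # (4, 4)
```

Keeping the calibration scales local is what creates the client-server asymmetry: the server only ever sees the converted parameters.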

1. INTRODUCTION

Recent advancements in machine learning, particularly deep learning, rely heavily on large datasets to achieve decent inference performance. Due to the growing demand for data, it is now often necessary to feed models with information from multiple entities. However, the transfer, exchange, and trade of data among entities may violate the General Data Protection Regulation (GDPR) and incur penalties under the Act (Wachter, 2018), posing an unprecedented challenge to the field of machine learning.

Federated learning (McMahan et al., 2017) has therefore emerged and flourished as a privacy-preserving approach that trains a shared model collaboratively while keeping data local. Although the data are stored locally, clients participating in federated learning must transmit their local gradients to the server to update the shared model. Recent studies (Zhu & Han, 2020; Zhao et al., 2020; Huang et al., 2021) have revealed that sensitive local data can be leaked from these transmitted gradients via model inversion attacks (Zhu & Han, 2020). To defend against such attacks and prevent privacy leakage, defense strategies including differential privacy (Geyer et al., 2017), secure multi-party computation (Byrd & Polychroniadou, 2020), and MixUp (Zhang et al., 2017) have been developed. In exchange for privacy, however, these methods incur either severe computational overhead (Hardy et al., 2017) or unavoidable accuracy losses (Kim et al., 2021).

What is the intrinsic source of privacy in these defense strategies? From an information-theoretic perspective, it is the asymmetry of entropy between the encryption and decryption steps for clients and the server, when part of the encryption information is kept only locally. From this standpoint, as long as an encryption method maintains such asymmetry between clients and the server while still allowing effective aggregation, it would be feasible to improve the privacy of FL.
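The gradient-leakage threat discussed above can be made concrete with a toy example (not the paper's experiment, and simpler than the iterative model inversion attack of Zhu & Han (2020)): for a fully-connected layer with bias under a scalar squared loss, the private input can be recovered in closed form from an uploaded gradient, because the weight and bias gradients share the same residual factor.

```python
# Toy illustration of gradient leakage from a linear model y = w.x + b with
# loss 0.5*(y - t)^2: the uploaded gradients are grad_w = r * x and
# grad_b = r, where r = y - t, so the private input x follows by division.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)       # shared model weights (known to the attacker)
b = 0.1                      # shared bias
x_true = rng.normal(size=5)  # private client input
t = 1.0                      # private label

r = w @ x_true + b - t       # residual of the squared loss
grad_w = r * x_true          # gradients the client would upload
grad_b = r

# attacker: divide out the shared residual factor to reconstruct the input
x_recovered = grad_w / grad_b
print(np.allclose(x_recovered, x_true))  # True
```

Deeper models require iterative gradient matching rather than this closed form, but the example shows why transmitted gradients alone are not privacy-neutral.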

