ON THE IMPORTANCE AND APPLICABILITY OF PRE-TRAINING FOR FEDERATED LEARNING

Abstract

Pre-training is prevalent in today's deep learning as a way to improve a learned model's performance. In the literature on federated learning (FL), however, neural networks are mostly initialized with random weights. This motivates us to conduct a systematic study of pre-training for FL. Across multiple visual recognition benchmarks, we found that pre-training can not only improve FL, but also close its accuracy gap to its centralized learning counterpart, especially in the challenging cases of non-IID clients' data. To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data, or even with clients' data in a decentralized manner, and found that both can already improve FL notably. Interestingly, many of the techniques we explore are complementary to each other and further boost performance when combined; we view this as a critical result toward scaling up deep FL for real-world applications. We conclude our paper with an attempt to understand the effect of pre-training on FL. We found that pre-training enables the global models learned under different clients' data conditions to converge to the same loss basin, and makes global aggregation in FL more stable. Nevertheless, pre-training does not appear to alleviate local model drifting, a fundamental problem in FL under non-IID data.

1. INTRODUCTION

The increasing attention to data privacy and protection has attracted significant research interest in federated learning (FL) (Li et al., 2020a; Kairouz et al., 2019). In FL, data are kept separate by individual clients, and the goal is to learn a "global" model in a decentralized way. Specifically, one hopes to obtain a model whose accuracy is as good as if it were trained on centralized data. FEDAVG (McMahan et al., 2017) is arguably the most widely used FL algorithm; it assumes that every client is connected to a server. FEDAVG trains the global model iteratively, alternating between parallel local model training at the clients and global model aggregation at the server. FEDAVG is easy to implement and enjoys theoretical guarantees of convergence (Zhou & Cong, 2017; Stich, 2019; Haddadpour & Mahdavi, 2019; Li et al., 2020c; Zhao et al., 2018). Its performance, however, can degrade drastically when clients' data are not IID, and clients' data, being collected individually, are often inherently non-IID. That is, the accuracy of the federally learned global model can be much lower than that of its counterpart trained on centralized data. To alleviate this issue, existing literature has explored better approaches for local training (Li et al., 2020b; Karimireddy et al., 2020b; Acar et al., 2021) and global aggregation (Wang et al., 2020a; Hsu et al., 2019; Chen & Chao, 2021). In this paper, we explore a different and rarely studied dimension in FL: model initialization. In the FL literature, neural networks are mostly initialized with random weights. Yet in centralized learning, initializing models with weights pre-trained on large-scale datasets (Hendrycks et al., 2019; Devlin et al., 2018) has become prevalent, as it has been shown to improve accuracy, generalizability, robustness, etc.
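The FEDAVG loop described above (parallel local training, then server-side aggregation) can be sketched on a deliberately tiny toy problem. This is a hypothetical illustration, not the paper's implementation: the "model" is a single scalar weight, each client holds a few target values, and local training is one gradient step on a squared-error objective.

```python
# Minimal FedAvg sketch (illustrative only): each round, every client
# trains locally from the current global weight, and the server averages
# the resulting client weights, weighted by client dataset size.

def local_update(w, client_data, lr=0.1):
    """One gradient step on a toy squared-error objective 0.5*(w - y)^2,
    averaged over the client's targets y."""
    grad = sum(w - y for y in client_data) / len(client_data)
    return w - lr * grad

def fedavg(init_w, clients, rounds=50):
    """clients: list of per-client target lists (non-IID when they differ)."""
    global_w = init_w
    for _ in range(rounds):
        # Parallel local model training at the clients.
        local_ws = [local_update(global_w, data) for data in clients]
        # Global model aggregation at the server, weighted by data size.
        total = sum(len(d) for d in clients)
        global_w = sum(w * len(d) for w, d in zip(local_ws, clients)) / total
    return global_w
```

With one local step per round, this toy version converges to the centralized optimum (the size-weighted mean of all targets); with more local steps and heterogeneous clients, the local models drift apart between aggregations, which is the non-IID failure mode the paper discusses.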
We are thus interested in 1) whether model pre-training is applicable in the context of FL and 2) whether it can likewise improve FEDAVG, especially in alleviating the non-IID issue. We conduct the first systematic study of these aspects, using visual recognition as the running example. We consider multiple application scenarios, aiming to make our study comprehensive. First, assuming pre-trained weights (e.g., on ImageNet (Deng et al., 2009)) are available, we systematically compare FEDAVG initialized with random and with pre-trained weights, under different FL settings
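The random-versus-pre-trained comparison can be mimicked abstractly in a self-contained toy (a hypothetical sketch, not the paper's experimental setup): a scalar "model" is federally averaged over two non-IID clients, starting either from a distant random weight or from a "pre-trained" weight that already sits near the centralized optimum.

```python
# Toy comparison of initializations for federated averaging.
# The "pre-trained" init is simulated as a weight close to the
# centralized optimum; the "random" init starts far away.

def fedavg_round(w, clients, lr=0.1):
    # One local gradient step per client on a squared-error toy
    # objective, followed by size-weighted averaging at the server.
    local_ws = [w - lr * (w - sum(d) / len(d)) for d in clients]
    n = sum(len(d) for d in clients)
    return sum(lw * len(d) for lw, d in zip(local_ws, clients)) / n

def rounds_until(w, clients, target, tol=0.1, max_rounds=1000):
    """Number of communication rounds until |w - target| < tol."""
    for r in range(1, max_rounds + 1):
        w = fedavg_round(w, clients)
        if abs(w - target) < tol:
            return r
    return max_rounds
```

In this toy, an initialization near the optimum reaches the target in far fewer communication rounds than a distant one, a crude analogue of the accuracy and convergence gains the study measures for ImageNet-pre-trained initialization.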

