ACHIEVING LINEAR SPEEDUP WITH PARTIAL WORKER PARTICIPATION IN NON-IID FEDERATED LEARNING

Abstract

Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency, and linear speedup for convergence in training (i.e., convergence performance improves linearly with the number of workers). However, existing studies on linear speedup for convergence are limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. So far, it remains an open question whether the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL. In this paper, we show that the answer is affirmative. Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate O(1/√(mKT) + 1/T) for full worker participation and a convergence rate O(√K/√(nT) + 1/T) for partial worker participation, where K is the number of local steps, T is the total number of communication rounds, m is the total number of workers, and n is the number of workers participating in each communication round under partial worker participation. Our results also reveal that the local steps in FL can help convergence, and show that the maximum number of local steps can be improved to T/m under full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.
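The sense in which these rates yield a "linear speedup" can be made precise by a standard calculation (this derivation is illustrative and not taken verbatim from the paper): to reach an ε-accurate stationary point under full worker participation, one bounds the dominant term of the rate by ε and solves for T.

```latex
% Full-participation rate bounded by the target accuracy \epsilon:
\mathcal{O}\!\left(\frac{1}{\sqrt{mKT}} + \frac{1}{T}\right) \le \epsilon
\quad\Longrightarrow\quad
T = \mathcal{O}\!\left(\frac{1}{mK\epsilon^{2}}\right)
\text{ for sufficiently small } \epsilon,
```

so the number of communication rounds required decreases linearly in the number of workers m (and in the number of local steps K), which is exactly the linear-speedup property discussed above.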

1. INTRODUCTION

Federated Learning (FL) is a distributed machine learning paradigm that leverages a large number of workers to collaboratively learn a model with decentralized data under the coordination of a centralized server. Formally, the goal of FL is to solve an optimization problem that can be decomposed as:

min_{x∈R^d} f(x) := (1/m) Σ_{i=1}^{m} F_i(x),

where F_i(x) := E_{ξ_i∼D_i}[F_i(x, ξ_i)] is the local (non-convex) loss function associated with a local data distribution D_i, and m is the number of workers. FL allows a large number of workers (such as edge devices) to participate flexibly without sharing data, which helps protect data privacy. However, it also introduces two unique challenges unseen in traditional distributed learning algorithms typically used in large data centers:

• Non-independent-identically-distributed (non-i.i.d.) datasets across workers (data heterogeneity): In conventional distributed learning in data centers, the distribution of each worker's local dataset can usually be assumed to be i.i.d., i.e., D_i = D, ∀i ∈ {1, ..., m}. Unfortunately, this assumption rarely holds for FL, since data are generated locally at the workers based on their own circumstances, i.e., D_i ≠ D_j for i ≠ j. It will be seen later that the non-i.i.d. assumption imposes significant challenges on algorithm design for FL and its performance analysis.
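The FL objective above and the FedAvg algorithm with two-sided learning rates (a local rate for worker SGD steps and a server rate for the aggregated update) can be sketched on a toy problem. The sketch below is a minimal illustration, not the paper's implementation: each worker i holds a scalar quadratic F_i(x) = ½(x − c_i)², with distinct optima c_i standing in for non-i.i.d. data, and each round samples n of the m workers (partial participation). All names and hyperparameter values are illustrative assumptions.

```python
import random

def fedavg(centers, n, K, T, eta=0.2, eta_l=0.1, seed=0):
    """Toy FedAvg with two-sided learning rates and partial participation.

    centers: per-worker optima c_i (non-i.i.d. local objectives)
    n: workers sampled per round, K: local steps, T: communication rounds
    eta: server learning rate, eta_l: local (worker) learning rate
    """
    rng = random.Random(seed)
    m = len(centers)
    x = 0.0  # global model (a scalar, for simplicity)
    for _ in range(T):
        sampled = rng.sample(range(m), n)  # partial worker participation
        deltas = []
        for i in sampled:
            y = x
            for _ in range(K):           # K local SGD steps on F_i
                grad = y - centers[i]    # ∇F_i(y) for the local quadratic
                y -= eta_l * grad
            deltas.append(y - x)         # worker i's model change this round
        # Server applies the averaged update, scaled by the server rate.
        x += eta * sum(deltas) / n
    return x

# Non-i.i.d. setup: 10 workers with distinct local optima 0, 1, ..., 9;
# the global optimum of f(x) = (1/m) Σ F_i(x) is their mean, 4.5.
centers = [float(i) for i in range(10)]
x_hat = fedavg(centers, n=5, K=5, T=500)
print(x_hat)  # should land near 4.5, up to partial-participation noise
```

Note that with partial participation the iterates fluctuate around the global optimum, since each round only sees a random subset of the heterogeneous local objectives; this sampling noise is the source of the slower O(√K/√(nT)) term in the partial-participation rate.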

