New Bounds For Distributed Mean Estimation and Variance Reduction

Abstract

We consider the problem of distributed mean estimation (DME), in which n machines are each given a local d-dimensional vector x_v ∈ ℝ^d, and must cooperate to estimate the mean of their inputs µ = (1/n) ∑_{v=1}^{n} x_v, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output µ, but µ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication-to-error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under the ℓ2-norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in d. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches.

1. Introduction

Several problems in distributed machine learning and optimization can be reduced to variants of the distributed mean estimation problem, in which n machines must cooperate to jointly estimate the mean of their d-dimensional inputs µ = (1/n) ∑_{v=1}^{n} x_v as closely as possible, while minimizing communication. In particular, this construct is often used for distributed variance reduction: here, each machine receives as input an independent probabilistic estimate of a d-dimensional vector ∇, and the aim is for all machines to output a common estimate of ∇ with lower variance than the individual inputs, while minimizing communication. Without any communication restrictions, the ideal output would be the mean of all machines' inputs. While variants of these fundamental problems have been considered since seminal work by Tsitsiklis & Luo (1987), the task has seen renewed attention recently in the context of distributed machine learning.
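To make the setting concrete, the following is a minimal sketch of the DME pipeline using a standard norm-scaled stochastic quantizer as the communication-reduction step. This is a baseline for illustration only, not the lattice-based scheme of this paper; the function names, the number of quantization levels, and the toy instance are assumptions introduced here. The sketch exhibits the regime the paper targets: inputs tightly clustered around a mean µ of large norm, where a quantizer whose grid scales with ‖x_v‖ incurs error proportional to the input norm rather than to the spread of the inputs.

```python
import numpy as np

def stochastic_quantize(x, levels=16):
    """Norm-scaled stochastic quantization (a QSGD-style baseline, not the
    lattice scheme of this paper): each coordinate magnitude is randomly
    rounded to one of `levels` grid points on [0, ||x||_2], so the expected
    quantization error grows with the input norm."""
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    scaled = np.abs(x) / norm * (levels - 1)   # map |x_i| / ||x||_2 into [0, levels-1]
    lower = np.floor(scaled)
    prob_up = scaled - lower                   # randomized rounding keeps the estimate unbiased
    q = lower + (np.random.rand(x.size) < prob_up)
    return np.sign(x) * q / (levels - 1) * norm

def distributed_mean_estimate(inputs):
    """Each machine quantizes and transmits its local vector; the mean is
    estimated by averaging the quantized vectors."""
    return np.mean([stochastic_quantize(x) for x in inputs], axis=0)

# Toy instance in the regime the paper targets: inputs concentrated around a
# mean mu whose norm is large, so norm-dependent error bounds are very loose.
d, n = 100, 8
mu = 1000.0 * np.ones(d)
inputs = [mu + np.random.randn(d) for _ in range(n)]
true_mean = np.mean(inputs, axis=0)
estimate = distributed_mean_estimate(inputs)
print("estimation error:", np.linalg.norm(estimate - true_mean))
print("spread of inputs:", max(np.linalg.norm(x - true_mean) for x in inputs))
```

On such an instance the baseline's quantization error is far larger than the spread of the inputs around their mean; the contribution of this paper is a quantization scheme whose error depends only on that spread, not on the norm of µ.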

