DOES LEARNING FROM DECENTRALIZED NON-IID UNLABELED DATA BENEFIT FROM SELF SUPERVISION?

Abstract

The success of machine learning relies heavily on massive amounts of data, which are usually generated and stored across a range of diverse and distributed data sources. Decentralized learning has thus been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems. Unfortunately, the majority of real-world data are unlabeled and can be highly heterogeneous across sources. In this work, we carefully study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL), specifically contrastive visual representation learning. We study the effectiveness of a range of contrastive learning algorithms under a decentralized learning setting, on relatively large-scale datasets including ImageNet-100, MS-COCO, and a new real-world robotic warehouse dataset. Our experiments show that the decentralized SSL (Dec-SSL) approach is robust to the heterogeneity of decentralized datasets, and learns useful representations for object classification, detection, and segmentation tasks, even when combined with the simple and standard decentralized learning algorithm of Federated Averaging (FedAvg). This robustness makes it possible to significantly reduce communication costs and the participation ratio of data sources, with only minimal drops in performance. Interestingly, using the same amount of data, the representation learned by Dec-SSL can not only perform on par with that learned by centralized SSL, which requires excessive communication and data storage costs, but also sometimes outperform representations extracted from decentralized SL, which requires extra knowledge about the data labels.
Finally, we provide theoretical insights into why data heterogeneity is less of a concern for Dec-SSL objectives, and introduce feature alignment and clustering techniques to develop a new Dec-SSL algorithm that further improves the performance in the face of highly non-IID data. Our study presents positive evidence to embrace unlabeled data in decentralized learning, and we hope to provide new insights into whether and why decentralized SSL is effective and/or even advantageous.

1. INTRODUCTION

The success of machine learning hinges heavily on the access to large-scale and diverse datasets. In practice, most data are generated from different locations, devices, and embodied agents, and stored in a distributed fashion. Examples include a fleet of self-driving cars collecting a massive amount of streaming images under various road and weather conditions during everyday driving, or individuals using mobile devices to take photos of objects and scenery all over the world. Besides being large-scale, these datasets have two salient features: they are heterogeneous across data sources, and mostly unlabeled. For instance, images of road conditions, which are expensive to label, vary across cars driving on highways vs. rural areas, and under sunny vs. snowy weather conditions (Figure 19). Methods that can make the best use of these large-scale distributed datasets can significantly advance the performance of current machine learning algorithms and systems. This has thus motivated a surge of research in decentralized learning / learning from decentralized data (Konečnỳ et al., 2016; Hsieh et al., 2017; McMahan et al., 2017; Kairouz et al., 2021; Nedic, 2020), where usually a global model is trained on the distributed datasets using communication between the local data sources and a centralized server, or sometimes even only among the local data sources. The goal is typically to reduce or eliminate the exchanges of local raw data to save communication costs and protect data privacy. How to mitigate the effect of data heterogeneity remains one of the most important research questions in this area (Zhao et al., 2018; Hsieh et al., 2020; Karimireddy et al., 2020; Ghosh et al., 2020; Li et al., 2021a), as it can severely degrade the performance of decentralized learning. Moreover, most existing decentralized learning studies have focused on supervised learning (SL) problems that require data labels (McMahan et al., 2017; Jeong et al., 2020; Hsieh et al., 2020).
Hence, it remains unclear whether and how decentralized learning can benefit from large-scale, heterogeneous, and especially unlabeled datasets typically encountered in the real world. On the other hand, people have developed effective methods of learning purely from unlabeled data and demonstrated impressive results. Self-supervised learning (SSL), a technique that learns representations by generating supervision signals from the data itself, has unleashed the power of unlabeled data and achieved tremendous successes for a wide range of downstream tasks in computer vision (He et al., 2020; Chen et al., 2020; He et al., 2021b), natural language processing (Devlin et al., 2018; Sarzynska-Wawer et al., 2021), and embodied intelligence (Sermanet et al., 2018; Florence et al., 2018). These SSL algorithms, however, are usually trained in a centralized fashion by pooling all the unlabeled data together, without accounting for the heterogeneous nature of the decentralized data sources. Very recently, there have been a few contemporaneous/concurrent attempts (He et al., 2021a; Zhuang et al., 2021; 2022; Lu et al., 2022; Makhija et al., 2022) that bridged unsupervised/self-supervised learning and decentralized learning, with a focus on designing better algorithms that mitigate the data heterogeneity issue. In contrast, we revisit this new paradigm and ask the question: Does learning from decentralized non-IID unlabeled data really benefit from SSL? We focus on understanding the use of SSL in decentralized learning when handling unlabeled data. We aim to answer: whether and when decentralized SSL (Dec-SSL) is effective (even when combined with simple, off-the-shelf decentralized learning algorithms, e.g., FedAvg (McMahan et al., 2017)); what the unique inherent properties of Dec-SSL are compared to its SL counterpart; and how these properties play a role in decentralized learning, especially with highly heterogeneous data.
We also aim to validate our observations on large-scale and practical datasets. We defer a more detailed comparison with these most related works to §A. In this paper, we show that, unlike in decentralized (supervised) learning, data heterogeneity can be less of a concern in decentralized SSL, with both empirical and theoretical evidence. This leads to more communication-efficient and robust decentralized learning schemes, which can sometimes even outperform their supervised counterparts that assume the availability of label information. Among the first studies to bridge decentralized learning and SSL, our study provides positive evidence to embrace unlabeled data in decentralized learning, and provides new insights into this setting. We detail our contributions as follows. Contributions. (i) We show that decentralized SSL, specifically contrastive visual representation learning, is a viable learning paradigm for handling relatively large-scale unlabeled datasets, even when combined with the simple FedAvg algorithm. Moreover, we provide both experimental evidence and theoretical insights that decentralized SSL can be inherently robust to data heterogeneity across different data sources. This allows more local updates, and can significantly improve communication efficiency in decentralized learning. (ii) We provide further empirical and theoretical evidence that, even when labels are available and decentralized supervised learning (and the associated representation learning) is allowed, Dec-SSL still stands out in the face of highly non-IID data. (iii) To further improve the performance of Dec-SSL, we design a new Dec-SSL algorithm, FeatARC, using an iterative feature alignment and clustering procedure. Finally, we validate our hypotheses and algorithm on practical and large-scale data and task domains, including a new real-world robotic warehouse dataset.
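To make the contrastive visual representation learning objective above concrete, the following is a minimal NumPy sketch of an InfoNCE / NT-Xent-style loss of the kind used by SimCLR-family methods. This is an illustrative sketch, not the authors' implementation: the function name `info_nce`, the temperature value, and the toy embeddings are assumptions, and a real pipeline would obtain `z1`, `z2` from an encoder applied to two augmented views of each image.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Simplified InfoNCE / NT-Xent contrastive loss.

    z1[i] and z2[i] are embeddings of two augmented views of sample i.
    Each positive pair (i, i) is contrasted against the other samples
    in the batch, treated as an N-way classification problem.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) pairwise similarities
    # Log-softmax over each row; positives lie on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

As a sanity check, feeding two identical sets of embeddings (perfectly aligned views) should give a much lower loss than pairing embeddings with unrelated random vectors.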

2. PRELIMINARIES AND OVERVIEW

Consider a decentralized learning setting with K different data sources, which might correspond to different devices, machines, embodied agents, or datasets/users that can generate and store data locally. The goal is to collaboratively solve a learning problem by exploiting the decentralized data from all data sources. More specifically, suppose each data source $k \in [K]$ has a local dataset $\mathcal{D}_k = \{x_{k,i}\}_{i=1}^{|\mathcal{D}_k|}$, where $x_{k,i} \in \mathcal{X} \subseteq \mathbb{R}^d$ are identically and independently distributed (IID) samples
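The FedAvg scheme referenced throughout can be sketched against this setup: in each communication round, every source $k$ runs a few local update steps on its dataset $\mathcal{D}_k$, and a server averages the resulting models weighted by $|\mathcal{D}_k|$. The sketch below is a minimal NumPy illustration under stated assumptions: the linear least-squares local objective stands in for the actual SSL/SL losses, and the names `local_update` and `fedavg_round` and the hyperparameters are illustrative, not from the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.05, steps=5):
    """A few local gradient steps on one data source.

    Illustrative local objective: least-squares fit X @ w ~= y,
    a stand-in for an arbitrary local SSL/SL training loss.
    """
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, local_datasets, lr=0.05, steps=5):
    """One FedAvg communication round: each source k trains locally
    from the shared global model, then the server averages the local
    models weighted by the local dataset sizes |D_k|."""
    sizes = np.array([len(y) for _, y in local_datasets], dtype=float)
    local_models = [local_update(global_w, d, lr, steps)
                    for d in local_datasets]
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, local_models))
```

Running several such rounds over IID local datasets drawn from the same underlying model drives the global weights toward the shared optimum; the non-IID case studied in the paper corresponds to the local datasets having different distributions.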



¹ Code is available at https://github.com/liruiw/Dec-SSL
² Hereafter, we often use decentralized learning as a shorthand for learning from decentralized data.

