EFFICIENT ONLINE AUGMENTATION WITH RANGE LEARNING

Abstract

State-of-the-art automatic augmentation methods (e.g., AutoAugment and RandAugment) for visual recognition tasks diversify training data using a large set of augmentation operations. The magnitude of many augmentation operations (e.g., brightness and contrast) varies over a continuous range. Therefore, to make search computationally tractable, these methods use fixed and manually-defined magnitude ranges for each operation, which may lead to sub-optimal policies. To answer the open question on the importance of magnitude ranges for each augmentation operation, we introduce RangeAugment, which efficiently learns the range of magnitudes for individual as well as composite augmentation operations. RangeAugment uses an auxiliary loss based on image similarity to control the range of magnitudes of augmentation operations. As a result, RangeAugment has a single scalar search parameter, the target image similarity, which we optimize via linear search. RangeAugment integrates seamlessly with any model and learns model- and task-specific augmentation policies. With extensive experiments on the ImageNet dataset across different networks, we show that RangeAugment achieves performance competitive with state-of-the-art automatic augmentation methods while using 4-5 times fewer augmentation operations. Experimental results on semantic segmentation and contrastive learning further demonstrate RangeAugment's effectiveness.

1. INTRODUCTION

Data augmentation is a widely used regularization method for training deep neural networks (LeCun et al., 1998; Krizhevsky et al., 2012; Szegedy et al., 2015; Perez & Wang, 2017; Steiner et al., 2021). These methods apply carefully designed augmentation (or image transformation) operations (e.g., color transforms) to increase the quantity and diversity of training data, which in turn helps improve the generalization ability of models. However, they rely heavily on expert knowledge and extensive trial-and-error experimentation. Recently, automatic augmentation methods have gained attention because of their ability to search for augmentation policies (e.g., combinations of different augmentation operations) that maximize validation performance (Cubuk et al., 2019; 2020; Lim et al., 2019; Hataya et al., 2020; Zheng et al., 2021). In general, each augmentation operation (e.g., brightness and contrast) has two parameters: (1) the probability of applying it and (2) its range of magnitudes. These methods take a set of augmentation operations with a fixed (often discretized) range of magnitudes as input, and produce a policy that applies some or all augmentation operations along with their parameters (Fig. 1). As an example, AutoAugment (Cubuk et al., 2019) discretizes the range of magnitudes and probabilities of 16 augmentation operations, and searches for sub-policies (i.e., compositions of two augmentation operations along with their probabilities and magnitudes) in a space of about 10^32 possible combinations. These methods empirically show that automatic augmentation policies help improve the performance of downstream networks. For example, AutoAugment improves the validation top-1 accuracy of ResNet-50 (He et al., 2016) by about 1.3% on the ImageNet dataset (Deng et al., 2009). In other words, these methods underline the importance of automatically composing augmentation operations to improve validation performance.
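To make the (probability, magnitude) parameterization concrete, the following is a minimal illustrative sketch of applying an AutoAugment-style sub-policy of two operations. All names are hypothetical, and the pixel-level "operations" are simplified stand-ins for real image transforms, not any method's actual implementation:

```python
import random

# Hypothetical stand-ins for augmentation operations, acting on a flat list
# of pixel intensities in [0, 255] (illustrative only, not a real transform).
def brightness(pixels, magnitude):
    # Scale all intensities by the magnitude, clipping at 255.
    return [min(255.0, p * magnitude) for p in pixels]

def contrast(pixels, magnitude):
    # Push intensities toward (magnitude < 1) or away from (magnitude > 1)
    # their mean, clipping to [0, 255].
    mean = sum(pixels) / len(pixels)
    return [min(255.0, max(0.0, mean + (p - mean) * magnitude)) for p in pixels]

# A sub-policy: (operation, probability, magnitude) triples, where the
# magnitude would be drawn from a fixed, discretized range during search.
SUB_POLICY = [
    (brightness, 0.8, 1.4),  # apply brightness with p=0.8 at magnitude 1.4
    (contrast, 0.6, 0.7),    # apply contrast with p=0.6 at magnitude 0.7
]

def apply_sub_policy(pixels, sub_policy, rng=random):
    # Each operation fires independently with its own probability.
    for op, prob, magnitude in sub_policy:
        if rng.random() < prob:
            pixels = op(pixels, magnitude)
    return pixels
```

The search space size quoted above arises because every sub-policy slot chooses an operation, a discretized probability, and a discretized magnitude, and these choices multiply across slots.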
However, policies generated by these methods may be sub-optimal because they rely on hand-crafted magnitude ranges. The importance of magnitude ranges for each augmentation operation remains an open question. An obstacle in

