WEMIX: HOW TO BETTER UTILIZE DATA AUGMENTATION

Abstract

Data augmentation is a widely used training technique in deep learning to improve the generalization ability of networks. Despite many encouraging results, several recent studies have pointed out limitations of the conventional data augmentation scheme in certain scenarios, calling for a better theoretical understanding of data augmentation. In this work, we develop a comprehensive analysis that reveals the pros and cons of data augmentation. The main limitation of data augmentation arises from data bias, i.e., the augmented data distribution can be quite different from the original one. This data bias leads to suboptimal performance of existing data augmentation methods. To address this, we develop two novel algorithms, termed "AugDrop" and "MixLoss", that correct the data bias in data augmentation. Our theoretical analysis shows that both algorithms are guaranteed to improve the effect of data augmentation through bias correction, which is further validated by our empirical studies. Finally, we propose a generic algorithm, "WeMix", that combines AugDrop and MixLoss, whose effectiveness is demonstrated by extensive empirical evaluations.

1. INTRODUCTION

Data augmentation (Baird, 1992; Schmidhuber, 2015) has been key to the success of deep learning in image classification (He et al., 2019), and is becoming increasingly common in other tasks such as natural language processing (Zhang et al., 2015) and object detection (Zoph et al., 2019). Data augmentation expands the training set by generating virtual instances through random transformations of the original ones, which alleviates overfitting (Shorten & Khoshgoftaar, 2019) when training large deep neural networks. Despite many encouraging results, data augmentation does not always improve generalization error (Min et al., 2020; Raghunathan et al., 2020). In particular, Raghunathan et al. (2020) showed that training on augmented data leads to a smaller robust error but potentially a larger standard error. Therefore, it is critical to answer the following two questions before applying data augmentation in deep learning:

• When will deep models benefit from data augmentation?

• How can augmented data be better leveraged during training?

Several previous works (Raghunathan et al., 2020; Wu et al., 2020; Min et al., 2020) tried to address these questions, but their analysis is limited to specific problems such as linear ridge regression and therefore may not be applicable to deep learning. In this work, we aim to answer the two questions from a theoretical perspective under a more general non-convex setting. We address the first question in a general form that covers applications in deep learning. For the second question, we develop new approaches that are provably more effective than the conventional data augmentation approaches. Most data augmentation operations alter the data distribution during training. This introduces a data distribution bias (we simply use "data bias" in the rest of this paper) between the augmented data and the original data, which may make it difficult to fully leverage the augmented data.
To be more concrete, let us consider label-mixing augmentation (e.g., mixup (Zhang et al., 2018; Tokozume et al., 2018)). Suppose we have $n$ original data $D = \{(x_i, y_i), i = 1, \ldots, n\}$, where the input-label pair $(x_i, y_i)$ follows a distribution $P_{xy} = (P_x, P_y(\cdot|x))$, $P_x$ is the marginal distribution of the inputs and $P_y(\cdot|x)$ is the conditional distribution of the labels given the inputs; we generate $m$ augmented data $\widetilde{D} = \{(\tilde{x}_i, \tilde{y}_i), i = 1, \ldots, m\}$, where $(\tilde{x}_i, \tilde{y}_i) \sim \widetilde{P}_{\tilde{x}\tilde{y}} = (\widetilde{P}_{\tilde{x}}, \widetilde{P}_{\tilde{y}}(\cdot|\tilde{x}))$,

