NEGATIVE DATA AUGMENTATION

Abstract

Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks.

1. INTRODUCTION

Data augmentation strategies for synthesizing new data in a way that is consistent with an underlying task are extremely effective in both supervised and unsupervised learning (Oord et al., 2018; Zhang et al., 2016; Noroozi & Favaro, 2016; Asano et al., 2019). Because they operate at the level of samples, they can be combined with most learning algorithms. They allow for the incorporation of prior knowledge (inductive bias) about properties of typical samples from the underlying data distribution (Jaiswal et al., 2018; Antoniou et al., 2017), e.g., by leveraging invariances to produce additional "positive" examples of how a task should be solved. To enable users to specify an even wider range of inductive biases, we propose to leverage an alternative and complementary source of prior knowledge that specifies how a task should not be solved. We formalize this intuition by assuming access to a procedure for generating samples that are guaranteed to lie outside the support of the data distribution, which we call Negative Data Augmentation (NDA). Intuitively, negative out-of-distribution (OOD) samples are a useful inductive bias because they provide information about the support of the data distribution to be learned by the model. For example, in a density estimation problem we can bias the model to avoid placing any probability mass in regions that we know a priori should have zero probability. This can be an effective prior if the negative samples cover a sufficiently large area. The best NDA candidates are those that expose common pitfalls of existing models, such as prioritizing local structure over global structure (Geirhos et al., 2018); this motivates us to consider known transformations from the literature that intentionally destroy the spatial coherence of an image (Noroozi & Favaro, 2016; DeVries & Taylor, 2017; Yun et al., 2019), such as Jigsaw transforms.
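A Jigsaw-style negative augmentation of this kind can be sketched as follows. This is an illustrative NumPy implementation, not the exact transform used in the experiments; the grid size is a free parameter.

```python
import numpy as np

def jigsaw(image, grid=2, rng=None):
    """Negative augmentation: shuffle non-overlapping patches of an image.

    The output preserves local texture statistics but destroys global
    spatial coherence, so it should fall outside the support of the
    natural-image distribution. Illustrative sketch only.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    # Extract the grid x grid patches in row-major order.
    patches = [image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    out = image.copy()
    # Write the patches back in a random order.
    for k, idx in enumerate(order):
        i, j = divmod(k, grid)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[idx]
    return out
```

Because the transform only permutes pixels, the output has the same shape and pixel histogram as the input; only the global arrangement changes.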
Building on this intuition, we introduce a new GAN training objective where we use NDA as an additional source of fake data for the discriminator, as shown in Fig. 1. Theoretically, we show that if the NDA assumption is valid, optimizing this objective still recovers the data distribution in the limit of infinite data. However, in the finite data regime, there is a need to generalize beyond the empirical distribution (Zhao et al., 2018). By explicitly providing the discriminator with samples we want to avoid, we bias the generator away from undesirable samples, thus improving generation quality. With appropriately chosen NDA strategies, we obtain superior empirical performance on a variety of tasks, at almost no additional computational cost. For generative modeling, models trained with NDA achieve better image generation, image translation, and anomaly detection performance than the same models trained without NDA. Similar gains are observed in representation learning for images and videos on downstream tasks such as image classification, object detection, and action recognition. These results suggest that NDA has much potential to improve a variety of self-supervised learning techniques.
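The idea of feeding NDA samples to the discriminator as extra fakes can be sketched with a standard non-saturating GAN loss. This is a simplified NumPy sketch of the expectation the discriminator minimizes; the mixing weight `lam` and the use of separate logit batches are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def nda_discriminator_loss(d_real, d_fake, d_nda, lam=0.5):
    """Non-saturating discriminator loss with NDA samples as extra fakes.

    d_real, d_fake, d_nda: discriminator logits on real data, generator
    samples, and negative augmentations. The fake term is a lam-weighted
    mixture of generator and NDA samples, i.e. the discriminator is
    trained against lam * p_g + (1 - lam) * p_nda. Illustrative sketch.
    """
    def softplus(x):  # log(1 + exp(x)), numerically stable
        return np.logaddexp(0.0, x)

    loss_real = softplus(-d_real).mean()  # -log sigmoid(d_real)
    loss_fake = (lam * softplus(d_fake).mean()
                 + (1.0 - lam) * softplus(d_nda).mean())  # -log(1 - sigmoid)
    return loss_real + loss_fake
```

With `lam = 1` this reduces to the usual GAN discriminator loss; decreasing `lam` shifts discriminator capacity toward rejecting the negative augmentations.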

2. NEGATIVE DATA AUGMENTATION

The input to most learning algorithms is a dataset of samples from an underlying data distribution p_data. While p_data is unknown, learning algorithms always rely on prior knowledge about its properties (inductive biases (Wolpert & Macready, 1997)), e.g., by using specific functional forms such as neural networks. Similarly, data augmentation strategies exploit known invariances of p_data, such as the conditional label distribution being invariant to semantics-preserving transformations. While typical data augmentation strategies exploit prior knowledge about what is in the support of p_data, in this paper we propose to exploit prior knowledge about what is not in the support of p_data. This information is often available for common data modalities (e.g., natural images and videos) and is under-exploited by existing approaches. Specifically, we assume: (1) there exists an alternative distribution p̄ whose support is disjoint from that of p_data; and (2) access to a procedure to efficiently sample from p̄. We emphasize that p̄ need not be explicitly defined (e.g., through an explicit density); it may be implicitly defined by a dataset, or by a procedure that transforms samples from p_data into samples from p̄ by suitably altering their structure.
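Such an implicitly defined negative distribution can be sketched as a sampler that draws real samples and applies a randomly chosen structure-destroying transform. The interface below (the `make_nda_sampler` name and the `transforms` list) is a hypothetical illustration; it assumes each supplied transform maps in-support samples out of the support of p_data.

```python
import numpy as np

def make_nda_sampler(transforms, rng=None):
    """Implicitly define a negative distribution (p_bar) via transformations.

    Instead of specifying p_bar through an explicit density, we take a
    batch of samples from p_data and apply, per sample, a randomly chosen
    transform assumed to destroy the structure that characterizes the
    data distribution's support. Illustrative sketch.
    """
    rng = np.random.default_rng(rng)

    def sample(batch):
        # Apply one randomly selected transform to each batch element.
        return np.stack([transforms[rng.integers(len(transforms))](x)
                         for x in batch])
    return sample
```

In practice, `transforms` would hold operations such as patch shuffling or local-patch substitution, so the sampler produces a different negative sample each time it is called on real data.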



Figure 1: Negative Data Augmentation for GANs.

Figure 2: Negative augmentations produce out-of-distribution samples lacking the typical structure of natural images; these negative samples can be used to inform a model on what it should not learn.

