SYNBENCH: TASK-AGNOSTIC BENCHMARKING OF PRETRAINED REPRESENTATIONS USING SYNTHETIC DATA

Abstract

The recent success of fine-tuning large models, pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning, from task-centric model design to task-agnostic representation learning followed by task-specific fine-tuning. As the representations of pretrained models are used as a foundation for different downstream tasks, this paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data. To address the challenge of task-agnostic, data-free evaluation, we design synthetic binary classification proxy tasks with class-conditional Gaussian mixtures to probe and compare a model's robustness-accuracy performance on synthetic input data and their representations. Since the synthetic tasks require no access to real-life data, SynBench offers a holistic evaluation and informs model designers of the intrinsic robustness level of a model given a user-specified threshold accuracy. Moreover, the use of class-conditional Gaussian mixtures allows us to derive a theoretically optimal robustness-accuracy tradeoff, which serves as a reference when evaluating the tradeoff on representations. By comparing the ratio of the area under the curve between the raw data and their representations, SynBench offers a quantifiable score for robustness-accuracy benchmarking. Our framework applies to a wide range of pretrained models taking continuous data inputs and is independent of downstream tasks and datasets. Evaluated with several pretrained vision transformers, our experimental results show that the SynBench score matches well with the actual linear probing performance of the pretrained model on downstream tasks. Moreover, our framework can be used to inform the design of robust linear probing on pretrained representations to mitigate the robustness-accuracy tradeoff in downstream tasks.

1. INTRODUCTION

In recent years, the use of large pretrained neural networks for efficient fine-tuning on downstream tasks has prevailed in many application domains such as vision, language, and speech. Instead of designing task-dependent neural network architectures for different downstream tasks, the current methodology follows the principle of task-agnostic pretraining and task-specific fine-tuning, which uses a neural network pretrained on a large-scale dataset (often in a self-supervised or unsupervised manner) to extract generic representations of the input data, which we call pretrained representations for simplicity. The pretrained representations are then used as a foundation (Bommasani et al., 2021) to solve downstream tasks by training a linear head (i.e., linear probing) on the data representations with the labels provided by a downstream dataset, or by simply employing zero-shot inference. Moreover, to handle multi-modal data, one can use a similar neural network architecture (e.g., a transformer) for multi-modal data representation learning and alignment. Successful examples following this new machine learning paradigm include the GPT-3 language model (Brown et al., 2020), the vision transformer (Arnab et al., 2021), and the CLIP image-text model (Radford et al., 2021), to name a few. As large pretrained models have been shown to achieve state-of-the-art performance on a variety of downstream tasks with minimal fine-tuning, there is an intensified demand for using pretrained representations from a large model for efficient fine-tuning. When gauging the usefulness of a pretrained model, it is conventional to compare accuracy on selected real-life tasks.
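To make the linear probing protocol described above concrete, the sketch below freezes a generic `encode` function (a placeholder standing in for any pretrained feature extractor, not a specific model from this paper) and fits only a linear head on its output representations; a regularized least-squares head is used here as a minimal, dependency-free stand-in for the logistic classifiers typically trained in practice.

```python
import numpy as np

def linear_probe(encode, X_train, y_train, X_test, y_test):
    """Freeze the encoder; fit only a linear head on its representations.

    `encode` is any frozen feature extractor mapping inputs to features.
    Labels are assumed to be in {-1, +1}; the head is a ridge
    least-squares classifier, a simple proxy for logistic probing.
    """
    Z_tr, Z_te = encode(X_train), encode(X_test)
    # Append a bias column and solve the regularized least-squares problem.
    A = np.hstack([Z_tr, np.ones((len(Z_tr), 1))])
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]), A.T @ y_train)
    B = np.hstack([Z_te, np.ones((len(Z_te), 1))])
    pred = np.sign(B @ w)              # predicted labels in {-1, +1}
    return (pred == y_test).mean()     # standard accuracy of the probe
```

In the pretraining-and-probing paradigm, only `w` is learned per task; the encoder's weights never change, so the probe's accuracy directly reflects the quality of the pretrained representations.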
However, this approach has two possible drawbacks: (1) if the underlying pretrained model has hidden risks, such as a lack of robustness to adversarial examples, standard accuracy cannot reveal the risk, as it does not correlate well (and sometimes even correlates negatively) with adversarial robustness (Su et al., 2018); consequently, the trending practice of pretraining and fine-tuning also propagates such risks to all downstream tasks. (2) The implications suggested by any "better" results on specific datasets are subject to the datasets used for evaluation and could be inconclusive when the evaluation datasets change (e.g., ViT-L/16 reportedly performs worse than ViT-B/16 on 4 out of 27 linear probing tasks according to Radford et al. (2021), and underperforms ViT-B/16 on fine-tuned medical tasks (Okolo et al., 2022; Tummala et al., 2022)). Consequently, an ideal pretrained model should deliver both good accuracy and adversarial robustness, and the level of goodness should be measurable in a task-agnostic manner. To address this emerging challenge, we propose a novel framework named SynBench to evaluate the quality of pretrained representations by quantifying the tradeoff between standard accuracy and adversarial robustness to input perturbations. Specifically, SynBench uses synthetic data generated from a conditional Gaussian distribution to establish a reference characterizing the robustness-accuracy tradeoff based on the Bayes optimal linear classifier. Then, SynBench obtains the representations of the same synthetic data from the pretrained model and compares them to the reference for performance benchmarking. Finally, we define the ratio of the areas under the curves in the robustness-accuracy characterization as a quantifiable metric of the quality of pretrained representations. The entire procedure of SynBench is illustrated in Figure 1.

Our SynBench framework features the following key advantages.
1. Soundness: We formalize the fundamental tradeoff between robustness and accuracy of the considered conditional Gaussian model and use this characterization as a reference to benchmark the quality of pretrained representations.
2. Task-independence: Since the pretraining of large models is independent of the downstream datasets and tasks (e.g., through self-supervised or unsupervised training on broad data at scale), the use of synthetic data in SynBench provides a task-agnostic approach to evaluating pretrained representations without knowledge of downstream tasks and datasets.
3. Completeness and privacy: The flexibility of generating synthetic data (e.g., by adopting a different data sampling procedure) offers a good proxy toward a more comprehensive evaluation of pretrained representations when fine-tuned on different downstream datasets, especially when the available datasets are not representative of the entire set of downstream datasets. Moreover, the use of synthetic data enables full control and simulation over data size and distribution, protects data privacy, and can facilitate model auditing and governance.

We highlight our main contributions as follows.
• We propose SynBench, a novel task-agnostic framework that uses synthetic data to evaluate the quality of pretrained representations. The evaluation process of SynBench is independent of the downstream datasets and tasks, and it applies to any model taking continuous data inputs.
• Evaluated with several pretrained vision transformers, our experimental results show that the metric provided by SynBench matches well with model performance in terms of adversarial robustness and standard accuracy when fine-tuned on several downstream datasets. For example, SynBench-Score suggests that the ImageNet-21k pretrained network (ViT-B/16-in21k) improves with fine-tuning on ImageNet-1k (ViT-B/16), echoing the higher CIFAR10 and CIFAR10-C linear probing accuracy of ViT-B/16.
• We show that SynBench can be used to inform the design and selection of hyperparameters in robust linear probing to mitigate the robustness-accuracy tradeoff when fine-tuning on downstream datasets. For example, conducting ϵ-robust linear probing with ϵ selected by SynBench-Score gives ViT-B/16 a 0.6% increase in CIFAR10 accuracy and a 1.3% increase in CIFAR10-C accuracy.
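To illustrate the reference construction described above, the following sketch (with assumed function names, not the authors' released code) samples the class-conditional Gaussian proxy task and evaluates the robust accuracy of the Bayes optimal linear classifier sign(μᵀx) under an ℓ2 perturbation budget ϵ. In the isotropic case x | y ~ N(yμ, σ²I), this robust accuracy takes the closed form Φ((‖μ‖ − ϵ)/σ); the area under the accuracy-vs-ϵ curve then plays the role of the reference denominator in a SynBench-style area-under-curve ratio.

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def sample_proxy_task(n, mu, sigma, rng):
    """Synthetic binary task: y ~ Unif{-1,+1}, x | y ~ N(y*mu, sigma^2 I)."""
    y = rng.choice([-1, 1], size=n)
    x = y[:, None] * mu[None, :] + sigma * rng.standard_normal((n, mu.size))
    return x, y

def reference_robust_accuracy(mu, sigma, eps):
    """Robust accuracy of the Bayes optimal classifier sign(mu^T x) under an
    l2 perturbation of budget eps, in the isotropic Gaussian setting."""
    return Phi((np.linalg.norm(mu) - eps) / sigma)

def reference_area(mu, sigma, eps_grid):
    """Area under the robust-accuracy-vs-epsilon curve (trapezoidal rule).
    A SynBench-style score divides the analogous area measured on the
    model's representations by this reference area."""
    acc = [reference_robust_accuracy(mu, sigma, e) for e in eps_grid]
    return np.trapz(acc, eps_grid)
```

Because the reference curve is available in closed form, no training is needed on the raw synthetic data side: the tradeoff on representations can be compared directly against the theoretical optimum.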

2. RELATED WORK

Pretrained models in vision. In the past few years, much of the focus in the machine learning community has shifted to training representation networks capable of extracting features for a variety of downstream tasks with minimal fine-tuning. Nowadays, many common vision tasks are tackled with the assistance of strong pretrained backbones, e.g., classification (Yu et al., 2022; Wortsman et al., 2022; Foret et al., 2020; Xie et al., 2020; Dosovitskiy et al., 2020; Chen et al., 2020a), object detection (Redmon

