LEARNED NEURAL NETWORK REPRESENTATIONS ARE SPREAD DIFFUSELY WITH REDUNDANCY

Abstract

Representations learned by pre-training a neural network on a large dataset are increasingly used, with great success, to perform a variety of downstream tasks. In this work, we take a closer look at how features are encoded in such pre-trained representations. We find that learned representations in a given layer exhibit a degree of diffuse redundancy, i.e., any randomly chosen subset of neurons in the layer that is larger than a threshold size shares a large degree of similarity with the full layer and performs similarly to the full layer on a variety of downstream tasks. For example, a linear probe trained on 20% of randomly picked neurons from a ResNet50 pre-trained on ImageNet1k achieves an accuracy within 5% of a linear probe trained on the full layer of neurons for downstream CIFAR10 classification. We conduct experiments on different neural architectures (including CNNs and Transformers) pre-trained on both ImageNet1k and ImageNet21k and evaluate a variety of downstream tasks taken from the VTAB benchmark. We find that the loss and dataset used during pre-training largely govern the degree of diffuse redundancy, and that the "critical mass" of neurons needed often depends on the downstream task, suggesting that there is a task-inherent sparsity-performance Pareto frontier. Our findings shed light on the nature of representations learned by pre-trained deep neural networks and suggest that entire layers might not be necessary to perform many downstream tasks. We investigate the potential for exploiting this redundancy to achieve efficient generalization for downstream tasks and also draw caution to certain possible unintended consequences.
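The random-subset probing protocol described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the synthetic features stand in for penultimate-layer activations extracted from a pre-trained network, and all dimensions, names, and hyperparameters here are illustrative assumptions.

```python
# Sketch of the random-subset linear-probe protocol: train a linear probe
# on a random fraction of neurons and compare it against the full layer.
# Synthetic features stand in for real pre-trained-network activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative sizes; a real ResNet50 penultimate layer has 2048 neurons.
n_samples, n_neurons, n_classes = 500, 512, 10
features = rng.normal(size=(n_samples, n_neurons))
labels = rng.integers(0, n_classes, size=n_samples)

def probe_accuracy(X, y, fraction, rng):
    """Train a linear probe on a random `fraction` of the neurons
    and return its (training) accuracy."""
    k = max(1, int(fraction * X.shape[1]))
    idx = rng.choice(X.shape[1], size=k, replace=False)  # random neuron subset
    clf = LogisticRegression(max_iter=500).fit(X[:, idx], y)
    return clf.score(X[:, idx], y)

acc_full = probe_accuracy(features, labels, 1.0, rng)   # full layer
acc_sub = probe_accuracy(features, labels, 0.2, rng)    # 20% of neurons
```

In the paper's setting, `features` would come from a frozen pre-trained backbone evaluated on a downstream dataset (e.g. CIFAR10), and accuracy would be measured on held-out data; diffuse redundancy corresponds to `acc_sub` staying close to `acc_full` once the subset exceeds the critical mass.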

1. INTRODUCTION

Over the years, many architectures such as VGG (Simonyan & Zisserman, 2014), ResNet (He et al., 2016), and Vision Transformers (ViTs) (Kolesnikov et al., 2021) have been proposed that achieve competitive accuracies on many benchmarks, including the ImageNet (Russakovsky et al., 2015) challenge. A key reason for the success of these models is their ability to learn useful representations of data (LeCun et al., 2015). Prior works have attempted to understand representations learned by deep neural networks through the lens of mutual information between the representations, inputs, and outputs (Shwartz-Ziv & Tishby, 2017), and hypothesize that neural networks perform well because of a "compression" phase in which mutual information between inputs and representations decreases. Moreover, recent works on interpretability have found that many neurons in learned representations are polysemantic, i.e., one neuron can encode multiple "concepts" (Elhage et al., 2022; Olah et al., 2020), and that one can then train sparse linear models on such concepts to do "explainable" classification (Wong et al., 2021). However, it is not well understood if or how extracted features are concentrated or spread across the full representation. While the length of the feature vectors extracted from state-of-the-art networks^1 can vary greatly, their accuracies on downstream tasks are not correlated with the size of the representation (see Table 1), but rather depend mostly on the inductive biases and training recipes (Wightman et al., 2021; Steiner et al., 2021). In all cases, the size of the extracted feature vector (i.e., the number of neurons) is orders of

^1 Extracted features, for the purposes of this paper, refer to the representation recorded at the penultimate layer, but the larger concept applies to any layer.

