DO WIDE AND DEEP NETWORKS LEARN THE SAME THINGS? UNCOVERING HOW NEURAL NETWORK REPRESENTATIONS VARY WITH WIDTH AND DEPTH

Abstract

A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of the effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger-capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and is indicative of the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for the features learned by different models: representations outside the block structure are often similar across architectures with varying widths and depths, but the block structure is unique to each model. We also analyze the output predictions of different model architectures, finding that even when overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.

1. INTRODUCTION

Deep neural network architectures are typically tailored to available computational resources by scaling their width and/or depth. Remarkably, this simple approach to model scaling can result in state-of-the-art networks for both high- and low-resource regimes (Tan & Le, 2019). However, despite the ubiquity of varying depth and width, there is limited understanding of how varying these properties affects the final model beyond its performance. Investigating this fundamental question is critical, especially given the continually increasing compute resources devoted to designing and training new network architectures. More concretely, we can ask: how do depth and width affect the final learned representations? Do these different model architectures also learn different intermediate (hidden layer) features? Are there discernible differences in the outputs? In this paper, we study these core questions through detailed analysis of a family of ResNet models with varying depths and widths trained on CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 and ImageNet (Deng et al., 2009). We show that depth/width variations result in distinctive characteristics in the model internal representations, with resulting consequences for representations and outputs across different model initializations and architectures. Specifically, our contributions are as follows:

• We develop a method based on centered kernel alignment (CKA) to efficiently measure the similarity of the hidden representations of wide and deep neural networks using minibatches.

• We apply this method to different network architectures, finding that representations in wide or deep models exhibit a characteristic structure, which we term the block structure. We study how the block structure varies across different training runs, and uncover a connection between block structure and model overparameterization: block structure primarily appears in overparameterized models.

• Through further analysis, we find that the block structure corresponds to hidden representations having a single principal component that explains the majority of the variance in the representation, which is preserved and propagated through the corresponding layers. We show that some hidden layers exhibiting the block structure can be pruned with minimal impact on performance.

• With this insight into the representational structure within a single network, we turn to comparing representations across different architectures, finding that models without the block structure show reasonable representation similarity in corresponding layers, but block structure representations are unique to each model.

• Finally, we look at how different depths and widths affect model outputs. We find that wide and deep models make systematically different mistakes at the level of individual examples. Specifically, on ImageNet, even when these networks achieve similar overall accuracy, wide networks perform slightly better on classes reflecting scenes, whereas deep networks are slightly more accurate on consumer goods.
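For concreteness, the core similarity measure underlying the contributions above can be sketched in a few lines. The following is a minimal full-batch NumPy illustration of linear CKA between two layers' activation matrices; the function name is ours, and the minibatch accumulation used in our method for efficiency is omitted here for brevity:

```python
import numpy as np

def linear_cka(x, y):
    """Linear centered kernel alignment between two activation matrices.

    x: (n_examples, n_features_x), y: (n_examples, n_features_y),
    with rows corresponding to the same examples in both matrices.
    Returns a similarity in [0, 1] that is invariant to orthogonal
    transformations and isotropic scaling of the features.
    """
    # Center each feature (column); CKA is defined on centered features.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(y.T @ x) ** 2
    denominator = np.linalg.norm(x.T @ x) * np.linalg.norm(y.T @ y)
    return numerator / denominator
```

Comparing every pair of layers in a network with this function yields the heatmaps in which the block structure appears as a large contiguous region of high similarity.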

2. RELATED WORK

Neural network models of different depth and width have been studied through the lens of universal approximation theorems (Cybenko, 1989; Hornik, 1991; Pinkus, 1999; Lu et al., 2017; Hanin & Sellke, 2017; Lin & Jegelka, 2018) and functional expressivity (Telgarsky, 2015; Raghu et al., 2017b). However, this line of work only shows that such networks can be constructed, and provides neither a guarantee of learnability nor a characterization of their performance when trained on finite datasets. Other work has studied the behavior of neural networks in the infinite width limit by relating architectures to their corresponding kernels (Matthews et al., 2018; Lee et al., 2018; Jacot et al., 2018), but substantial differences exist between behavior in this infinite width limit and the behavior of finite-width networks (Novak et al., 2018; Wei et al., 2019; Chizat et al., 2019; Lewkowycz et al., 2020). In contrast to this theoretical work, we attempt to develop an empirical understanding of the behavior of practical, finite-width neural network architectures after training on real-world data. Previous empirical work has studied the effects of width and depth on model accuracy in the context of convolutional neural network architecture design, finding that optimal accuracy is typically achieved by balancing width and depth (Zagoruyko & Komodakis, 2016; Tan & Le, 2019). Further studies have examined error sets over the course of training (Hacohen & Weinshall, 2020) and errors after pruning (Hooker et al., 2019). Other work has demonstrated that it is often possible for narrower or shallower neural networks to attain similar accuracy to larger networks when the smaller networks are trained to mimic the larger networks' predictions (Ba & Caruana, 2014; Romero et al., 2015).
We instead seek to study the impact of width and depth on networks' internal representations and (per-example) outputs, by applying techniques for measuring the similarity of neural network hidden representations (Kornblith et al., 2019; Raghu et al., 2017a; Morcos et al., 2018). These techniques have been very successful in analyzing deep learning, from properties of neural network training (Gotmare et al., 2018; Neyshabur et al., 2020), objectives (Resnick et al., 2019; Thompson et al., 2019; Hermann & Lampinen, 2020), and dynamics (Maheswaranathan et al., 2019) to revealing hidden linguistic structure in large language models (Bau et al., 2019; Kudugunta et al., 2019; Wu et al., 2019; 2020), with applications in neuroscience (Shi et al., 2019; Li et al., 2019; Merel et al., 2019; Zhang & Bellec, 2020) and medicine (Raghu et al., 2019).

3. EXPERIMENTAL SETUP AND BACKGROUND

Our goal is to understand the effects of depth and width on the function learned by the underlying neural network, in a setting representative of the high-performance models used in practice. Reflecting this, our experimental setup consists of a family of ResNets (He et al., 2016; Zagoruyko & Komodakis, 2016) trained on the standard image classification datasets CIFAR-10, CIFAR-100, and ImageNet. For standard CIFAR ResNet architectures, the network's layers are evenly divided between three stages (feature map sizes), with the number of channels increasing by a factor of two from one stage to the next. We adjust the network's width and depth by increasing the number of channels and the number of layers, respectively, in each stage, following Zagoruyko & Komodakis (2016). For ImageNet ResNets,


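The CIFAR ResNet scaling scheme described above (three stages, channels doubling from stage to stage, width and depth adjusted per stage) can be made concrete with a small helper. This is an illustrative sketch only: the function name is ours, and the depth = 6n + 2 relation assumed here is the standard CIFAR ResNet convention with basic blocks, not our exact configuration code:

```python
def cifar_resnet_config(depth, width_multiplier=1):
    """Per-stage (num_blocks, channels) for a standard CIFAR ResNet.

    Such a network has depth = 6n + 2: an initial conv layer, three
    stages of n basic blocks (two conv layers each), and a final
    linear layer. Widening multiplies the channels in every stage.
    """
    assert (depth - 2) % 6 == 0, "depth must be of the form 6n + 2"
    n = (depth - 2) // 6
    base_channels = [16, 32, 64]  # doubles from one stage to the next
    return [(n, c * width_multiplier) for c in base_channels]
```

For example, a depth-32 network has five basic blocks per stage, and doubling the width multiplier doubles the channel count in every stage while leaving the number of blocks unchanged.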