NEURAL NETWORKS WITH LATE-PHASE WEIGHTS

Abstract

The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD). Here, we show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning. At the end of learning, we obtain back a single model by taking a spatial average in weight space. To avoid incurring increased computational costs, we investigate a family of low-dimensional late-phase weight models which interact multiplicatively with the remaining parameters. Our results show that augmenting standard models with late-phase weights improves generalization in established benchmarks such as CIFAR-10/100, ImageNet and enwik8. These findings are complemented with a theoretical analysis of a noisy quadratic problem which provides a simplified picture of the late phases of neural network learning.

1. INTRODUCTION

Neural networks trained with SGD generalize remarkably well on a wide range of problems. A classic technique to further improve generalization is to ensemble many such models (Lakshminarayanan et al., 2017). At test time, the predictions made by each model are combined, usually through a simple average. Although largely successful, this technique is costly both during learning and inference. This has prompted the development of ensembling methods with reduced complexity, for example by collecting models along an optimization path generated by SGD (Huang et al., 2017), by performing interpolations in weight space (Garipov et al., 2018), or by tying a subset of the weights over the ensemble (Lee et al., 2015; Wen et al., 2020). An alternative line of work explores the use of ensembles to guide the optimization of a single model (Zhang et al., 2015; Pittorino et al., 2020).

We join these efforts and develop a method that fine-tunes the behavior of SGD using late-phase weights: late in training, we replicate a subset of the weights of a neural network and randomly initialize them in a small neighborhood. Together with the stochasticity inherent to SGD, this initialization encourages the late-phase weights to explore the loss landscape. As the late-phase weights explore, the shared weights accumulate gradients. After training we collapse this implicit ensemble into a single model by averaging in weight space.

Building upon recent work on ensembles with shared parameters (Wen et al., 2020) we explore a family of late-phase weight models involving multiplicative interactions (Jayakumar et al., 2020). We focus on low-dimensional late-phase models that can be ensembled with negligible overhead. Our experiments reveal that replicating the ubiquitous batch normalization layers (Ioffe & Szegedy, 2015) is a surprisingly simple and effective strategy for improving generalization.
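The scheme above can be illustrated on a toy problem. The following is a minimal sketch, not the authors' implementation: shared base weights interact multiplicatively with K low-dimensional late-phase gain vectors (standing in for batch-normalization parameters), initialized in a small neighborhood of one; the shared weights accumulate gradients from every ensemble member, and the implicit ensemble is collapsed into a single model by weight averaging at the end. All names, the linear model, and the hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: recover true_w from noiseless targets.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w

K, sigma0, lr = 4, 0.1, 0.05                     # ensemble size, init spread, step size
w_shared = 0.1 * rng.normal(size=8)              # base weights, shared by all members
gains = 1.0 + sigma0 * rng.normal(size=(K, 8))   # K late-phase gain vectors near 1

for step in range(2000):
    k = step % K                        # one ensemble member per SGD step
    idx = rng.integers(0, len(X), 32)   # minibatch
    err = X[idx] @ (w_shared * gains[k]) - y[idx]
    g = X[idx].T @ err / len(idx)       # gradient w.r.t. the effective weights
    grad_shared = g * gains[k]          # chain rule through the product
    grad_gain = g * w_shared
    w_shared -= lr * grad_shared        # shared weights see every member's gradient
    gains[k] -= lr * grad_gain          # each gain is updated only on its own steps

# Collapse the implicit ensemble into a single model by averaging
# the late-phase weights in weight space.
w_final = w_shared * gains.mean(axis=0)
mse = float(np.mean((X @ w_final - y) ** 2))
```

The multiplicative interaction keeps the late-phase model low-dimensional: only the K gain vectors are replicated, while the bulk of the parameters remains shared.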
Furthermore, we find that late-phase weights can be combined with stochastic weight averaging (Izmailov et al., 2018) , a complementary method that has been shown to greatly improve generalization.
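Stochastic weight averaging itself admits a compact description: keep a running average of the weights visited by SGD late in training and use the averaged weights at test time. A hypothetical numpy sketch on the same kind of toy regression problem (the schedule and variable names are illustrative, not Izmailov et al.'s code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression setup; SWA averages the SGD iterates themselves.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ true_w

w = np.zeros(8)
lr, swa_start = 0.05, 500           # start averaging late in training
w_swa, n_avg = np.zeros(8), 0

for step in range(1500):
    idx = rng.integers(0, len(X), 32)
    err = X[idx] @ w - y[idx]
    w -= lr * X[idx].T @ err / len(idx)
    if step >= swa_start:           # running average of the late iterates
        w_swa = (n_avg * w_swa + w) / (n_avg + 1)
        n_avg += 1

swa_mse = float(np.mean((X @ w_swa - y) ** 2))
```

The two averages are complementary: SWA averages the shared weights over time along the SGD trajectory, while the late-phase ensemble is averaged across members at the end of training.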

We provide code to reproduce our experiments at https://github.com/seijin-kobayashi/late-phase-weights.

