GEOMETRY-AWARE GRADIENT ALGORITHMS FOR NEURAL ARCHITECTURE SEARCH

Abstract

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared weights, a noisy process that remains poorly understood. We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing, reducing the design of NAS methods to devising optimizers and regularizers that can quickly obtain high-quality solutions to this problem. Invoking the theory of mirror descent, we present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters, leading to simple yet novel algorithms that enjoy fast convergence guarantees and achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision. Notably, we exceed the best published results for both CIFAR and ImageNet on both the DARTS search space and NAS-Bench-201; on the latter we achieve near-oracle-optimal performance on CIFAR-10 and CIFAR-100. Together, our theory and experiments demonstrate a principled way to co-design optimizers and continuous relaxations of discrete NAS search spaces.

1. INTRODUCTION

Neural architecture search (NAS) has become an important tool for automating machine learning (ML) but can require hundreds of thousands of GPU-hours. Recently, weight-sharing approaches have achieved state-of-the-art performance while drastically reducing the computational cost of NAS to just that of training a single shared-weights network (Pham et al., 2018; Liu et al., 2019). Methods such as DARTS (Liu et al., 2019), GDAS (Dong & Yang, 2019), and many others (Pham et al., 2018; Zheng et al., 2019; Yang et al., 2020; Xie et al., 2019; Liu et al., 2018; Laube & Zell, 2019; Cai et al., 2019; Akimoto et al., 2019; Xu et al., 2020) combine weight-sharing with a continuous relaxation of the discrete search space to allow cheap gradient updates, enabling the use of popular optimizers. However, despite some empirical success, weight-sharing remains poorly understood and has received criticism due to (1) rank disorder (Yu et al., 2020; Zela et al., 2020b; Zhang et al., 2020; Pourchot et al., 2020), where the shared-weights performance is a poor surrogate for standalone performance, and (2) poor results on recent benchmarks (Dong & Yang, 2020; Zela et al., 2020a).

Motivated by the challenge of developing simple and efficient methods that achieve state-of-the-art performance, we study how best to formulate and optimize the NAS objective. We start by observing that weight-sharing subsumes architecture hyperparameters as another set of learned parameters of the shared-weights network, in effect extending the class of functions being learned. This suggests that a reasonable approach to obtaining high-quality NAS solutions is to study how to regularize and optimize the empirical risk over this extended class.
While many regularization approaches have been implicitly proposed in recent NAS efforts, we focus instead on the question of optimizing architecture parameters, which may not be amenable to standard procedures such as SGD that work well for standard neural network weights. In particular, to better satisfy desirable properties such as generalization and sparsity of architectural decisions, we propose to constrain architecture parameters to the simplex and update them using exponentiated gradient, which has favorable convergence properties due to the underlying problem structure. Theoretically, we draw upon the mirror descent meta-algorithm (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003) to give convergence guarantees when using any of a broad class of such geometry-aware gradient methods to optimize the weight-sharing objective; empirically, we show that our solution leads to strong improvements on several NAS benchmarks. We summarize these contributions below:

1. We argue for studying NAS with weight-sharing as a single-level objective over a structured function class in which architectural decisions are treated as learned parameters rather than hyperparameters. Our setup clarifies recent concerns about rank disorder and makes clear that proper regularization and optimization of this objective is critical to obtaining high-quality solutions.

2. Focusing on optimization, we propose to improve existing NAS algorithms by re-parameterizing architecture parameters over the simplex and updating them using exponentiated gradient, a variant of mirror descent that converges quickly over this domain and enjoys favorable sparsity properties.

In contrast to prior theoretical analyses of NAS (… et al., 2019; Nayman et al., 2019; Carlucci et al., 2019), we prove polynomial-time stationary-point convergence on a single-level objective for weight-sharing NAS, so far only studied empirically (Xie et al., 2019; Li et al., 2019).
Our results draw upon the mirror descent meta-algorithm (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003) and extend recent nonconvex convergence results (Zhang & He, 2018) to handle alternating descent. While related results exist (Dang & Lan, 2015), the associated guarantees do not hold for the algorithms we propose. Finally, we note that a variant of GAEA that modifies first-order DARTS is related to XNAS (Nayman et al., 2019), whose update also involves exponentiated gradient; however, GAEA is simpler and easier to implement.¹ Furthermore, the regret guarantees for XNAS do not relate to any meaningful performance measure for NAS such as speed or accuracy, whereas we guarantee convergence on the ERM objective.
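To illustrate the exponentiated-gradient primitive underlying these updates, the following sketch applies the entropic mirror descent step to a hypothetical fixed gradient (not one computed from any network); the multiplicative update keeps the iterate on the probability simplex while concentrating mass on the coordinate with the smallest gradient, which is the sparsity behavior discussed above:

```python
import numpy as np

def exponentiated_gradient_step(theta, grad, lr):
    """One mirror descent step with the entropic mirror map:
    a multiplicative update followed by renormalization, so the
    iterate stays on the probability simplex."""
    theta = theta * np.exp(-lr * grad)
    return theta / theta.sum()

# Hypothetical fixed gradient favoring coordinate 0; iterating the
# update drives theta toward the corresponding simplex vertex.
theta = np.full(4, 0.25)                 # uniform initialization
grad = np.array([-1.0, 0.5, 0.5, 0.5])
for _ in range(50):
    theta = exponentiated_gradient_step(theta, grad, lr=0.5)
```

Unlike an additive SGD step followed by projection, this update never leaves the simplex and shrinks unfavored coordinates geometrically, which is why it yields near-sparse architecture parameters.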

2. THE WEIGHT-SHARING OPTIMIZATION PROBLEM

In supervised ML we have a dataset T of labeled pairs (x, y) drawn from a distribution D over input/output spaces X and Y. The goal is to use T to search a function class H for a predictor h_w : X → Y, parameterized by w ∈ R^d, that has low expected test loss ℓ(h_w(x), y) when using x to predict the associated y on unseen samples drawn from D, as measured by some loss ℓ : Y × Y → [0, ∞). A common way to do so is approximate (regularized) empirical risk minimization (ERM), i.e. finding the w ∈ R^d with the smallest average loss over T, via some iterative method Alg, e.g. SGD.
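For concreteness, the ERM setup with Alg = SGD can be sketched on a toy realizable least-squares problem (the dataset, model, and step size here are illustrative choices, not anything from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset T: inputs in R^3, labels from a fixed linear map.
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ w_true

def empirical_risk(w):
    # average squared loss of h_w over T
    return float(np.mean((X @ w - y) ** 2))

# Alg = plain SGD on the empirical risk
w = np.zeros(3)
for _ in range(500):
    i = rng.integers(len(X))
    residual = X[i] @ w - y[i]
    w -= 0.05 * 2 * residual * X[i]   # gradient of the sampled example's loss
```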

2.1. THE BENEFITS AND CRITICISMS OF WEIGHT-SHARING FOR NAS

NAS is often viewed as hyperparameter optimization on top of Alg, with each architecture a ∈ A corresponding to a function class H_a = {h_{w,a} : X → Y, w ∈ R^d} to be selected by using validation data V ⊂ X × Y to evaluate the predictor obtained by fixing a and doing approximate ERM over T:

    min_{a ∈ A}  Σ_{(x,y) ∈ V} ℓ(h_{w_a,a}(x), y)    s.t.    w_a = Alg(T, a)

Since training individual sets of weights for any sizeable number of architectures is prohibitive, weight-sharing methods instead use a single set of shared weights to obtain validation signal about many architectures at once. In its simplest form, RS-WS (Li & Talwalkar, 2019), these weights



¹ Code to obtain these results has been made available in the supplementary material. The XNAS code does not implement search and, as with previous efforts (Li et al., 2019, OpenReview), we could not reproduce its results after correspondence with the authors. XNAS's best architecture achieves an average test error of 2.70% under the DARTS evaluation, while GAEA achieves 2.50%. For details see Appendix C.4.



This simple modification, which we call the Geometry-Aware Exponentiated Algorithm (GAEA), is easily applicable to numerous methods, including first-order DARTS (Liu et al., 2019), GDAS (Dong & Yang, 2019), and PC-DARTS (Xu et al., 2020).

3. To show the correctness and efficiency of our scheme, we prove polynomial-time stationary-point convergence of block-stochastic mirror descent, a family of geometry-aware gradient algorithms that includes GAEA, over a continuous relaxation of the single-level NAS objective. To the best of our knowledge, these are the first finite-time convergence guarantees for gradient-based NAS.

4. We demonstrate that GAEA improves upon state-of-the-art methods on three of the latest NAS benchmarks for computer vision. Specifically, we beat the current best results on NAS-Bench-201 (Dong & Yang, 2020) by 0.18% on CIFAR-10, 1.59% on CIFAR-100, and 0.82% on ImageNet-16-120; we also outperform the state of the art on the DARTS search space (Liu et al., 2019) for both CIFAR-10 and ImageNet, and match it on NAS-Bench-1Shot1 (Zela et al., 2020a).
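To make the alternating scheme concrete, here is a minimal sketch of GAEA-style block updates on a toy one-node search space: an SGD block on shared weights and an exponentiated-gradient block on simplex-constrained architecture parameters over a convex combination of candidate operations. The operations, data, target, and step sizes are hypothetical stand-ins for the paper's actual networks, and deterministic full-batch gradients are used for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate operations for a single node; only the
# squaring op can represent the target below.
ops = [lambda Z: Z, lambda Z: np.tanh(Z), lambda Z: Z ** 2]

X = rng.normal(size=(256, 2))
y = (X ** 2) @ np.array([1.0, -0.5])

def mixed(theta):
    # continuous relaxation: convex combination of candidate ops
    return sum(t * op(X) for t, op in zip(theta, ops))

w = 0.1 * rng.normal(size=2)     # shared weights (linear readout)
theta = np.full(3, 1 / 3)        # architecture params on the simplex
n = len(X)
for _ in range(500):
    M = mixed(theta)
    r = M @ w - y                                    # residuals
    w -= 0.05 * (2 / n) * (M.T @ r)                  # SGD block on weights
    g = np.array([(2 / n) * r @ (op(X) @ w) for op in ops])
    theta = theta * np.exp(-0.5 * g)                 # exponentiated-gradient
    theta /= theta.sum()                             # block on architecture
```

The architecture iterate remains a distribution over operations throughout, and the multiplicative updates drive it toward the vertex of the simplex corresponding to the operation that fits the data, illustrating the sparsity that GAEA's geometry-aware updates are designed to produce.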

