GEOMETRY-AWARE GRADIENT ALGORITHMS FOR NEURAL ARCHITECTURE SEARCH

Abstract

Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood. We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing, reducing the design of NAS methods to devising optimizers and regularizers that can quickly obtain high-quality solutions to this problem. Invoking the theory of mirror descent, we present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters, leading to simple yet novel algorithms that enjoy fast convergence guarantees and achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision. Notably, we exceed the best published results for both CIFAR and ImageNet on both the DARTS search space and NAS-Bench-201; on the latter we achieve near-oracle-optimal performance on CIFAR-10 and CIFAR-100. Together, our theory and experiments demonstrate a principled way to co-design optimizers and continuous relaxations of discrete NAS search spaces.

1. INTRODUCTION

Neural architecture search (NAS) has become an important tool for automating machine learning (ML), but can require hundreds of thousands of GPU-hours. Recently, weight-sharing approaches have achieved state-of-the-art performance while drastically reducing the computational cost of NAS to just that of training a single shared-weights network (Pham et al., 2018; Liu et al., 2019). Methods such as DARTS (Liu et al., 2019), GDAS (Dong & Yang, 2019), and many others (Pham et al., 2018; Zheng et al., 2019; Yang et al., 2020; Xie et al., 2019; Liu et al., 2018; Laube & Zell, 2019; Cai et al., 2019; Akimoto et al., 2019; Xu et al., 2020) combine weight-sharing with a continuous relaxation of the discrete search space to allow cheap gradient updates, enabling the use of popular optimizers. However, despite some empirical success, weight-sharing remains poorly understood and has received criticism due to (1) rank disorder (Yu et al., 2020; Zela et al., 2020b; Zhang et al., 2020; Pourchot et al., 2020), where the shared-weights performance is a poor surrogate of standalone performance, and (2) poor results on recent benchmarks (Dong & Yang, 2020; Zela et al., 2020a). Motivated by the challenge of developing simple and efficient methods that achieve state-of-the-art performance, we study how best to handle the goals and optimization objectives of NAS. We start by observing that weight-sharing subsumes architecture hyperparameters as another set of learned parameters of the shared-weights network, in effect extending the class of functions being learned. This suggests that a reasonable approach to obtaining high-quality NAS solutions is to study how to regularize and optimize the empirical risk over this extended class.
While many regularization approaches have been implicitly proposed in recent NAS efforts, we focus instead on the question of optimizing architecture parameters, which may not be amenable to standard procedures such as SGD that work well for standard neural network weights. In particular, to better satisfy desirable properties such as generalization and sparsity of architectural decisions, we propose to constrain architecture parameters to the simplex and update them using exponentiated gradient, which has favorable convergence properties due to the underlying problem structure. Theoretically, we draw upon the mirror descent meta-algorithm (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003) to give convergence guarantees when using any of a broad class of such geometry-aware gradient methods to optimize the weight-sharing objective; empirically, we show that our solution leads to strong improvements on several NAS benchmarks. We summarize these contributions below:
1. We argue for studying NAS with weight-sharing as a single-level objective over a structured function class in which architectural decisions are treated as learned parameters rather than hyperparameters. Our setup clarifies recent concerns about rank disorder and makes clear that proper regularization and optimization of this objective is critical to obtaining high-quality solutions.
2. Focusing on optimization, we propose to improve existing NAS algorithms by re-parameterizing architecture parameters over the simplex and updating them using exponentiated gradient, a variant of mirror descent that converges quickly over this domain and enjoys favorable sparsity properties. This simple modification, which we call the Geometry-Aware Exponentiated Algorithm (GAEA), is easily applicable to numerous methods, including first-order DARTS (Liu et al., 2019), GDAS (Dong & Yang, 2019), and PC-DARTS (Xu et al., 2020).
3. To show the correctness and efficiency of our scheme, we prove polynomial-time stationary-point convergence of block-stochastic mirror descent, a family of geometry-aware gradient algorithms that includes GAEA, over a continuous relaxation of the single-level NAS objective. To the best of our knowledge these are the first finite-time convergence guarantees for gradient-based NAS.
4. We demonstrate that GAEA improves upon state-of-the-art methods on three of the latest NAS benchmarks for computer vision. Specifically, we beat the current best results on NAS-Bench-201 (Dong & Yang, 2020) by 0.18% on CIFAR-10, 1.59% on CIFAR-100, and 0.82% on ImageNet-16-120; we also outperform the state-of-the-art on the DARTS search space (Liu et al., 2019) for both CIFAR-10 and ImageNet, and match it on NAS-Bench-1Shot1 (Zela et al., 2020a).

Related Work. Most optimization analyses of NAS show monotonic improvement (Akimoto et al., 2019), asymptotic guarantees (Yao et al., 2020), or bounds on auxiliary quantities disconnected from any objective (Noy et al., 2019; Nayman et al., 2019; Carlucci et al., 2019). In contrast, we prove polynomial-time stationary-point convergence on a single-level objective for weight-sharing NAS, so far only studied empirically (Xie et al., 2019; Li et al., 2019). Our results draw upon the mirror descent meta-algorithm (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003) and extend recent nonconvex convergence results (Zhang & He, 2018) to handle alternating descent. While there exist related results (Dang & Lan, 2015), the associated guarantees do not hold for the algorithms we propose. Finally, we note that a variant of GAEA that modifies first-order DARTS is related to XNAS (Nayman et al., 2019), whose update also involves exponentiated gradient; however, GAEA is simpler and easier to implement. Furthermore, the regret guarantees for XNAS do not relate to any meaningful performance measure for NAS such as speed or accuracy, whereas we guarantee convergence on the ERM objective.

2. THE WEIGHT-SHARING OPTIMIZATION PROBLEM

In supervised ML we have a dataset $T$ of labeled pairs $(x, y)$ drawn from a distribution $\mathcal{D}$ over input/output spaces $X$ and $Y$. The goal is to use $T$ to search a function class $H$ for $h_w : X \to Y$, parameterized by $w \in \mathbb{R}^d$, that has low expected test loss $\ell(h_w(x), y)$ when using $x$ to predict the associated $y$ on unseen samples drawn from $\mathcal{D}$, as measured by some loss $\ell : Y \times Y \to [0, \infty)$. A common way to do so is approximate (regularized) empirical risk minimization (ERM), i.e. finding the $w \in \mathbb{R}^d$ with the smallest average loss over $T$, via some iterative method Alg, e.g. SGD.

2.1. THE BENEFITS AND CRITICISMS OF WEIGHT-SHARING FOR NAS

NAS is often viewed as hyperparameter optimization on top of Alg, with each architecture $a \in A$ corresponding to a function class $H_a = \{h_{w,a} : X \to Y, w \in \mathbb{R}^d\}$ to be selected by using validation data $V \subset X \times Y$ to evaluate the predictor obtained by fixing $a$ and doing approximate ERM over $T$:

$$\min_{a \in A} \sum_{(x,y) \in V} \ell(h_{w_a,a}(x), y) \quad \text{s.t.} \quad w_a = \mathrm{Alg}(T, a) \qquad (1)$$

Since training individual sets of weights for any sizeable number of architectures is prohibitive, weight-sharing methods instead use a single set of shared weights to obtain validation signal about many architectures at once. In its simplest form, RS-WS (Li & Talwalkar, 2019), these weights are trained to minimize the non-adaptive objective $\min_{w \in \mathbb{R}^d} \mathbb{E}_a \sum_{(x,y) \in T} \ell(h_{w,a}(x), y)$, where the expectation is over a fixed distribution over architectures $A$. The final architecture $a$ is then chosen to minimize the outer (validation) objective in (1) subject to $w_a = w$. More frequently used is a bilevel objective over some continuous relaxation $\Theta$ of the architecture space $A$, after which a valid architecture is obtained via a discretization step $\mathrm{Map} : \Theta \to A$ (Pham et al., 2018; Liu et al., 2019):

$$\min_{\theta \in \Theta} \sum_{(x,y) \in V} \ell(h_{w,\theta}(x), y) \quad \text{s.t.} \quad w \in \arg\min_{u \in \mathbb{R}^d} \sum_{(x,y) \in T} \ell(h_{u,\theta}(x), y) \qquad (2)$$

This objective is not significantly different from (1), since $\mathrm{Alg}(T, a)$ approximately minimizes the empirical risk w.r.t. $T$; the difference is replacing discrete architectures with relaxed architecture parameters $\theta \in \Theta$, w.r.t. which we can take derivatives of the outer objective. This allows (2) to be approximated via alternating gradient updates w.r.t. $w$ and $\theta$. Relaxations can be stochastic, so that $\mathrm{Map}(\theta)$ is a sample from a $\theta$-parameterized distribution (Pham et al., 2018; Dong & Yang, 2019), or a mixture, in which case $\mathrm{Map}(\theta)$ selects architectural decisions with the highest weight in a convex combination given by $\theta$ (Liu et al., 2019). We overview this in more detail in Appendix A.
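To make the mixture relaxation concrete, the following sketch (our own illustrative NumPy code, with hypothetical names) shows one possible Map step that discretizes relaxed parameters over $|E|$ simplices by keeping the highest-weight operation on each edge:

```python
import numpy as np

def discretize(theta):
    """A Map: Theta -> A step for the mixture relaxation.

    theta: (|E|, |O|) array; each row lies on the |O|-simplex.
    Returns a one-hot (|E|, |O|) array selecting, per edge, the
    operation with the largest mixture weight.
    """
    a = np.zeros_like(theta)
    a[np.arange(theta.shape[0]), theta.argmax(axis=1)] = 1.0
    return a

theta = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6]])
print(discretize(theta))  # one-hot rows: op 0 on edge 0, op 2 on edge 1
```

The gap between $\theta$ and $\mathrm{Map}(\theta)$ is exactly the discretization loss that sparse architecture parameters alleviate.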
While weight-sharing significantly shortens search (Pham et al., 2018), it draws two main criticisms:
• Rank disorder: the rank of an architecture $a$ according to the validation risk evaluated with fixed shared weights $w$ is poorly correlated with its rank using "standalone" weights $w_a = \mathrm{Alg}(T, a)$. This causes suboptimal architectures to be selected after shared-weights search (Yu et al., 2020; Zela et al., 2020b; Zhang et al., 2020; Pourchot et al., 2020).
• Poor performance: weight-sharing can converge to degenerate architectures (Zela et al., 2020a) and is outperformed by regular hyperparameter tuning on NAS-Bench-201 (Dong & Yang, 2020).

2.2. SINGLE-LEVEL NAS AS A BASELINE OBJECT OF STUDY

Why are we able to apply weight-sharing to NAS? The key is that, unlike regular hyperparameters such as step-size, architectural hyperparameters directly affect the loss function without requiring a dependent change in the model weights $w$. Thus we can distinguish architectures without retraining simply by changing architectural decisions. Besides enabling weight-sharing, this point reveals that the goal of NAS is perhaps better viewed as a regular learning problem over the extended class $H_A = \bigcup_{a \in A} H_a = \{h_{w,a} : X \to Y, w \in \mathbb{R}^d, a \in A\}$, i.e. (regularized) single-level ERM over both weights and relaxed architecture parameters:

$$\min_{w \in \mathbb{R}^d,\, \theta \in \Theta} \sum_{(x,y) \in T} \ell(h_{w,\theta}(x), y) \qquad (3)$$

Several works have optimized this single-level objective as an alternative to the bilevel one (2) (Xie et al., 2019; Li et al., 2019). We argue for its use as the baseline object of study in NAS for three reasons:
1. As discussed above, it is the natural first approach to solving the statistical objective of NAS: finding a good predictor $h_{w,a} \in H_A$ in the extended function class over architectures and weights.
2. The common alternating gradient approach to the bilevel problem (2) is in practice very similar to alternating block approaches to ERM (3); as we will see, there are established ways of analyzing such methods for the latter objective, while for the former convergence is known only under very strong assumptions such as uniqueness of the inner minimum (Franceschi et al., 2018).
3. While less frequently used in practice than bilevel, single-level optimization can be very effective: we use it to achieve new state-of-the-art results on NAS-Bench-201 (Dong & Yang, 2020).
Understanding NAS as single-level optimization (the usual deep learning setting) makes weight-sharing a natural, not surprising, approach. Furthermore, for methods (both single-level and bilevel) that adapt architecture parameters during search, it suggests that we need not worry about rank disorder as long as we can use optimization to find a single feasible point that generalizes well; we explicitly do not need a ranking.
Non-adaptive methods such as RS-WS still do require rank correlation to select good architectures after search, but they are explicitly not changing θ and so have no variant solving (3). The single-level formulation thus reduces search method design to well-studied questions of how to best regularize and optimize ERM. While there are many techniques for regularizing weight-sharing-including partial channels (Xu et al., 2020) and validation Hessian penalization (Zela et al., 2020a) -we focus on the second question of optimization.

3. GEOMETRY-AWARE GRADIENT ALGORITHMS

We seek to minimize the (possibly regularized) empirical risk $f(w, \theta) = \frac{1}{|T|} \sum_{(x,y) \in T} \ell(h_{w,\theta}(x), y)$ over shared weights $w \in \mathbb{R}^d$ and architecture parameters $\theta \in \Theta$. Assuming we have noisy gradients of $f$ w.r.t. $w$ or $\theta$ at any point $(w, \theta) \in \mathbb{R}^d \times \Theta$, i.e. $\tilde\nabla_w f(w, \theta)$ or $\tilde\nabla_\theta f(w, \theta)$ satisfying $\mathbb{E}\,\tilde\nabla_w f(w, \theta) = \nabla_w f(w, \theta)$ or $\mathbb{E}\,\tilde\nabla_\theta f(w, \theta) = \nabla_\theta f(w, \theta)$, respectively, our goal is a point where $f$, or at least its gradient, is small, while taking as few gradients as possible. Our main complication is that architecture parameters lie in a constrained, non-Euclidean domain $\Theta$. Most search spaces $A$ are product sets of categorical decisions (which operation $o \in \mathcal{O}$ to use at edge $e \in E$), so the natural relaxation is a product of $|E|$ $|\mathcal{O}|$-simplices. However, NAS methods often re-parameterize $\Theta$ to be unconstrained using a softmax and then apply SGD or Adam (Kingma & Ba, 2015). Is there a better parameterization-algorithm co-design? We consider a geometry-aware approach that uses mirror descent to design NAS methods with better properties depending on the domain; a key desirable property is to return sparse architectural parameters to reduce the loss incurred by post-search discretization.

3.1. BACKGROUND ON MIRROR DESCENT

Mirror descent has many formulations (Nemirovski & Yudin, 1983; Beck & Teboulle, 2003; Shalev-Shwartz, 2011); the proximal formulation starts by noting that, in the unconstrained case, an SGD update at a point $\theta \in \Theta = \mathbb{R}^k$ given gradient estimate $\tilde\nabla f(\theta)$ and step-size $\eta > 0$ is equivalent to

$$\theta - \eta \tilde\nabla f(\theta) = \arg\min_{u \in \mathbb{R}^k} \eta\, \tilde\nabla f(\theta) \cdot u + \tfrac{1}{2}\|u - \theta\|_2^2 \qquad (4)$$

Here the first term aligns the output with the gradient while the second (proximal) term regularizes for closeness to the previous point as measured by the Euclidean distance. While the SGD update has been found to work well for unconstrained high-dimensional optimization, e.g. of deep nets, this choice of proximal regularization may be sub-optimal over a constrained space with sparse solutions. The canonical such setting is optimization over the unit simplex, i.e. $\Theta = \{\theta \in [0,1]^k : \|\theta\|_1 = 1\}$. Replacing the $\ell_2$-regularizer in (4) by the relative entropy $u \cdot (\log u - \log \theta)$, i.e. the KL divergence, yields the exponentiated gradient (EG) update ($\odot$ denotes element-wise product):

$$\theta \odot \exp(-\eta \tilde\nabla f(\theta)) \propto \arg\min_{u \in \Theta} \eta\, \tilde\nabla f(\theta) \cdot u + u \cdot (\log u - \log \theta) \qquad (5)$$

Note that the full EG update is obtained by $\ell_1$-normalizing the l.h.s. It is well-known that EG over the $k$-dimensional simplex requires only $O(\log(k)/\varepsilon^2)$ iterations to achieve a function value $\varepsilon$-away from optimal (Beck & Teboulle, 2003, Theorem 5.1), compared to the $O(k/\varepsilon^2)$ guarantee of gradient descent. This nearly dimension-independent iteration complexity is achieved by choosing a regularizer (the KL divergence) well-suited to the underlying geometry (the simplex). More generally, mirror descent is specified by a distance-generating function (DGF) $\phi$ that is strongly-convex w.r.t. some norm. $\phi$ induces a Bregman divergence $D_\phi(u\|v) = \phi(u) - \phi(v) - \nabla\phi(v) \cdot (u - v)$ (Bregman, 1967), a notion of distance on $\Theta$ that acts as a regularizer in the mirror descent update:

$$\arg\min_{u \in \Theta} \eta\, \tilde\nabla f(\theta) \cdot u + D_\phi(u\|\theta) \qquad (6)$$

For example, SGD (4) is recovered by setting $\phi(u) = \frac{1}{2}\|u\|_2^2$, which is strongly-convex w.r.t. the Euclidean norm, while EG (5) is recovered by setting $\phi(u) = u \cdot \log u$, strongly-convex w.r.t. the $\ell_1$-norm.
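A minimal NumPy sketch of the EG update (5) on a single simplex (our own toy example, not library code): the multiplicative step followed by $\ell_1$-normalization keeps the iterate on the simplex and shrinks high-cost coordinates multiplicatively, driving iterates toward sparse corners.

```python
import numpy as np

def eg_step(theta, grad, eta):
    """One exponentiated-gradient step over the unit simplex."""
    tilde = theta * np.exp(-eta * grad)  # multiplicative update
    return tilde / tilde.sum()           # l1-normalize back onto the simplex

# Minimize the linear function f(theta) = c . theta over the simplex;
# the optimum puts all mass on the coordinate with the smallest cost.
c = np.array([0.9, 0.1, 0.5])
theta = np.full(3, 1.0 / 3.0)            # uniform initialization
for _ in range(200):
    theta = eg_step(theta, c, eta=0.5)   # the gradient of c . theta is c

print(theta.round(3))  # mass concentrates on index 1
```

For a linear objective the iterate is proportional to $\exp(-\eta t\, c)$ entry-wise, which makes the convergence to a sparse corner explicit.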

3.2. BLOCK-STOCHASTIC MIRROR DESCENT

In the previous section we saw how mirror descent can perform better over certain geometries such as the simplex. However, in weight-sharing we are interested in optimizing over a hybrid geometry containing both the shared weights in an unconstrained Euclidean space and the architecture parameters in a non-Euclidean domain. Thus we focus on optimization over two blocks: shared weights $w \in \mathbb{R}^d$ and architecture parameters $\theta \in \Theta$, the latter associated with a DGF $\phi$ that is strongly-convex w.r.t. some norm $\|\cdot\|$. In NAS a common approach is to perform alternating gradient steps on each domain; for example, both ENAS (Pham et al., 2018) and first-order DARTS (Liu et al., 2019) alternate between SGD on the shared weights and Adam on the architecture parameters. This approach is encapsulated in the block-stochastic scheme described in Algorithm 1, which at each step chooses one block at random to update using mirror descent (recall that SGD is a special case) and after $T$ steps returns a random iterate. Algorithm 1 generalizes the single-level variant of both ENAS and first-order DARTS if SGD is used to update $\theta$ instead of Adam, with some mild caveats: in practice blocks are picked cyclically and the algorithm returns the last iterate, not a random one.

Algorithm 1: Block-stochastic mirror descent optimization of a function $f : \mathbb{R}^d \times \Theta \to \mathbb{R}$.
Input: initialization $(w^{(1)}, \theta^{(1)}) \in \mathbb{R}^d \times \Theta$, strongly-convex DGF $\phi : \Theta \to \mathbb{R}$, number of iterations $T \ge 1$, step-size $\eta > 0$
for iteration $t = 1, \dots, T$ do
    sample $b_t \sim \mathrm{Unif}\{w, \theta\}$  // randomly select update block
    if block $b_t = w$ then
        $w^{(t+1)} \leftarrow w^{(t)} - \eta \tilde\nabla_w f(w^{(t)}, \theta^{(t)})$  // SGD update to shared weights
        $\theta^{(t+1)} \leftarrow \theta^{(t)}$  // no update to architecture params
    else
        $w^{(t+1)} \leftarrow w^{(t)}$  // no update to shared weights
        $\theta^{(t+1)} \leftarrow \arg\min_{u \in \Theta} \eta \tilde\nabla_\theta f(w^{(t)}, \theta^{(t)}) \cdot u + D_\phi(u\|\theta^{(t)})$  // mirror descent update to architecture params
Output: $(w^{(r)}, \theta^{(r)})$ for $r \sim \mathrm{Unif}\{1, \dots, T\}$  // return random iterate

To analyze the convergence of Algorithm 1 we first state some regularity assumptions on the function:

Assumption 1. Suppose $\phi$ is strongly-convex w.r.t. some norm $\|\cdot\|$ on a convex set $\Theta$ and the objective function $f : \mathbb{R}^d \times \Theta \to [0, \infty)$ satisfies the following:
1. $\gamma$-relative weak convexity: $f(w, \theta) + \gamma\phi(\theta)$ is convex on $\mathbb{R}^d \times \Theta$ for some $\gamma > 0$.
2. Gradient bound: $\mathbb{E}\|\tilde\nabla_w f(w, \theta)\|_2^2 \le G_w^2$ and $\mathbb{E}\|\tilde\nabla_\theta f(w, \theta)\|_*^2 \le G_\theta^2$ for some $G_w, G_\theta \ge 0$.

The second assumption is a standard bound on the gradient norm while the first is a generalization of smoothness that allows all smooth and some non-smooth non-convex functions (Zhang & He, 2018). Our aim is to show (first-order) $\varepsilon$-stationary-point convergence of Algorithm 1, a standard metric indicating that it has reached a point with no feasible descent direction, up to error $\varepsilon$; for example, in the unconstrained Euclidean case an $\varepsilon$-stationary point is simply one where the gradient has squared norm at most $\varepsilon$. The number of steps required to obtain such a point thus measures how fast a first-order method terminates. Stationarity is also significant as a necessary condition for optimality. In our case $\Theta$ may be constrained, so the gradient may never be small, necessitating a measure other than the gradient norm.
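The loop of Algorithm 1 can be sketched on a toy smooth objective (our own illustrative setup, not the NAS supernet): SGD on an unconstrained block w and an entropic mirror descent (EG) step on a simplex-constrained block theta, with the block chosen uniformly at random each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(w, theta):
    # Toy objective: quadratic in w, linear in theta over the simplex.
    return 0.5 * np.dot(w, w) + np.dot(np.array([1.0, 0.2, 0.6]), theta)

def grad_w(w, theta):
    return w

def grad_theta(w, theta):
    return np.array([1.0, 0.2, 0.6])

w = np.array([2.0, -1.0])
theta = np.full(3, 1.0 / 3.0)
eta = 0.1
for t in range(500):
    if rng.random() < 0.5:                 # block b_t = w: SGD step
        w = w - eta * grad_w(w, theta)
    else:                                  # block b_t = theta: EG step
        tilde = theta * np.exp(-eta * grad_theta(w, theta))
        theta = tilde / tilde.sum()

print(f(w, theta))  # approaches the optimal value 0.2 (w -> 0, theta -> e_1)
```

For clarity the sketch returns the last iterate, as practical NAS methods do; the analyzed algorithm returns a uniformly random iterate.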
We use Bregman stationarity (Zhang & He, 2018, Equation 2.11), which measures stationarity at a point $(w, \theta)$ using the Bregman divergence between the point and its proximal map $\mathrm{prox}_\lambda(w, \theta) = \arg\min_{u \in \mathbb{R}^d \times \Theta} \lambda f(u) + D_{2,\phi}(u\|(w, \theta))$ for some $\lambda > 0$:

$$\Delta_\lambda(w, \theta) = \frac{D_{2,\phi}((w, \theta)\|\mathrm{prox}_\lambda(w, \theta)) + D_{2,\phi}(\mathrm{prox}_\lambda(w, \theta)\|(w, \theta))}{\lambda^2} \qquad (7)$$

Here $\lambda = \frac{1}{2\gamma}$ and the Bregman divergence $D_{2,\phi}$ is that of the DGF $\frac{1}{2}\|w\|_2^2 + \phi(\theta)$ that encodes the geometry of the joint optimization domain over $w \in \mathbb{R}^d$ and $\theta$; note that the dependence of the stationarity measure on $\gamma$ is standard (Dang & Lan, 2015; Zhang & He, 2018). To understand why reaching a point $(w, \theta)$ with small Bregman stationarity is a reasonable goal, note that the fixed points of the proximal operator $\mathrm{prox}_\lambda$, i.e. those satisfying $(w, \theta) = \mathrm{prox}_\lambda(w, \theta)$, correspond to points where $f$ has no feasible descent direction. Thus measuring how close $(w, \theta)$ is to being a fixed point of $\mathrm{prox}_\lambda$, as is done using the Bregman divergence in (7), is a good measure of how far the point is from being a stationary point of $f$. Finally, note that if $f$ is smooth, $\phi$ is Euclidean, and $\Theta$ is unconstrained (i.e. if we are running SGD over architecture parameters as well) then $\Delta_{\frac{1}{2\gamma}} \le \varepsilon$ implies an $O(\varepsilon)$ bound on the squared gradient norm, recovering the standard definition of $\varepsilon$-stationarity. More intuition on proximal operators can be found in Parikh & Boyd (2013, Section 1.2), while further details on Bregman stationarity and how it relates to other notions of convergence can be found in Zhang & He (2018, Section 2.3). The following result shows that Algorithm 1 needs only polynomially many iterations to find a point $(w, \theta)$ with $\varepsilon$-small Bregman stationarity in expectation:

Theorem 1. Let $F = f(w^{(1)}, \theta^{(1)})$ be the value of $f$ at initialization. Under Assumption 1, if we run Algorithm 1 for $T = 16\gamma F (G_w^2 + G_\theta^2)/\varepsilon^2$ iterations with step-size $\eta = \sqrt{\frac{4F}{\gamma (G_w^2 + G_\theta^2) T}}$ then $\mathbb{E}\,\Delta_{\frac{1}{2\gamma}}(w^{(r)}, \theta^{(r)}) \le \varepsilon$.
Here the expectation is over the randomness of the algorithm and the gradients. The proof in the appendix follows from a single-block analysis (Zhang & He, 2018, Theorem 3.1) and in fact holds for the general case of any number of blocks associated to any set of strongly-convex DGFs. Although there are prior results for the multi-block case (Dang & Lan, 2015), they do not hold for nonsmooth Bregman divergences such as the KL divergence needed for exponentiated gradient.

Figure 1: Sparsity: Evolution over search-phase epochs of the average entropy of the operation weights for GAEA and the approaches it modifies when run on the DARTS search space (left), NAS-Bench-1Shot1 Search Space 1 (middle), and NAS-Bench-201 on CIFAR-10 (right). GAEA reduces entropy much more quickly, allowing it to quickly obtain sparse architecture weights. This leads to both faster convergence to a single architecture and a lower loss when pruning at the end of search.

Thus Algorithm 1 returns an $\varepsilon$-stationary point given $T = O((G_w^2 + G_\theta^2)/\varepsilon^2)$ iterations, where $G_w^2$ bounds the squared $\ell_2$-norm of the shared-weights gradient $\tilde\nabla_w f$ and $G_\theta^2$ bounds the squared magnitude of the architecture gradient $\tilde\nabla_\theta f$, as measured by the dual norm $\|\cdot\|_*$ of $\|\cdot\|$. Only the last term, $G_\theta$, is affected by our choice of DGF $\phi$. The DGF of SGD is strongly-convex w.r.t. the $\ell_2$-norm, which is its own dual, so $G_w^2$ is defined via $\ell_2$. However, for EG the DGF $\phi(u) = u \cdot \log u$ is strongly-convex w.r.t. the $\ell_1$-norm, whose dual is $\ell_\infty$. Since the $\ell_2$-norm of a $k$-dimensional vector can be $\sqrt{k}$ times its $\ell_\infty$-norm, picking this DGF can lead to a better bound on $G_\theta$ and thus on the number of iterations.

3.3. GAEA: A GEOMETRY-AWARE EXPONENTIATED ALGORITHM

Equipped with these single-level guarantees, we turn to designing methods that can in principle be applied to both the single-level and bilevel objectives, seeking parameterizations and algorithms that converge quickly and encourage favorable properties; in particular, we focus on returning architecture parameters that are sparse, to reduce the loss due to post-search discretization. EG is often considered to converge quickly to sparse solutions over the simplex (Bradley & Bagnell, 2008; Bubeck, 2019), which makes it a natural choice for the architecture update. We thus propose GAEA, a Geometry-Aware Exponentiated Algorithm in which the operation weights on each edge are constrained to the simplex and trained using EG; as in DARTS, the shared weights $w$ are trained using SGD. GAEA can be used as a simple, principled modification to the many NAS methods that treat architecture parameters $\theta \in \Theta = \mathbb{R}^{|E| \times |\mathcal{O}|}$ as real-valued "logits" to be passed through a softmax to obtain mixture weights or probabilities for simplices over the operations $\mathcal{O}$. Such methods include DARTS, PC-DARTS (Xu et al., 2020), and GDAS (Dong & Yang, 2019). To apply GAEA, first re-parameterize $\Theta$ to be the product set of $|E|$ simplices, each associated to an edge $(i,j) \in E$; thus $\theta_{i,j,o}$ corresponds directly to the weight or probability of operation $o \in \mathcal{O}$ for edge $(i,j)$, not a logit. Then, given a stochastic gradient $\tilde\nabla_\theta f(w^{(t)}, \theta^{(t)})$ and step-size $\eta > 0$, replace the architecture update by EG:

$$\tilde\theta^{(t+1)} \leftarrow \theta^{(t)} \odot \exp\big(-\eta \tilde\nabla_\theta f(w^{(t)}, \theta^{(t)})\big) \quad \text{(multiplicative update)}$$
$$\theta^{(t+1)}_{i,j,o} \leftarrow \frac{\tilde\theta^{(t+1)}_{i,j,o}}{\sum_{o' \in \mathcal{O}} \tilde\theta^{(t+1)}_{i,j,o'}} \quad \forall\, o \in \mathcal{O},\ \forall\, (i,j) \in E \quad \text{(simplex projection)} \qquad (8)$$

These two simple modifications, re-parameterization and exponentiation, suffice to obtain state-of-the-art results on several NAS benchmarks, as shown in Section 4. Note that to obtain a bilevel algorithm we simply replace the gradient w.r.t. $\theta$ of the training loss with that of the validation loss.
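A sketch of the GAEA architecture update (8) in NumPy (illustrative only, not the released implementation): theta is stored as an (|E|, |O|) array whose rows are simplices, and each step is a multiplicative update followed by a per-edge (row-wise) normalization.

```python
import numpy as np

def gaea_arch_step(theta, grad, eta):
    """GAEA update (8): EG over a product of |E| simplices.

    theta, grad: (|E|, |O|) arrays; each row of theta lies on a simplex.
    """
    tilde = theta * np.exp(-eta * grad)              # multiplicative update
    return tilde / tilde.sum(axis=1, keepdims=True)  # per-edge simplex projection

# Two edges, three operations, uniform initialization; the gradient
# values below are made up for illustration.
theta = np.full((2, 3), 1.0 / 3.0)
grad = np.array([[0.8, 0.1, 0.4],
                 [0.3, 0.9, 0.2]])
for _ in range(100):
    theta = gaea_arch_step(theta, grad, eta=0.5)

print(theta.round(3))  # each row concentrates on its lowest-gradient op
```

In a real search the gradient would come from backpropagation through the supernet, and the shared weights would be updated by SGD between architecture steps.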
GAEA is equivalent to Algorithm 1 with $\phi(\theta) = \sum_{(i,j) \in E} \sum_{o \in \mathcal{O}} \theta_{i,j,o} \log \theta_{i,j,o}$, which is strongly-convex w.r.t. $\|\cdot\|_1/\sqrt{|E|}$ over the product of $|E|$ $|\mathcal{O}|$-simplices. The dual norm is $\sqrt{|E|}\,\|\cdot\|_\infty$, so if $G_w$ bounds the shared-weights gradient and we have an entry-wise bound on the architecture gradient then GAEA reaches $\varepsilon$-stationarity in $O((G_w^2 + |E|)/\varepsilon^2)$ iterations. This can be up to a factor-$|\mathcal{O}|$ improvement over SGD, whether over the simplex or the logit space. In addition, GAEA encourages sparsity in the architecture weights by using a multiplicative update over simplices rather than an additive update over $\mathbb{R}^{|E| \times |\mathcal{O}|}$. Obtaining sparse architecture parameters is critical for good performance, both for the mixture relaxation, where sparsity alleviates the effect of discretization on the validation loss, and for the stochastic relaxation, where it reduces noise when sampling architectures.

Table 1: DARTS: Comparison with SOTA NAS methods on the DARTS search space, plus three results on different search spaces with a similar number of parameters reported at the top for comparison. All evaluations and reported performances of models found on the DARTS search space use similar training routines; this includes auxiliary towers and cutout but no other modifications, e.g. label smoothing (Müller et al., 2019), AutoAugment (Cubuk et al., 2019), Swish (Ramachandran et al., 2017), Squeeze & Excite (Hu et al., 2018), etc. The specific training procedure we use is that of PC-DARTS, which differs slightly from the DARTS routine by a small change to the drop-path probability; P-DARTS tunes both this and the batch-size. Our results are averaged over 10 random seeds. Search cost is hardware-dependent; we used Tesla V100 GPUs. For more details see Tables 4 & 5.

4. EMPIRICAL RESULTS USING GAEA

We evaluate GAEA on three different computer vision benchmarks: the large and heavily studied search space from DARTS (Liu et al., 2019) and two smaller oracle evaluation benchmarks, NAS-Bench-1Shot1 (Zela et al., 2020a) , and NAS-Bench-201 (Dong & Yang, 2020) . NAS-Bench-1Shot1 differs from the others by applying operations per node instead of per edge, while NAS-Bench-201 differs by not requiring edge-pruning. Since GAEA can modify a variety of methods, e.g. DARTS, PC-DARTS (Xu et al., 2020) , and GDAS (Dong & Yang, 2019) , on each benchmark we start by evaluating the GAEA variant of the current best method on that benchmark. We show that despite the diversity of search spaces, GAEA improves upon this state-of-the-art across all three. Note that we use the same step-size for GAEA variants of DARTS/PC-DARTS and do not require weight-decay on architecture parameters. We defer experimental details and hyperparameter settings to the appendix and release all code, hyperparameters, and random seeds for reproducibility.

4.1. CONVERGENCE AND SPARSITY OF GAEA

We first examine the impact of GAEA on convergence and sparsity. Figure 1 shows the entropy of the operation weights averaged across nodes for a GAEA-variant and its base method across the three benchmarks, demonstrating that it decreases much faster for GAEA-modified approaches. This validates our expectation that GAEA encourages sparse architecture parameters, which should alleviate the mismatch between the continuously relaxed architecture parameters and the discrete architecture returned. Indeed, we find that post-search discretization on the DARTS search space causes the validation accuracy of the PC-DARTS supernet to drop from 72.17% to 15.27%, while for GAEA PC-DARTS the drop is only 75.07% to 33.23%; note that this is shared-weights accuracy, obtained without retraining the final network. The numbers demonstrate that GAEA both (1) achieves better supernet optimization of the weight-sharing objective and (2) suffers less due to discretization.

4.2. GAEA ON THE DARTS SEARCH SPACE

Here we evaluate GAEA on the task of designing CNN cells for CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) by using it to modify PC-DARTS (Xu et al., 2020), the current state-of-the-art method. We follow the same three-stage process used by both DARTS and RS-WS for search and evaluation. Table 1 displays results on both datasets and demonstrates that GAEA's parameterization and optimization scheme improves upon PC-DARTS. In fact, GAEA PC-DARTS outperforms all search methods except ProxylessNAS, which uses 1.5 times as many parameters on a different search space. Thus we improve the state-of-the-art on the DARTS search space. To meet a higher bar for reproducibility on CIFAR-10, in Appendix C we report "broad reproducibility" (Li & Talwalkar, 2019) by repeating our pipeline with new seeds. While GAEA PC-DARTS consistently finds good networks when selecting the best of four independent trials, multiple trials are required due to sensitivity to initialization, as is true for many approaches (Liu et al., 2019; Xu et al., 2020). On ImageNet, we follow Xu et al. (2020) by using subsamples containing 10% and 2.5% of the training images from ILSVRC-2012 (Russakovsky et al., 2015) as training and validation sets, respectively. We fix the architecture parameters for the first 35 epochs, then run GAEA PC-DARTS with step-size 0.1. All other hyperparameters match those of Xu et al. (2020). Table 1 shows the final performance of both the architecture found by GAEA PC-DARTS on CIFAR-10 and the one found directly on ImageNet when trained from scratch for 250 epochs using the same settings as Xu et al. (2020). GAEA PC-DARTS achieves a top-1 test error of 24.0%, which is state-of-the-art performance in the mobile setting when excluding additional training modifications, e.g. those listed in the caption of Table 1.
Additionally, the architecture found by GAEA PC-DARTS for CIFAR-10 and transferred achieves a test error of 24.2%, comparable to the 24.2% error of the one found by PC-DARTS directly on ImageNet. Top architectures found by GAEA PC-DARTS are depicted in Figure 3 in the appendix.

4.3. GAEA ON NAS-BENCH-1SHOT1

NAS-Bench-1Shot1 (Zela et al., 2020a) is a subset of NAS-Bench-101 (Ying et al., 2019) that allows benchmarking weight-sharing methods on three search spaces over CIFAR-10 that differ in the number of nodes considered and the number of input edges per node. Of the weight-sharing methods benchmarked by Zela et al. (2020a), we found that PC-DARTS achieves the best performance on 2 of the 3 search spaces, so we again evaluate GAEA PC-DARTS here. Figure 2 shows that GAEA PC-DARTS consistently finds better architectures on average than PC-DARTS and thus exceeds the performance of the best method from Zela et al. (2020a) on 2 of the 3 search spaces. We hypothesize that the benefits of GAEA are limited here due to the near-saturation of NAS methods on this benchmark: existing methods come within 1% test error of the top network in each space, while the test errors of those top networks already vary by 0.37%, 0.23%, and 0.19%, respectively, across evaluations with different initializations.

4.4. GAEA ON NAS-BENCH-201

NAS-Bench-201 (Dong & Yang, 2020) provides oracle evaluations of its search space on CIFAR-10, CIFAR-100, and ImageNet-16-120. Table 2 reports a subset of these results alongside evaluations of our implementation of several existing and GAEA-modified NAS methods in both the transfer and direct setting. Both the results from Dong & Yang (2020) and our reproductions show that GDAS is the best previous weight-sharing method; we evaluate GAEA GDAS and find that it achieves better results on CIFAR-100 and similar results on the other two datasets. Since we are interested in improving upon not only GAEA GDAS but also traditional hyperparameter optimization methods, we also investigate the performance of GAEA applied to first-order DARTS. We evaluate GAEA DARTS with both single-level (ERM) and bilevel optimization; recall that in the latter case we optimize architecture parameters w.r.t. the validation loss and the shared weights w.r.t. the training loss, whereas in ERM there is no data split.
GAEA DARTS (ERM) achieves state-of-the-art performance on all three datasets in both the transfer and direct settings, exceeding the test accuracy of both weight-sharing and traditional hyperparameter tuning methods by a wide margin. GAEA DARTS (bilevel) performs worse but still exceeds all other methods on CIFAR-100 and ImageNet-16-120 in the direct search setting. These results also confirm the relevance of studying the single-level case to understand NAS; notably, the DARTS (ERM) baseline also improves substantially upon the DARTS (bilevel) baseline.

5. CONCLUSION

In this paper we take an optimization-based view of NAS, arguing that the design of good NAS algorithms is largely a matter of successfully optimizing and regularizing the supernet. In support of this, we develop GAEA, a simple modification of gradient-based NAS that attains state-of-the-art performance on several computer vision benchmarks while enjoying favorable speed and sparsity properties. We believe that obtaining high-performance NAS algorithms for a wide variety of applications will continue to require a similar co-design of search space parameterizations and optimization methods, and that our geometry-aware framework can help accelerate this process. In particular, most modern NAS algorithms search over products of categorical decision spaces, to which our approach is directly applicable. More generally, as the field moves towards more ambitious search spaces, e.g. full-network topologies or generalizations of operations such as convolution or attention, these developments may result in new architecture domains for which our work can inform the design of appropriate, geometry-aware optimization methods.

A BACKGROUND ON NAS WITH WEIGHT-SHARING

Here we review the NAS setup motivating our work. Weight-sharing methods almost exclusively use micro cell-based search spaces for their tractability and additional structure (Pham et al., 2018; Liu et al., 2019). These search spaces can be represented as directed acyclic graphs (DAGs) with a set of ordered nodes $N$ and edges $E$. Each node $x^{(i)} \in N$ is a feature representation and each edge $(i,j) \in E$ is associated with an operation $o^{(i,j)}$ applied to the feature of node $j$ and passed to node $i$, where it is aggregated with other inputs to form $x^{(i)}$, with the restriction that a given node can only receive edges from prior nodes as input. Hence, the feature at node $i$ is $x^{(i)} = \sum_{j<i} o^{(i,j)}(x^{(j)})$. Search spaces are specified by the number of nodes, the number of edges per node, and the set of operations $O$ that can be applied at each edge. Thus for NAS, $A \subset \{0,1\}^{|E| \times |O|}$ is the set of all valid architectures, encoded by edge and operation decisions. Treating both the shared weights $w \in \mathbb{R}^d$ and architecture decisions $a \in A$ as parameters, weight-sharing methods train a single network subsuming all possible functions within the search space.

Gradient-based weight-sharing methods apply continuous relaxations to the architecture space $A$ in order to compute gradients in a continuous space $\Theta$. Methods like DARTS (Liu et al., 2019) and its variants (Chen et al., 2019; Laube & Zell, 2019; Hundt et al., 2019; Liang et al., 2019; Noy et al., 2019; Nayman et al., 2019) relax the search space by considering a mixture of operations per edge. For example, we will consider a relaxation where the architecture space $A = \{0,1\}^{|E| \times |O|}$ is relaxed into $\Theta = [0,1]^{|E| \times |O|}$ with the constraint that $\sum_{o \in O} \theta_{i,j,o} = 1$, i.e. the operation weights on each edge sum to 1. The feature at node $i$ is then $x^{(i)} = \sum_{j<i} \sum_{o \in O} \theta_{i,j,o}\, o(x^{(j)})$. To get a valid architecture $a \in A$ from a mixture $\theta$, rounding and pruning are typically employed after the search phase.

An alternative, stochastic approach, such as that used by GDAS (Dong & Yang, 2019), instead uses $\Theta$-parameterized distributions $p_\theta$ over $A$ to sample architectures (Pham et al., 2018; Xie et al., 2019; Akimoto et al., 2019; Cai et al., 2019); unbiased gradients w.r.t. $\theta \in \Theta$ can be computed using Monte Carlo sampling. The goal of all these relaxations is to use simple gradient-based approaches to approximately optimize (1) over $a \in A$ by optimizing (2) over $\theta \in \Theta$ instead. However, both the relaxation and the optimizer critically affect the convergence speed and solution quality. We next present a principled approach for understanding both mixture and stochastic methods.
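As a concrete illustration of the mixture relaxation above, the following is a minimal numpy sketch (a toy, not the DARTS implementation; the three stand-in operations and the softmax parameterization of the simplex constraint are our simplifications):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy operation set O; real search spaces use convolutions, pooling, etc.
OPS = [
    lambda x: x,        # identity / skip connect
    lambda x: 0.0 * x,  # zero
    lambda x: 2.0 * x,  # stand-in for a parameterized operation
]

def mixed_node(prev_feats, theta):
    """Relaxed feature x^(i) = sum_{j<i} sum_{o in O} theta[j,o] * o(x^(j)).

    prev_feats: list of features x^(j) for all j < i.
    theta: (len(prev_feats), len(OPS)) array of architecture logits;
           each row is softmaxed so the operation weights on each
           edge sum to 1, as in the simplex-constrained relaxation.
    """
    weights = softmax(theta)  # one simplex per incoming edge
    return sum(
        weights[j, o] * op(prev_feats[j])
        for j in range(len(prev_feats))
        for o, op in enumerate(OPS)
    )

x0, x1 = np.ones(4), 2 * np.ones(4)
theta = np.zeros((2, 3))  # zero logits -> uniform mixture over the 3 ops
x2 = mixed_node([x0, x1], theta)  # equals x0 + x1 here
```

Rounding the trained `theta` to the largest-weight operation per edge is the pruning step that recovers a discrete architecture $a \in A$.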

B OPTIMIZATION

This section contains proofs and generalizations of the non-convex optimization results in Section 3. Throughout this section, $V$ denotes a finite-dimensional real vector space with Euclidean inner product $\langle \cdot, \cdot \rangle$, $\mathbb{R}_+$ denotes the set of nonnegative real numbers, and $\bar{\mathbb{R}}$ denotes the set of extended real numbers $\mathbb{R} \cup \{\pm\infty\}$.

B.1 PRELIMINARIES

Definition 1. Consider a closed and convex subset $\mathcal{X} \subset V$. For any $\alpha > 0$ and norm $\|\cdot\|: \mathcal{X} \to \mathbb{R}_+$, an everywhere-subdifferentiable function $f: \mathcal{X} \to \mathbb{R}$ is called $\alpha$-strongly-convex w.r.t. $\|\cdot\|$ if $\forall\, x, y \in \mathcal{X}$ we have
$$f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\alpha}{2}\|y - x\|^2.$$

Definition 2. Consider a closed and convex subset $\mathcal{X} \subset V$. For any $\beta > 0$ and norm $\|\cdot\|: \mathcal{X} \to \mathbb{R}_+$, a continuously-differentiable function $f: \mathcal{X} \to \mathbb{R}$ is called $\beta$-strongly-smooth w.r.t. $\|\cdot\|$ if $\forall\, x, y \in \mathcal{X}$ we have
$$f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \frac{\beta}{2}\|y - x\|^2.$$

Definition 3. Let $\mathcal{X}$ be a closed and convex subset of $V$. The Bregman divergence induced by a strictly convex, continuously-differentiable distance-generating function (DGF) $\phi: \mathcal{X} \to \mathbb{R}$ is
$$D_\phi(x||y) = \phi(x) - \phi(y) - \langle \nabla\phi(y), x - y \rangle \quad \forall\, x, y \in \mathcal{X}.$$
By definition, the Bregman divergence satisfies the following properties:
1. $D_\phi(x||y) \ge 0\ \forall\, x, y \in \mathcal{X}$, and $D_\phi(x||y) = 0 \iff x = y$.
2. If $\phi$ is $\alpha$-strongly-convex w.r.t. a norm $\|\cdot\|$ then so is $D_\phi(\cdot||y)\ \forall\, y \in \mathcal{X}$. Furthermore, $D_\phi(x||y) \ge \frac{\alpha}{2}\|x - y\|^2\ \forall\, x, y \in \mathcal{X}$.
3. If $\phi$ is $\beta$-strongly-smooth w.r.t. a norm $\|\cdot\|$ then so is $D_\phi(\cdot||y)\ \forall\, y \in \mathcal{X}$. Furthermore, $D_\phi(x||y) \le \frac{\beta}{2}\|x - y\|^2\ \forall\, x, y \in \mathcal{X}$.

Definition 4 (Zhang & He, 2018, Definition 2.1). Consider a closed and convex subset $\mathcal{X} \subset V$. For any $\gamma > 0$ and DGF $\phi: \mathcal{X} \to \mathbb{R}$, an everywhere-subdifferentiable function $f: \mathcal{X} \to \mathbb{R}$ is called $\gamma$-relatively-weakly-convex ($\gamma$-RWC) w.r.t. $\phi$ if $f(\cdot) + \gamma\phi(\cdot)$ is convex on $\mathcal{X}$.

Definition 6 (Zhang & He, 2018, Equation 2.11). Consider a closed and convex subset $\mathcal{X} \subset V$. For any $\lambda > 0$, function $f: \mathcal{X} \to \mathbb{R}$, and DGF $\phi: \mathcal{X} \to \mathbb{R}$, the Bregman stationarity of $f$ at any point $x \in \mathcal{X}$ is
$$\Delta_\lambda(x) = \frac{D_\phi(x||\operatorname{prox}_\lambda(x)) + D_\phi(\operatorname{prox}_\lambda(x)||x)}{\lambda^2},$$
where $\operatorname{prox}_\lambda$ is the Bregman proximal operator of $f$ (Definition 5).

B.2 RESULTS

Throughout this subsection let $V = \times_{i=1}^b V_i$ be a product space of $b$ finite-dimensional real vector spaces $V_i$, each with an associated norm $\|\cdot\|_i: V_i \to \mathbb{R}_+$, and let $\mathcal{X} = \times_{i=1}^b \mathcal{X}_i$ be a product set of $b$ subsets $\mathcal{X}_i \subset V_i$, each with an associated DGF $\phi_i: \mathcal{X}_i \to \mathbb{R}$ that is 1-strongly-convex w.r.t. $\|\cdot\|_i$.
For each $i \in [b]$ we will use $\|\cdot\|_{i,*}$ to denote the dual norm of $\|\cdot\|_i$, and for any element $x \in \mathcal{X}$ we will use $x_i$ to denote its component in block $i$ and $x_{-i}$ to denote the components across all blocks other than $i$. Define the functions $\|\cdot\|: V \to \mathbb{R}_+$ and $\|\cdot\|_*: V \to \mathbb{R}_+$ for any $x \in V$ by $\|x\|^2 = \sum_{i=1}^b \|x_i\|_i^2$ and $\|x\|_*^2 = \sum_{i=1}^b \|x_i\|_{i,*}^2$, respectively, and the function $\phi: \mathcal{X} \to \mathbb{R}$ for any $x \in \mathcal{X}$ by $\phi(x) = \sum_{i=1}^b \phi_i(x_i)$. Finally, for any $n \in \mathbb{N}$ we will use $[n]$ to denote the set $\{1, \dots, n\}$.

Setting 1. For some fixed constants $\gamma_i, L_i > 0$ for each $i \in [b]$ we have the following:
1. $f: \mathcal{X} \to \mathbb{R}$ is everywhere-subdifferentiable with minimum $f^* > -\infty$, and for all $x \in \mathcal{X}$ and each $i \in [b]$ the restriction $f(\cdot, x_{-i})$ is $\gamma_i$-RWC w.r.t. $\phi_i$.
2. For each $i \in [b]$ there exists a stochastic oracle $G_i$ that for input $x \in \mathcal{X}$ outputs a random vector $G_i(x, \xi)$ s.t. $\mathbb{E}_\xi G_i(x, \xi) \in \partial_i f(x)$, where $\partial_i f(x)$ is the subdifferential set of the restriction $f(\cdot, x_{-i})$ at $x_i$. Moreover, $\mathbb{E}_\xi \|G_i(x, \xi)\|_{i,*}^2 \le L_i^2$.

Define $\gamma = \max_{i \in [b]} \gamma_i$ and $L^2 = \sum_{i=1}^b L_i^2$.

Claim 1. $\|\cdot\|$ is a norm on $V$.

Proof. Positivity and homogeneity are trivial. For the triangle inequality, note that for any $\lambda \in [0,1]$ and any $x, y \in \mathcal{X}$ we have
$$\|\lambda x + (1-\lambda)y\| = \sqrt{\sum_{i=1}^b \|\lambda x_i + (1-\lambda)y_i\|_i^2} \le \sqrt{\sum_{i=1}^b \left(\lambda\|x_i\|_i + (1-\lambda)\|y_i\|_i\right)^2} \le \lambda\sqrt{\sum_{i=1}^b \|x_i\|_i^2} + (1-\lambda)\sqrt{\sum_{i=1}^b \|y_i\|_i^2} = \lambda\|x\| + (1-\lambda)\|y\|,$$
where the first inequality follows by convexity of the norms $\|\cdot\|_i\ \forall\, i \in [b]$ and the fact that the Euclidean norm on $\mathbb{R}^b$ is nondecreasing in each (nonnegative) argument, while the second inequality follows by convexity of the Euclidean norm on $\mathbb{R}^b$. Setting $\lambda = \frac{1}{2}$ and multiplying both sides by 2 yields the triangle inequality.

Algorithm 2: Block-stochastic mirror descent over $\mathcal{X} = \times_{i=1}^b \mathcal{X}_i$ given associated DGFs $\phi_i: \mathcal{X}_i \to \mathbb{R}$.
Input: initialization $x^{(1)} \in \mathcal{X}$, number of steps $T \ge 1$, step-size sequence $\{\eta_t\}_{t=1}^T$
for iteration $t \in [T]$ do
    sample $i \sim \mathrm{Unif}[b]$
    set $x^{(t+1)}_{-i} = x^{(t)}_{-i}$
    get $g = G_i(x^{(t)}, \xi_t)$
    set $x^{(t+1)}_i = \arg\min_{u \in \mathcal{X}_i} \eta_t \langle g, u \rangle + D_{\phi_i}(u||x^{(t)}_i)$
Output: $\hat{x} = x^{(t)}$ w.p. $\eta_t / \sum_{s=1}^T \eta_s$.

Claim 2. $\frac{1}{2}\|\cdot\|_*^2$ is the convex conjugate of $\frac{1}{2}\|\cdot\|^2$.

Proof. Consider any $u \in V$. To upper-bound the convex conjugate note that
$$\sup_{x \in V}\, \langle u, x \rangle - \frac{\|x\|^2}{2} = \sup_{x \in V} \sum_{i=1}^b \langle u_i, x_i \rangle - \frac{\|x_i\|_i^2}{2} \le \sup_{x \in V} \sum_{i=1}^b \|u_i\|_{i,*}\|x_i\|_i - \frac{\|x_i\|_i^2}{2} = \frac{1}{2}\sum_{i=1}^b \|u_i\|_{i,*}^2 = \frac{\|u\|_*^2}{2},$$
where the inequality follows by definition of a dual norm and the subsequent equality by maximizing each term w.r.t. $\|x_i\|_i$. For the lower bound, pick $x \in V$ s.t. $\langle u_i, x_i \rangle = \|u_i\|_{i,*}\|x_i\|_i$ and $\|x_i\|_i = \|u_i\|_{i,*}\ \forall\, i \in [b]$, which must exist by the definition of a dual norm. Then
$$\langle u, x \rangle - \frac{\|x\|^2}{2} = \sum_{i=1}^b \langle u_i, x_i \rangle - \frac{\|x_i\|_i^2}{2} = \frac{1}{2}\sum_{i=1}^b \|u_i\|_{i,*}^2 = \frac{\|u\|_*^2}{2},$$
so $\sup_{x \in V}\, \langle u, x \rangle - \frac{1}{2}\|x\|^2 \ge \frac{1}{2}\|u\|_*^2$, completing the proof.

Theorem 2. Let $\hat{x}$ be the output of Algorithm 2 after $T$ iterations with non-increasing step-size sequence $\{\eta_t\}_{t=1}^T$. Then under Setting 1, for any $\bar\gamma > \gamma$ we have
$$\mathbb{E}\Delta_{\frac{1}{\bar\gamma}}(\hat{x}) \le \frac{\bar\gamma b}{\bar\gamma - \gamma} \cdot \frac{\min_{u \in \mathcal{X}}\left(f(u) + \bar\gamma D_\phi(u||x^{(1)})\right) - f^* + \frac{\bar\gamma L^2}{2b}\sum_{t=1}^T \eta_t^2}{\sum_{t=1}^T \eta_t},$$
where the expectation is w.r.t. $\xi_t$ and the randomness of the algorithm.

Proof. Define transforms $U_i$, $i \in [b]$, s.t. $U_i^T x = x_i$ and $x = \sum_{i=1}^b U_i x_i\ \forall\, x \in \mathcal{X}$. Let $G$ be a stochastic oracle that for input $x \in \mathcal{X}$ outputs $G(x, i, \xi) = b\, U_i G_i(x, \xi)$. This implies
$$\mathbb{E}_{i,\xi}\, G(x, i, \xi) = \frac{1}{b}\sum_{i=1}^b b\, U_i\, \mathbb{E}_\xi G_i(x, \xi) \in \sum_{i=1}^b U_i \partial_i f(x) = \partial f(x)$$
and
$$\mathbb{E}_{i,\xi}\, \|G(x, i, \xi)\|_*^2 = \frac{1}{b}\sum_{i=1}^b b^2\, \mathbb{E}_\xi \|U_i G_i(x, \xi)\|_{i,*}^2 \le b\sum_{i=1}^b L_i^2 = bL^2.$$
Then
$$x^{(t+1)} = x^{(t)} - U_i x^{(t)}_i + U_i \arg\min_{u \in \mathcal{X}_i} \eta_t \langle g, u \rangle + D_{\phi_i}(u||x^{(t)}_i) = \arg\min_{u \in \mathcal{X}} \frac{\eta_t}{b}\langle G(x^{(t)}, i, \xi_t), u \rangle + D_\phi(u||x^{(t)}),$$
where the last equality holds because $D_\phi(u||x^{(t)}) = \sum_{i=1}^b D_{\phi_i}(u_i||x^{(t)}_i)$ decomposes across blocks and the linear term only involves block $i$. Thus Algorithm 2 is equivalent to Zhang & He (2018, Algorithm 1) with stochastic oracle $G(x, i, \xi)$, step-size sequence $\{\eta_t/b\}_{t=1}^T$, and no regularizer. Note that $\phi$ is 1-strongly-convex w.r.t. $\|\cdot\|$ and $f$ is $\gamma$-RWC w.r.t. $\phi$, so in light of Claims 1 and 2 our setup satisfies Assumption 3.1 of Zhang & He (2018). The result then follows from Theorem 3.1 of the same.

Corollary 1. Under Setting 1, let $\hat{x}$ be the output of Algorithm 2 with constant step-size $\eta_t = \sqrt{\frac{2b(f^{(1)} - f^*)}{\gamma L^2 T}}\ \forall\, t \in [T]$, where $f^{(1)} = f(x^{(1)})$. Then we have
$$\mathbb{E}\Delta_{\frac{1}{2\gamma}}(\hat{x}) \le 2L\sqrt{\frac{2b\gamma(f^{(1)} - f^*)}{T}},$$
where the expectation is w.r.t. $\xi_t$ and the randomness of the algorithm. Equivalently, we can reach a point $\hat{x}$ satisfying $\mathbb{E}\Delta_{\frac{1}{2\gamma}}(\hat{x}) \le \varepsilon$ in $\frac{8\gamma b L^2 (f^{(1)} - f^*)}{\varepsilon^2}$ stochastic oracle calls.
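To make Algorithm 2 concrete, the following numpy sketch (an illustrative toy with a deterministic gradient oracle and a made-up linear objective, not our experimental code) runs block-stochastic mirror descent when each block $\mathcal{X}_i$ is a probability simplex with the negative-entropy DGF, in which case the per-block step reduces to an exponentiated-gradient (multiplicative) update:

```python
import numpy as np

rng = np.random.default_rng(0)

def eg_step(x_i, g, eta):
    """Mirror-descent step on the simplex with negative-entropy DGF:
    argmin_u eta*<g,u> + KL(u || x_i) = x_i * exp(-eta*g), renormalized."""
    y = x_i * np.exp(-eta * g)
    return y / y.sum()

def block_mirror_descent(grad_fn, x, eta, T):
    """Algorithm 2 sketch: at each step, sample one block i ~ Unif[b],
    query the gradient oracle for that block, and update only that block;
    all other blocks stay fixed."""
    b = len(x)
    for _ in range(T):
        i = rng.integers(b)
        x[i] = eg_step(x[i], grad_fn(x, i), eta)
    return x

# Toy objective f(x) = sum_i <c_i, x_i>; each block's minimizer is the
# simplex vertex at the smallest entry of c_i, so iterates should sparsify.
c = [np.array([3.0, 1.0, 2.0]), np.array([0.5, 2.0, 1.0])]
x = [np.ones(3) / 3, np.ones(3) / 3]  # uniform initialization per block
x = block_mirror_descent(lambda x, i: c[i], x, eta=0.5, T=200)
```

The multiplicative form of `eg_step` is what drives iterates toward sparse (near-vertex) architecture parameters, the behavior GAEA exploits.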

B.3 A SINGLE-LEVEL ANALYSIS OF ENAS AND DARTS

In this section we apply our analysis to understanding two existing NAS algorithms, ENAS (Pham et al., 2018) and DARTS (Liu et al., 2019). For simplicity, we assume the objectives induced by architectures in the relaxed search space are $\gamma$-smooth, which excludes components such as ReLU. However, such cases can be smoothed via Gaussian convolution, i.e. adding noise to every gradient; thus, given the noisiness of SGD training, we believe the following analysis is still informative (Kleinberg et al., 2018). ENAS continuously relaxes $A$ via a neural controller that samples architectures $a \in A$, so $\Theta = \mathbb{R}^{O(h^2)}$, where $h$ is the number of hidden units. The controller is trained with Monte Carlo gradients. On the other hand, first-order DARTS uses a mixture relaxation similar to the one in Section A but using a softmax instead of constraining parameters to the simplex, so $\Theta = \mathbb{R}^{|E| \times |O|}$ for $E$ the set of learnable edges and $O$ the set of possible operations. If we assume that both algorithms use SGD for the architecture parameters, then to compare them we are interested in their respective architecture-gradient bounds $G_\theta$, which we will refer to as $G_{\mathrm{ENAS}}$ and $G_{\mathrm{DARTS}}$. Before proceeding, we note again that our theory holds only for the single-level objective and when using SGD as the architecture optimizer, whereas both algorithms as published use the bilevel objective and Adam (Kingma & Ba, 2015), respectively. At a very high level, the Monte Carlo gradients used by ENAS are known to be high-variance, so $G_{\mathrm{ENAS}}$ may be much larger than $G_{\mathrm{DARTS}}$, yielding faster convergence for DARTS, which is reflected in practice (Liu et al., 2019). We can also do a simple low-level analysis under the assumption that all architecture gradients are bounded entry-wise, i.e. in $\ell_\infty$-norm, by some constant; then, since the squared $\ell_2$-norm is bounded by the product of the dimension and the squared $\ell_\infty$-norm, we have $G^2_{\mathrm{ENAS}} = O(h^2)$ while $G^2_{\mathrm{DARTS}} = O(|E||O|)$.
Since ENAS uses a hidden state size of $h = 100$ and the DARTS search space has $|E| = 14$ edges and $|O| = 7$ operations, this also points to DARTS needing fewer iterations to converge.
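Plugging the quantities from the text into the dimension bound gives a back-of-the-envelope comparison (taking the entry-wise bound $c = 1$ for illustration; these are order-of-magnitude bounds, not measured gradient norms):

```python
# For a d-dimensional vector with entries bounded by c in l-infinity norm,
# the squared l2-norm satisfies ||g||_2^2 <= d * c^2.
c = 1.0
h = 100                      # ENAS controller hidden-state size
num_edges, num_ops = 14, 7   # DARTS search space: |E| = 14, |O| = 7

G2_enas = (h ** 2) * c ** 2          # dimension of Theta = R^{O(h^2)}
G2_darts = num_edges * num_ops * c ** 2

print(G2_enas, G2_darts)  # 10000.0 98.0
```

The roughly hundredfold gap in these upper bounds is consistent with the claim that DARTS should need fewer iterations to converge than ENAS.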

C.2 NAS-BENCH-1SHOT1

The NAS-Bench-1Shot1 benchmark (Zela et al., 2020a) contains 3 different search spaces that are subsets of the NAS-Bench-101 search space. The search spaces differ in the number of nodes and the number of input edges selected per node; we refer the reader to Zela et al. (2020a) for details about each individual search space. Of the NAS methods evaluated in Zela et al. (2020a), PC-DARTS had the most robust performance across the three search spaces and converged to the best architecture in search spaces 1 and 3. GDAS, a probabilistic gradient NAS method, achieved the best performance on search space 2. Hence, we focused on applying a geometry-aware approach to PC-DARTS. We implemented GAEA PC-DARTS within the repository provided by the authors of Zela et al. (2020a), available at https://github.com/automl/nasbench-1shot1. We used the same hyperparameter settings for training the weight-sharing network as those used by Zela et al. (2020a) for PC-DARTS. As in the previous benchmark, we initialize architecture parameters to allocate equal weight to all options. For the architecture updates, the only hyperparameter for GAEA PC-DARTS is the learning rate for exponentiated gradient, which we set to 0.1. As mentioned in Section 4, the search spaces considered in this benchmark differ in that operations are applied after aggregating all edge inputs to a node instead of per edge input as in the DARTS and NAS-Bench-201 search spaces. This structure inherently limits the size of the weight-sharing network to scale with the number of nodes instead of the number of edges (of which there are $O(|N|^2)$), thereby limiting the degree of overparameterization. Understanding the impact of overparameterization on the performance of weight-sharing NAS methods is a direction for future study.

C.4 COMPARISON TO XNAS

As discussed in Section 1, XNAS is similar to GAEA in that it uses an exponentiated gradient update, but it is motivated from a regret minimization perspective. Nayman et al. (2019) provide regret bounds for XNAS relative to the observed sequence of validation losses; however, this is not equivalent to the regret relative to the best architecture in the search space, which would have generated a different sequence of validation losses. XNAS also differs in its implementation in two ways: (1) a wipeout routine zeroes out operations that cannot recover to exceed the current best operation within the remaining number of iterations, and (2) architecture gradient clipping is applied per data point before aggregating to form the update. These differences are motivated by the regret analysis and meaningfully increase the complexity of the algorithm. Unfortunately, the authors do not provide the code for architecture search in their code release at https://github.com/NivNayman/XNAS. Nonetheless, we implemented XNAS for the NAS-Bench-201 search space to provide a point of comparison to GAEA. Our results, shown in Figure 5, demonstrate that XNAS exhibits much of the same behavior as DARTS in that the operations all converge to skip connections. We hypothesize that this is due to the gradient clipping, which obscures the signal favoring convolutional operations that GAEA preserves.
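To illustrate difference (2), the following is a hedged numpy sketch of generic per-example gradient clipping (XNAS's exact clipping rule is not public, so the threshold and $\ell_2$ clipping norm here are our assumptions):

```python
import numpy as np

def clipped_mean_grad(per_example_grads, max_norm):
    """Clip each data point's gradient to l2-norm <= max_norm, then average.
    This caps the contribution of any single example to the update, which
    can also suppress a consistent but large-magnitude signal."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
        clipped.append(g * scale)
    return np.mean(clipped, axis=0)

# One large-norm and one small-norm per-example gradient (norms 5.0 and 0.5)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
g = clipped_mean_grad(grads, max_norm=1.0)  # -> [0.45, 0.6]
```

Note how the first example's gradient is shrunk by a factor of 5 before averaging; applied to architecture gradients, this kind of shrinkage is one plausible mechanism for the washed-out operation signal we observe.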



Code to obtain these results has been made available in the supplementary material.

XNAS's code release does not implement search and, as with previous efforts (Li et al., 2019, OpenReview), we could not reproduce its results after correspondence with the authors. XNAS's best architecture achieves an average test error of 2.70% under the DARTS evaluation, while GAEA achieves 2.50%. For details see Appendix C.4.

Note that Liu et al. (2019) also train the weight-sharing network with multiple random seeds. However, since PC-DARTS is significantly faster than DARTS, the cost of an additional seed is negligible.




Figure 2: NAS-Bench-1Shot1: Online comparison of PC-DARTS and GAEA PC-DARTS in terms of the test regret at each epoch of shared-weights training, i.e. the difference between the ground truth test error of the proposed architecture and that of the best architecture in the search space. The dark lines indicate the mean of four random trials and the light colored bands ± one standard deviation. The dashed line is the final regret of the best weight-sharing method according to Zela et al. (2020a); note that in our reproduction PC-DARTS performed better than their evaluation on spaces 1 and 3.

4.2. GAEA ON NAS-BENCH-201

NAS-Bench-201 has one search space on three datasets-CIFAR-10, CIFAR-100, and ImageNet-16-120-that includes 4-node architectures with an operation from O = {none, skip connect, 1x1 convolution, 3x3 convolution, 3x3 avg pool} on each edge, yielding 15625 possible networks. Dong & Yang (2020) report results for several algorithms in the transfer NAS setting, where search is conducted on CIFAR-10 and the resulting networks are trained on a possibly different target dataset.
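The search space size follows directly from the encoding: a 4-node cell has one edge per ordered pair of nodes, and each edge receives one of the 5 operations (a quick sanity check of the count quoted above):

```python
from math import comb

num_nodes, num_ops = 4, 5
num_edges = comb(num_nodes, 2)       # one edge per pair of nodes: 6
num_architectures = num_ops ** num_edges

print(num_edges, num_architectures)  # 6 15625
```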

Definition 5 (Zhang & He, 2018, Definition 2.3). Consider a closed and convex subset $\mathcal{X} \subset V$. For any $\lambda > 0$, function $f: \mathcal{X} \to \mathbb{R}$, and DGF $\phi: \mathcal{X} \to \mathbb{R}$, the Bregman proximal operator of $f$ is
$$\operatorname{prox}_\lambda(x) = \arg\min_{u \in \mathcal{X}} \lambda f(u) + D_\phi(u||x).$$
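For intuition, when $\mathcal{X}$ is the probability simplex $\Delta$, $\phi$ is the negative entropy $\phi(u) = \sum_j u_j \log u_j$ (so that $D_\phi$ is the KL divergence), and $f$ is linearized as $f(u) = \langle g, u \rangle$, a standard Lagrangian computation (sketched here for illustration) shows the Bregman proximal operator has the closed form of an exponentiated-gradient update:

```latex
\operatorname{prox}_\lambda(x)
  = \arg\min_{u \in \Delta}\; \lambda \langle g, u \rangle + \mathrm{KL}(u \,\|\, x)
% stationarity of the Lagrangian: \lambda g_j + \log u_j - \log x_j + 1 + \mu = 0
\implies \operatorname{prox}_\lambda(x)_j
  = \frac{x_j \exp(-\lambda g_j)}{\sum_k x_k \exp(-\lambda g_k)}.
```

This is exactly the multiplicative update that makes the entropic geometry attractive for simplex-constrained architecture parameters.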

Figure 3: The best normal and reduction cells found by GAEA PC-DARTS on CIFAR-10 (top) and ImageNet (bottom).

Figure 4: NAS-Bench-201: Learning Curves. Evolution over search phase epochs of the best architecture according to the NAS method. DARTS (first-order) converges to nearly all skip connections, while GAEA is able to suppress overfitting to the mixture relaxation by encouraging sparsity in operation weights.

Figure 5: NAS-Bench-201: XNAS Learning Curves. Evolution over search phase epochs of the best architecture according to 4 runs of XNAS. XNAS exhibits the same behavior as DARTS and converges to nearly all skip connections.

Results are separated into traditional hyperparameter optimization algorithms with search run on CIFAR-10 (top block), weight-sharing methods with search run on CIFAR-10 (middle block), and weight-sharing methods run directly on the dataset used for training (bottom block). The use of transfer NAS follows the evaluations conducted by Dong & Yang (2020); unless otherwise stated all non-GAEA results are from their paper. The best results in the transfer and direct settings on each dataset are bolded.

We show results for networks with a comparable number of parameters. † For fair comparison to other work, we show the search cost for training the shared-weights network with a single initialization. ‡ Search space and backbone architecture (PyramidNet) differ from the DARTS setting. PDARTS results are not reported for multiple seeds. Additionally, PDARTS uses deeper weight-sharing networks during search, a modification that has also been shown to improve the performance of PC-DARTS (Xu et al., 2020), so we expect GAEA PC-DARTS to improve further if modified similarly.

DARTS (CIFAR-10): Comparison with manually designed networks and those found by SOTA NAS methods, mainly on the DARTS search space(Liu et al., 2019). Results grouped by the type of search method: manually designed, full-evaluation NAS, and weight-sharing NAS. All test errors are for models trained with auxiliary towers and cutout (parameter counts exclude auxiliary weights). Test errors we report are averaged over 10 seeds. "-" indicates that the field does not apply while "N/A" indicates unknown. Note that search cost is hardware-dependent; our results used Tesla V100 GPUs.

DARTS (ImageNet): Comparison with manually designed networks and those found by SOTA NAS methods, mainly on the DARTS search space (Liu et al., 2019). Results are grouped by the type of search method: manually designed, full-evaluation NAS, and weight-sharing NAS. All test errors are for models trained with auxiliary towers and cutout but no other modifications, e.g. label smoothing (Müller et al., 2019), AutoAugment (Cubuk et al., 2019), Swish (Ramachandran et al., 2017), squeeze-and-excite modules (Hu et al., 2018), etc. "-" indicates that the field does not apply while "N/A" indicates unknown. Note that search cost is hardware-dependent; our results used Tesla V100 GPUs.

ACKNOWLEDGMENTS

We thank Jeremy Cohen, Jeffrey Li, and Nicholas Roberts for helpful feedback. This work was supported in part by DARPA under cooperative agreements FA875017C0141 and HR0011202000, NSF grants CCF-1535967, CCF-1910321, IIS-1618714, IIS-1705121, IIS-1838017, IIS-1901403, and IIS-2046613, a Microsoft Research Faculty Fellowship, a Bloomberg Data Science research grant, an Amazon Research Award, an AWS Machine Learning Research Award, a Facebook Faculty Research Award, funding from Booz Allen Hamilton Inc., a Block Center Grant, a Carnegie Bosch Institute Research Award, and a Two Sigma Fellowship Award. Any opinions, findings and conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, NSF, or any other funding agency.

C EXPERIMENTAL DETAILS

We provide additional detail on the experimental setup and hyperparameter settings used for each benchmark studied in Section 4. We also provide a more detailed discussion of how XNAS differs from GAEA, along with empirical results for XNAS on the NAS-Bench-201 benchmark.

C.1 DARTS SEARCH SPACE

We consider the same search space as DARTS (Liu et al., 2019), which has become one of the standard search spaces for CNN cell search (Xie et al., 2019; Nayman et al., 2019; Chen et al., 2019; Noy et al., 2019; Liang et al., 2019). Following the evaluation procedure used in Liu et al. (2019) and Xu et al. (2020), our evaluation of GAEA PC-DARTS consists of three stages:

• Stage 1: In the search phase, we run GAEA PC-DARTS with 5 random seeds to reduce variance from different initializations of the shared-weights network.
• Stage 2: In the intermediate evaluation phase, the architectures proposed in stage 1 are trained from scratch for a reduced number of epochs in order to select the most promising candidates.
• Stage 3: In the final evaluation phase, the selected architectures are trained for the full 600 epochs to obtain the reported test errors.

For completeness, we describe the convolutional neural network search space considered. A cell consists of 2 input nodes and 4 intermediate nodes for a total of 6 nodes. The nodes are ordered and subsequent nodes can receive the output of prior nodes as input, so for a given node k there are k - 1 possible input edges to node k. Therefore, there are a total of 2 + 3 + 4 + 5 = 14 edges in the weight-sharing network. An architecture is defined by selecting 2 input edges per intermediate node and also selecting a single operation per edge from the following 8 operations: (1) 3 × 3 separable convolution, (2) 5 × 5 separable convolution, (3) 3 × 3 dilated convolution, (4) 5 × 5 dilated convolution, (5) max pooling, (6) average pooling, (7) identity, and (8) zero. We use the same search space to design a "normal" cell and a "reduction" cell; the normal cells have stride 1 operations that do not change the dimension of the input, while the reduction cells have stride 2 operations that halve the length and width dimensions of the input. For both cell types, the outputs of all intermediate nodes are concatenated to form the output of the cell.

Table 3 shows the final stage 3 evaluation performance of GAEA PC-DARTS for 2 additional sets of random seeds from stage 1 search. The performance of GAEA PC-DARTS for one set is similar to that reported in Table 1, while the other is on par with the performance reported for PC-DARTS in Xu et al. (2020). We do observe non-negligible variance in the performance of the architectures found by different random seed initializations of the shared-weights network, necessitating running multiple searches before selecting an architecture. We also found that it was possible to identify and eliminate poorly performing architectures in just 20 epochs of training during the stage 2 intermediate evaluation, thereby reducing the total training cost by over 75% (we only trained 3 out of 10 architectures for the entire 600 epochs).

Table 3: GAEA PC-DARTS stage 3 evaluation (CIFAR-10 test error, %) for 3 sets of random seeds.

Set 1 (Reported): 2.50 ± 0.07
Set 2: 2.50 ± 0.09
Set 3: 2.60 ± 0.09

We depict the top architectures found by GAEA PC-DARTS for CIFAR-10 and ImageNet in Figure 3, with detailed results in Tables 4 and 5.

C.3 NAS-BENCH-201

The NAS-Bench-201 benchmark (Dong & Yang, 2020) evaluates a single search space across 3 datasets: CIFAR-10, CIFAR-100, and a miniature version of ImageNet (ImageNet-16-120). ImageNet-16-120 is a downsampled version of ImageNet with 16 × 16 images and 120 classes for a total of 151.7k training images, 3k validation images, and 3k test images. The authors of Dong & Yang (2020) evaluated the architecture search performance of multiple weight-sharing methods and traditional hyperparameter optimization methods on all three datasets. According to the results from Dong & Yang (2020), GDAS outperformed other weight-sharing methods by a large margin. Hence, we first evaluated the performance of GAEA GDAS on each of the three datasets. Our implementation of GAEA GDAS uses an architecture learning rate of 0.1, which matches the learning rate used for GAEA approaches in the previous two benchmarks. Additionally, we run GAEA GDAS for 150 epochs instead of the 250 epochs used for GDAS in the original benchmarked results; this is why the search cost is lower for GAEA GDAS. All other hyperparameter settings are the same. Our results for GAEA GDAS are comparable to the reported results for GDAS on CIFAR-10 and CIFAR-100 but slightly lower on ImageNet-16-120. Compared to our reproduced results for GDAS, GAEA GDAS outperforms GDAS on CIFAR-100 and matches it on CIFAR-10 and ImageNet-16-120. Next, to see if we can use GAEA to further improve the performance of weight-sharing methods, we evaluated GAEA DARTS (first order) applied to both the single-level (ERM) and bilevel optimization problems. Again, we used a learning rate of 0.1 and trained GAEA DARTS for 25 epochs on each dataset. The one additional modification we made was to exclude the zero operation, which limits GAEA DARTS to a subset of the search space. To isolate the impact of this modification, we also evaluated first-order DARTS with this modification.
Similar to Dong & Yang (2020), we observe that DARTS with this modification also converges to architectures with nearly all skip connections, resulting in performance similar to that reported in Dong & Yang (2020). We present the learning curves of the oracle architecture recommended by DARTS and GAEA DARTS (when excluding the zero operation) over the training horizon for 4 different runs in Figure 4. For GAEA GDAS and GAEA DARTS, we train the weight-sharing network with the following hyperparameters:

train:
  scheduler: cosine
  lr_anneal_cycles: 1
  smooth_cross_entropy: false
  batch_size: 64
  learning_rate: 0.025
  learning_rate_min: 0.001
  momentum: 0.9
  weight_decay: 0.0003
  init_channels: 16
  layers: 5
  autoaugment: false
  cutout: false
  auxiliary: false
  auxiliary_weight: 0.4
  drop_path_prob: 0
  grad_clip: 5

Surprisingly, we observe single-level optimization to yield better performance than solving the bilevel problem with GAEA DARTS on this search space. In fact, the performance of GAEA DARTS (ERM) not only exceeds that of GDAS, but also outperforms traditional hyperparameter optimization approaches on all three datasets, nearly reaching the optimal accuracy on each. In contrast, GAEA DARTS (bilevel) outperforms GDAS on CIFAR-100 and ImageNet-16-120 but underperforms slightly on CIFAR-10. The single-level results on this benchmark provide concrete support for our convergence analysis, which only applies to the ERM problem. As noted in Section 4, the search space considered in this benchmark differs from the prior two in that there is no subsequent edge pruning. Additionally, the search space is fairly small, with only 3 nodes for which architecture decisions must be made. The success of GAEA DARTS (ERM) on this benchmark indicates the need for a better understanding of when single-level optimization should be favored over the default bilevel problem and of how the search space affects this decision.

