PROVABLY FASTER ALGORITHMS FOR BILEVEL OPTIMIZATION AND APPLICATIONS TO META-LEARNING

Abstract

Bilevel optimization has arisen as a powerful tool for many machine learning problems such as meta-learning, hyperparameter optimization, and reinforcement learning. In this paper, we investigate the nonconvex-strongly-convex bilevel optimization problem. For deterministic bilevel optimization, we provide a comprehensive finite-time convergence analysis for two popular algorithms, based respectively on approximate implicit differentiation (AID) and iterative differentiation (ITD). For the AID-based method, we improve the previous finite-time convergence analysis by an order of magnitude, owing to a more practical parameter selection as well as a warm start strategy; for the ITD-based method, we establish the first theoretical convergence rate. Our analysis also provides a quantitative comparison between the ITD- and AID-based approaches. For stochastic bilevel optimization, we propose a novel algorithm named stocBiO, which features a sample-efficient hypergradient estimator using efficient Jacobian- and Hessian-vector product computations. We provide a finite-time convergence guarantee for stocBiO, and show that stocBiO improves upon the best known computational complexities with respect to the condition number κ and the target accuracy ε. We further validate our theoretical results and demonstrate the efficiency of bilevel optimization algorithms through experiments on meta-learning and hyperparameter optimization.

1. INTRODUCTION

Bilevel optimization has received significant attention recently and become an influential framework in various machine learning applications including meta-learning (Franceschi et al., 2018; Bertinetto et al., 2018; Rajeswaran et al., 2019; Ji et al., 2020a), hyperparameter optimization (Franceschi et al., 2018; Shaban et al., 2019; Feurer & Hutter, 2019), reinforcement learning (Konda & Tsitsiklis, 2000; Hong et al., 2020), and signal processing (Kunapuli et al., 2008; Flamary et al., 2014). A general bilevel optimization problem takes the following form:

$$\min_{x\in\mathbb{R}^p} \Phi(x) := f(x, y^*(x)) \quad \text{s.t.} \quad y^*(x) = \arg\min_{y\in\mathbb{R}^q} g(x, y), \tag{1}$$

where the upper- and lower-level functions f and g are both jointly continuously differentiable. The goal of eq. (1) is to minimize the objective function Φ(x) w.r.t. x, where y*(x) is obtained by solving the lower-level minimization problem. In this paper, we focus on the setting where the lower-level function g is strongly convex with respect to (w.r.t.) y, and the upper-level objective function Φ(x) is nonconvex w.r.t. x. Such loss geometries commonly arise in many applications including meta-learning and hyperparameter optimization, where g corresponds to an empirical loss with a strongly-convex regularizer and x are the parameters of neural networks. A broad collection of algorithms have been proposed to solve such bilevel optimization problems. For example, Hansen et al. (1992); Shi et al. (2005); Moore (2010) reformulated the bilevel problem in eq. (1) into a single-level constrained problem based on the optimality conditions of the lower-level problem. However, such methods often involve a large number of constraints, and are hard to implement in machine learning applications.
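To make the structure of eq. (1) concrete, the following minimal sketch (our own illustration, not from the paper) builds a toy instance whose lower level g(x, y) = ½‖y − Ax‖² is strongly convex with the closed-form solution y*(x) = Ax, and checks the resulting gradient of Φ against finite differences. The matrix A and all dimensions are hypothetical choices.

```python
import numpy as np

# Toy instance of eq. (1): f(x, y) = 0.5||y||^2 + 0.5||x||^2 (upper level),
# g(x, y) = 0.5||y - A x||^2 (lower level, 1-strongly convex in y).
rng = np.random.default_rng(0)
p, q = 3, 4
A = rng.standard_normal((q, p))

def y_star(x):
    # closed-form lower-level solution: argmin_y g(x, y)
    return A @ x

def Phi(x):
    # total objective Phi(x) = f(x, y*(x))
    y = y_star(x)
    return 0.5 * (y @ y) + 0.5 * (x @ x)

def grad_Phi(x):
    # hypergradient, available analytically here: A^T A x + x
    return A.T @ (A @ x) + x

x = rng.standard_normal(p)
# central finite differences of Phi along each coordinate
fd = np.array([(Phi(x + 1e-6 * e) - Phi(x - 1e-6 * e)) / 2e-6 for e in np.eye(p)])
```

In general y*(x) has no closed form, which is exactly why the AID and ITD estimators discussed below are needed.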
Recently, more efficient gradient-based bilevel optimization algorithms have been proposed, which can be generally categorized into the approximate implicit differentiation (AID) based approach (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Liao et al., 2018; Ghadimi & Wang, 2018; Grazzi et al., 2020; Lorraine et al., 2020) and the iterative differentiation (ITD) based approach (Domke, 2012; Maclaurin et al., 2015; Franceschi et al., 2017; 2018; Shaban et al., 2019; Grazzi et al., 2020). However, most of these studies have focused on asymptotic convergence analysis, and finite-time analysis (which characterizes how fast an algorithm converges) has not been well explored except for a few recent attempts. Ghadimi & Wang (2018) provided a finite-time analysis for the AID-based approach. Grazzi et al. (2020) provided the iteration complexity for the hypergradient computation via ITD and AID, but did not characterize the finite-time convergence for the entire execution of the algorithms.

• Thus, the first focus of this paper is to develop a comprehensive and enhanced theory, which covers a broader class of bilevel optimizers via ITD- and AID-based techniques, and more importantly, improves the existing analysis with a more practical parameter selection and an order-level lower computational complexity.

Stochastic bilevel optimization often occurs in applications where fresh data need to be sampled as the algorithm runs (e.g., reinforcement learning (Hong et al., 2020)) or the sample size of the training data is large (e.g., hyperparameter optimization (Franceschi et al., 2018), Stackelberg games (Roth et al., 2016)). Typically, the corresponding objective function is given by

$$\min_{x\in\mathbb{R}^p} \Phi(x) = f(x, y^*(x)) := \mathbb{E}_{\xi}[F(x, y^*(x); \xi)] \;\text{ or }\; \frac{1}{n}\sum_{i=1}^{n} F(x, y^*(x); \xi_i)$$
$$\text{s.t.}\quad y^*(x) = \arg\min_{y\in\mathbb{R}^q} g(x, y) := \mathbb{E}_{\zeta}[G(x, y; \zeta)] \;\text{ or }\; \frac{1}{m}\sum_{j=1}^{m} G(x, y; \zeta_j), \tag{2}$$

where f(x, y) and g(x, y) take either the expectation form w.r.t.
the random variables ξ and ζ, or the finite-sum form over given data D_{n,m} = {ξ_i, ζ_j, i = 1, ..., n; j = 1, ..., m}, often with large sizes n and m. During the optimization process, the algorithms sample data batches from the distributions of ξ and ζ or from the set D_{n,m}. For such a stochastic setting, Ghadimi & Wang (2018) proposed a bilevel stochastic approximation (BSA) method via single-sample gradient and Hessian estimates. Based on such a method, Hong et al. (2020) further proposed a two-timescale stochastic approximation (TTSA) algorithm, and showed that TTSA achieves a better trade-off between the complexities of the inner- and outer-loop optimization stages than BSA.

• The second focus of this paper is to design a more sample-efficient algorithm for stochastic bilevel optimization, which achieves an order-level lower computational complexity than BSA and TTSA.

1.1. MAIN CONTRIBUTIONS

Our main contributions lie in developing enhanced theory and provably faster algorithms for the nonconvex-strongly-convex bilevel deterministic and stochastic optimization problems, respectively. Our analysis involves several new developments, which can be of independent interest.

We first provide a unified finite-time convergence and complexity analysis for both the ITD- and AID-based bilevel optimizers, which we call ITD-BiO and AID-BiO. Compared to the existing analysis in Ghadimi & Wang (2018) for AID-BiO, which requires a continuously increasing number of inner-loop steps to achieve its guarantee, our analysis allows a constant number of inner-loop steps, as often used in practice. In addition, we introduce a warm start initialization for the inner-loop updates and the outer-loop hypergradient estimation, which allows us to backpropagate the tracking errors to previous loops, and results in an improved computational complexity. As shown in Table 1, the gradient complexities Gc(f, ε), Gc(g, ε), and the Jacobian- and Hessian-vector product complexities JV(g, ε) and HV(g, ε) of AID-BiO to attain an ε-accurate stationary point improve upon those of Ghadimi & Wang (2018) by an order of κ, κε^{-1/4}, κ, and κ, respectively, where κ is the condition number. In addition, our analysis shows that AID-BiO requires fewer Jacobian- and Hessian-vector product computations than ITD-BiO, by an order of κ and κ^{1/2} respectively, which provides a justification for the observation in Grazzi et al. (2020) that ITD often has a larger memory cost than AID.

We then propose a stochastic bilevel optimizer (stocBiO) to solve the stochastic bilevel optimization problem in eq. (2). Our algorithm features a mini-batch hypergradient estimation via implicit differentiation, where the core design involves a sample-efficient hypergradient estimator via the Neumann series. As shown in Table 2, the gradient complexities of our proposed algorithm w.r.t.
F and G improve upon those of BSA (Ghadimi & Wang, 2018) by an order of κ and ε^{-1}, respectively. In addition, the Jacobian-vector product complexity JV(G, ε) of our algorithm improves upon that of BSA by an order of κ. In terms of the target accuracy ε, our computational complexities improve upon those of TTSA (Hong et al., 2020) by an order of ε^{-1/2}.

We further provide theoretical complexity guarantees for ITD-BiO, AID-BiO, and stocBiO in meta-learning and hyperparameter optimization. The experiments validate our theoretical results for deterministic bilevel optimization, and demonstrate the superior efficiency of stocBiO for stochastic bilevel optimization. Due to space limitations, we present all theoretical and empirical results on hyperparameter optimization in the supplementary materials.

1.2. RELATED WORK

Bilevel optimization approaches: Bilevel optimization was first introduced by Bracken & McGill (1973). Since then, a number of bilevel optimization algorithms have been proposed, which include but are not limited to constraint-based methods (Shi et al., 2005; Moore, 2010) and gradient-based methods (Domke, 2012; Pedregosa, 2016; Gould et al., 2016; Maclaurin et al., 2015; Franceschi et al., 2018; Ghadimi & Wang, 2018; Liao et al., 2018; Shaban et al., 2019; Hong et al., 2020; Liu et al., 2020; Li et al., 2020; Grazzi et al., 2020; Lorraine et al., 2020). Among them, Ghadimi & Wang (2018); Hong et al. (2020) provided finite-time complexity analyses of their proposed methods for the nonconvex-strongly-convex bilevel optimization problem. For such a problem, this paper develops a general and enhanced finite-time analysis of gradient-based bilevel optimizers for the deterministic setting, and proposes a novel algorithm for the stochastic setting with an order-level lower computational complexity than the existing results. Some works have studied other types of loss geometries. For example, Liu et al. (2020); Li et al. (2020) assumed that the lower- and upper-level functions g(x, ·) and f(x, ·) are convex and strongly convex, respectively, and provided an asymptotic analysis for their methods. Ghadimi & Wang (2018); Hong et al. (2020) studied the setting where Φ(·) is strongly convex or convex, and g(x, ·) is strongly convex.

Bilevel optimization in meta-learning: The bilevel optimization framework has been successfully employed in meta-learning recently (Snell et al., 2017; Franceschi et al., 2018; Rajeswaran et al., 2019; Zügner & Günnemann, 2019; Ji et al., 2020a;b). For example, Snell et al. (2017) proposed a bilevel optimization procedure for meta-learning to learn a common embedding model for all tasks. Rajeswaran et al. (2019) reformulated model-agnostic meta-learning (MAML) (Finn et al., 2017) as a bilevel optimization problem, and proposed iMAML via implicit gradients.
This paper provides theoretical guarantees for two popular types of bilevel optimization algorithms, i.e., AID-BiO and ITD-BiO, for meta-learning.

Bilevel optimization in hyperparameter optimization: Hyperparameter optimization has become increasingly important as a powerful tool for automatic machine learning (autoML) (Okuno et al., 2018; Yu & Zhu, 2020). Recently, various bilevel optimization algorithms have been proposed in the context of hyperparameter optimization, including implicit differentiation based methods (Pedregosa, 2016), dynamical system based methods via reverse or forward gradient computation (Franceschi et al., 2017; 2018; Shaban et al., 2019), etc. This paper demonstrates the superior efficiency of the proposed stocBiO algorithm in hyperparameter optimization.

Algorithm 1 Deterministic bilevel optimization via AID or ITD
1: Input: stepsizes α, β > 0, initializations x_0, y_0, v_0.
2: for k = 0, 1, 2, ..., K do
3:   Set y_k^0 = y_{k-1}^T if k > 0, and y_0 otherwise
4:   for t = 1, ..., T do
5:     Update y_k^t = y_k^{t-1} − α∇_y g(x_k, y_k^{t-1})
6:   end for
7:   Hypergradient estimation via
     • AID: 1) set v_k^0 = v_{k-1}^N if k > 0, and v_0 otherwise; 2) solve for v_k^N from ∇_y² g(x_k, y_k^T) v = ∇_y f(x_k, y_k^T) via N steps of CG starting from v_k^0; 3) compute the Jacobian-vector product ∇_x∇_y g(x_k, y_k^T) v_k^N via automatic differentiation; 4) compute ∇̂Φ(x_k) = ∇_x f(x_k, y_k^T) − ∇_x∇_y g(x_k, y_k^T) v_k^N
     • ITD: compute ∇̂Φ(x_k) = ∂f(x_k, y_k^T)/∂x_k via backpropagation w.r.t. x_k
8:   Update x_{k+1} = x_k − β∇̂Φ(x_k)
9: end for

2. ALGORITHMS

In this section, we describe two popular types of deterministic bilevel optimization algorithms, and propose a new algorithm for stochastic bilevel optimization.

2.1. ALGORITHMS FOR DETERMINISTIC BILEVEL OPTIMIZATION

As shown in Algorithm 1, we describe two popular types of deterministic bilevel optimizers, based respectively on AID and ITD (referred to as AID-BiO and ITD-BiO), for solving the problem in eq. (1). Both AID-BiO and ITD-BiO update in a nested-loop manner. In the inner loop, both of them run T steps of gradient descent (GD) to find a point y_k^T close to y*(x_k). Note that we choose the initialization y_k^0 of each inner loop as the output y_{k-1}^T of the preceding inner loop rather than a random start. Such a warm start allows us to backpropagate the tracking error ‖y_k^T − y*(x_k)‖ to previous loops, and yields an improved computational complexity.

In the outer loop, AID-BiO first solves for v_k^N from the linear system ∇_y² g(x_k, y_k^T) v = ∇_y f(x_k, y_k^T) using N steps of conjugate gradient (CG) starting from v_k^0 (where we also adopt a warm start by setting v_k^0 = v_{k-1}^N), and then constructs

$$\widehat\nabla \Phi(x_k) = \nabla_x f(x_k, y_k^T) - \nabla_x \nabla_y g(x_k, y_k^T)\, v_k^N \tag{3}$$

as an estimate of the true hypergradient ∇Φ(x_k), whose form is given by the following proposition.

Proposition 1. Recalling the definition Φ(x) := f(x, y*(x)), it holds that
$$\nabla \Phi(x_k) = \nabla_x f(x_k, y^*(x_k)) - \nabla_x \nabla_y g(x_k, y^*(x_k))\, v_k^*,$$
where v_k* is the solution of the linear system
$$\nabla_y^2 g(x_k, y^*(x_k))\, v = \nabla_y f(x_k, y^*(x_k)). \tag{4}$$

As shown in Domke (2012); Grazzi et al. (2020), the construction of eq. (3) involves only Hessian-vector products (in solving for v_k^N via CG) and the Jacobian-vector product ∇_x∇_y g(x_k, y_k^T) v_k^N, which can be efficiently computed and stored via existing automatic differentiation packages.

As a comparison, the outer loop of ITD-BiO computes the gradient ∂f(x_k, y_k^T(x_k))/∂x_k via backpropagation, because the output y_k^T of the inner loop depends on x_k through the inner-loop iterative GD updates. The explicit form of the estimate ∂f(x_k, y_k^T(x_k))/∂x_k is given by the following proposition via the chain rule. For notational simplification, let Π_{j=T}^{T-1}(·) = I.

Proposition 2. The gradient ∂f(x_k, y_k^T(x_k))/∂x_k takes the following analytical form:
$$\frac{\partial f(x_k, y_k^T)}{\partial x_k} = \nabla_x f(x_k, y_k^T) - \alpha \sum_{t=0}^{T-1} \nabla_x \nabla_y g(x_k, y_k^t) \prod_{j=t+1}^{T-1} \big(I - \alpha \nabla_y^2 g(x_k, y_k^j)\big) \nabla_y f(x_k, y_k^T).$$

Proposition 2 shows that the differentiation involves computations of second-order derivatives such as the Hessian ∇_y² g(·, ·). Since efficient Hessian-free methods have been successfully deployed in existing automatic differentiation tools, computing these second-order derivatives reduces to the more efficient computation of Jacobian- and Hessian-vector products.

Algorithm 2 Stochastic bilevel optimizer (stocBiO)
1: Input: stepsizes α, β > 0, initializations x_0, y_0.
2: for k = 0, 1, 2, ..., K do
3:   Set y_k^0 = y_{k-1}^T if k > 0, and y_0 otherwise
4:   for t = 1, ..., T do
5:     Update y_k^t = y_k^{t-1} − α∇_y G(x_k, y_k^{t-1}; S_{t-1})
6:   end for
7:   Draw sample batches D_F and D_G
8:   Compute v_0 = ∇_y F(x_k, y_k^T; D_F)
9:   Construct v_Q via Algorithm 3 given v_0
10:  Compute the Jacobian-vector product ∇_x∇_y G(x_k, y_k^T; D_G) v_Q
11:  Compute the gradient estimate ∇̂Φ(x_k) via eq. (6)
12:  Update x_{k+1} = x_k − β∇̂Φ(x_k)
13: end for

Algorithm 3 Construct v_Q given v_0
1: Input: an integer Q, data samples D_H = {B_j}_{j=1}^Q, and a constant η > 0.
2: for j = 1, 2, ..., Q do
3:   Sample B_j and compute G_j(y) = y − η∇_y G(x, y; B_j)
4: end for
5: Set r_Q = v_0
6: for i = Q, ..., 1 do
7:   r_{i-1} = ∂(G_i(y) r_i)/∂y = r_i − η∇_y² G(x, y; B_i) r_i via automatic differentiation
8: end for
9: Return v_Q = η Σ_{i=0}^{Q} r_i
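Both hypergradient estimators can be checked numerically on a small quadratic instance. The sketch below (our own illustration; the quadratic g, the dimensions, and the iteration counts T and N are hypothetical choices) runs T GD steps on the lower level, forms the AID estimate of eq. (3) with a hand-rolled CG solver, expands the ITD product of Proposition 2 explicitly, and compares both against the exact hypergradient from Proposition 1.

```python
import numpy as np

# Toy quadratic instance: g(x, y) = 0.5 y^T H y - x^T y, so y*(x) = H^{-1} x,
# and f(x, y) = 0.5||y - b||^2. Here grad_x f = 0, grad_x grad_y g = -I, and
# the exact hypergradient (Proposition 1) is v* with H v* = y*(x) - b.
rng = np.random.default_rng(1)
d, T, N = 5, 200, 5
M = rng.standard_normal((d, d))
H = M @ M.T / d + np.eye(d)          # SPD Hessian of g w.r.t. y (mu >= 1)
b = rng.standard_normal(d)
x = rng.standard_normal(d)
alpha = 1.0 / np.linalg.norm(H, 2)   # GD stepsize 1/L

y_star = np.linalg.solve(H, x)
exact = np.linalg.solve(H, y_star - b)   # true grad Phi(x) for this instance

yT = np.zeros(d)                     # inner loop: T steps of GD on g(x, .)
for _ in range(T):
    yT = yT - alpha * (H @ yT - x)

def cg(rhs, v, n):
    # n steps of conjugate gradient on H v = rhs, starting from v
    r = rhs - H @ v
    p = r.copy()
    for _ in range(n):
        Hp = H @ p
        a = (r @ r) / (p @ Hp)
        v = v + a * p
        r_new = r - a * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

# AID (eq. 3): solve H v = grad_y f(x, yT) via CG, then assemble the estimate
vN = cg(yT - b, np.zeros(d), N)
aid_grad = np.zeros(d) - (-np.eye(d)) @ vN   # grad_x f - (grad_x grad_y g) v_N

# ITD (Proposition 2): expand the backpropagated product explicitly
itd_grad = np.zeros(d)
for t in range(T):
    prod = yT - b                        # grad_y f(x, y_T)
    for _ in range(T - 1 - t):
        prod = prod - alpha * (H @ prod) # multiply by (I - alpha H)
    itd_grad += alpha * prod             # grad_x grad_y g = -I cancels the minus sign
```

For this well-conditioned instance both estimates agree with the exact hypergradient to high accuracy, while in general their tracking errors are what Theorems 1 and 2 control.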

2.2. ALGORITHM FOR STOCHASTIC BILEVEL OPTIMIZATION

We propose a new stochastic bilevel optimizer (stocBiO) in Algorithm 2 to solve the problem in eq. (2). It has a double-loop structure similar to Algorithm 1, but runs T steps of stochastic gradient descent (SGD) in the inner loop to obtain an approximate solution y_k^T. Based on the output y_k^T of the inner loop, stocBiO first computes a gradient ∇_y F(x_k, y_k^T; D_F) over a sample batch D_F, and then computes a vector v_Q via Algorithm 3, which takes the form

$$v_Q = \eta \sum_{q=-1}^{Q-1} \prod_{j=Q-q}^{Q} \big(I - \eta \nabla_y^2 G(x_k, y_k^T; \mathcal{B}_j)\big) \nabla_y F(x_k, y_k^T; D_F), \tag{5}$$

where {B_j, j = 1, ..., Q} are mutually independent sample sets, Q and η are constants, and we let Π_{j=Q+1}^{Q}(·) = I for notational simplification. Note that our construction of v_Q, i.e., Algorithm 3, is motivated by the Neumann series Σ_{i=0}^{∞} U^i = (I − U)^{-1}, and involves only Hessian-vector products rather than full Hessians, and hence is computation- and memory-efficient. Then, we construct

$$\widehat\nabla \Phi(x_k) = \nabla_x F(x_k, y_k^T; D_F) - \nabla_x \nabla_y G(x_k, y_k^T; D_G)\, v_Q \tag{6}$$

as an estimate of the hypergradient ∇Φ(x_k) given by Proposition 1. An important component of our algorithm is v_Q, which serves as an estimate of v_k* in eq. (4). Compared to the deterministic case, designing a sample-efficient hypergradient estimator in the stochastic case is more challenging. For example, instead of choosing the same batch size for all B_j, j = 1, ..., Q in eq. (5), our analysis captures the different impacts of the components ∇_y² G(x_k, y_k^T; B_j), j = 1, ..., Q on the hypergradient estimation variance, and inspires an adaptive and more efficient choice that sets |B_{Q-j}| to decay exponentially with j from 0 to Q − 1. By doing so, we achieve an improved complexity.
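The recursion in Algorithm 3 is a truncated Neumann series for the Hessian-inverse-vector product. The deterministic sketch below (our own illustration; the synthetic SPD matrix H stands in for ∇_y²G, and v0, Q, and η are hypothetical choices) verifies that η Σ_{i=0}^{Q} r_i approaches H⁻¹v_0 while touching H only through Hessian-vector products.

```python
import numpy as np

# Deterministic sketch of Algorithm 3: r_Q = v0, r_{i-1} = (I - eta*H) r_i,
# and v_Q = eta * sum_{i=0}^{Q} r_i, with a fixed SPD matrix H standing in
# for the Hessian of the lower-level loss (mu(H) >= 1 by construction).
rng = np.random.default_rng(3)
d, Q = 6, 400
M = rng.standard_normal((d, d))
H = M @ M.T / d + np.eye(d)
v0 = rng.standard_normal(d)            # stands in for grad_y F(x_k, y_k^T; D_F)
eta = 0.5 / np.linalg.norm(H, 2)       # ensures ||I - eta*H|| < 1, so the series converges

r = v0.copy()
acc = r.copy()                         # accumulates r_Q + r_{Q-1} + ... + r_0
for _ in range(Q):
    r = r - eta * (H @ r)              # one Hessian-vector product per step, no full inverse
    acc += r
vQ = eta * acc
```

The truncation error decays geometrically like (1 − ημ)^{Q+1}, which is exactly the bias term that appears in Proposition 3.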

3. DEFINITIONS AND ASSUMPTIONS

Let z = (x, y) denote all parameters. For simplicity, suppose the sample sets S_t for all t = 0, ..., T − 1, D_G, and D_F have sizes S, D_g, and D_f, respectively. In this paper, we focus on the following types of loss functions for both the deterministic and stochastic cases.

Assumption 1. The lower-level function g(x, y) is μ-strongly convex w.r.t. y, and the total objective function Φ(x) = f(x, y*(x)) is nonconvex w.r.t. x. For the stochastic setting, the same assumptions hold for G(x, y; ζ) and Φ(x), respectively.

Since the objective function Φ(x) is nonconvex, algorithms are expected to find an ε-accurate stationary point, defined as follows.

Definition 1. We say x̄ is an ε-accurate stationary point for the objective function Φ(x) in eq. (2) if E‖∇Φ(x̄)‖² ≤ ε, where x̄ is the output of an algorithm.

In order to compare the performance of different bilevel algorithms, we adopt the following metrics of computational complexity.

Definition 2. For a function f(x, y) and a vector v, let Gc(f, ε) be the number of partial gradient computations ∇_x f or ∇_y f, and let JV(g, ε) and HV(g, ε) be the numbers of Jacobian-vector products ∇_x∇_y g(x, y)v and Hessian-vector products ∇_y² g(x, y)v, respectively. For the stochastic case, similar metrics are adopted but w.r.t. the stochastic function F(x, y; ξ).

We take the following standard assumptions on the loss functions in eq. (2), which have been widely adopted in bilevel optimization (Ghadimi & Wang, 2018; Ji et al., 2020a).

Assumption 2. The loss functions f(z) and g(z) satisfy:
• f(z) is M-Lipschitz, i.e., for any z, z′, |f(z) − f(z′)| ≤ M‖z − z′‖.
• The gradients ∇f(z) and ∇g(z) are L-Lipschitz, i.e., for any z, z′, ‖∇f(z) − ∇f(z′)‖ ≤ L‖z − z′‖ and ‖∇g(z) − ∇g(z′)‖ ≤ L‖z − z′‖.
For the stochastic case, the same assumptions hold for F(z; ξ) and G(z; ζ) for any given ξ and ζ.

As shown in Proposition 1, the gradient of the objective function Φ(x) involves the second-order derivatives ∇_x∇_y g(z) and ∇_y² g(z).
The following assumption imposes Lipschitz conditions on such higher-order derivatives, as also made in Ghadimi & Wang (2018).

Assumption 3. Suppose the derivatives ∇_x∇_y g(z) and ∇_y² g(z) are τ- and ρ-Lipschitz, i.e.,
• For any z, z′, ‖∇_x∇_y g(z) − ∇_x∇_y g(z′)‖ ≤ τ‖z − z′‖.
• For any z, z′, ‖∇_y² g(z) − ∇_y² g(z′)‖ ≤ ρ‖z − z′‖.
For the stochastic case, the same assumptions hold for ∇_x∇_y G(z; ζ) and ∇_y² G(z; ζ) for any ζ.

As typically adopted in the analysis of stochastic optimization, we make the following bounded-variance assumption on the lower-level stochastic function G(z; ζ).

Assumption 4. The gradient ∇G(z; ζ) has bounded variance, i.e., E_ζ‖∇G(z; ζ) − ∇g(z)‖² ≤ σ² for some σ.

4. MAIN RESULTS

4.1. DETERMINISTIC BILEVEL OPTIMIZATION

We first characterize the convergence and complexity performance of the AID-BiO algorithm. Let κ = L/μ denote the condition number.

Theorem 1 (AID-BiO). Suppose Assumptions 1, 2, and 3 hold. Define the smoothness parameter
$$L_\Phi = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho L M + L^3 + \tau M L}{\mu^2} + \frac{\rho L^2 M}{\mu^3} = \Theta(\kappa^3),$$
choose the stepsizes α ≤ 1/L and β = 1/(8L_Φ), and set the inner-loop iteration number T ≥ Θ(κ) and the CG iteration number N ≥ Θ(√κ), where the detailed forms of T, N can be found in Appendix E. Then, the outputs of AID-BiO satisfy
$$\frac{1}{K}\sum_{k=0}^{K-1} \|\nabla \Phi(x_k)\|^2 \le \frac{64 L_\Phi \big(\Phi(x_0) - \inf_x \Phi(x)\big) + 5\Delta_0}{K},$$
where Δ_0 = ‖y_0 − y*(x_0)‖² + ‖v_0* − v_0‖² > 0. In order to achieve an ε-accurate stationary point, we have
• Gradient complexity: Gc(f, ε) = O(κ³ε⁻¹), Gc(g, ε) = O(κ⁴ε⁻¹).
• Jacobian- and Hessian-vector product complexity: JV(g, ε) = O(κ³ε⁻¹), HV(g, ε) = O(κ^{3.5}ε⁻¹).

It can be seen from Table 1 that the complexities Gc(f, ε), Gc(g, ε), JV(g, ε), and HV(g, ε) of our analysis improve upon those of Ghadimi & Wang (2018) (eq. (2.30) therein) by an order of κ, κε^{-1/4}, κ, and κ, respectively. Such an improvement is achieved by a refined analysis with a constant number of inner-loop steps, and by a warm start strategy that backpropagates the tracking errors ‖y_k^T − y*(x_k)‖ and ‖v_k^N − v_k*‖ to previous loops, as also demonstrated by our meta-learning experiments.

We next characterize the convergence and complexity performance of the ITD-BiO algorithm.

Theorem 2 (ITD-BiO). Suppose Assumptions 1, 2, and 3 hold. Define the parameter L_Φ as in Theorem 1, and choose α ≤ 1/L, β = 1/(4L_Φ), and T ≥ Θ(κ log(1/ε)), where the detailed form of T can be found in Appendix F. Then, the outputs of ITD-BiO satisfy
$$\frac{1}{K}\sum_{k=0}^{K-1} \|\nabla \Phi(x_k)\|^2 \le \frac{16 L_\Phi \big(\Phi(x_0) - \inf_x \Phi(x)\big)}{K} + \frac{2\epsilon}{3}.$$
In order to achieve an ε-accurate stationary point, we have
• Gradient complexity: Gc(f, ε) = O(κ³ε⁻¹), Gc(g, ε) = O(κ⁴ε⁻¹ log(1/ε)).
• Jacobian- and Hessian-vector product complexity: JV(g, ε) = O(κ⁴ε⁻¹ log(1/ε)), HV(g, ε) = O(κ⁴ε⁻¹ log(1/ε)).
Comparing Theorem 1 and Theorem 2, it can be seen that the complexities Gc(g, ε), JV(g, ε), and HV(g, ε) of AID-BiO are better than those of ITD-BiO by an order of log(1/ε), κ log(1/ε), and κ^{0.5} log(1/ε), respectively. This is consistent with the observation in Grazzi et al. (2020) that AID often has a lower memory cost than ITD.

4.2. STOCHASTIC BILEVEL OPTIMIZATION

We first characterize the bias and variance of the key component v_Q in eq. (5).

Proposition 3. Suppose Assumptions 1, 2, and 3 hold. Let the constant η ≤ 1/L and choose the batch sizes |B_{Q+1-j}| = BQ(1 − ημ)^{j-1} for j = 1, ..., Q, where B ≥ 1/(Q(1 − ημ)^{Q-1}). Then, the bias satisfies
$$\big\|\mathbb{E} v_Q - [\nabla_y^2 g(x_k, y_k^T)]^{-1} \nabla_y f(x_k, y_k^T)\big\| \le \mu^{-1} (1 - \eta\mu)^{Q+1} M. \tag{8}$$
Furthermore, the estimation variance is bounded as
$$\mathbb{E}\big\|v_Q - [\nabla_y^2 g(x_k, y_k^T)]^{-1} \nabla_y f(x_k, y_k^T)\big\|^2 \le \frac{4\eta^2 L^2 M^2}{\mu^2}\frac{1}{B} + \frac{4(1 - \eta\mu)^{2Q+2} M^2}{\mu^2} + \frac{2M^2}{\mu^2 D_f}.$$

Proposition 3 shows that if we choose Q and B at the order of O(log(1/ε)) and O(1/ε), respectively, the bias and variance are smaller than O(ε), and the required number of samples is Σ_{j=1}^{Q} BQ(1 − ημ)^{j-1} = O(ε⁻¹ log(1/ε)). Note that the chosen batch size |B_{Q+1-j}| decays exponentially w.r.t. j. In comparison, a uniform choice of all |B_j| would yield a worse complexity of O(ε⁻¹(log(1/ε))²).

We next analyze stocBiO when the objective function Φ(x) := f(x, y*(x)) is nonconvex.

Theorem 3. Suppose Assumptions 1, 2, 3, and 4 hold. Define the parameter
$$L_\Phi = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho L M + L^3 + \tau M L}{\mu^2} + \frac{\rho L^2 M}{\mu^3} = O(\kappa^3),$$
choose the stepsize β = 1/(4L_Φ), and set η < 1/L in Algorithm 3. Set
$$T \ge \max\Bigg\{ \frac{\log\Big(12 + \frac{48\beta^2 L^2}{\mu^2}\big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\big)^2\Big)}{2\log\big(\frac{L+\mu}{L-\mu}\big)},\; \frac{\log\Big(\sqrt{\beta}\big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\big)\Big)}{\log\big(\frac{L+\mu}{L-\mu}\big)} \Bigg\}.$$
Then, we have
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla \Phi(x_k)\|^2 \le \frac{32 L_\Phi \big(\Phi(x_0) - \inf_x \Phi(x) + \frac{5}{2}\|y_0 - y^*(x_0)\|^2\big)}{K} + 72\kappa^2 M^2 (1 - \eta\mu)^{2Q} + 40\Big(L + \frac{L^2}{\mu} + \frac{M\tau}{\mu} + \frac{LM\rho}{\mu^2}\Big)^2 \frac{\sigma^2}{L\mu S} + \frac{16\kappa^2 M^2}{D_g} + \frac{(8 + 32\kappa^2) M^2}{D_f} + \frac{64\kappa^2 M^2}{B}.$$
In order to achieve an ε-accurate stationary point, we have
• Gradient complexity: Gc(F, ε) = O(κ⁵ε⁻²), Gc(G, ε) = O(κ⁹ε⁻²).
• Jacobian- and Hessian-vector product complexity: JV(G, ε) = O(κ⁵ε⁻²), HV(G, ε) = O(κ⁶ε⁻²).

Theorem 3 shows that stocBiO converges sublinearly, with the convergence error decaying exponentially w.r.t. Q and sublinearly w.r.t.
the batch sizes S, D_g, and D_f for gradient estimation and B for Hessian-inverse estimation. In addition, it can be seen that the total number T of inner-loop steps is chosen at a nearly constant level, rather than the typical choice of Θ(log(1/ε)). As shown in Table 2, the gradient complexities of our proposed algorithm in terms of F and G improve upon those of BSA in Ghadimi & Wang (2018) by an order of κ and ε⁻¹, respectively. In addition, the Jacobian-vector product complexity JV(G, ε) of our algorithm improves upon that of BSA by an order of κ. In terms of the accuracy ε, our gradient, Jacobian-, and Hessian-vector product complexities improve upon those of TTSA in Hong et al. (2020), all by an order of ε^{-1/2}.
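The decaying batch-size schedule in Proposition 3 can be tabulated directly. In this small sketch (ε, η, and μ are hypothetical illustration values), the decaying schedule uses on the order of ε⁻¹ log(1/ε) samples in total, while the uniform choice |B_j| = BQ would use on the order of ε⁻¹ log²(1/ε).

```python
import math

# Illustration of the schedule |B_{Q+1-j}| = B*Q*(1 - eta*mu)^{j-1} from
# Proposition 3; eps, eta, mu below are hypothetical values.
eps, eta, mu = 1e-3, 0.5, 1.0
rho = 1 - eta * mu                                    # per-term decay factor
Q = math.ceil(math.log(1 / eps) / math.log(1 / rho))  # Q = O(log(1/eps))
B = math.ceil(1 / eps)                                # B = O(1/eps)

sizes = [math.ceil(B * Q * rho ** (j - 1)) for j in range(1, Q + 1)]
total_decaying = sum(sizes)       # geometric sum: at most B*Q/(eta*mu) (+ ceiling slack)
total_uniform = B * Q * Q         # uniform batches |B_j| = B*Q for all j
```

The largest batch is assigned to the factor closest to the output (j = 1, i.e., B_Q), which is the one whose noise affects the estimate most.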

5. APPLICATIONS TO META-LEARNING

5.1. META-LEARNING WITH COMMON EMBEDDING MODEL

Consider the few-shot meta-learning problem with m tasks {T_i, i = 1, ..., m} sampled from a distribution P_T. Each task T_i has a loss function L(φ, w_i; ξ) over each data sample ξ, where φ are the parameters of an embedding model shared by all tasks, and w_i are the task-specific parameters. The goal of this framework is to find good shared parameters φ for all tasks; building on the embedded features, each task then adapts its own parameters w_i by minimizing its loss.

The model training takes a bilevel procedure. In the lower-level stage, building on the embedded features, the base learner of task T_i searches for w_i* as the minimizer of its loss function over a training set S_i. In the upper-level stage, the meta-learner evaluates the minimizers w_i*, i = 1, ..., m on held-out test sets, and optimizes the parameters φ of the embedding model over all tasks. Specifically, let w = (w_1, ..., w_m) denote all task-specific parameters. Then, the objective function is given by

$$\min_\phi\; \mathcal{L}_D(\phi, w^*) := \frac{1}{m}\sum_{i=1}^{m} \underbrace{\frac{1}{|D_i|}\sum_{\xi\in D_i} \mathcal{L}(\phi, w_i^*; \xi)}_{\mathcal{L}_{D_i}(\phi,\, w_i^*):\ \text{task-specific upper-level loss}}$$
$$\text{s.t.}\quad w^* = \arg\min_w \mathcal{L}_S(\phi, w) = \arg\min_{(w_1, ..., w_m)} \frac{1}{m}\sum_{i=1}^{m} \underbrace{\Big(\frac{1}{|S_i|}\sum_{\xi\in S_i} \mathcal{L}(\phi, w_i; \xi) + \mathcal{R}(w_i)\Big)}_{\mathcal{L}_{S_i}(\phi,\, w_i):\ \text{task-specific lower-level loss}},$$

where S_i and D_i are the training and test datasets of task T_i, and R(w_i) is a strongly-convex regularizer, e.g., an L2 penalty. Note that the lower-level problem is equivalent to solving for each w_i* as a minimizer of the task-specific loss L_{S_i}(φ, w_i) for i = 1, ..., m. In practice, w_i often corresponds to the parameters of the last linear layer of a neural network and φ are the parameters of the remaining layers (e.g., 4 convolutional layers in Bertinetto et al. (2018); Ji et al. (2020a)); hence the lower-level function is strongly convex w.r.t. w and the upper-level function L_D(φ, w*(φ)) is generally nonconvex w.r.t. φ.
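The bilevel structure above can be instantiated in a few lines. The sketch below (our own synthetic illustration; the linear embedding, the ridge penalty, and all sizes are hypothetical stand-ins for the paper's neural-network setting) solves each task's strongly convex lower-level problem in closed form and evaluates the upper-level loss at the minimizers w_i*.

```python
import numpy as np

# Synthetic few-shot setup: m tasks, each with a small train set S_i and
# test set D_i; phi is a shared (here linear) embedding, w_i a task head.
rng = np.random.default_rng(4)
m, d_in, d_emb, n_tr, n_te, lam = 3, 8, 4, 20, 10, 0.1
phi = rng.standard_normal((d_emb, d_in))
tasks = [
    (rng.standard_normal((n_tr, d_in)), rng.standard_normal(n_tr),   # S_i
     rng.standard_normal((n_te, d_in)), rng.standard_normal(n_te))   # D_i
    for _ in range(m)
]

def w_star(phi, X, y):
    # lower level: argmin_w (1/|S_i|)||Z w - y||^2 + lam*||w||^2, Z = embedded features;
    # the ridge term makes the problem lam-strongly convex, so the solve is well-posed
    Z = X @ phi.T
    n = len(y)
    return np.linalg.solve(Z.T @ Z / n + lam * np.eye(d_emb), Z.T @ y / n)

def outer_loss(phi):
    # upper level: mean test loss over tasks, evaluated at the task-wise minimizers
    total = 0.0
    for Xtr, ytr, Xte, yte in tasks:
        w = w_star(phi, Xtr, ytr)
        total += np.mean((Xte @ phi.T @ w - yte) ** 2)
    return total / m

loss = outer_loss(phi)
```

Because each w_i* depends on φ only through its own task, the joint lower-level problem decouples across tasks, exactly as noted above.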
In addition, due to the small sizes of the datasets D_i and S_i in few-shot learning, all updates for each task T_i use full gradient descent without data resampling. As a result, AID-BiO and ITD-BiO in Algorithm 1 can be applied here. In some applications where the number m of tasks is large, it is more efficient to sample a batch B of i.i.d. tasks from {T_i, i = 1, ..., m} at each meta (outer) iteration, and optimize the mini-batch versions L_D(φ, w; B) = (1/|B|) Σ_{i∈B} L_{D_i}(φ, w_i) and L_S(φ, w; B) = (1/|B|) Σ_{i∈B} L_{S_i}(φ, w_i) instead. The following theorem provides the convergence analysis of ITD-BiO for this case.

Theorem 4. Suppose Assumptions 1, 2, and 3 hold, and suppose each task loss L_{S_i}(φ, w_i) is μ-strongly convex w.r.t. w_i. Choose the same parameters β, T as in Theorem 2. Then, we have
$$\frac{1}{K}\sum_{k=0}^{K-1} \mathbb{E}\|\nabla \Phi(\phi_k)\|^2 \le \frac{16 L_\Phi \big(\Phi(\phi_0) - \inf_\phi \Phi(\phi)\big)}{K} + \frac{2\epsilon}{3} + \Big(1 + \frac{L}{\mu}\Big)^2 \frac{M^2}{8|B|}.$$

Theorem 4 shows that, compared to the full-batch (i.e., without task sampling) case, task sampling introduces a variance term O(1/|B|) due to the stochastic nature of the algorithm. Using an approach similar to Theorem 4, we can derive a similar result for AID-BiO.

5.2. EXPERIMENTS

To validate our theoretical results for deterministic bilevel optimization, we compare the performance of the following algorithms: ITD-BiO, AID-BiO-constant (AID-BiO with a constant number of inner-loop steps, as in our analysis), AID-BiO-increasing (AID-BiO with an increasing number of inner-loop steps, as analyzed in Ghadimi & Wang (2018)), and two popular meta-learning algorithms, MAML (Finn et al., 2017) and ANIL (Raghu et al., 2019). We conduct experiments over a 5-way 5-shot task on two benchmark datasets, FC100 and miniImageNet, and the results are averaged over 10 trials with different random seeds. Due to space limitations, we provide the model architectures, hyperparameter settings, and additional experiments in Appendix B.

It can be seen from Figure 1 that for both the miniImageNet and FC100 datasets, AID-BiO-constant converges faster than AID-BiO-increasing in terms of both training accuracy and test accuracy, and achieves a better final test accuracy than ANIL and MAML. This demonstrates the improvement of our analysis over the existing analysis in Ghadimi & Wang (2018) for the AID-BiO algorithm. Moreover, it can be observed that AID-BiO is slightly faster than ITD-BiO in terms of training accuracy and test accuracy. This is also consistent with our theoretical results.

6. CONCLUSION

In this paper, we develop a general and enhanced finite-time analysis for nonconvex-strongly-convex deterministic bilevel optimization, and propose a novel algorithm for the stochastic setting whose computational complexity improves upon the best known results by an order of magnitude. We also provide theoretical guarantees for various bilevel optimizers in meta-learning and hyperparameter optimization. The experiments validate our theoretical results and demonstrate the effectiveness of the proposed algorithm. We anticipate that the finite-time analysis we develop will be useful for analyzing other bilevel optimization problems with different loss geometries, and that the proposed algorithms will be useful for other applications such as reinforcement learning and Stackelberg games.

Supplementary Materials

A APPLICATION TO HYPERPARAMETER OPTIMIZATION

A.1 HYPERPARAMETER OPTIMIZATION

The goal of hyperparameter optimization (Franceschi et al., 2018; Feurer & Hutter, 2019) is to search for the representation or regularization parameters λ that minimize the validation error evaluated over the learner's parameters w*, where w* is the minimizer of the lower-level regularized training error. Mathematically, the objective function is given by

$$\min_\lambda\; \mathcal{L}_{D_{val}}(\lambda) = \frac{1}{|D_{val}|}\sum_{\xi\in D_{val}} \mathcal{L}(w^*(\lambda); \xi) \quad \text{s.t.}\quad w^*(\lambda) = \arg\min_w \mathcal{L}_{D_{tr}}(w, \lambda) := \frac{1}{|D_{tr}|}\sum_{\xi\in D_{tr}} \mathcal{L}(w, \lambda; \xi) + \mathcal{R}(w, \lambda),$$

where D_val and D_tr are the validation and training data, L is the loss, and R(w, λ) is a regularizer. In practice, the lower-level function L_{D_tr}(w, λ) is often strongly convex w.r.t. w. For example, in the data hyper-cleaning application proposed by Franceschi et al. (2018); Shaban et al. (2019), the predictor is modeled by a linear classifier, the loss function L(w; ξ) is convex w.r.t. w, and R(w, λ) is a strongly-convex regularizer, e.g., L2 regularization. In addition, the sample sizes of D_val and D_tr are often large, and stochastic algorithms are preferred for better efficiency. As a result, the above hyperparameter optimization problem falls into the stochastic bilevel optimization we study in eq. (2); we can apply the proposed stocBiO algorithm here, and Theorem 3 establishes its finite-time performance guarantee.
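As a concrete toy instance of this formulation (our own illustration; the synthetic data and the scalar λ weighting an L2 penalty are hypothetical), the sketch below treats ridge regression as the strongly convex lower-level problem and evaluates the validation loss as a function of the hyperparameter.

```python
import numpy as np

# Toy hyperparameter-optimization instance: lambda scales an L2 penalty and
# the inner problem is ridge regression (strongly convex in w). Synthetic data.
rng = np.random.default_rng(5)
n_tr, n_val, d = 50, 30, 6
Xtr = rng.standard_normal((n_tr, d))
Xval = rng.standard_normal((n_val, d))
w_true = rng.standard_normal(d)
ytr = Xtr @ w_true + 0.1 * rng.standard_normal(n_tr)
yval = Xval @ w_true + 0.1 * rng.standard_normal(n_val)

def w_star(lam):
    # closed-form inner minimizer of (1/n)||X w - y||^2 + lam*||w||^2
    return np.linalg.solve(Xtr.T @ Xtr / n_tr + lam * np.eye(d), Xtr.T @ ytr / n_tr)

def val_loss(lam):
    # upper-level objective: validation error evaluated at w*(lambda)
    w = w_star(lam)
    return float(np.mean((Xval @ w - yval) ** 2))

losses = {lam: val_loss(lam) for lam in (1e-3, 1e-1, 10.0)}
```

In the large-sample regime targeted by stocBiO, both sums would be subsampled rather than evaluated in full as done here.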

A.2 EXPERIMENTS

We compare our proposed stocBiO with the following baseline bilevel optimization algorithms.
• BSA (Ghadimi & Wang, 2018): an implicit gradient based stochastic bilevel optimizer via single-sample data sampling.
• TTSA (Hong et al., 2020): a two-time-scale stochastic optimizer via single-sample data sampling.
• HOAG (Pedregosa, 2016): a hyperparameter optimization algorithm with approximate gradients. We use the implementation in the repository https://github.com/fabianp/hoag.
• reverse (Franceschi et al., 2017): an iterative differentiation based method that approximates the hypergradient via backpropagation. We use its implementation in https://github.com/prolearner/hypertorch.
• AID-FP (Grazzi et al., 2020): AID with the fixed-point method. We use its implementation in https://github.com/prolearner/hypertorch.
• AID-CG (Grazzi et al., 2020): AID with the conjugate gradient method. We use its implementation in https://github.com/prolearner/hypertorch.
We demonstrate the effectiveness of the proposed stocBiO algorithm on two experiments: logistic regression and data hyper-cleaning.

Logistic regression on 20 Newsgroups: We compare the performance of our algorithm stocBiO with the existing baseline algorithms reverse, AID-FP, AID-CG, and HOAG on a logistic regression problem on the 20 Newsgroups dataset (Grazzi et al., 2020). The objective function of this problem is given by

$$\min_\lambda\; \mathcal{E}(\lambda, w^*) = \frac{1}{|D_{val}|}\sum_{(x_i, y_i)\in D_{val}} \mathcal{L}(x_i w^*, y_i) \quad \text{s.t.}\quad w^* = \arg\min_{w\in\mathbb{R}^{p\times c}} \frac{1}{|D_{tr}|}\sum_{(x_i, y_i)\in D_{tr}} \mathcal{L}(x_i w, y_i) + \frac{1}{cp}\sum_{i=1}^{c}\sum_{j=1}^{p} \exp(\lambda_j) w_{ij}^2,$$

where L is the cross-entropy loss, c = 20 is the number of topics, and p = 101631 is the feature dimension. Following Grazzi et al. (2020), we use SGD as the optimizer for the outer-loop update for all algorithms. For reverse, AID-FP, and AID-CG, we use the suggested and well-tuned hyperparameter settings from their implementations at https://github.com/prolearner/hypertorch for this application.
Specifically, they choose the inner- and outer-loop stepsizes as 100, the number of inner-loop steps as 10, and the number of CG steps as 10. For HOAG, we use the same parameters as reverse, AID-FP and AID-CG. For stocBiO, we also use the same parameters and choose η = 0.5, Q = 10. We use stocBiO-B as a shorthand for stocBiO with a batch size of B. As shown in Figure 2, the proposed stocBiO achieves the fastest convergence rate as well as the best test accuracy among all comparison algorithms. This demonstrates the practical advantage of our proposed algorithm stocBiO. Note that we do not include BSA and TTSA in the comparison, because they converge too slowly with a large variance, and perform much worse than the other competing algorithms. In addition, we investigate the impact of the batch size on the performance of stocBiO in Figure 3. It can be seen that stocBiO outperforms HOAG under batch sizes of 100, 500, 1000 and 2000. This shows that the performance of stocBiO is not very sensitive to the batch size, and hence tuning the batch size is easy in practice.

Data Hyper-Cleaning on MNIST. We compare the performance of our proposed algorithm stocBiO with the baseline algorithms BSA, TTSA and HOAG on a hyperparameter optimization problem: data hyper-cleaning (Shaban et al., 2019) on a dataset derived from MNIST (LeCun et al., 1998), which consists of 20000 images for training, 5000 images for validation, and 10000 images for testing. Data hyper-cleaning trains a classifier in a corrupted setting where each training label is replaced by a random class number with probability p (i.e., the corruption rate). The objective function is given by
$$\min_\lambda \; \mathcal{E}(\lambda, w^*) = \frac{1}{|\mathcal{D}_{\text{val}}|}\sum_{(x_i,y_i)\in\mathcal{D}_{\text{val}}} \mathcal{L}(w^* x_i, y_i) \quad \text{s.t.} \quad w^* = \arg\min_w \mathcal{L}(w,\lambda) := \frac{1}{|\mathcal{D}_{\text{tr}}|}\sum_{(x_i,y_i)\in\mathcal{D}_{\text{tr}}} \sigma(\lambda_i)\mathcal{L}(w x_i, y_i) + C_r\|w\|^2,$$
where $\mathcal{L}$ is the cross-entropy loss, $\sigma(\cdot)$ is the sigmoid function, and $C_r$ is a regularization parameter. Following Shaban et al. (2019), we choose $C_r = 0.001$. All results are averaged over 10 trials with different random seeds. We adopt Adam (Kingma & Ba, 2014) as the optimizer for the outer-loop update for all algorithms. For the stochastic algorithms, we set the batch size to 50 for stocBiO, and to 1 for BSA and TTSA because they use single-sample data sampling. For all algorithms, we use a grid search to choose the inner-loop stepsize from {0.01, 0.1, 1, 10}, the outer-loop stepsize from {10^i, i = -4, -3, -2, -1, 0, 1, 2, 3, 4}, and the number T of inner-loop steps from {1, 10, 50, 100, 200, 1000}, where the values that achieve the lowest loss after a fixed running time are selected. For stocBiO, BSA, and TTSA, we choose η from {0.5 × 2^i, i = -3, -2, -1, 0, 1, 2, 3}, and Q from {3 × 2^i, i = 0, 1, 2, 3}. It can be seen from Figure 4 that our proposed stocBiO algorithm achieves the fastest convergence rate among all competing algorithms in terms of both the training loss and the test loss. In addition, the improvement is more significant when the corruption rate p is smaller. We note that the stochastic algorithm TTSA converges very slowly with a large variance. This is because TTSA updates the costly outer loop more frequently than the other algorithms, and has a larger variance due to single-sample data sampling. In contrast, stocBiO achieves a much lower variance for hypergradient estimation as well as a much faster convergence rate. This verifies our theoretical results in Theorem 3.
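The label-corruption step of data hyper-cleaning described above can be sketched as follows (our illustration; the function name, class count, and seed are hypothetical, not taken from the authors' code).

```python
import numpy as np

# Corrupt each label independently with probability p (the corruption rate):
# a corrupted label is replaced by a uniformly random class number.
def corrupt_labels(labels, p, num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    mask = rng.random(len(labels)) < p       # which examples get corrupted
    labels[mask] = rng.integers(0, num_classes, size=mask.sum())
    return labels, mask

y = np.arange(100) % 10                      # clean labels for 100 examples
y_bad, mask = corrupt_labels(y, p=0.4)
```

The inner-loop weights $\sigma(\lambda_i)$ are then learned to down-weight the corrupted examples, which is what Figure 4 evaluates at different corruption rates.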

B FURTHER SPECIFICATIONS ON META-LEARNING EXPERIMENTS

B.1 DATASETS AND MODEL ARCHITECTURES

FC100 (Oreshkin et al., 2018) is a dataset derived from CIFAR-100 (Krizhevsky & Hinton, 2009).

Parameter selection for the experiments in Figure 1(a): For ANIL and MAML, we adopt the suggested hyperparameter selection in the repository (Arnold et al., 2019). Specifically, for ANIL, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.002, the task sampling size as 32, and the number of inner-loop steps as 5. For MAML, we choose the inner-loop stepsize as 0.5, the outer-loop stepsize as 0.003, the task sampling size as 32, and the number of inner-loop steps as 3. For ITD-BiO, AID-BiO-constant and AID-BiO-increasing, we use a grid search to choose the inner-loop stepsize from {0.01, 0.1, 1, 10}, the task sampling size from {32, 128, 256}, and the outer-loop stepsize from {10^i, i = -3, -2, -1, 0, 1, 2, 3}, where the values that achieve the lowest loss after a fixed running time are selected. For ITD-BiO and AID-BiO-constant, we choose the number of inner-loop steps from {5, 10, 15, 20, 50}, and for AID-BiO-increasing, we choose the number of inner-loop steps as c(k+1)^{1/4} as adopted in the analysis of Ghadimi & Wang (2018), where we choose c from {0.5, 2, 5, 10, 50}. For both AID-BiO-constant and AID-BiO-increasing, we choose the number N of CG steps for solving the linear system from {5, 10, 15}.

Parameter selection for the experiments in Figure 1(b): For ANIL and MAML, we adopt the suggested hyperparameter selection in the repository (Arnold et al., 2019). Specifically, for ANIL, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.001, the task sampling size as 32, and the number of inner-loop steps as 10. For MAML, we choose the inner-loop stepsize as 0.5, the outer-loop stepsize as 0.001, the task sampling size as 32, and the number of inner-loop steps as 3. For ITD-BiO, AID-BiO-constant and AID-BiO-increasing, we adopt the same procedure as in the experiments of Figure 1(a).
In this subsection, we compare the robustness of the bilevel optimizer ITD-BiO (AID-BiO performs similarly to ITD-BiO in terms of the convergence rate) and ANIL (ANIL outperforms MAML in general) to the number of inner-loop steps. For the experiments in Figure 5, we choose the inner-loop stepsize as 0.05, the outer-loop (meta) stepsize as 0.002, the mini-batch size as 32, and the number T of inner-loop steps as 10 for both ANIL and ITD-BiO. For the experiments in Figure 6, we choose the inner-loop stepsize as 0.1, the outer-loop (meta) stepsize as 0.001, the mini-batch size as 32, and the number T of inner-loop steps as 20 for both ANIL and ITD-BiO. It can be seen from Figure 5 and Figure 6 that when the number of inner-loop steps becomes larger, i.e., T = 10 for miniImageNet and T = 20 for FC100, the bilevel optimizer ITD-BiO converges stably with a small variance, whereas ANIL suffers a sudden drop at around 1500s on miniImageNet and even diverges after 2000s on FC100.

C SUPPORTING LEMMAS

In this section, we provide some auxiliary lemmas used for proving the main convergence results. First note that the Lipschitz properties in Assumption 2 imply the following lemma.

Lemma 1. Suppose Assumption 2 holds. Then, the stochastic derivatives $\nabla F(z;\xi)$, $\nabla G(z;\xi)$, $\nabla_x\nabla_y G(z;\xi)$ and $\nabla_y^2 G(z;\xi)$ have bounded variances, i.e., for any $z$ and $\xi$,
• $\mathbb{E}_\xi \|\nabla F(z;\xi) - \nabla f(z)\|^2 \le M^2$.
• $\mathbb{E}_\xi \|\nabla_x\nabla_y G(z;\xi) - \nabla_x\nabla_y g(z)\|^2 \le L^2$.
• $\mathbb{E}_\xi \|\nabla_y^2 G(z;\xi) - \nabla_y^2 g(z)\|^2 \le L^2$.

Recall that $\Phi(x) = f(x, y^*(x))$ in eq. (2). We use the following lemma, adapted from Lemma 2.2 in Ghadimi & Wang (2018), to characterize the Lipschitz property of $\nabla\Phi(x)$.

Lemma 2. Suppose Assumptions 1, 2 and 3 hold. Then, for any $x, x' \in \mathbb{R}^p$, $\|\nabla\Phi(x) - \nabla\Phi(x')\| \le L_\Phi\|x - x'\|$, where the constant $L_\Phi$ is given by
$$L_\Phi = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho L M + L^3 + \tau M L}{\mu^2} + \frac{\rho L^2 M}{\mu^3}.$$

D PROOF OF PROPOSITIONS IN SECTION 2

In this section, we provide the proofs for Proposition 1 and Proposition 2 in Section 2.

D.1 PROOF OF PROPOSITION 1

Using the chain rule over the gradient $\nabla\Phi(x_k) = \frac{\partial f(x_k, y^*(x_k))}{\partial x_k}$, we have
$$\nabla\Phi(x_k) = \nabla_x f(x_k, y^*(x_k)) + \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y f(x_k, y^*(x_k)). \tag{14}$$
Based on the optimality of $y^*(x_k)$, we have $\nabla_y g(x_k, y^*(x_k)) = 0$, which, using implicit differentiation w.r.t. $x_k$, yields
$$\nabla_x\nabla_y g(x_k, y^*(x_k)) + \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y^2 g(x_k, y^*(x_k)) = 0. \tag{15}$$
Let $v_k^*$ be the solution of the linear system $\nabla_y^2 g(x_k, y^*(x_k))\, v = \nabla_y f(x_k, y^*(x_k))$. Then, multiplying both sides of eq. (15) by $v_k^*$ yields
$$-\nabla_x\nabla_y g(x_k, y^*(x_k))\, v_k^* = \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y^2 g(x_k, y^*(x_k))\, v_k^* = \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y f(x_k, y^*(x_k)),$$
which, in conjunction with eq. (14), completes the proof.
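Proposition 1's implicit-differentiation formula can be verified numerically on a toy quadratic bilevel problem where every quantity has a closed form (our example; the choices of $f$, $g$, and the dimensions are assumptions for illustration, not from the paper).

```python
import numpy as np

# Toy problem: g(x, y) = 0.5 (y - Bx)^T H (y - Bx), strongly convex in y with
# y*(x) = Bx, and f(x, y) = 0.5||x||^2 + 0.5||y||^2, so that in closed form
# Phi(x) = 0.5||x||^2 + 0.5||Bx||^2 and grad Phi(x) = x + B^T B x.
rng = np.random.default_rng(1)
p = q = 4
B = rng.normal(size=(q, p))
M = rng.normal(size=(q, q))
H = M @ M.T + q * np.eye(q)        # positive definite Hessian of g in y

x = rng.normal(size=p)
y_star = B @ x                     # exact inner minimizer
grad_y_f = y_star                  # nabla_y f at (x, y*)
jac_xy_g = -B.T @ H                # p x q mixed partial nabla_x nabla_y g

# Proposition 1: solve  nabla_y^2 g  v = nabla_y f,  then
#   grad Phi = nabla_x f - (nabla_x nabla_y g) v.
v = np.linalg.solve(H, grad_y_f)
hypergrad = x - jac_xy_g @ v

print(np.allclose(hypergrad, x + B.T @ (B @ x)))   # matches the closed form
```

This is exactly the AID estimator at the exact inner minimizer; the AID analysis below quantifies the extra error when $y^*(x_k)$ and $v_k^*$ are only computed approximately.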

D.2 PROOF OF PROPOSITION 2

Based on the iterative update in line 5 of Algorithm 1, we have $y_k^T = y_k^0 - \alpha\sum_{t=0}^{T-1}\nabla_y g(x_k, y_k^t)$, which, combined with the fact that $\nabla_y g(x_k, y_k^t)$ is differentiable w.r.t. $x_k$, implies that the inner output $y_k^T$ is differentiable w.r.t. $x_k$. Then, by the chain rule, we have
$$\frac{\partial f(x_k, y_k^T)}{\partial x_k} = \nabla_x f(x_k, y_k^T) + \frac{\partial y_k^T}{\partial x_k}\nabla_y f(x_k, y_k^T). \tag{16}$$
Based on the iterative updates $y_k^t = y_k^{t-1} - \alpha\nabla_y g(x_k, y_k^{t-1})$ for $t = 1, \dots, T$, we have
$$\frac{\partial y_k^t}{\partial x_k} = \frac{\partial y_k^{t-1}}{\partial x_k} - \alpha\nabla_x\nabla_y g(x_k, y_k^{t-1}) - \alpha\frac{\partial y_k^{t-1}}{\partial x_k}\nabla_y^2 g(x_k, y_k^{t-1}) = \frac{\partial y_k^{t-1}}{\partial x_k}\big(I - \alpha\nabla_y^2 g(x_k, y_k^{t-1})\big) - \alpha\nabla_x\nabla_y g(x_k, y_k^{t-1}).$$
Telescoping the above equality over $t$ from 1 to $T$ yields
$$\frac{\partial y_k^T}{\partial x_k} = \frac{\partial y_k^0}{\partial x_k}\prod_{t=0}^{T-1}\big(I - \alpha\nabla_y^2 g(x_k, y_k^t)\big) - \alpha\sum_{t=0}^{T-1}\nabla_x\nabla_y g(x_k, y_k^t)\prod_{j=t+1}^{T-1}\big(I - \alpha\nabla_y^2 g(x_k, y_k^j)\big) \overset{(i)}{=} -\alpha\sum_{t=0}^{T-1}\nabla_x\nabla_y g(x_k, y_k^t)\prod_{j=t+1}^{T-1}\big(I - \alpha\nabla_y^2 g(x_k, y_k^j)\big), \tag{17}$$
where (i) follows from the fact that $\frac{\partial y_k^0}{\partial x_k} = 0$. Combining eq. (16) and eq. (17) finishes the proof.

E CONVERGENCE PROOFS FOR AID-BIO IN SECTION 4.1

For notational simplicity, we define the following quantities:
$$\Gamma = 3L^2 + \frac{3\tau^2 M^2}{\mu^2} + 6L^2\big(1 + \sqrt{\kappa}\big)^2\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2, \qquad \delta_{T,N} = \Gamma(1-\alpha\mu)^T + 6L^2\kappa\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N},$$
$$\Omega = 8\Big(\beta\kappa^2 + \frac{2\beta M L}{\mu^2} + \frac{2\beta L M \kappa}{\mu^2}\Big)^2, \qquad \Delta_0 = \|y^0 - y^*(x_0)\|^2 + \|v_0^* - v^0\|^2. \tag{18}$$
We first provide some supporting lemmas. The following lemma characterizes the hypergradient estimation error $\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|$, where $\nabla\Phi(x_k)$ is given by eq. (3) via implicit differentiation.

Lemma 3. Suppose Assumptions 1, 2 and 3 hold. Then, we have
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le \Gamma(1-\alpha\mu)^T\|y^*(x_k) - y_k^0\|^2 + 6L^2\kappa\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_k^* - v_k^0\|^2,$$
where $\Gamma$ is given by eq. (18).

Proof of Lemma 3.
Based on the form of $\nabla\Phi(x_k)$ given by Proposition 1, we have
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le 3\|\nabla_x f(x_k, y^*(x_k)) - \nabla_x f(x_k, y_k^T)\|^2 + 3\|\nabla_x\nabla_y g(x_k, y_k^T)\|^2\|v_k^* - v_k^N\|^2 + 3\|\nabla_x\nabla_y g(x_k, y^*(x_k)) - \nabla_x\nabla_y g(x_k, y_k^T)\|^2\|v_k^*\|^2,$$
which, in conjunction with Assumptions 1, 2 and 3, yields
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le 3L^2\|y^*(x_k) - y_k^T\|^2 + 3L^2\|v_k^* - v_k^N\|^2 + 3\tau^2\|v_k^*\|^2\|y_k^T - y^*(x_k)\|^2 \overset{(i)}{\le} 3L^2\|y^*(x_k) - y_k^T\|^2 + 3L^2\|v_k^* - v_k^N\|^2 + \frac{3\tau^2 M^2}{\mu^2}\|y_k^T - y^*(x_k)\|^2, \tag{19}$$
where (i) follows from the fact that $\|v_k^*\| \le \|(\nabla_y^2 g(x_k, y^*(x_k)))^{-1}\|\,\|\nabla_y f(x_k, y^*(x_k))\| \le \frac{M}{\mu}$.

We next upper-bound $\|v_k^* - v_k^N\|$ in eq. (19). For notational simplicity, let $\widetilde v_k = (\nabla_y^2 g(x_k, y_k^T))^{-1}\nabla_y f(x_k, y_k^T)$. Based on the convergence result of CG for quadratic programming, e.g., eq. (17) in Grazzi et al. (2020), we have $\|v_k^N - \widetilde v_k\| \le \sqrt{\kappa}\big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\big)^N\|v_k^0 - \widetilde v_k\|$. Based on this inequality, we further have
$$\|v_k^* - v_k^N\| \le \|v_k^* - \widetilde v_k\| + \|v_k^N - \widetilde v_k\| \le \|v_k^* - \widetilde v_k\| + \sqrt{\kappa}\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^N\|v_k^0 - \widetilde v_k\| \le \Big(1 + \sqrt{\kappa}\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^N\Big)\|v_k^* - \widetilde v_k\| + \sqrt{\kappa}\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^N\|v_k^* - v_k^0\|. \tag{20}$$
Next, based on the definitions of $v_k^*$ and $\widetilde v_k$, we have
$$\|v_k^* - \widetilde v_k\| = \big\|(\nabla_y^2 g(x_k, y_k^T))^{-1}\nabla_y f(x_k, y_k^T) - (\nabla_y^2 g(x_k, y^*(x_k)))^{-1}\nabla_y f(x_k, y^*(x_k))\big\| \le \Big(\kappa + \frac{\rho M}{\mu^2}\Big)\|y_k^T - y^*(x_k)\|. \tag{21}$$
Combining eq. (19), eq. (20) and eq. (21) yields
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le \Big(3L^2 + \frac{3\tau^2 M^2}{\mu^2}\Big)\|y^*(x_k) - y_k^T\|^2 + 6L^2\kappa\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_k^* - v_k^0\|^2 + 6L^2\Big(1 + \sqrt{\kappa}\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^N\Big)^2\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2\|y_k^T - y^*(x_k)\|^2,$$
which, in conjunction with $\|y_k^T - y^*(x_k)\| \le (1-\alpha\mu)^{\frac{T}{2}}\|y_k^0 - y^*(x_k)\|$ and the notations in eq. (18), finishes the proof.

Lemma 4. Suppose Assumptions 1, 2 and 3 hold. Choose
$$T \ge \log\Big(36\kappa\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2 + 16\Big(\kappa^2 + \frac{4LM\kappa}{\mu^2}\Big)^2\beta^2\Gamma\Big)\Big/\log\frac{1}{1-\alpha\mu} = \Theta(\kappa),$$
$$N \ge \frac{1}{2}\log\Big(8\kappa + 48\Big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\Big)^2\beta^2 L^2\kappa\Big)\Big/\log\frac{\sqrt{\kappa}+1}{\sqrt{\kappa}-1} = \Theta(\sqrt{\kappa}), \tag{22}$$
where $\Gamma$ is given by eq. (18).
Then, we have
$$\|y_k^0 - y^*(x_k)\|^2 + \|v_k^* - v_k^0\|^2 \le \Big(\frac{1}{2}\Big)^k\Delta_0 + \Omega\sum_{j=0}^{k-1}\Big(\frac{1}{2}\Big)^{k-1-j}\|\nabla\Phi(x_j)\|^2, \tag{23}$$
where $\Omega$ and $\Delta_0$ are given by eq. (18).

Proof of Lemma 4. Recall that $y_k^0 = y_{k-1}^T$. Then, we have
$$\|y_k^0 - y^*(x_k)\|^2 \le 2\|y_{k-1}^T - y^*(x_{k-1})\|^2 + 2\|y^*(x_k) - y^*(x_{k-1})\|^2 \overset{(i)}{\le} 2(1-\alpha\mu)^T\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + 2\kappa^2\beta^2\|\widehat{\nabla}\Phi(x_{k-1})\|^2$$
$$\le 2(1-\alpha\mu)^T\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + 4\kappa^2\beta^2\|\widehat{\nabla}\Phi(x_{k-1}) - \nabla\Phi(x_{k-1})\|^2 + 4\kappa^2\beta^2\|\nabla\Phi(x_{k-1})\|^2$$
$$\overset{(ii)}{\le} \big(2(1-\alpha\mu)^T + 4\kappa^2\beta^2\Gamma(1-\alpha\mu)^T\big)\|y^*(x_{k-1}) - y_{k-1}^0\|^2 + 24\kappa^4 L^2\beta^2\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_{k-1}^* - v_{k-1}^0\|^2 + 4\kappa^2\beta^2\|\nabla\Phi(x_{k-1})\|^2, \tag{24}$$
where (i) follows from Lemma 2.2 in Ghadimi & Wang (2018) and (ii) follows from Lemma 3. In addition, note that
$$\|v_k^* - v_k^0\|^2 = \|v_k^* - v_{k-1}^N\|^2 \le 2\|v_{k-1}^* - v_{k-1}^N\|^2 + 2\|v_k^* - v_{k-1}^*\|^2 \overset{(i)}{\le} 4\big(1 + \sqrt{\kappa}\big)^2\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2(1-\alpha\mu)^T\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + 4\kappa\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_{k-1}^* - v_{k-1}^0\|^2 + 2\|v_k^* - v_{k-1}^*\|^2, \tag{25}$$
where (i) follows from eq. (20). Combining eq. (25) with $\|v_k^* - v_{k-1}^*\| \le \big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\big)\|x_k - x_{k-1}\|$, we have
$$\|v_k^* - v_k^0\|^2 \overset{(i)}{\le} \Big(16\kappa\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2 + 4\Big(\kappa^2 + \frac{4LM\kappa}{\mu^2}\Big)^2\beta^2\Gamma\Big)(1-\alpha\mu)^T\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + \Big(4\kappa + 48\Big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\Big)^2\beta^2 L^2\kappa\Big)\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_{k-1}^* - v_{k-1}^0\|^2 + 4\Big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\Big)^2\beta^2\|\nabla\Phi(x_{k-1})\|^2, \tag{26}$$
where (i) follows from Lemma 3. Combining eq. (24) and eq. (26) yields
$$\|y_k^0 - y^*(x_k)\|^2 + \|v_k^* - v_k^0\|^2 \le \Big(18\kappa\Big(\kappa + \frac{\rho M}{\mu^2}\Big)^2 + 8\Big(\kappa^2 + \frac{4LM\kappa}{\mu^2}\Big)^2\beta^2\Gamma\Big)(1-\alpha\mu)^T\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + \Big(4\kappa + 24\Big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\Big)^2\beta^2 L^2\kappa\Big)\Big(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\Big)^{2N}\|v_{k-1}^* - v_{k-1}^0\|^2 + 8\Big(\kappa^2 + \frac{2ML}{\mu^2} + \frac{2LM\kappa}{\mu^2}\Big)^2\beta^2\|\nabla\Phi(x_{k-1})\|^2,$$
which, in conjunction with eq. (22), yields
$$\|y_k^0 - y^*(x_k)\|^2 + \|v_k^* - v_k^0\|^2 \le \frac{1}{2}\big(\|y_{k-1}^0 - y^*(x_{k-1})\|^2 + \|v_{k-1}^* - v_{k-1}^0\|^2\big) + 8\Big(\beta\kappa^2 + \frac{2\beta ML}{\mu^2} + \frac{2\beta LM\kappa}{\mu^2}\Big)^2\|\nabla\Phi(x_{k-1})\|^2. \tag{27}$$
Telescoping eq. (27) over $k$ and using the notations in eq. (18), we finish the proof.

Lemma 5.
Under the same setting as in Lemma 4, we have
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le \delta_{T,N}\Big(\frac{1}{2}\Big)^k\Delta_0 + \delta_{T,N}\,\Omega\sum_{j=0}^{k-1}\Big(\frac{1}{2}\Big)^{k-1-j}\|\nabla\Phi(x_j)\|^2,$$
where $\delta_{T,N}$, $\Omega$ and $\Delta_0$ are given by eq. (18).

Proof of Lemma 5. Based on Lemma 3, eq. (18) and the inequality $ab + cd \le (a+c)(b+d)$ for any positive $a, b, c, d$, we have
$$\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2 \le \delta_{T,N}\big(\|y^*(x_k) - y_k^0\|^2 + \|v_k^* - v_k^0\|^2\big),$$
which, in conjunction with Lemma 4, finishes the proof.
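The CG convergence rate invoked in the proof of Lemma 3 concerns solving the linear system $\nabla_y^2 g(x_k, y_k^T)\, v = \nabla_y f(x_k, y_k^T)$. The following is a minimal textbook CG sketch (our illustration, not the paper's implementation; practical AID codes access the Hessian only through Hessian-vector products, as the `hvp` argument does here).

```python
import numpy as np

def conjugate_gradient(hvp, b, v0, num_steps):
    """Approximately solve A v = b given only the map v -> A v (A SPD)."""
    v = v0.copy()
    r = b - hvp(v)          # residual
    d = r.copy()            # search direction
    for _ in range(num_steps):
        Ad = hvp(d)
        a = (r @ r) / (d @ Ad)          # exact line search along d
        v = v + a * d
        r_new = r - a * Ad
        beta = (r_new @ r_new) / (r @ r)
        d, r = r_new + beta * d, r_new  # A-conjugate direction update
    return v

# Sanity check on a random SPD system.
rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)             # symmetric positive definite
b = rng.normal(size=6)
v = conjugate_gradient(lambda u: A @ u, b, np.zeros(6), num_steps=6)
print(np.allclose(A @ v, b))            # n steps suffice in exact arithmetic
```

Each CG step costs one Hessian-vector product, which is why the HV complexity in Theorem 1 scales with the number $N$ of CG steps.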

E.1 PROOF OF THEOREM 1

In this subsection, we provide the proof of Theorem 1 based on the supporting Lemma 5.

Based on the smoothness of the function $\Phi(x)$ established in Lemma 2, we have
$$\Phi(x_{k+1}) \le \Phi(x_k) + \langle\nabla\Phi(x_k), x_{k+1} - x_k\rangle + \frac{L_\Phi}{2}\|x_{k+1} - x_k\|^2 \le \Phi(x_k) - \beta\langle\nabla\Phi(x_k), \widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\rangle - \beta\|\nabla\Phi(x_k)\|^2 + \beta^2 L_\Phi\|\nabla\Phi(x_k)\|^2 + \beta^2 L_\Phi\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2$$
$$\le \Phi(x_k) - \Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\|\nabla\Phi(x_k)\|^2 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2, \tag{28}$$
which, combined with Lemma 5, yields
$$\Phi(x_{k+1}) \le \Phi(x_k) - \Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\|\nabla\Phi(x_k)\|^2 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\delta_{T,N}\Big(\frac{1}{2}\Big)^k\Delta_0 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\delta_{T,N}\,\Omega\sum_{j=0}^{k-1}\Big(\frac{1}{2}\Big)^{k-1-j}\|\nabla\Phi(x_j)\|^2. \tag{29}$$
Telescoping eq. (29) over $k$ from 0 to $K-1$ yields
$$\Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \Phi(x_0) - \inf_x\Phi(x) + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\delta_{T,N}\Delta_0 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\delta_{T,N}\,\Omega\sum_{k=1}^{K-1}\sum_{j=0}^{k-1}\Big(\frac{1}{2}\Big)^{k-1-j}\|\nabla\Phi(x_j)\|^2, \tag{30}$$
which, using the fact that $\sum_{k=1}^{K-1}\sum_{j=0}^{k-1}(\frac{1}{2})^{k-1-j}\|\nabla\Phi(x_j)\|^2 \le \sum_{k=0}^{K-1}(\frac{1}{2})^k\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le 2\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2$, yields
$$\Big(\frac{\beta}{2} - \beta^2 L_\Phi - \big(\beta\Omega + 2\Omega\beta^2 L_\Phi\big)\delta_{T,N}\Big)\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \Phi(x_0) - \inf_x\Phi(x) + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\delta_{T,N}\Delta_0.$$
Choose $N$ and $T$ such that
$$\big(\Omega + 2\Omega\beta L_\Phi\big)\delta_{T,N} \le \frac{1}{4}, \qquad \delta_{T,N} \le 1. \tag{31}$$
Note that based on the definition of $\delta_{T,N}$ in eq. (18), it suffices to choose $T \ge \Theta(\kappa)$ and $N \ge \Theta(\sqrt{\kappa})$ to satisfy eq. (31). Then, substituting eq. (31) into eq. (30) yields
$$\Big(\frac{\beta}{4} - \beta^2 L_\Phi\Big)\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \Phi(x_0) - \inf_x\Phi(x) + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\Delta_0, \tag{32}$$
which, in conjunction with $\beta \le \frac{1}{8L_\Phi}$, yields
$$\frac{1}{K}\sum_{k=0}^{K-1}\|\nabla\Phi(x_k)\|^2 \le \frac{64L_\Phi\big(\Phi(x_0) - \inf_x\Phi(x)\big) + 5\Delta_0}{K}. \tag{33}$$
In order to achieve an $\epsilon$-accurate stationary point, we obtain from eq. (33) that AID-BiO requires at most $K = \mathcal{O}(\kappa^3\epsilon^{-1})$ outer iterations. Then, based on eq. (3), we have the following complexity results.
• Gradient complexity: $\text{Gc}(f,\epsilon) = 2K = \mathcal{O}(\kappa^3\epsilon^{-1})$, $\text{Gc}(g,\epsilon) = KT = \mathcal{O}(\kappa^4\epsilon^{-1})$.
• Jacobian- and Hessian-vector product complexities: $\text{JV}(g,\epsilon) = K = \mathcal{O}(\kappa^3\epsilon^{-1})$, $\text{HV}(g,\epsilon) = KN = \mathcal{O}(\kappa^{3.5}\epsilon^{-1})$.
Then, the proof is complete.

F CONVERGENCE PROOFS FOR ITD-BIO IN SECTION 4.1

We first characterize an important estimation property of the outer-loop gradient estimator $\frac{\partial f(x_k, y_k^T)}{\partial x_k}$ in ITD-BiO for approximating the true gradient $\nabla\Phi(x_k)$, based on Proposition 2.

Lemma 6. Suppose Assumptions 1, 2 and 3 hold. Choose $\alpha \le \frac{1}{L}$. Then, we have
$$\Big\|\frac{\partial f(x_k, y_k^T)}{\partial x_k} - \nabla\Phi(x_k)\Big\| \le \Big(\frac{L(L+\mu)(1-\alpha\mu)^{\frac{T}{2}}}{\mu} + \frac{2M(\tau\mu + L\rho)}{\mu^2}(1-\alpha\mu)^{\frac{T-1}{2}}\Big)\|y_k^0 - y^*(x_k)\| + \frac{LM(1-\alpha\mu)^T}{\mu}.$$
Lemma 6 shows that the gradient estimation error $\|\frac{\partial f(x_k, y_k^T)}{\partial x_k} - \nabla\Phi(x_k)\|$ decays exponentially with the number $T$ of inner-loop steps. We note that Grazzi et al. (2020) proved a similar result via a fixed-point based approach. As a comparison, our proof of Lemma 6 directly characterizes the rate at which the sequence $\frac{\partial y_k^t}{\partial x_k}$, $t = 0, \dots, T$, converges to $\frac{\partial y^*(x_k)}{\partial x_k}$ via differentiation over all corresponding points along the inner-loop GD path as well as the optimality of the point $y^*(x_k)$.

Proof of Lemma 6. Using $\nabla\Phi(x_k) = \nabla_x f(x_k, y^*(x_k)) + \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y f(x_k, y^*(x_k))$, eq. (16), and the triangle inequality, we have
$$\Big\|\frac{\partial f(x_k, y_k^T)}{\partial x_k} - \nabla\Phi(x_k)\Big\| = \Big\|\nabla_x f(x_k, y_k^T) - \nabla_x f(x_k, y^*(x_k)) + \Big(\frac{\partial y_k^T}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big)\nabla_y f(x_k, y_k^T) + \frac{\partial y^*(x_k)}{\partial x_k}\big(\nabla_y f(x_k, y_k^T) - \nabla_y f(x_k, y^*(x_k))\big)\Big\|$$
$$\overset{(i)}{\le} L\|y_k^T - y^*(x_k)\| + M\Big\|\frac{\partial y_k^T}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| + L\Big\|\frac{\partial y^*(x_k)}{\partial x_k}\Big\|\,\|y_k^T - y^*(x_k)\|, \tag{34}$$
where (i) follows from Assumption 2. Our next step is to upper-bound $\|\frac{\partial y_k^T}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\|$ in eq. (34). Based on the updates $y_k^t = y_k^{t-1} - \alpha\nabla_y g(x_k, y_k^{t-1})$ for $t = 1, \dots, T$ in ITD-BiO and the chain rule, we have
$$\frac{\partial y_k^t}{\partial x_k} = \frac{\partial y_k^{t-1}}{\partial x_k} - \alpha\Big(\nabla_x\nabla_y g(x_k, y_k^{t-1}) + \frac{\partial y_k^{t-1}}{\partial x_k}\nabla_y^2 g(x_k, y_k^{t-1})\Big). \tag{35}$$
Based on the optimality of $y^*(x_k)$, we have $\nabla_y g(x_k, y^*(x_k)) = 0$, which, in conjunction with the implicit function theorem, yields
$$\nabla_x\nabla_y g(x_k, y^*(x_k)) + \frac{\partial y^*(x_k)}{\partial x_k}\nabla_y^2 g(x_k, y^*(x_k)) = 0, \quad \text{i.e.,}\quad \frac{\partial y^*(x_k)}{\partial x_k} = -\nabla_x\nabla_y g(x_k, y^*(x_k))\big(\nabla_y^2 g(x_k, y^*(x_k))\big)^{-1}, \quad \text{so}\quad \Big\|\frac{\partial y^*(x_k)}{\partial x_k}\Big\| \le \frac{L}{\mu}. \tag{38}$$
Subtracting this identity from eq. (35) and applying the triangle inequality, we obtain
$$\Big\|\frac{\partial y_k^t}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| \le (1-\alpha\mu)\Big\|\frac{\partial y_k^{t-1}}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| + \alpha\Big(\tau + \frac{L\rho}{\mu}\Big)\|y_k^{t-1} - y^*(x_k)\|, \tag{39}$$
where the two terms follow from Assumption 3 and the strong convexity of $g(x,\cdot)$, respectively.
Based on the strong convexity of the lower-level function $g(x,\cdot)$, we have
$$\|y_k^{t-1} - y^*(x_k)\| \le (1-\alpha\mu)^{\frac{t-1}{2}}\|y_k^0 - y^*(x_k)\|. \tag{40}$$
Substituting eq. (40) into eq. (39) and telescoping eq. (39) over $t$ from 1 to $T$, we have
$$\Big\|\frac{\partial y_k^T}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| \le (1-\alpha\mu)^T\Big\|\frac{\partial y_k^0}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| + \alpha\Big(\tau + \frac{L\rho}{\mu}\Big)\sum_{t=0}^{T-1}(1-\alpha\mu)^{T-1-t}(1-\alpha\mu)^{\frac{t}{2}}\|y_k^0 - y^*(x_k)\|$$
$$\le (1-\alpha\mu)^T\Big\|\frac{\partial y_k^0}{\partial x_k} - \frac{\partial y^*(x_k)}{\partial x_k}\Big\| + \frac{2(\tau\mu + L\rho)}{\mu^2}(1-\alpha\mu)^{\frac{T-1}{2}}\|y_k^0 - y^*(x_k)\| \le \frac{L(1-\alpha\mu)^T}{\mu} + \frac{2(\tau\mu + L\rho)}{\mu^2}(1-\alpha\mu)^{\frac{T-1}{2}}\|y_k^0 - y^*(x_k)\|, \tag{41}$$
where the last inequality follows from $\frac{\partial y_k^0}{\partial x_k} = 0$ and eq. (38). Then, combining eq. (34), eq. (38), eq. (40) and eq. (41) completes the proof.
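The ITD product formula of Proposition 2 and the exponential error decay of Lemma 6 can be checked numerically on a toy quadratic lower-level problem (our illustration; the matrices, stepsize, and upper-level $f$ are assumptions for the example, chosen so the exact hypergradient is $x + B^\top B x$).

```python
import numpy as np

# Toy problem: g(x, y) = 0.5 (y - Bx)^T H (y - Bx),  f(x, y) = 0.5||x||^2 + 0.5||y||^2.
rng = np.random.default_rng(2)
p = q = 3
B = rng.normal(size=(q, p))
M = rng.normal(size=(q, q))
H = M @ M.T + q * np.eye(q)               # constant Hessian of g in y
x = rng.normal(size=p)

alpha = 1.0 / np.linalg.eigvalsh(H)[-1]   # inner stepsize <= 1/L
T = 200
y = np.zeros(q)
dy_dx = np.zeros((p, q))                  # running Jacobian d y^t / d x
for _ in range(T):
    # Proposition 2 recursion:
    # d y^t/dx = d y^{t-1}/dx (I - alpha * nabla_y^2 g) - alpha * nabla_x nabla_y g,
    # with nabla_x nabla_y g = -B^T H for this quadratic g.
    dy_dx = dy_dx @ (np.eye(q) - alpha * H) - alpha * (-B.T @ H)
    y = y - alpha * H @ (y - B @ x)       # inner gradient descent step

itd_hypergrad = x + dy_dx @ y             # nabla_x f + (dy/dx) nabla_y f
exact = x + B.T @ (B @ x)                 # closed-form hypergradient
print(np.max(np.abs(itd_hypergrad - exact)) < 1e-6)
```

Consistent with Lemma 6, rerunning with smaller $T$ (e.g., $T = 5$) leaves a visibly larger gap, which shrinks geometrically as $T$ grows.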

F.1 PROOF OF THEOREM 2

Based on the characterization of the estimation error of the gradient estimate $\frac{\partial f(x_k, y_k^T)}{\partial x_k}$ in Lemma 6, we now prove Theorem 2. Recall the notation $\widehat{\nabla}\Phi(x_k) = \frac{\partial f(x_k, y_k^T)}{\partial x_k}$. Using an approach similar to eq. (28), we have
$$\Phi(x_{k+1}) \le \Phi(x_k) - \Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\|\nabla\Phi(x_k)\|^2 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\|\widehat{\nabla}\Phi(x_k) - \nabla\Phi(x_k)\|^2, \tag{42}$$
which, in conjunction with Lemma 6 and $\|y_k^0 - y^*(x_k)\|^2 \le \Delta$, yields
$$\Phi(x_{k+1}) \le \Phi(x_k) - \Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\|\nabla\Phi(x_k)\|^2 + 3\Delta\Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\Big(\frac{L^2(L+\mu)^2}{\mu^2}(1-\alpha\mu)^T + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4}(1-\alpha\mu)^{T-1}\Big) + 3\Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\frac{L^2 M^2(1-\alpha\mu)^{2T}}{\mu^2}. \tag{43}$$
Telescoping eq. (43) over $k$ from 0 to $K-1$ yields
$$\frac{1}{K}\sum_{k=0}^{K-1}\Big(\frac{1}{2} - \beta L_\Phi\Big)\|\nabla\Phi(x_k)\|^2 \le \frac{\Phi(x_0) - \inf_x\Phi(x)}{\beta K} + 3\Big(\frac{1}{2} + \beta L_\Phi\Big)\frac{L^2 M^2(1-\alpha\mu)^{2T}}{\mu^2} + 3\Delta\Big(\frac{1}{2} + \beta L_\Phi\Big)\Big(\frac{L^2(L+\mu)^2}{\mu^2}(1-\alpha\mu)^T + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4}(1-\alpha\mu)^{T-1}\Big). \tag{44}$$

Proof of Lemma 10 (stated in Appendix H). Let $w_k^T = (w_{1,k}^T, \dots, w_{m,k}^T)$ be the output of $T$ inner-loop steps of gradient descent at the $k$-th outer loop. Using Proposition 2, we have, for task $\mathcal{T}_i$,
$$\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)}{\partial\phi_k}\Big\| \le \|\nabla_\phi\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)\| + \alpha\Big\|\sum_{t=0}^{T-1}\nabla_\phi\nabla_{w_i}\mathcal{L}_{\mathcal{S}_i}(\phi_k, w_{i,k}^t)\prod_{j=t+1}^{T-1}\big(I - \alpha\nabla_{w_i}^2\mathcal{L}_{\mathcal{S}_i}(\phi_k, w_{i,k}^j)\big)\nabla_{w_i}\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)\Big\| \overset{(i)}{\le} M + \alpha L M\sum_{t=0}^{T-1}(1-\alpha\mu)^{T-t-1} \le M + \frac{LM}{\mu}, \tag{72}$$
where (i) follows from Assumption 2 and the strong convexity of $\mathcal{L}_{\mathcal{S}_i}(\phi,\cdot)$. Then, using the definition $\mathcal{L}_{\mathcal{D}}(\phi, w; \mathcal{B}) = \frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\mathcal{L}_{\mathcal{D}_i}(\phi, w_i)$, we have
$$\mathbb{E}_{\mathcal{B}}\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B})}{\partial\phi_k} - \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)}{\partial\phi_k}\Big\|^2 = \frac{1}{|\mathcal{B}|}\mathbb{E}_i\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)}{\partial\phi_k} - \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)}{\partial\phi_k}\Big\|^2 \overset{(i)}{\le} \frac{1}{|\mathcal{B}|}\mathbb{E}_i\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)}{\partial\phi_k}\Big\|^2 \overset{(ii)}{\le} \Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{|\mathcal{B}|}, \tag{73}$$
where (i) follows from $\mathbb{E}_i\frac{\partial\mathcal{L}_{\mathcal{D}_i}(\phi_k, w_{i,k}^T)}{\partial\phi_k} = \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)}{\partial\phi_k}$ and (ii) follows from eq. (72). Then, the proof is complete.

Taking the expectation of eq.
(74) yields
$$\mathbb{E}\Phi(\phi_{k+1}) \overset{(i)}{\le} \mathbb{E}\Phi(\phi_k) - \beta\,\mathbb{E}\big\langle\nabla\Phi(\phi_k), \widehat{\nabla}\Phi(\phi_k)\big\rangle + \frac{\beta^2 L_\Phi}{2}\mathbb{E}\|\widehat{\nabla}\Phi(\phi_k)\|^2 + \frac{\beta^2 L_\Phi}{2}\mathbb{E}\Big\|\widehat{\nabla}\Phi(\phi_k) - \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B})}{\partial\phi_k}\Big\|^2$$
$$\overset{(ii)}{\le} \mathbb{E}\Phi(\phi_k) - \beta\,\mathbb{E}\big\langle\nabla\Phi(\phi_k), \widehat{\nabla}\Phi(\phi_k)\big\rangle + \frac{\beta^2 L_\Phi}{2}\mathbb{E}\|\widehat{\nabla}\Phi(\phi_k)\|^2 + \frac{\beta^2 L_\Phi}{2}\Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{|\mathcal{B}|} \le \mathbb{E}\Phi(\phi_k) - \Big(\frac{\beta}{2} - \beta^2 L_\Phi\Big)\mathbb{E}\|\nabla\Phi(\phi_k)\|^2 + \Big(\frac{\beta}{2} + \beta^2 L_\Phi\Big)\mathbb{E}\|\widehat{\nabla}\Phi(\phi_k) - \nabla\Phi(\phi_k)\|^2 + \frac{\beta^2 L_\Phi}{2}\Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{|\mathcal{B}|}, \tag{75}$$
where (i) follows from $\mathbb{E}_{\mathcal{B}}\,\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B}) = \mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)$ and (ii) follows from Lemma 10. Using Lemma 6 in eq. (75) and rearranging the terms, we have
$$\frac{1}{K}\sum_{k=0}^{K-1}\Big(\frac{1}{2} - \beta L_\Phi\Big)\mathbb{E}\|\nabla\Phi(\phi_k)\|^2 \le \frac{\Phi(\phi_0) - \inf_\phi\Phi(\phi)}{\beta K} + 3\Big(\frac{1}{2} + \beta L_\Phi\Big)\frac{L^2 M^2(1-\alpha\mu)^{2T}}{\mu^2} + \frac{\beta L_\Phi}{2}\Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{|\mathcal{B}|} + 3\Delta\Big(\frac{1}{2} + \beta L_\Phi\Big)\Big(\frac{L^2(L+\mu)^2}{\mu^2}(1-\alpha\mu)^T + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4}(1-\alpha\mu)^{T-1}\Big).$$



Footnotes: (1) This is equivalent to solving the quadratic program $\min_v \frac{1}{2}v^\top\nabla_y^2 g(x_k, y_k^T)\,v - v^\top\nabla_y f(x_k, y_k^T)$. (2) MAML consists of an inner loop for task adaptation and an outer loop for meta-initialization training. (3) ANIL refers to "almost no inner loop", an efficient MAML variant with task-specific adaptation only on the last layer of parameters. (4) We do not include reverse, AID-CG and AID-FP because they perform similarly to HOAG.



Figure 1: Convergence of various algorithms on meta-learning. For each dataset, left plot: training accuracy v.s. running time; right plot: test accuracy v.s. running time.

Figure 2: Comparison of various algorithms for logistic regression on the 20 Newsgroup dataset. Left plot: test loss v.s. running time; right plot: test accuracy v.s. running time.

Figure 4: Convergence of various algorithms on hyperparameter optimization at different corruption rates. For each corruption rate p, left plot: training loss v.s. running time; right plot: test loss v.s. running time.

Figure 5: Comparison of ITD-BiO and ANIL on miniImageNet dataset with T = 10.

Figure 6: Comparison of ITD-BiO and ANIL on FC100 dataset with T = 20.

Proof of Theorem 4. Recall that $\Phi(\phi) := \mathcal{L}_{\mathcal{D}}(\phi, w^*(\phi))$ is the objective function, and let $\widehat{\nabla}\Phi(\phi_k) = \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)}{\partial\phi_k}$. Using an approach similar to eq. (42), we have
$$\Phi(\phi_{k+1}) \le \Phi(\phi_k) + \langle\nabla\Phi(\phi_k), \phi_{k+1} - \phi_k\rangle + \frac{L_\Phi}{2}\|\phi_{k+1} - \phi_k\|^2 \le \Phi(\phi_k) - \beta\Big\langle\nabla\Phi(\phi_k), \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B})}{\partial\phi_k}\Big\rangle + \frac{\beta^2 L_\Phi}{2}\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B})}{\partial\phi_k}\Big\|^2. \tag{74}$$

Oriol Vinyals, Charles Blundell, Timothy Lillicrap, and Daan Wierstra. Matching networks for one shot learning. In Advances in Neural Information Processing Systems (NIPS), 2016.

Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and applications. arXiv preprint arXiv:2003.05689, 2020.

Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations (ICLR), 2019.

Each block sequentially consists of a 3 × 3 convolution (padding = 1, stride = 2), batch normalization, ReLU activation, and 2 × 2 max pooling. Each convolutional layer has 64 filters. The miniImageNet dataset (Vinyals et al., 2016) is generated from ImageNet (Russakovsky et al., 2015), and consists of 100 classes, with each class containing 600 images of size 84 × 84. Following the repository (Arnold et al., 2019), we partition these classes into 64 classes for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. Following the repository (Arnold et al., 2019), we use a four-layer CNN with four convolutional blocks, where each block sequentially consists of a 3 × 3 convolution, batch normalization, ReLU activation, and 2 × 2 max pooling. Each convolutional layer has 32 filters.

B.2 IMPLEMENTATIONS AND HYPERPARAMETER SETTINGS

We adopt the existing implementations in the repository (Arnold et al., 2019) for ANIL and MAML. For all algorithms, we adopt Adam (Kingma & Ba, 2014) as the optimizer for the outer-loop update.



Substituting $\beta = \frac{1}{4L_\Phi}$ and
$$T = \log\Big(\max\Big\{\frac{3LM}{\mu},\; 9\Delta L^2\Big(1 + \frac{L}{\mu}\Big)^2,\; \frac{36\Delta M^2(\tau\mu + L\rho)^2}{(1-\alpha\mu)\mu^4}\Big\}\cdot\frac{9}{\epsilon}\Big)\Big/\log\frac{1}{1-\alpha\mu} = \Theta\Big(\kappa\log\frac{1}{\epsilon}\Big)$$
into eq. (44) yields eq. (45). In order to achieve an $\epsilon$-accurate stationary point, we obtain from eq. (45) that ITD-BiO requires at most $K = \mathcal{O}(\kappa^3\epsilon^{-1})$ outer iterations. Then, based on the gradient form given by Proposition 2, we have the following complexity results.
• Gradient complexity: $\text{Gc}(f,\epsilon) = 2K = \mathcal{O}(\kappa^3\epsilon^{-1})$, $\text{Gc}(g,\epsilon) = KT = \mathcal{O}(\kappa^4\epsilon^{-1}\log\frac{1}{\epsilon})$.
• Jacobian- and Hessian-vector product complexities: one Jacobian- and one Hessian-vector product per inner-loop step, i.e., $\text{JV}(g,\epsilon) = \text{HV}(g,\epsilon) = KT = \mathcal{O}(\kappa^4\epsilon^{-1}\log\frac{1}{\epsilon})$.
Then, the proof is complete.

G PROOFS OF MAIN RESULTS FOR STOCHASTIC CASE IN SECTION 4.2

In this section, we provide proofs for the convergence and complexity results of the proposed algorithm in the stochastic case.

G.1 PROOF OF PROPOSITION 3

Based on the definition of $v_Q$ in eq. (5) and conditioning on $x_k$ and $y_k^T$, we obtain the bias expression in eq. (46), which, in conjunction with the strong convexity of the function $g(x,\cdot)$, finishes the proof for the estimation bias. We next prove the variance bound. The decomposition in eq. (47) holds, where (i) follows from Lemma 1, (ii) follows from eq. (46), and (iii) follows from the Cauchy-Schwarz inequality. Our next step is to upper-bound $M_q$ in eq. (47). For simplicity, we define a general quantity $M_i$ by replacing $q$ in $M_q$ with $i$. Then, we obtain the recursion in eq. (48) by using the strong convexity of the function $G(x,\cdot;\xi)$ and Lemma 1. Telescoping eq. (48) over $i$ from 2 to $q$ yields eq. (49), which, in conjunction with the choice of $|\mathcal{B}_{Q+1-j}| = BQ(1-\eta\mu)^{j-1}$ for $j = 1, \dots, Q$, yields the claimed bound. Substituting eq. (49) into eq. (47) then yields the variance bound, where the last inequality follows from the fact that $\sum_{q=0}^{S} x^q \le \frac{1}{1-x}$ for $x \in (0,1)$. Then, the proof is complete.
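In the deterministic limit, the estimator $v_Q$ analyzed above reduces to a truncated Neumann series approximating the Hessian-inverse-vector product. The following is a minimal sketch of that deterministic special case (our illustration, not the paper's implementation; stocBiO itself replaces $H$ below by independent minibatch Hessians $\nabla_y^2 G(\cdot;\xi)$): for $\eta < 1/L$, $\eta\sum_{q=0}^{Q}(I - \eta H)^q b \to H^{-1}b$ as $Q \to \infty$.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5))
H = M @ M.T + 5 * np.eye(5)           # positive definite "Hessian"
b = rng.normal(size=5)                # plays the role of nabla_y f

eta = 0.5 / np.linalg.eigvalsh(H)[-1] # eta < 1/L ensures convergence
v, term = np.zeros(5), eta * b
for _ in range(200):                  # Q = 200 terms of the series
    v = v + term
    term = term - eta * (H @ term)    # term <- (I - eta H) term, HVPs only
print(np.allclose(v, np.linalg.solve(H, b)))
```

Because each term needs only one Hessian-vector product, the estimator avoids ever forming or inverting $\nabla_y^2 g$, which is what makes the per-iteration cost of stocBiO scale with the problem dimension rather than its square.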

G.2 AUXILIARY LEMMAS FOR PROVING THEOREM 3

We first use the following lemma to characterize the first-moment error of the gradient estimate $\widehat{\nabla}\Phi(x_k)$, whose form is given by eq. (6).

Lemma 7. Suppose Assumptions 1, 2 and 3 hold. Then, conditioning on $x_k$ and $y_k^T$, the estimation bias $\|\mathbb{E}[\widehat{\nabla}\Phi(x_k)\mid x_k, y_k^T] - \nabla\Phi(x_k)\|$ is bounded by a constant multiple of $\|y_k^T - y^*(x_k)\|$ plus a term of order $(1-\eta\mu)^Q$.

Proof of Lemma 7. To simplify notations, we define the auxiliary quantity $\nabla\Phi_T(x_k)$ as in eq. (51). Based on the definition of $\widehat{\nabla}\Phi(x_k)$ in eq. (6) and conditioning on $x_k$ and $y_k^T$, we obtain eq. (52), where the last inequality follows from Proposition 3. Our next step is to upper-bound the first term on the right-hand side of eq. (52). Using the fact that $\|(\nabla_y^2 g(x, y))^{-1}\| \le \frac{1}{\mu}$ and based on Assumptions 2 and 3, we obtain eq. (53), where the last inequality follows from the submultiplicativity of the norm, $\|M_1 M_2\| \le \|M_1\|\,\|M_2\|$ for any two matrices $M_1$ and $M_2$. Combining eq. (52) and eq. (53) completes the proof.

Then, we use the following lemma to characterize the variance of the estimator $\widehat{\nabla}\Phi(x_k)$.

Lemma 8. Suppose Assumptions 1, 2 and 3 hold. Then, the variance of the estimator $\widehat{\nabla}\Phi(x_k)$ is bounded by a constant multiple of $\mathbb{E}\|y_k^T - y^*(x_k)\|^2$ plus terms inversely proportional to the batch sizes.

Proof of Lemma 8. Based on the definitions of $\widehat{\nabla}\Phi(x_k)$ and $\nabla\Phi_T(x_k)$ in eq. (4) and eq. (51) and conditioning on $x_k$ and $y_k^T$, we obtain eq. (54), where (ii) follows from Lemma 1 and eq. (53), and (iii) follows from Young's inequality and Assumption 2. Using Lemma 1 and Proposition 3 in eq. (54) and unconditioning on $x_k$ and $y_k^T$ completes the proof.

It can be seen from Lemmas 7 and 8 that the upper bounds on both the estimation error and the bias depend on the tracking error $\|y_k^T - y^*(x_k)\|^2$. The following lemma provides an upper bound on this tracking error.

Lemma 9. Suppose Assumptions 1, 2 and 4 hold. Define the constants $\lambda$, $\omega$ and $\Delta$ as in eq. (56). Choose $T$ such that $\lambda < 1$ and set the inner-loop stepsize $\alpha = \frac{2}{L+\mu}$. Then, the tracking error $\mathbb{E}\|y_k^T - y^*(x_k)\|^2$ is bounded by $\lambda^k$ times an initialization term plus a weighted sum of past gradient norms and a variance term proportional to $\omega$.

Proof of Lemma 9. First note that for an integer $t \le T$, the one-step expansion in eq. (57) holds. Conditioning on $y_k^t$ and taking the expectation in eq. (57), we obtain eq. (58), where (i) follows from the third item in Assumption 2 and (ii) follows from the strong convexity and smoothness of the function $g$. Since $\alpha = \frac{2}{L+\mu}$, we obtain from eq.
(58) that eq. (59) holds. Unconditioning on $y_k^t$ in eq. (59) and telescoping eq. (59) over $t$ from 0 to $T-1$ yields eq. (60), where the last inequality follows from the warm start $y_k^0 = y_{k-1}^T$ in Algorithm 2. Next, eq. (61) holds, where (i) follows from Lemma 2.2 in Ghadimi & Wang (2018). Using Lemma 8 in eq. (61) yields eq. (62). Combining eq. (60) and eq. (62) yields eq. (63). Based on the definitions of $\lambda$, $\omega$ and $\Delta$ in eq. (56), we obtain eq. (64) from eq. (63). Telescoping eq. (64) over $k$ completes the proof.

G.3 PROOF OF THEOREM 3

In this subsection, we provide the proof of Theorem 3, based on the supporting lemmas developed in Appendix G.2. Based on the smoothness of the function $\Phi(x)$ in Lemma 2 and the choice $\beta = \frac{1}{4L_\Phi}$, taking the expectation yields eq. (65), where (i) follows from the Cauchy-Schwarz inequality and (ii) follows from Lemma 7 and Lemma 8. Then, applying Lemma 9 to eq. (65), using the definitions of $\omega$, $\Delta$, $\lambda$ in eq. (56) and $\nu$ in eq. (66), and telescoping the resulting inequality over $k$ from 0 to $K-1$ yields eq. (67). Since the choice of $T$ guarantees $\lambda \le \frac{1}{6}$, eq. (67) is further simplified to eq. (68). By the definitions of $\omega$ in eq. (56) and $\nu$ in eq. (66) and the choice of $T$, we obtain eq. (69) and eq. (70). Substituting eq. (69) and eq. (70) into eq. (68) yields eq. (71), which, in conjunction with eq. (56) and eq. (66), yields eq. (10) in Theorem 3.

Then, based on eq. (10), in order to achieve an $\epsilon$-accurate stationary point, i.e., $\mathbb{E}\|\nabla\Phi(\bar x)\|^2 \le \epsilon$ with $\bar x$ chosen from $x_0, \dots, x_{K-1}$ uniformly at random, it suffices to choose
$$K = \frac{32L_\Phi\big(\Phi(x_0) - \inf_x\Phi(x) + \frac{5}{2}\|y^0 - y^*(x_0)\|^2\big)}{\epsilon} = \mathcal{O}\Big(\frac{\kappa^3}{\epsilon}\Big), \qquad T = \Theta(\kappa).$$
Note that the above choices of $Q$ and $B$ satisfy the condition $B \ge \frac{1}{Q(1-\eta\mu)^{Q-1}}$ required in Proposition 3. Then, the gradient complexity is given by $\text{Gc}(F,\epsilon) = KD_f = \mathcal{O}(\kappa^5\epsilon^{-2})$ and $\text{Gc}(G,\epsilon) = KTS = \mathcal{O}(\kappa^9\epsilon^{-2})$. In addition, the Jacobian- and Hessian-vector product complexities are given by $\text{JV}(G,\epsilon) = KD_g = \mathcal{O}(\kappa^5\epsilon^{-2})$ and $\text{HV}(G,\epsilon) = \mathcal{O}(\kappa^6\epsilon^{-2})$. Then, the proof is complete.

H PROOF OF THEOREM 4 ON META-LEARNING

To prove Theorem 4, we first establish the following lemma to characterize the estimation variance, where $w_k^T$ is the output of $T$ inner-loop steps of gradient descent at the $k$-th outer loop.

Lemma 10. Suppose Assumptions 2 and 3 are satisfied and suppose each task loss $\mathcal{L}_{\mathcal{S}_i}(\phi, w_i)$ is $\mu$-strongly convex w.r.t. $w_i$. Then, we have
$$\mathbb{E}_{\mathcal{B}}\Big\|\frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T; \mathcal{B})}{\partial\phi_k} - \frac{\partial\mathcal{L}_{\mathcal{D}}(\phi_k, w_k^T)}{\partial\phi_k}\Big\|^2 \le \Big(1 + \frac{L}{\mu}\Big)^2\frac{M^2}{|\mathcal{B}|}.$$
In the following, $\Delta = \max_k\|w_k^0 - w^*(\phi_k)\|^2 < \infty$. Choosing the same parameters $\beta$ and $T$ as in Theorem 2 then yields the bound stated in Theorem 4. Then, the proof is complete.

