DDPNOPT: DIFFERENTIAL DYNAMIC PROGRAMMING NEURAL OPTIMIZER

Abstract

Interpreting the training of Deep Neural Networks (DNNs) as an optimal control problem over nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from a trajectory optimization perspective. We first show that most widely-used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order method rooted in Approximate Dynamic Programming. In this vein, we propose a new class of optimizer, the DDP Neural Optimizer (DDPNOpt), for training feedforward and convolutional networks. DDPNOpt features layer-wise feedback policies which improve convergence and reduce sensitivity to hyper-parameters compared with existing methods. It outperforms other optimal-control-inspired training methods in both convergence and complexity, and is competitive against state-of-the-art first- and second-order methods. We also observe that DDPNOpt has a surprising benefit in preventing gradient vanishing. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.

1. INTRODUCTION

In this work, we consider the following optimal control problem (OCP) in the discrete-time setting:

$$\min_{\bar{u}} \; J(\bar{u}; x_0) := \phi(x_T) + \sum_{t=0}^{T-1} \ell_t(x_t, u_t) \quad \text{s.t.} \quad x_{t+1} = f_t(x_t, u_t) \,, \tag{OCP}$$

where $x_t \in \mathbb{R}^n$ and $u_t \in \mathbb{R}^m$ represent the state and control at each time step $t$, and $f_t(\cdot,\cdot)$, $\ell_t(\cdot,\cdot)$ and $\phi(\cdot)$ respectively denote the nonlinear dynamics, the intermediate cost, and the terminal cost. OCP aims to find a control trajectory, $\bar{u} \triangleq \{u_t\}_{t=0}^{T-1}$, such that the accumulated cost $J$ over the finite horizon $t \in \{0, 1, \cdots, T\}$ is minimized. Problems of the form OCP appear in multidisciplinary areas, since OCP describes a generic multi-stage decision-making problem (Gamkrelidze, 2013), and they have gained commensurate interest recently in deep learning (Weinan, 2017; Liu & Theodorou, 2019). Central to the research along this line is the interpretation of DNNs as discrete-time nonlinear dynamical systems, where each layer is viewed as a distinct time step (Weinan, 2017). The dynamical-system perspective provides a mathematically sound explanation for existing DNN models (Lu et al., 2019). It also leads to new architectures inspired by numerical differential equations and physics (Lu et al., 2017; Chen et al., 2018; Greydanus et al., 2019). In this vein, one may interpret training as parameter identification (PI) of nonlinear dynamics. However, PI typically involves (i) searching for time-independent parameters (ii) given trajectory measurements at each time step (Voss et al., 2004; Peifer & Timmer, 2007). Neither setup holds in practical DNN training, which instead optimizes time- (i.e. layer-) varying parameters given target measurements only at the final stage. An alternative perspective, which often leads to a richer analysis, is to recast the network weights as control variables. Through this lens, OCP describes w.l.o.g. a training objective composed of layer-wise losses (e.g. weight decay) and a terminal loss (e.g. cross-entropy).
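To make the OCP formulation concrete, the sketch below rolls out a toy discrete-time system and accumulates the objective $J$. The scalar linear dynamics and quadratic costs are illustrative choices of ours, not the paper's setup; in the DNN analogy, $f_t$ would be a layer and $u_t$ its weights.

```python
def rollout_cost(x0, controls, f, ell, phi):
    """Roll out x_{t+1} = f_t(x_t, u_t) from x0 and accumulate the
    OCP objective J = phi(x_T) + sum_t ell_t(x_t, u_t)."""
    x, J = x0, 0.0
    for t, u in enumerate(controls):
        J += ell(t, x, u)   # intermediate cost at step t
        x = f(t, x, u)      # advance the dynamics
    return J + phi(x)       # add the terminal cost at x_T

# Toy scalar instance (illustrative): contracting linear dynamics
# with quadratic control and terminal costs.
f = lambda t, x, u: 0.9 * x + u
ell = lambda t, x, u: 0.5 * u * u
phi = lambda x: 0.5 * x * x

J_zero = rollout_cost(1.0, [0.0] * 5, f, ell, phi)             # no control effort
J_push = rollout_cost(1.0, [-0.5, 0.0, 0.0, 0.0, 0.0], f, ell, phi)
```

Here a small early control effort (`J_push`) trades an intermediate cost for a lower terminal cost, which is exactly the trade-off the minimization over $\bar{u}$ resolves.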
This perspective (see Table 1) has been explored recently to provide theoretical statements on convergence and generalization (Weinan et al., 2018; Seidman et al., 2020). On the algorithmic side, while OCP has motivated new architectures (Benning et al., 2019) and methods for breaking sequential computation (Gunther et al., 2020; Zhang et al., 2019), OCP-inspired optimizers remain relatively limited, often restricted to either a specific network class (e.g. discrete weights) (Li & Hao, 2018) or small-size datasets (Li et al., 2017). The aforementioned works are primarily inspired by the Pontryagin Maximum Principle (PMP, Boltyanskii et al. (1960)), which characterizes the first-order optimality conditions of OCP. Another parallel methodology, which has received little attention, is Approximate Dynamic Programming (ADP, Bertsekas et al. (1995)). While both originate from optimal control theory, ADP differs from PMP in that at each time step a locally optimal feedback policy (a function of the state $x_t$) is computed. These policies, as opposed to the vector updates from PMP, are known to enhance the numerical stability of the optimization process when models admit chain structures (e.g. DNNs) (Liao & Shoemaker, 1992; Tassa et al., 2012). Practical ADP algorithms such as Differential Dynamic Programming (DDP, Jacobson & Mayne (1970)) appear extensively in modern autonomous systems for complex trajectory optimization (Tassa et al., 2014; Gu, 2017). However, whether they can be lifted to large-scale stochastic optimization, as in DNN training, remains unclear. In this work, we make a significant advance toward optimal-control-theoretic training algorithms inspired by ADP. We first show that most existing first- and second-order optimizers can be derived from DDP as special cases. Built upon this intriguing connection, we present a new class of optimizer which marries the best of both.
The proposed method, the DDP Neural Optimizer (DDPNOpt), features layer-wise feedback policies which, as we will show through experiments, improve convergence and robustness. To enable efficient training, DDPNOpt adapts key components including (i) curvature adaptation from existing methods, (ii) stabilization techniques used in trajectory optimization, and (iii) an efficient factorization of OCP. These reduce the complexity by orders of magnitude compared with other OCP-inspired baselines, without sacrificing performance. In summary, we present the following contributions.

• We draw a novel perspective of DNN training from the trajectory-optimization viewpoint, based on a theoretical connection between existing training methods and the DDP algorithm.

• We present a new class of optimizer, DDPNOpt, that performs a distinct backward pass inheriting the Bellman optimality structure and generates layer-wise feedback policies to robustify training against unstable hyper-parameter setups (e.g. large learning rates).

• We show that DDPNOpt achieves competitive performance against existing training methods on classification datasets and outperforms previous OCP-inspired methods in both training performance and runtime complexity. We also find that DDPNOpt can mitigate vanishing gradients.

2. PRELIMINARIES

We will start with the Bellman principle for OCP and leave discussion of PMP to Appendix A.1.

Theorem 1 (Dynamic Programming (DP) (Bellman, 1954)). Define a value function $V_t : \mathbb{R}^n \to \mathbb{R}$ at each time step, computed backward in time using the Bellman equation

$$V_t(x_t) = \min_{u_t(x_t) \in \Gamma_{x_t}} \underbrace{\ell_t(x_t, u_t) + V_{t+1}(f_t(x_t, u_t))}_{Q_t(x_t, u_t) \,\equiv\, Q_t} \,, \qquad V_T(x_T) = \phi(x_T) \,,$$

where $\Gamma_{x_t} : \mathbb{R}^n \to \mathbb{R}^m$ denotes a set of mappings from the state to the control space. Then $V_0(x_0) = J^*(x_0)$ is the optimal objective value of OCP. Further, let $\mu_t^*(x_t) \in \Gamma_{x_t}$ be the minimizer of the Bellman equation at each $t$; then the policy $\pi^* = \{\mu_t^*(x_t)\}_{t=0}^{T-1}$ is globally optimal in the closed-loop system.

Notation: We will always use $t$ as the time step of the dynamics and denote a subsequence trajectory up to time $s$ as $\bar{x}_s \triangleq \{x_t\}_{t=0}^{s}$, with $\bar{x} \triangleq \{x_t\}_{t=0}^{T}$ as the whole trajectory. For any real-valued time-dependent function $F_t$, we denote its derivatives evaluated at a given state-control pair ($x_t \in \mathbb{R}^n$ and $u_t \in \mathbb{R}^m$) as $\nabla_{x_t} F_t \in \mathbb{R}^n$, $\nabla^2_{x_t} F_t \in \mathbb{R}^{n \times n}$, $\nabla_{x_t u_t} F_t \in \mathbb{R}^{n \times m}$, or simply $F^t_x$, $F^t_{xx}$, and $F^t_{xu}$ for brevity. The vector-tensor product, i.e. the contraction over the dimension of the vector space, is denoted by $V_x \cdot f_{xx} \triangleq \sum_{i=1}^{n} [V_x]_i\, [f_{xx}]_i$, where $[V_x]_i$ is the $i$-th element of the vector $V_x$ and $[f_{xx}]_i$ is the Hessian matrix corresponding to that element.
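The backward recursion in Theorem 1 can be made exact on a small tabular problem. The sketch below sweeps $V_t$ backward from $V_T = \phi$ and records the minimizing policy $\mu_t$ by enumeration; the integer state/control sets and costs are illustrative choices of ours, not the paper's.

```python
def backward_value(T, states, controls, f, ell, phi):
    """Tabular Bellman recursion: start from V_T = phi and sweep
    backward with V_t(x) = min_u [ ell(t, x, u) + V_{t+1}(f(t, x, u)) ],
    recording the minimizing policy mu_t(x) at every step."""
    V = {x: phi(x) for x in states}
    policy = []
    for t in reversed(range(T)):
        V_new, mu = {}, {}
        for x in states:
            # Q_t(x, u) = ell_t(x, u) + V_{t+1}(f_t(x, u)), enumerated over u
            q = {u: ell(t, x, u) + V[f(t, x, u)] for u in controls}
            mu[x] = min(q, key=q.get)   # argmin over the control set
            V_new[x] = q[mu[x]]
        V = V_new
        policy.insert(0, mu)
    return V, policy

# Toy instance: states {0, 1, 2}, moves u in {-1, 0, 1} clipped to the
# state set, a small per-move cost, and terminal cost phi(x) = x^2.
f = lambda t, x, u: max(0, min(2, x + u))
ell = lambda t, x, u: 0.1 * abs(u)
phi = lambda x: float(x * x)

V0, policy = backward_value(2, [0, 1, 2], [-1, 0, 1], f, ell, phi)
# From x0 = 2 the optimal plan steps toward 0 twice, paying 0.1 per
# move: V0[2] = 0.2 and mu_0(2) = -1.
```

Note that `policy` stores a feedback rule $\mu_t(x)$ for every reachable state, not a single open-loop control sequence; this state-dependence is exactly what distinguishes the DP/ADP viewpoint from PMP-style vector updates.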

[Table 1: Terminology mapping]