DDPNOPT: DIFFERENTIAL DYNAMIC PROGRAMMING NEURAL OPTIMIZER

Abstract

Interpretation of Deep Neural Network (DNN) training as an optimal control problem over nonlinear dynamical systems has received considerable attention recently, yet algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from the trajectory-optimization perspective. We first show that most widely used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order method rooted in Approximate Dynamic Programming. In this vein, we propose a new class of optimizer, the DDP Neural Optimizer (DDPNOpt), for training feedforward and convolutional networks. DDPNOpt features layer-wise feedback policies which improve convergence and reduce sensitivity to hyper-parameters compared to existing methods. It outperforms other optimal-control-inspired training methods in both convergence and complexity, and is competitive against state-of-the-art first- and second-order methods. We also observe that DDPNOpt has a surprising benefit in preventing vanishing gradients. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.

1. INTRODUCTION

In this work, we consider the following optimal control problem (OCP) in the discrete-time setting:

    min_ū J(ū; x_0) := φ(x_T) + Σ_{t=0}^{T-1} ℓ_t(x_t, u_t),   s.t.  x_{t+1} = f_t(x_t, u_t),        (OCP)

where x_t and u_t denote the state and control at time step t, and f_t, ℓ_t, and φ denote the nonlinear dynamics, the intermediate cost, and the terminal cost, respectively. Given an initial state x_0, the goal is to find a control trajectory ū := {u_0, ..., u_{T-1}} such that the accumulated cost J over the finite horizon {0, ..., T} is minimized. Problems of the form of OCP appear in multidisciplinary areas, since OCP describes a generic multi-stage decision-making problem (Gamkrelidze, 2013), and have gained commensurate interest recently in deep learning (Weinan, 2017; Liu & Theodorou, 2019).

Central to the research along this line is the interpretation of DNNs as discrete-time nonlinear dynamical systems, in which each layer is viewed as a distinct time step (Weinan, 2017). The dynamical-system perspective provides a mathematically sound explanation for existing DNN models (Lu et al., 2019). It also leads to new architectures inspired by numerical differential equations and physics (Lu et al., 2017; Chen et al., 2018; Greydanus et al., 2019). In this vein, one may interpret training as parameter identification (PI) of nonlinear dynamics. However, PI typically involves (i) searching for time-independent parameters, (ii) given trajectory measurements at each time step (Voss et al., 2004; Peifer & Timmer, 2007). Neither setup holds in practical DNN training, which instead optimizes time- (i.e. layer-) varying parameters given target measurements only at the final stage.

An alternative perspective, which often leads to a richer analysis, is to recast the network weights as the control variables. Through this lens, OCP describes w.l.o.g. the training objective composed of layer-wise losses (e.g. weight decay) and a terminal loss (e.g. cross-entropy). This perspective (see Table 1) has been explored recently to provide theoretical statements on convergence and generalization (Weinan et al., 2018; Seidman et al., 2020). On the algorithmic side, while OCP has motivated new
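To make the OCP view of training concrete, the following minimal NumPy sketch evaluates the accumulated cost J(ū; x_0) by rolling a state forward through layer-like dynamics. The specific choices here are illustrative assumptions, not the paper's exact setup: f_t is a dense tanh layer whose weight matrix plays the role of the control u_t, the intermediate cost ℓ_t is weight decay, and the terminal cost φ is a squared error.

```python
import numpy as np

def f(x, u):
    """One-step dynamics x_{t+1} = f_t(x_t, u_t): a dense tanh layer
    whose weight matrix `u` acts as the control variable."""
    return np.tanh(u @ x)

def ell(x, u, lam=1e-3):
    """Intermediate (layer-wise) cost ell_t: weight decay on the control."""
    return lam * np.sum(u ** 2)

def phi(x, target):
    """Terminal cost phi: squared error at the final state x_T."""
    return 0.5 * np.sum((x - target) ** 2)

def total_cost(u_bar, x0, target):
    """Accumulate J(u_bar; x0) = phi(x_T) + sum_t ell_t(x_t, u_t)
    by rolling the trajectory forward through the dynamics."""
    x, J = x0, 0.0
    for u in u_bar:          # time steps t = 0, ..., T-1 (network layers)
        J += ell(x, u)
        x = f(x, u)
    return J + phi(x, target)

rng = np.random.default_rng(0)
T, n = 3, 4                  # horizon (depth) and state (feature) dimension
u_bar = [0.1 * rng.standard_normal((n, n)) for _ in range(T)]
x0 = rng.standard_normal(n)
J = total_cost(u_bar, x0, target=np.zeros(n))
```

Under this identification, training the network amounts to minimizing J over the control trajectory ū, i.e. over the per-layer weights, which is exactly the trajectory-optimization problem that DDP-type methods address.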

