LEARNING THE STEP-SIZE POLICY FOR THE LIMITED-MEMORY BROYDEN-FLETCHER-GOLDFARB-SHANNO ALGORITHM

Abstract

We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. This is a limited computational memory quasi-Newton method widely used for deterministic unconstrained optimization but currently avoided in large-scale problems because it requires step sizes to be provided at each iteration. Existing methodologies for step-size selection in L-BFGS rely on heuristic tuning of design parameters and on massive re-evaluations of the objective function and gradient to find appropriate step lengths. We propose a neural network architecture that takes local information of the current iterate as input. The step-size policy is learned from data of similar optimization problems, avoids additional evaluations of the objective function, and guarantees that the output step remains inside a pre-defined interval. The corresponding training procedure is formulated as a stochastic optimization problem using the backpropagation through time algorithm. The performance of the proposed method is evaluated on the training of classifiers for the MNIST database of handwritten digits and for CIFAR-10. The results show that the proposed algorithm outperforms heuristically tuned optimizers such as ADAM, RMSprop, L-BFGS with a backtracking line search, and L-BFGS with a constant step size. The numerical results also show that a learned policy can be used as a warm-start to train new policies for different problems after a few additional training steps, highlighting its potential use in multiple large-scale optimization problems.

1. INTRODUCTION

Consider the unconstrained optimization problem

minimize_x f(x)    (1)

where f : R^n → R is an objective function that is differentiable for all x ∈ R^n, with n being the number of decision variables forming x. Let ∇_x f(x_0) be the gradient of f(x) evaluated at some x_0 ∈ R^n. A general quasi-Newton algorithm for solving this problem iterates

x_{k+1} = x_k - t_k H_k g_k    (2)

for an initial x_0 ∈ R^n until a given stop criterion is met. At the k-th iteration, g_k = ∇_x f(x_k) is the gradient, H_k is a positive-definite matrix satisfying the secant equation (Nocedal and Wright, 2006, p. 137), and t_k is the step size.

In this paper, we develop a policy that learns to suitably determine step sizes t_k when the product H_k g_k is calculated by the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm (Liu and Nocedal, 1989). The main contributions of the paper are:

1. We propose a neural network architecture defining this policy, taking as input local information of the current iterate. In contrast with more standard strategies, this policy is tuning-free and avoids re-evaluations of the objective function and gradients at each step. The training procedure is formulated as a stochastic optimization problem and can be performed by applying truncated backpropagation through time (TBPTT).
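To make the setting concrete, the quasi-Newton update (2) with the product H_k g_k computed by L-BFGS can be sketched as follows. This is a minimal NumPy illustration of the standard two-loop recursion with a fixed step size t_k (the function names and the fixed-step driver loop are illustrative, not the paper's method, which replaces the fixed t_k with a learned policy):

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """Compute H_k @ grad via the L-BFGS two-loop recursion.

    s_list, y_list hold the stored curvature pairs s_i = x_{i+1} - x_i
    and y_i = g_{i+1} - g_i, oldest first."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # Backward pass over the pairs, most recent first.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Initial Hessian approximation H_0 = gamma * I (standard scaling).
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Forward pass over the pairs, oldest first.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return r  # = H_k @ grad

def lbfgs_fixed_step(f_grad, x0, step_size, m=10, iters=100):
    """Iterate x_{k+1} = x_k - t_k H_k g_k with a constant t_k = step_size."""
    x = x0.astype(float)
    g = f_grad(x)
    s_list, y_list = [], []
    for _ in range(iters):
        d = two_loop_direction(g, s_list, y_list)
        x_new = x - step_size * d
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-10:  # keep only pairs with positive curvature
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > m:   # limited memory: drop the oldest pair
                s_list.pop(0)
                y_list.pop(0)
        x, g = x_new, g_new
    return x
```

On a convex quadratic f(x) = (1/2) x^T A x - b^T x, whose gradient is A x - b, this sketch recovers the minimizer A^{-1} b; the learned policy proposed in the paper would supply t_k adaptively in place of the constant step.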

