Oblivious Sketching-based Central Path Method for Solving Linear Programming Problems

Abstract

In this work, we propose a sketching-based central path method for solving linear programs, whose running time matches the state-of-the-art results of Cohen et al. (2019b); Lee et al. (2019). Our method opens up the iterations of the central path method and deploys an "iterate and sketch" approach by introducing a new coordinate-wise embedding technique, which may be of independent interest. Compared to previous methods, Cohen et al. (2019b) maintains feasibility but is non-oblivious, while Lee et al. (2019) is oblivious but infeasible and relies on dense sketching matrices such as subsampled randomized Hadamard/Fourier transform matrices. Our method enjoys the benefits of being both oblivious and feasible, and can use sparse sketching matrices Nelson & Nguyên (2013) to speed up the online matrix-vector multiplication. Our framework for solving LPs naturally generalizes to a broader class of convex optimization problems, including empirical risk minimization.
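To give intuition for the sparse sketching matrices mentioned above, the snippet below builds a CountSketch-style embedding (one random sign per column, in the spirit of Nelson & Nguyên (2013)) and checks that it approximately preserves vector norms; applying such a matrix to a vector costs time proportional to the vector's number of nonzeros. All names and dimensions here are our own illustrative choices, not the paper's.

```python
import numpy as np

def countsketch(n_rows, n_cols, rng):
    """Build a sparse CountSketch matrix S in R^{n_rows x n_cols}:
    each column has exactly one nonzero entry, a random sign
    placed at a uniformly random row."""
    rows = rng.integers(0, n_rows, size=n_cols)
    signs = rng.choice([-1.0, 1.0], size=n_cols)
    S = np.zeros((n_rows, n_cols))
    S[rows, np.arange(n_cols)] = signs
    return S

rng = np.random.default_rng(0)
n, m = 1000, 50          # ambient dimension, sketch size
S = countsketch(m, n, rng)
x = rng.standard_normal(n)

# The sketch approximately preserves norms: ||Sx|| ~ ||x||.
print(np.linalg.norm(S @ x) / np.linalg.norm(x))
```

Because each column of `S` has a single nonzero, `S @ x` can be computed in O(nnz(x)) time with a sparse representation, which is the source of the speedup for online matrix-vector multiplication.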

1. Introduction

Linear programming is one of the fundamental models widely used in both theory and practice. It has been extensively applied in many fields such as economics Tintner (1955); Dorfman et al. (1987), operations research Delson & Shahidehpour (1992), compressed sensing Donoho (2006); Candes et al. (2006), medical studies Mangasarian et al. (1990; 1995), and adversarial deep learning Wong & Kolter (2018); Weng et al. (2018), due to its simple and intuitive structure. The problem of solving linear programs has been studied since the 19th century Sierksma & Zwols (2015). Consider solving a general linear program in standard form $\min_{Ax=b,\, x\geq 0} c^\top x$ with constraint matrix $A \in \mathbb{R}^{d\times n}$ and no redundant constraints. For the generic case $d = \Omega(n)$ considered in this paper, the state-of-the-art results take a total running time of $O^*(n^{\omega} + n^{2.5-\alpha/2} + n^{2+1/6})$¹ to obtain a solution of accuracy $\delta$ in the current matrix multiplication time Cohen et al. (2019b); Lee et al. (2019), where $\omega$ is the exponent of matrix multiplication, whose current value is roughly $2.373$ Williams (2012); Le Gall (2014), and $\alpha$ is the dual exponent of matrix multiplication, whose current value is $0.31$ Le Gall & Urrutia (2018). The breakthrough work due to Cohen, Lee, and Song Cohen et al. (2019b) improved the long-standing $O^*(n^{2.5})$ running time that had stood since 1989 Vaidya (1989). For the current values of $\omega$ and $\alpha$, the algorithm of Cohen et al. (2019b) takes $O^*(n^{2.373})$ time.

Among the current state-of-the-art results, the work Cohen et al. (2019b) involves a non-oblivious sampling technique, whose sampling set and size change along the iterations. This precludes carrying out expensive computations in a preprocessing stage, and also makes the method harder to extend to other classical optimization problems. On the other hand, the work Lee et al. (2019) only maintains an infeasible update in each iteration and requires dense sketching matrices, which ruin any sparsity structure of the original linear program. Thus, a natural question to ask is:

¹ We use $O^*$ to hide $n^{o(1)}$ and $\log^{O(1)}(1/\delta)$ factors.


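For concreteness, the standard form $\min_{Ax=b,\, x\geq 0} c^\top x$ can be tried out on a tiny instance with an off-the-shelf solver. The snippet below uses SciPy purely to illustrate the problem format; the data are made up for illustration, and this is of course not the central path method developed in this paper.

```python
import numpy as np
from scipy.optimize import linprog

# Standard-form LP: minimize c^T x  subject to  A x = b, x >= 0.
# Toy instance (illustrative only, not from the paper).
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 2.0, 3.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Optimal solution: x = (3, 1, 0) with objective value 5.
print(res.x, res.fun)
```

Here $d = 2$ and $n = 3$; the interior-point methods discussed above target the regime $d = \Omega(n)$ with $n$ large, where per-iteration linear-algebra cost dominates.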