SOLVING CONSTRAINED VARIATIONAL INEQUALITIES VIA A FIRST-ORDER INTERIOR POINT-BASED METHOD

Abstract

We develop an interior-point approach to solve constrained variational inequality (cVI) problems. Inspired by the efficacy of the alternating direction method of multipliers (ADMM) in the single-objective context, we generalize ADMM to derive a first-order method for cVIs, which we refer to as the ADMM-based interior-point method for constrained VIs (ACVI). We provide convergence guarantees for ACVI for two general classes of problems: (i) when the operator is ξ-monotone, and (ii) when it is monotone, some constraints are active, and the game is not purely rotational. When, in the latter case, the operator is in addition L-Lipschitz, we match the known lower bounds on the rates for the gap function of O(1/√K) and O(1/K) for the last and average iterate, respectively. To the best of our knowledge, this is the first presentation of a first-order interior-point method for the general cVI problem with a global convergence guarantee. Moreover, unlike previous work in this setting, ACVI provides a means to solve cVIs with nontrivial constraints. Empirical analyses demonstrate clear advantages of ACVI over common first-order methods. In particular, (i) cyclical behavior is notably reduced, as our method approaches the solution from the analytic center, and (ii) unlike projection-based methods, which zigzag when near a constraint, ACVI handles the constraints efficiently.

1. INTRODUCTION

We are interested in the constrained variational inequality problem (Stampacchia, 1964):

find x⋆ ∈ X   s.t.   ⟨x − x⋆, F(x⋆)⟩ ≥ 0, ∀x ∈ X,   (cVI)

where X is a subset of the n-dimensional Euclidean space Rⁿ, and F : X → Rⁿ is a continuous map. Finding (an element of) the solution set S⋆(X, F) of cVI is a key problem in multiple fields, such as economics and game theory. More pertinent to machine learning, cVIs generalize standard single-objective optimization, complementarity problems (Cottle & Dantzig, 1968), zero-sum games (von Neumann & Morgenstern, 1947; Rockafellar, 1970), and multi-player games. For example, solving a cVI is the optimization problem underlying reinforcement learning (e.g., Omidshafiei et al., 2017) and generative adversarial networks (Goodfellow et al., 2014). Moreover, even when training one set of parameters with one loss f, that is, F(x) ≡ ∇ₓf(x), a natural way to improve the model's robustness is to introduce an adversary that perturbs the objective or the input, or to consider the worst-case sample distribution of the empirical objective. As has been noted in many problem domains, including robust classification (Mazuelas et al., 2020), adversarial training (Szegedy et al., 2014), causal inference (Christiansen et al., 2020), and robust objectives (e.g., Rothenhäusler et al., 2018), this leads to a min-max structure, which is an instance of the cVI problem. To see this, consider two sets of parameters (agents), x₁ ∈ X₁ and x₂ ∈ X₂, that share a loss/utility function f : X₁ × X₂ → R, which the first agent aims to minimize and the second aims to maximize.

* All authors contributed equally. Link to source code: https://github.com/Chavdarova/ACVI.
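The reduction from a min-max game to a cVI can be sketched numerically. The snippet below is a minimal illustration (not the paper's ACVI method): for a two-player zero-sum game min over x₁ max over x₂ of f(x₁, x₂), the associated operator stacks the minimizer's gradient and the maximizer's negative gradient. The specific bilinear game f(x₁, x₂) = x₁x₂ on the box [−1, 1]² and the helper names are our own choices for illustration; its unique equilibrium is the origin, and we check the cVI inequality there on sampled points of the constraint set.

```python
import numpy as np

# Bilinear zero-sum game f(x1, x2) = x1 * x2 on X = [-1, 1]^2
# (an illustrative example, not from the paper). The cVI operator is
#   F(x) = (df/dx1, -df/dx2) = (x2, -x1),
# i.e., gradient for the minimizer, negative gradient for the maximizer.
def F(x):
    return np.array([x[1], -x[0]])

x_star = np.array([0.0, 0.0])  # unique equilibrium of this game

# Check the cVI condition <x - x_star, F(x_star)> >= 0 for sampled x in X.
rng = np.random.default_rng(0)
X_samples = rng.uniform(-1.0, 1.0, size=(1000, 2))
gaps = (X_samples - x_star) @ F(x_star)
print(bool(gaps.min() >= -1e-12))  # → True: x_star solves the cVI
```

Note that F is purely rotational here (its Jacobian is antisymmetric), which is exactly the regime where plain gradient descent-ascent cycles around the solution; this is the behavior the abstract refers to when contrasting ACVI with common first-order methods.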

