ON THE IMPOSSIBILITY OF GLOBAL CONVERGENCE IN MULTI-LOSS OPTIMIZATION

Abstract

Under mild regularity conditions, gradient-based methods converge globally to a critical point in the single-loss setting. This is known to break down for vanilla gradient descent when moving to multi-loss optimization, but can we hope to build some algorithm with global guarantees? We negatively resolve this open problem by proving that desirable convergence properties cannot simultaneously hold for any algorithm. Our result has more to do with the existence of games with no satisfactory outcomes than with algorithms per se. More explicitly, we construct a two-player game with zero-sum interactions whose losses are both coercive and analytic, but whose only simultaneous critical point is a strict maximum. Any 'reasonable' algorithm, defined to avoid strict maxima, will therefore fail to converge. This is fundamentally different from single losses, where coercivity implies existence of a global minimum. Moreover, we prove that a wide range of existing gradient-based methods almost surely have bounded but non-convergent iterates in a constructed zero-sum game for suitably small learning rates. It nonetheless remains an open question whether such behavior can arise in high-dimensional games of interest to ML practitioners, such as GANs or multi-agent RL.

1. INTRODUCTION

Problem Setting. As multi-agent architectures proliferate in machine learning, it is becoming increasingly important to understand the dynamics of gradient-based methods when optimizing multiple interacting goals, otherwise known as differentiable games. This framework encompasses GANs (Goodfellow et al., 2014), intrinsic curiosity (Pathak et al., 2017), imaginative agents (Racanière et al., 2017), synthetic gradients (Jaderberg et al., 2017), hierarchical reinforcement learning (Wayne & Abbott, 2014; Vezhnevets et al., 2017) and multi-agent RL in general (Busoniu et al., 2008). The interactions between learning agents make for vastly more complex mechanics: naively applying gradient descent on each loss simultaneously is known to diverge even in simple bilinear games.

Related Work. A large number of methods have recently been proposed to alleviate the failings of simultaneous gradient descent: adaptations of single-loss algorithms such as Extragradient (EG) (Azizian et al., 2019) and Optimistic Mirror Descent (OMD) (Daskalakis et al., 2018), Alternating Gradient Descent (AGD) for finite regret (Bailey et al., 2019), Consensus Optimization (CO) for GAN training (Mescheder et al., 2017), Competitive Gradient Descent (CGD) based on solving a bilinear approximation of the loss functions (Schaefer & Anandkumar, 2019), Symplectic Gradient Adjustment (SGA) based on a novel decomposition of game mechanics (Balduzzi et al., 2018; Letcher et al., 2019a), and opponent-shaping algorithms including Learning with Opponent-Learning Awareness (LOLA) (Foerster et al., 2018) and its convergent counterpart, Stable Opponent Shaping (SOS) (Letcher et al., 2019b). Let A be this set of algorithms. Each has shown promising theoretical implications and empirical results, but none offers insight into global convergence in the non-convex setting, which includes the vast majority of machine learning applications.
One of the main roadblocks compared with single-loss optimization has been noted by Schaefer & Anandkumar (2019): "a convergence proof in the nonconvex case analogue to Lee et al. (2016) is still out of reach in the competitive setting. A major obstacle to this end is the identification of a suitable measure of progress (which is given by the function value in the single agent setting), since norms of gradients can not be expected to decay monotonously for competitive dynamics in non-convex-concave games." It has been established that Hamiltonian Gradient Descent converges in two-player zero-sum games under a "sufficiently bilinear" condition by Abernethy et al. (2019), but this algorithm is unsuitable for optimization as it cannot distinguish between minimization and maximization (Hsieh et al., 2020, Appendix C.4). Global convergence has also been established for some algorithms in a few special cases: potential and Hamiltonian games (Balduzzi et al., 2018), zero-sum games satisfying the two-sided Polyak-Łojasiewicz condition (Yang et al., 2020), zero-sum linear quadratic games (Zhang et al., 2019) and zero-sum games whose loss and first three derivatives are bounded (Mangoubi & Vishnoi, 2020). These are significant contributions with several applications of interest, but do not include any of the architectures mentioned above. Finally, Balduzzi et al. (2020) show that GD dynamics are bounded under a 'negative sentiment' assumption in smooth markets, which do include GANs, but this does not imply convergence, as we will show. On the other hand, failure of global convergence has been shown for the Multiplicative Weights Update method by Palaiopanos et al. (2017), for policy-gradient algorithms by Mazumdar et al. (2020), and for simultaneous and alternating gradient descent (simGD and AGD) by Vlatakis-Gkaragkounis et al. (2019); Bailey et al. (2019), with interesting connections to Poincaré recurrence.
Nonetheless, nothing is claimed about other optimization methods. Farnia & Ozdaglar (2020) show that GANs may have no Nash equilibria, but it does not follow that algorithms fail to converge, since there may be locally-attracting but non-Nash critical points (Mazumdar et al., 2019, Example 2). Finally, Hsieh et al. (2020) uploaded a preprint just after the completion of this work with a similar focus to ours. They prove that generalized Robbins-Monro schemes may converge with arbitrarily high probability to spurious attractors. This includes simGD, AGD, stochastic EG, optimistic gradient and Kiefer-Wolfowitz. However, Hsieh et al. (2020) focus on the possible occurrence of undesirable convergence phenomena for stochastic algorithms. We instead prove that desirable convergence properties cannot simultaneously hold for all algorithms (including deterministic). Moreover, their results apply only to decreasing step-sizes whereas ours include constant step-sizes. These distinctions are further highlighted by Hsieh et al. (2020) in the further related work section. Taken together, our works give a fuller picture of the failure of global convergence in multi-loss optimization.

Contribution. We prove that global convergence in multi-loss optimization is fundamentally incompatible with the 'reasonable' requirement that algorithms avoid strict maxima and converge only to critical points. We construct a two-player game with zero-sum interactions whose losses are coercive and analytic, but whose only critical point is a strict maximum (Theorem 1). Reasonable algorithms must either diverge to infinite losses or cycle (bounded non-convergent iterates). One might hope that global convergence could at least be guaranteed in games with strict minima and no other critical points.
On the contrary we show that strict minima can have arbitrarily small regions of attraction, in the sense that reasonable algorithms will fail to converge there with arbitrarily high probability for fixed initial parameter distribution (Theorem 2). Finally, restricting the game class even further, we construct a zero-sum game in which all algorithms in A (as defined in Appendix A) are proven to cycle (Theorem 3). It may be that cycles do not arise in high-dimensional games of interest including GANs. Proving or disproving this is an important avenue for further research, but requires that we recognise the impossibility of global guarantees in the first place.

2.1. SINGLE LOSSES: GLOBAL CONVERGENCE OF GRADIENT DESCENT

Given a continuously differentiable function f : R^d → R, let θ_{k+1} = θ_k - α∇f(θ_k) be the iterates of gradient descent with learning rate α, initialised at θ_0. Under standard regularity conditions, gradient descent converges globally to critical points:

Proposition 1. Assume f ∈ C^2 has compact sublevel sets and is either analytic or has isolated critical points. For any θ_0 ∈ R^d, define U_0 = {θ : f(θ) ≤ f(θ_0)} and let L < ∞ be a Lipschitz constant for ∇f in U_0. Then for any 0 < α < 2/L we have lim_k θ_k = θ̂ for some critical point θ̂.

The requirements for convergence are relatively mild:
1. f has compact sublevel sets iff f is coercive, i.e. lim_{‖θ‖→∞} f(θ) = ∞, which mostly holds in machine learning since f is a loss function.
2. f has isolated critical points if it is a Morse function (nondegenerate Hessian at critical points), which holds for almost all C^2 functions. More precisely, Morse functions form an open, dense subset of C^2(R^d, R) in the Whitney C^2-topology.
3. Global Lipschitz continuity is not assumed; it would fail even for cubic polynomials.

The goal of this paper is to prove that similar (even weaker) guarantees cannot be obtained in the multi-loss setting, not only for GD but for any reasonable algorithm. This has to do with the more complex nature of gradient vector fields arising from multiple losses.
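Proposition 1 can be checked numerically on a simple coercive polynomial. The following sketch is our own illustration (not from the paper): gradient descent on f(x) = x^4/4 - x^2/2, whose sublevel set at θ_0 = 1.5 is U_0 = [-1.5, 1.5] with Lipschitz constant L = sup_{U_0} |f''| = 5.75, so any α < 2/L ≈ 0.35 converges.

```python
# Gradient descent on a coercive analytic loss with isolated critical points.
def f(x):
    return x**4 / 4 - x**2 / 2   # critical points: -1, 0, 1

def grad_f(x):
    return x**3 - x

theta = 1.5        # initial point; sublevel set U0 = [-1.5, 1.5]
alpha = 0.3        # < 2/L with L = sup_{U0} |f''| = 5.75
values = [f(theta)]
for _ in range(200):
    theta = theta - alpha * grad_f(theta)
    values.append(f(theta))

# Iterates never leave U0 and f decreases monotonically, as in Proposition 1;
# theta converges to the critical point x = 1.
assert all(v2 <= v1 + 1e-12 for v1, v2 in zip(values, values[1:]))
print(theta)
```

The function value acts as the 'measure of progress' here, a role that Section 3.4 shows cannot be filled in general multi-loss optimization.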

2.2. DIFFERENTIABLE GAMES

Following Balduzzi et al. (2018), we frame the problem of multi-loss optimization as a differentiable game among cooperating and competing agents/players. These may simply be different internal components of a single system, like the generator and discriminator in GANs.

Definition 1. A differentiable game is a set of n agents with parameters θ = (θ_1, ..., θ_n) ∈ R^d and twice continuously differentiable losses L^i : R^d → R, where θ_i ∈ R^{d_i} for each i and Σ_i d_i = d.

Losses are not assumed to be convex/concave in any of the parameters. In practice, losses need only be differentiable almost-everywhere: think of neural nets with rectified linear units. If n = 1, the 'game' is simply to minimise a given loss function. We write ∇_i L^k = ∇_{θ_i} L^k and ∇_{ij} L^k = ∇_{θ_j} ∇_{θ_i} L^k for any i, j, k, and define the simultaneous gradient of the game

ξ = (∇_1 L^1, ..., ∇_n L^n)^T ∈ R^d

as the concatenation of each player's gradient. If each agent independently minimises their loss using GD with learning rate α, the parameter update for all agents is given by θ ← θ - αξ(θ). We call this simultaneous gradient descent (simGD), or GD for short. We call θ̂ a critical point if ξ(θ̂) = 0. Now introduce the 'Hessian' (or Jacobian) of the game as the block matrix

H = ∇ξ = [ ∇_11 L^1 ... ∇_1n L^1 ; ... ; ∇_n1 L^n ... ∇_nn L^n ] ∈ R^{d×d}.

Importantly, note that H is not symmetric in general unless n = 1, in which case we recover the usual Hessian H = ∇^2 L. However, H can be decomposed into symmetric and anti-symmetric components as H = S + A (Balduzzi et al., 2018). A second useful decomposition, into the block-diagonal component H_d and the block off-diagonal component H_o, has appeared recently in (Letcher et al., 2019b) and (Schaefer & Anandkumar, 2019).

Definition 2. A critical point θ̂ is a (strict, local) minimum if H(θ̂) ≻ 0.

These were named (strict) stable fixed points by Balduzzi et al. (2018), but the term is usually reserved in dynamical systems for the larger class defined by Hessian eigenvalues with positive real parts, which is implied by, but not equivalent to, H ≻ 0 for non-symmetric matrices.
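The objects just defined can be computed in a few lines. The sketch below (our own, hedged illustration) builds ξ and H for the bilinear game L^1 = xy = -L^2, whose Hessian is purely anti-symmetric (S = 0), which is why simGD spirals around the origin instead of converging.

```python
import numpy as np

# Two-player game L1(x, y) = x*y, L2(x, y) = -x*y (one parameter each).
def xi(theta):
    x, y = theta
    return np.array([y, -x])      # (∇_x L1, ∇_y L2): simultaneous gradient

def game_hessian(theta, h=1e-6):
    # Finite-difference Jacobian H = ∇ξ (not symmetric in general).
    H = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        H[:, j] = (xi(theta + e) - xi(theta - e)) / (2 * h)
    return H

H = game_hessian(np.array([0.3, -0.7]))
S = (H + H.T) / 2                 # symmetric ('potential') component
A = (H - H.T) / 2                 # anti-symmetric ('Hamiltonian') component
print(S)                          # ≈ 0: this game is purely adversarial
print(A)                          # ≈ [[0, 1], [-1, 0]]
```

Simultaneous GD, θ ← θ - αξ(θ), rotates around the origin with slowly increasing radius on this game: the update Jacobian I - αH has eigenvalues of modulus sqrt(1 + α^2) > 1, the classic bilinear failure mode mentioned in the introduction.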
In particular, strict minima are (differential) Nash equilibria as defined by Mazumdar et al. (2019), since diagonal blocks must also be positive definite: ∇_ii L^i(θ̂) ≻ 0. The converse does not hold.

Algorithm class. This paper is concerned with any algorithm whose iterates are obtained by initialising θ_0 and applying a function F to the previous iterates, namely θ_{k+1} = F(θ_k, ..., θ_0). This holds for all gradient-based methods (deterministic or stochastic); most of them are functions only of the current iterate θ_k, so that θ_k = F^k(θ_0). All probabilistic statements in this paper assume that θ_0 is initialised following any bounded and continuous measure ν on R^d. Continuity is a weak requirement and widely holds across machine learning, while boundedness mostly holds in practice since the bounded region can be made large enough to accommodate required initial points. For single-player games, the goal of such algorithms is for θ_k to converge to a local (perhaps global) minimum as k → ∞. The goal is less clear for differentiable games, but is generally to reach a minimum or a Nash equilibrium. In the case of GANs the goal might be to reach parameters that produce realistic images, which is more challenging to define formally. Throughout the text we use the term (limit) cycle to mean bounded but non-convergent iterates. This terminology is used because bounded iterates are non-convergent if and only if they have at least two accumulation points, between which they must 'cycle' infinitely often. This is not to be taken literally: the set of accumulation points may not even be connected. Hsieh et al. (2020) provide a more complete characterisation of these cycles.

Game class. Expecting global guarantees in all differentiable games is excessive, since every continuous dynamical system arises as simultaneous GD on the loss functions of a differentiable game (Balduzzi et al., 2020, Lemma 1).
For this reason, the aforementioned authors have introduced a vastly more tractable class of games called markets.

Definition 3. A (smooth) market is a differentiable game where interactions between players are pairwise zero-sum, namely,

L^i(θ) = L^i(θ_i) + Σ_{j≠i} g_ij(θ_i, θ_j)  with  g_ij(θ_i, θ_j) + g_ji(θ_j, θ_i) = 0 for all i, j.

This generalises zero-sum games while remaining amenable to optimization and aggregation, meaning that "we can draw conclusions about the gradient-based dynamics of the collective by summing over properties of its members" (Balduzzi et al., 2020). Moreover, this class captures a large number of applications including GANs and related architectures, intrinsic curiosity modules, adversarial training, task-suites and population self-play. One would modestly hope for some reasonable algorithm to converge globally in markets. We will prove that even this is too much to ask.
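The market condition is easy to verify mechanically. Below is a hedged sketch of our own checking that the interaction terms of the game M from Theorem 1 (Section 3.1) satisfy g_12(x, y) + g_21(y, x) = 0 on random inputs.

```python
import numpy as np

# Interaction terms of the two-player market M of Theorem 1, reading
# L1 = self-term + g12 and L2 = self-term + g21:
def g12(x, y):
    return x * y + 0.25 * (y**4 / (1 + x**2) - x**4 / (1 + y**2))

def g21(y, x):
    return -x * y - 0.25 * (y**4 / (1 + x**2) - x**4 / (1 + y**2))

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 2))
residual = max(abs(g12(x, y) + g21(y, x)) for x, y in pts)
print(residual)  # 0 up to floating point: interactions are pairwise zero-sum
```

The self-terms x^6/6 - x^2/2 and y^6/6 - y^2/2 play no role here: pairwise zero-sum constrains only the interactions, which is exactly what lets markets remain coercive while zero-sum games cannot be.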

2.3. REASONABLE ALGORITHMS

We wish to prove that global convergence is at odds with weak, 'reasonable' desiderata. The first requirement is that fixed points of an optimization algorithm F are critical points. Formally,

F(θ) = θ ⇒ ξ(θ) = 0 . (R1)

If not, some agent i could strictly improve its loss by following the non-zero gradient ∇_i L^i ≠ 0. There is no reason for a gradient-based algorithm to stop improving if its gradient is non-zero. The second requirement is that algorithms avoid strict maxima. Analogous to strict minima, these are defined for single losses by a negative-definite Hessian H ≺ 0. Converging to such a point θ̂ is the opposite goal of any meaningful algorithm, since moving anywhere away from θ̂ decreases the loss. There are multiple ways of generalising this concept to multiple losses, but Proposition 2 below justifies that H ≺ 0 is the weakest one.

Proposition 2. Write λ(A) = Re(Spec(A)) for the real parts of the eigenvalues of a matrix A. We have the following implications, and none of them are equivalences:

H ≺ 0 ⇒ max λ(H) < 0 ⇒ min λ(H) < 0 ⇒ min λ(S) < 0
H ≺ 0 ⇒ max λ(H_d) < 0 ⇒ min λ(H_d) < 0 ⇒ min λ(S) < 0

together with the diagonal implications max λ(H) < 0 ⇒ min λ(H_d) < 0 and max λ(H_d) < 0 ⇒ min λ(H) < 0.

Definition 4. A critical point θ̂ is a (strict, local) maximum if H(θ̂) ≺ 0.

Imposing that algorithms avoid strict maxima is therefore the weakest possible requirement of its kind. Note that the bottom-left implication of Proposition 2 is equivalent to ∇_ii L^i ≺ 0 for all i, so strict maxima are also strict maxima of each player's individual loss function. Players can all decrease their losses by moving anywhere away from them. It is exceedingly reasonable to ask that optimization algorithms avoid these points almost surely. Formally, we require that for any strict maximum θ̂ and bounded region U there are hyperparameters such that

μ {θ_0 ∈ U | lim_k θ_k = θ̂} = 0 , (R2)

where μ denotes Lebesgue measure. Hyperparameters may depend on the given game and the region U, as is typical for learning rates in gradient-based methods.

Definition 5 (Reason). An algorithm is reasonable if it satisfies R1 and R2.
Reason is not equivalent to rationality or self-interest. Reason is much weaker, imposing only that agents are well-behaved regarding strict maxima even if their individual behavior is not self-interested. For instance, SGA agents do not behave out of self-interest (Balduzzi et al., 2018).
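To see why requirement R2 is mild, consider simGD near a strict maximum. The sketch below is our own illustration on an arbitrary quadratic game whose Hessian satisfies H ≺ 0 in the sense of Definition 4: the update map θ ↦ (I - αH)θ has Jacobian eigenvalues of modulus greater than 1, so iterates initialised anywhere but exactly at the maximum are pushed away.

```python
import numpy as np

H = np.array([[-1.0, 1.0],
              [-1.0, -1.0]])       # u^T H u = -||u||^2 < 0: strict maximum at 0
alpha = 0.05

theta = np.array([1e-3, 0.0])      # small perturbation from the maximum
start = np.linalg.norm(theta)
for _ in range(500):
    theta = theta - alpha * (H @ theta)   # simGD on the quadratic game xi = H θ

print(np.linalg.norm(theta) / start)      # > 1: the maximum repels iterates
```

Here the eigenvalues of I - αH are 1.05 ∓ 0.05i, of modulus ≈ 1.051, so the distance from the maximum grows geometrically; only the Lebesgue-null initialisation θ_0 = 0 converges to it, exactly as R2 demands.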

3.1. REASONABLE ALGORITHMS FAIL TO CONVERGE GLOBALLY

Our main contribution is to show that global guarantees do not exist for any reasonable algorithm. First recall that global convergence should not be expected in all games, since there may be a divergent direction with minimal loss (imagine minimising L = e^x). It should however be asked that algorithms have bounded iterates in coercive games, defined by coercive losses lim_{‖θ‖→∞} L^i(θ) = ∞ for all i. Indeed, unbounded iterates in coercive games would lead to infinite losses for all agents, the worst possible outcome. Given bounded iterates, convergence should hold if the Hessian is nondegenerate at critical points (which must therefore be isolated, recall Proposition 1). We call such a game nondegenerate. This condition can also be replaced by analyticity of the losses. In the spirit of weakest assumptions, we ask for convergence when both conditions hold.

Definition 6 (Globality). An algorithm is global if, in a coercive, analytic and nondegenerate game, for any fixed θ_0, iterates θ_k are bounded and converge for suitable hyperparameters. (G1)

Note that GD is global for single-player games by Proposition 1. Unfortunately, reason and globality are fundamentally at odds as soon as we move to two-player markets.

Theorem 1. There is a coercive, nondegenerate, analytic two-player market M whose only critical point is a strict maximum. In particular, algorithms have only four possible outcomes in M:
1. Iterates are unbounded, and all players diverge to infinite loss.
2. Iterates converge to the strict maximum.
3. Iterates converge to a non-critical point.
4. Iterates are bounded but fail to converge (limit cycle).

Proof. Consider the analytic market M given by

L^1(x, y) = x^6/6 - x^2/2 + xy + (1/4)( y^4/(1+x^2) - x^4/(1+y^2) )
L^2(x, y) = y^6/6 - y^2/2 - xy - (1/4)( y^4/(1+x^2) - x^4/(1+y^2) ).

We prove in Appendix D that M is coercive, nondegenerate, and has a unique critical point at the origin, which is a strict maximum. Constructing an algorithm with global guarantees is therefore doomed to be unreasonable, in that it will converge to strict maxima or non-critical points in M.
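The claims of Theorem 1 at the origin can be sanity-checked numerically. The following finite-difference sketch is our own (it complements, but does not replace, the proof in Appendix D): ξ(0) = 0 and the symmetric part of H(0) is negative definite, so the origin is a critical point and a strict maximum.

```python
import numpy as np

def losses(x, y):
    inter = x * y + 0.25 * (y**4 / (1 + x**2) - x**4 / (1 + y**2))
    L1 = x**6 / 6 - x**2 / 2 + inter
    L2 = y**6 / 6 - y**2 / 2 - inter
    return L1, L2

def xi(theta, h=1e-5):
    # Finite-difference simultaneous gradient (∇_x L1, ∇_y L2).
    x, y = theta
    dL1dx = (losses(x + h, y)[0] - losses(x - h, y)[0]) / (2 * h)
    dL2dy = (losses(x, y + h)[1] - losses(x, y - h)[1]) / (2 * h)
    return np.array([dL1dx, dL2dy])

def hessian(theta, h=1e-4):
    H = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        H[:, j] = (xi(theta + e) - xi(theta - e)) / (2 * h)
    return H

origin = np.zeros(2)
H = hessian(origin)
S = (H + H.T) / 2
print(np.linalg.norm(xi(origin)))        # ≈ 0: the origin is critical
print(np.linalg.eigvalsh(S))             # both negative: strict maximum
```

Analytically H(0) = [-1, 1; -1, -1], whose symmetric part is -I, matching the numerical check.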
None of the outcomes of M are satisfactory. The first three are highly objectionable, as already discussed. The fourth is less obvious, and may even have game-theoretic significance (Papadimitriou & Piliouras, 2019), but is counter-intuitive from an optimization standpoint. Terminating the iteration would lead to a non-critical point, much like the third outcome. Even if we let agents update parameters continuously as they play a game or solve a task, they will have oscillatory behavior and fail to produce consistent outcomes (e.g. when generating an image or playing Starcraft). The hope for machine learning is that such predicaments do not arise in applications we care about, such as GANs or intrinsic curiosity. This may well be the case, but proving or disproving global convergence in these specific settings is beyond the scope of this paper.

Remark. Why can this approach not be used to disprove global convergence for single losses? One reason is that we cannot construct a coercive loss with no critical points other than strict maxima: coercive losses, unlike games, always have a global minimum.

3.2. WHAT IF THERE ARE STRICT MINIMA?

One might wonder if it is purely the absence of strict minima that causes non-convergence, since strict minima are locally attracting under gradient dynamics. Can we guarantee global convergence if we impose existence of a minimum and, moreover, the absence of any other critical points? Unfortunately, strict minima may have an arbitrarily small region of attraction. Assuming parameters are initialised following any bounded continuous measure ν on R^d, we can always modify M by deforming a correspondingly small region around the origin, turning it into a minimum while leaving the dynamics unchanged outside of this region. For a fixed initial distribution, any reasonable algorithm can therefore enter a limit cycle or diverge to infinite losses with arbitrarily high probability.

Theorem 2. Given a reasonable algorithm with bounded continuous distribution on θ_0 and a real number ε > 0, there exists a coercive, nondegenerate, almost-everywhere analytic two-player market M_σ with a strict minimum and no other critical points, such that θ_k either cycles or diverges to infinite losses for both players with probability at least 1 - ε.

Proof. Let 0 < σ < 0.1 and define

f_σ(θ) = (x^2 + y^2 - σ^2)/2 if ‖θ‖ ≥ σ,
f_σ(θ) = (y^2 - 3x^2)(x^2 + y^2 - σ^2)/(2σ^2) otherwise,

where θ = (x, y) and ‖θ‖ = sqrt(x^2 + y^2) is the standard L2-norm. Note that f_σ is continuous since lim_{‖θ‖→σ+} f_σ(θ) = 0 = lim_{‖θ‖→σ-} f_σ(θ). Now consider the two-player market M_σ given by

L^1(x, y) = x^6/6 - x^2 + f_σ(x, y) + xy + (1/4)( y^4/(1+x^2) - x^4/(1+y^2) )
L^2(x, y) = y^6/6 - f_σ(x, y) - xy - (1/4)( y^4/(1+x^2) - x^4/(1+y^2) ).

We prove in Appendix E that M_σ is a coercive, nondegenerate, almost-everywhere analytic game whose only critical point is a strict minimum at the origin. We then prove that θ_k cycles or diverges with probability at least 1 - ε, and plot iterates for each algorithm in A.
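Again a numerical sanity check, a sketch of our own: f_σ glues continuously at ‖θ‖ = σ, and the deformation turns the origin into a strict minimum, with the symmetric part of H(0) positive definite.

```python
import numpy as np

sigma = 0.05

def f_sigma(x, y):
    r2 = x**2 + y**2
    if np.sqrt(r2) >= sigma:
        return (r2 - sigma**2) / 2
    return (y**2 - 3 * x**2) * (r2 - sigma**2) / (2 * sigma**2)

def losses(x, y):
    inter = x * y + 0.25 * (y**4 / (1 + x**2) - x**4 / (1 + y**2))
    L1 = x**6 / 6 - x**2 + f_sigma(x, y) + inter
    L2 = y**6 / 6 - f_sigma(x, y) - inter
    return L1, L2

# Continuity across the gluing circle, along an arbitrary direction:
t = 0.6
inner = f_sigma((sigma - 1e-9) * np.cos(t), (sigma - 1e-9) * np.sin(t))
outer = f_sigma((sigma + 1e-9) * np.cos(t), (sigma + 1e-9) * np.sin(t))
print(abs(inner - outer))                 # ≈ 0

def xi(theta, h=1e-6):
    x, y = theta
    dL1dx = (losses(x + h, y)[0] - losses(x - h, y)[0]) / (2 * h)
    dL2dy = (losses(x, y + h)[1] - losses(x, y - h)[1]) / (2 * h)
    return np.array([dL1dx, dL2dy])

H = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = 1e-4          # stays inside the ball of radius σ
    H[:, j] = (xi(e) - xi(-e)) / (2e-4)
print(np.linalg.eigvalsh((H + H.T) / 2))  # both positive: strict minimum
```

Analytically H(0) = [1, 1; -1, 1] with symmetric part the identity, so the deformation flips the origin from a strict maximum of M into a strict minimum of M_σ while the dynamics outside ‖θ‖ ≥ σ are left untouched.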

3.3. HOW DO EXISTING ALGORITHMS BEHAVE?

Any algorithm will fail to be either reasonable or global in M. Nonetheless, it would be interesting to determine the specific failure that each algorithm in A exhibits. Each of them is defined in Appendix A, writing α for the learning rate and γ for the Consensus Optimization hyperparameter. We expect each algorithm to be reasonable and moreover to have bounded iterates in M for suitably small hyperparameters. If this holds, they must cycle by Theorem 1. This was witnessed experimentally across 1000 runs for α = γ = 0.01, with every run resulting in cycles. A single such run is illustrated in Figure 1. Algorithms may follow one of the three other outcomes for other hyperparameters, for instance diverging to infinite loss if α is too large, or converging to the strict maximum for CO if γ is too large. The point here is to characterise the 'regular' behavior, which can be seen as that occurring for sufficiently small hyperparameters.

Instead of proving that algorithms must cycle in M, we construct a zero-sum game N with similar properties to M and prove below that algorithms in A almost surely fail to converge there for small α, γ. This is stronger than proving the analogous result for M, since N belongs to the even smaller class of zero-sum games, which one might have hoped was well-behaved. In this light, one might wish to extend Theorem 1 to zero-sum games. However, zero-sum games cannot be coercive since L^1 → ∞ implies L^2 → -∞. It is therefore unclear whether global guarantees should be expected. Note however that N will be weakly-coercive in the sense that lim_{‖θ_i‖→∞} L^i(θ_i, θ_{-i}) = ∞ for all i and fixed θ_{-i}.

Theorem 3. There is a weakly-coercive, nondegenerate, analytic two-player zero-sum game N whose only critical point is a strict maximum. Algorithms in A almost surely have bounded non-convergent iterates in N for α, γ sufficiently small.

Proof. Consider the analytic zero-sum game N given by

L^1 = xy - x^2/2 + y^2/2 + x^4/4 - y^4/4 = -L^2.
We prove in Appendix F that N is weakly-coercive, nondegenerate, and has a unique critical point at the origin, which is a strict maximum. We prove that algorithms in A each have the origin as their unique fixed point, with negative-definite Jacobian for α, γ small, hence failing to converge almost surely. We moreover prove that algorithms have bounded non-convergent iterates in N for α, γ sufficiently small. Iterates are plotted for a single run of each algorithm in Figure 3 with α = γ = 0.01. As in M, the behavior of each algorithm may differ for larger hyperparameters. All algorithms may have unbounded iterates or converge to the strict maximum for large α, while EG and OMD may even converge to a non-critical point (see proof). All such outcomes are unsatisfactory, though unbounded iterates will not result in positive infinite losses for both players since L^1 = -L^2.
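The cycling behavior in N is easy to reproduce. A minimal sketch of our own, for simGD only, with the learning rate α = 0.01 used in the paper's experiments:

```python
import numpy as np

def xi(theta):
    # L1 = xy - x^2/2 + y^2/2 + x^4/4 - y^4/4 = -L2
    x, y = theta
    return np.array([y - x + x**3,        # ∇_x L1
                     -x - y + y**3])      # ∇_y L2 = -∇_y L1

alpha = 0.01
theta = np.array([0.5, 0.5])
norms = []
for _ in range(5000):
    theta = theta - alpha * xi(theta)
    norms.append(np.linalg.norm(theta))

print(max(norms))                    # bounded ...
print(np.linalg.norm(xi(theta)))     # ... but far from critical: a limit cycle
```

Intuitively, the continuous-time radial derivative is r dr/dt = x^2 + y^2 - x^4 - y^4, positive near the origin (the strict maximum repels) and negative far away (weak coercivity pulls back), trapping iterates on a bounded annulus where ‖ξ‖ stays of order one.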

3.4. COROLLARY: THERE ARE NO SUITABLE MEASURES OF PROGRESS

A crucial step in proving global convergence of GD on single losses is showing that the set of accumulation points is a subset of critical points, using the function value as a 'measure of progress'. The fact that this fails for differentiable games implies that there can be no suitable measure of progress for reasonable algorithms with bounded iterates. We formalise this below, answering the question of Schaefer & Anandkumar (2019) quoted in the introduction.

Definition 7. A measure of progress for an algorithm given by θ_{k+1} = F(θ_k) is a continuous map M : R^d → R, bounded below, such that M(F(θ)) ≤ M(θ), with M(F(θ)) = M(θ) iff F(θ) = θ.

Measures of progress are very similar to descent functions, as defined by Luenberger & Ye (1984), and somewhat akin to Lyapunov functions. The function value f is a measure of progress for single-loss GD under the usual regularity conditions, while the gradient norm ‖ξ‖ is a measure of progress for GD in strictly convex differentiable games:

‖ξ(θ - αξ)‖^2 = ‖ξ‖^2 - 2α ξ^T H ξ + o(α) ≤ ‖ξ‖^2

for small α, since ξ^T H ξ = ξ^T S ξ > 0. Unfortunately, games like M prevent the existence of such measures in general.

Corollary 1. There are no measures of progress for reasonable algorithms which produce bounded iterates in M or N.

Assuming the algorithm to be reasonable is necessary: any map is a measure of progress for the unreasonable algorithm F(θ) = θ. Assuming the algorithm to have bounded iterates in M or N is necessary: M(θ) = exp(-θ^T 1) is a measure of progress for the reasonable but always-divergent algorithm F(θ) = θ + 1, where 1 is the constant vector of ones.
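The failure of ‖ξ‖ as a measure of progress is already visible numerically in the game N of Theorem 3: one simGD step near the repelling origin increases the gradient norm, whereas in a strictly convex quadratic game it decreases. A sketch of our own (the strictly convex game L^1 = x^2 + xy, L^2 = y^2 - xy is an arbitrary choice with S = 2I ≻ 0):

```python
import numpy as np

alpha = 0.01

# Game N: one simGD step near the repelling origin increases ||xi||.
xi_N = lambda t: np.array([t[1] - t[0] + t[0]**3, -t[0] - t[1] + t[1]**3])
theta = np.array([0.1, 0.1])
before = np.linalg.norm(xi_N(theta))
after = np.linalg.norm(xi_N(theta - alpha * xi_N(theta)))
print(before < after)      # True: the candidate 'measure of progress' increases

# Strictly convex game: L1 = x^2 + xy, L2 = y^2 - xy, so xi = H θ with S = 2I.
H = np.array([[2.0, 1.0], [-1.0, 2.0]])
xi_C = lambda t: H @ t
theta = np.array([0.1, 0.1])
before_c = np.linalg.norm(xi_C(theta))
after_c = np.linalg.norm(xi_C(theta - alpha * xi_C(theta)))
print(after_c < before_c)  # True: ||xi|| decreases as in the text
```

In the convex game, one step maps ξ to (I - αH)ξ, whose norm contracts by the displayed inequality; in N, the anti-symmetric part dominates near the origin and no such contraction is available.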

4. CONCLUSION

We have proven that global convergence is fundamentally at odds with weak, desirable requirements in multi-loss optimization. Any reasonable algorithm can cycle or diverge to infinite losses, even in two-player markets. This arises because coercive games, unlike coercive losses, may have no critical points other than strict maxima. However, this is not the only point of failure: strict minima may have arbitrarily small regions of attraction, making convergence arbitrarily unlikely.

Limit cycles are not necessarily bad: they may even have game-theoretic significance (Papadimitriou & Piliouras, 2019). This paper nonetheless shows that some games have no satisfactory outcome in the usual sense, even in the class of two-player markets. Players should neither escape to infinite losses, nor converge to strict maxima or non-critical points, so cycling may be the lesser evil. The community is accustomed to optimization problems whose solutions are single points, but cycles may have to be accepted as solutions in themselves. The hope for machine learning practitioners is that local minima with large regions of attraction prevent limit cycles from arising in applications of interest, including GANs. Proving or disproving this is an interesting and important avenue for further research, with real implications for what to expect when agents learn while interacting with others. Cycles may for instance be unacceptable in self-driving cars, where oscillatory predictions could have life-threatening implications.

APPENDIX A ALGORITHMS AND EXPERIMENT HYPERPARAMETERS

Each algorithm in A cited in the 'Related Work' section can be defined as F(θ) = θ - αG(θ) for some continuous G : R^d → R^d. We have already seen that simultaneous GD is given by G_GD = ξ. The only examples in this paper are two-player games, for which AGD is given by

G_AGD = ( ξ_1(θ_1, θ_2), ξ_2(θ_1 - αξ_1, θ_2) )^T.

The other algorithms are given by

G_EG = ξ ∘ (id - αξ)
G_OMD = 2ξ(θ_k) - ξ(θ_{k-1})
G_SGA = (I + λA^T)ξ
G_CO = (I + γH^T)ξ
G_CGD = (I + αH_o)^{-1}ξ
G_LA = (I - αH_o)ξ
G_LOLA = (I - αH_o)ξ - α diag(H_o^T ∇L)
G_SOS = (I - αH_o)ξ - pα diag(H_o^T ∇L).

For OMD, the previous iterate can be uniquely recovered as θ_{k-1} = (id - αξ)^{-1}(θ_k) using the proximal point algorithm if ‖H‖ ≤ L and α < 1/L, giving G_OMD = 2ξ - ξ ∘ (id - αξ)^{-1}.

In all experiments we initialise θ_0 following a standard normal distribution and use a learning rate α = 0.01, with γ = 0.01 for CO. Learning rates α_i could be chosen differently for each player i, but we set them equal throughout this paper for simplicity. Claims regarding the behavior of each algorithm for sufficiently small α mean that all α_i should be sufficiently small. The λ parameter for SGA is obtained by the alignment criterion introduced in the original paper,

λ = sign( ⟨ξ, H^T ξ⟩ ⟨A^T ξ, H^T ξ⟩ ).

Similarly, the p parameter for SOS is given by a two-part criterion which need not be described here. Accompanying code for all experiments can be found at https://github.com/aletcher/impossibility-global-convergence.
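As an illustration, the simplest of these operators can be written generically given ξ and H. This is a hedged sketch of our own, not the accompanying repository code; CGD, LOLA and SOS need the block structure H_o and are omitted.

```python
import numpy as np

def make_ops(xi, game_hessian, alpha=0.01, gamma=0.01):
    # G_GD, G_EG and G_CO from the formulas above; each maps θ to a direction,
    # and the algorithm update is θ ← θ - α G(θ).
    def G_GD(theta):
        return xi(theta)
    def G_EG(theta):          # ξ ∘ (id - αξ): extrapolate, then re-evaluate
        return xi(theta - alpha * xi(theta))
    def G_CO(theta):          # (I + γ H^T) ξ: consensus optimization
        return (np.eye(len(theta)) + gamma * game_hessian(theta).T) @ xi(theta)
    return G_GD, G_EG, G_CO

# On the game N of Theorem 3, with its analytic xi and Hessian:
xi = lambda t: np.array([t[1] - t[0] + t[0]**3, -t[0] - t[1] + t[1]**3])
H = lambda t: np.array([[3 * t[0]**2 - 1, 1.0], [-1.0, 3 * t[1]**2 - 1]])
G_GD, G_EG, G_CO = make_ops(xi, H)
origin = np.zeros(2)
print([np.linalg.norm(G(origin)) for G in (G_GD, G_EG, G_CO)])  # all 0: R1 holds
```

All three operators vanish exactly where ξ does, so their fixed points are critical points, which is requirement R1; the content of Theorem 3 is that this shared fixed point is a strict maximum that they almost surely avoid.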

B PROOF OF PROPOSITION 1

We first prove a lemma and state a standard optimization result.

Lemma 0. Let G ∈ C^1(U, R^d) for an open set U. If G is L-Lipschitz then sup_{θ∈U} ‖∇G(θ)‖ ≤ L.

The proof is an adaptation of (Panageas & Piliouras, 2017, Lemma 7) to non-convex sets.

Proof. Fix any θ ∈ U and ε > 0. Since U is open, the ball B_r(θ) of radius r centered at θ is contained in U for some r > 0. By Taylor expansion, for any unit vector θ′,

‖G(θ + rθ′) - G(θ)‖ ≥ r‖∇G(θ)θ′‖ - o(r) ≥ r‖∇G(θ)θ′‖ - rε

for r sufficiently small. Since G is L-Lipschitz, we obtain

r‖∇G(θ)θ′‖ ≤ ‖G(θ + rθ′) - G(θ)‖ + rε ≤ r(L + ε).

Since ε was arbitrary, ‖∇G(θ)θ′‖ ≤ L for any unit θ′. By definition of the operator norm, ‖∇G(θ)‖ = sup_{‖θ′‖=1} ‖∇G(θ)θ′‖ ≤ L for all θ ∈ U, and hence sup_{θ∈U} ‖∇G(θ)‖ ≤ L.

Proposition ((Lange, 2013, Prop. 12.4.4) and (Absil et al., 2005, Th. 4.1)). Assume f has L-Lipschitz gradient and is either analytic or has isolated critical points. Then for any 0 < α < 2/L and θ_0 ∈ R^d, we have lim_k ‖θ_k‖ = ∞ or lim_k θ_k = θ̂ for some critical point θ̂. If f moreover has compact sublevel sets then the latter holds: lim_k θ_k = θ̂.

We can now prove Proposition 1, which avoids assuming global Lipschitz continuity by proving that iterates remain in the sublevel set given by θ_0 for appropriate learning rate α.

Proposition 1. Assume f ∈ C^2 has compact sublevel sets and is either analytic or has isolated critical points. For any θ_0 ∈ R^d, define U_0 = {θ : f(θ) ≤ f(θ_0)} and let L < ∞ be a Lipschitz constant for ∇f in U_0. Then for any 0 < α < 2/L we have lim_k θ_k = θ̂ for some critical point θ̂.

Proof. Note that ∇f ∈ C^1, so f has L-Lipschitz gradient inside any compact set U for some finite L, and sup_{θ∈U} ‖∇^2 f(θ)‖ ≤ L by Lemma 0. Now define U_α = {θ - tα∇f(θ) | t ∈ [0, 1], θ ∈ U_0} and the continuous function L(α) = sup_{θ∈U_α} ‖∇^2 f(θ)‖. Notice that U_0 ⊂ U_α for all α. We prove that αL(α) < 2 implies U_α = U_0 and in particular L(α) = L(0).
By Taylor expansion,

f(θ - tα∇f) = f(θ) - tα‖∇f(θ)‖^2 + (t^2 α^2 / 2) ∇f(θ)^T ∇^2 f(θ - t′α∇f) ∇f(θ)

for some t′ ∈ [0, t] ⊂ [0, 1]. Since θ - t′α∇f ∈ U_α, it follows that

f(θ - tα∇f) ≤ f(θ) - tα‖∇f(θ)‖^2 (1 - αL(α)/2) ≤ f(θ)

whenever αL(α) < 2. In particular, θ - tα∇f ∈ U_0 and hence U_α = U_0. We conclude that αL(α) < 2 implies L(α) = L(0), implying in turn αL(0) < 2. We now claim the converse, namely that αL(0) < 2 implies αL(α) < 2. For contradiction, assume otherwise that there exists α′ with α′L(0) < 2 and α′L(α′) ≥ 2. Since αL(α) is continuous in α and 0·L(0) = 0 < 2, there exists a minimal ᾱ ≤ α′ with ᾱL(ᾱ) = 2, and ᾱL(0) < 2. For all α < ᾱ we then have αL(α) < 2, hence L(α) = L(0) by the first part, contradicting continuity:

2 = ᾱL(ᾱ) = lim_{α→ᾱ-} αL(α) = lim_{α→ᾱ-} αL(0) = ᾱL(0) < 2.

Finally we conclude that U_α = U_0 whenever αL(0) < 2, and in particular for all αL < 2. Now θ_k ∈ U_0 implies θ_{k+1} ∈ U_α = U_0, hence θ_k ∈ U_0 for all k by induction. The result now follows by applying the previous proposition to f|_{U_0}.
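The key descent inequality in this proof can be checked numerically. A sketch of our own on f(x) = x^4/4 - x^2/2 with U_0 = [-1.5, 1.5], where L = sup_{U_0} |f''| = 5.75: for α with αL < 2, one gradient step from any point of U_0 satisfies f(x - αf'(x)) ≤ f(x) - α f'(x)^2 (1 - αL/2), so iterates never leave the sublevel set.

```python
import numpy as np

f = lambda x: x**4 / 4 - x**2 / 2
grad = lambda x: x**3 - x
L = 5.75                       # sup |f''| on U0 = [-1.5, 1.5], f''(x) = 3x^2 - 1
alpha = 0.3                    # alpha * L = 1.725 < 2

ok = True
for x in np.linspace(-1.5, 1.5, 301):
    lhs = f(x - alpha * grad(x))
    rhs = f(x) - alpha * grad(x)**2 * (1 - alpha * L / 2)
    ok = ok and lhs <= rhs + 1e-9
print(ok)                      # descent inequality holds throughout U0
```

Since the right-hand side is at most f(x), every step stays in U_0, which is exactly the invariance U_α = U_0 used to apply the cited proposition to f restricted to U_0.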

C PROOF OF PROPOSITION 2

Proposition 2. Write λ(A) = Re(Spec(A)) for the real parts of the eigenvalues of a matrix A. We have the following implications, and none of them are equivalences:

H ≺ 0 ⇒ max λ(H) < 0 ⇒ min λ(H) < 0 ⇒ min λ(S) < 0
H ≺ 0 ⇒ max λ(H_d) < 0 ⇒ min λ(H_d) < 0 ⇒ min λ(S) < 0

together with the diagonal implications max λ(H) < 0 ⇒ min λ(H_d) < 0 and max λ(H_d) < 0 ⇒ min λ(H) < 0.

The top row is dynamics-based, governed by the collective Hessian, while the bottom row is game-theoretic, whereby H_d = ∇_ii L^i decomposes into agentwise Hessians. The left and right triangles collapse respectively to strict maxima and saddles for single losses, since H = S = H_d = ∇^2 L.

Proof. First note that H ≺ 0 ⟺ S ≺ 0 ⟺ max λ(S) < 0, so the leftmost term can be replaced by max λ(S) < 0. We begin with the leftmost implications. If max λ(S) < 0 then S ≺ 0 by symmetry of S, implying both H ≺ 0, since u^T Hu = u^T Su for all u ∈ R^d, and negative-definite diagonal blocks ∇_ii L^i ≺ 0; finally H_d ≺ 0. In particular this implies max λ(H) < 0 and max λ(H_d) < 0, since real parts of eigenvalues of a negative-definite matrix are negative. The rightmost implications follow as above by contraposition: if min λ(S) ≥ 0 then S ⪰ 0, which implies H ⪰ 0 and H_d ⪰ 0, hence min λ(H) ≥ 0 and min λ(H_d) ≥ 0. The top and bottom implications are trivial. The diagonal implications hold by a trace argument:

Σ_i λ_i(H) = Tr(H) = Tr(H_d) = Σ_i λ_i(H_d),

hence max λ(H) < 0 implies the LHS is negative, and thus Σ_i λ_i(H_d) < 0. It follows that λ_i(H_d) < 0 for some i, and finally min λ(H_d) < 0. The other diagonal holds identically.

We now prove that no implication is an equivalence. For the leftmost implications,

H = [ -1 4 ; 4 -1 ] has max λ(H_d) = -1 < 0 while max λ(S) = 3 > 0;
H = [ 2 4 ; -4 -4 ] has max λ(H) = -1 < 0 while max λ(S) = 2 > 0.

This also proves the diagonal implications are not equivalences: the first matrix has min λ(H_d) = -1 < 0 but max λ(H) = 3 > 0, and the second matrix has min λ(H) = -1 < 0 but max λ(H_d) = 2 > 0. For the rightmost implications, swap the sign of the diagonal elements of the two matrices above.
The top and bottom implications are trivially not equivalences:
$H = H_d = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
has $\min\lambda(H) = \min\lambda(H_d) = -1 < 0$ but $\max\lambda(H) = \max\lambda(H_d) = 1 > 0$.
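The counterexample matrices can be verified numerically. A quick check with NumPy (the entries of the first matrix are reconstructed here from the stated eigenvalues, so treat them as illustrative):

```python
import numpy as np

# Counterexamples from the proof of Proposition 2. The second matrix is
# stated explicitly in the text; the first is reconstructed from the
# claimed eigenvalues (-1, 3) and should be read as illustrative.
H1 = np.array([[-1., 4.], [4., -1.]])
H2 = np.array([[2., 4.], [-4., -4.]])

max_re = lambda A: max(np.linalg.eigvals(A).real)   # max λ(A) = max Re(Spec(A))
sym = lambda A: (A + A.T) / 2                       # symmetric part S

# H1: agentwise (diagonal) Hessians are negative, yet max λ(H) = max λ(S) = 3.
assert np.isclose(max(np.diag(H1)), -1) and np.isclose(max_re(H1), 3)
assert np.isclose(max_re(sym(H1)), 3)
# H2: all eigenvalues of H have real part -1 < 0, yet max λ(S) = max λ(H_d) = 2.
assert np.isclose(max_re(H2), -1) and np.isclose(max_re(sym(H2)), 2)
assert np.isclose(max(np.diag(H2)), 2)
```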

D PROOF OF THEOREM 1

Published as a conference paper at ICLR 2021

The variable changes
$(x', y') = (y, -x), \quad (x', y') = (-y, x), \quad (x', y') = (-x, -y) \qquad (\dagger)$
will be useful, taking the positive quadrant $x, y \ge 0$ to the other three.

Theorem 1. There is a coercive, nondegenerate, analytic two-player market M whose only critical point is a strict maximum. In particular, algorithms only have four possible outcomes in M:
1. Iterates are unbounded, and all players diverge to infinite loss.
2. Iterates are bounded but fail to converge.
3. Iterates converge to a non-critical point.
4. Iterates converge to the unique critical point, a strict maximum.

For intuition purposes, M was constructed by noticing that there is no necessary reason for the local minima of two coercive losses to coincide: the gradients of each loss may only simultaneously vanish at a local maximum in each player's respective coordinate. The highest-order terms (first and last) provide coercivity in both coordinates while still having zero-sum interactions. The $-x^2$ and $-y^2$ terms yield a strict local maximum at the origin, while the $\pm xy$ terms provide opposite incentives around the origin, preventing any other simultaneous critical point from arising.

Proof. Write $\theta = (x, y)$ and consider the analytic market M given by
$L^1 = x^6/6 - x^2/2 + xy + \tfrac14\left(\tfrac{y^4}{1+x^2} - \tfrac{x^4}{1+y^2}\right)$
$L^2 = y^6/6 - y^2/2 - xy - \tfrac14\left(\tfrac{y^4}{1+x^2} - \tfrac{x^4}{1+y^2}\right)$
with simultaneous gradient
$\xi = \begin{pmatrix} x^5 - x + y - \frac{y^4 x}{2(1+x^2)^2} - \frac{x^3}{1+y^2} \\ y^5 - y - x - \frac{x^4 y}{2(1+y^2)^2} - \frac{y^3}{1+x^2} \end{pmatrix}.$
We prove 'by hand' that the origin $\bar\theta = 0$ is the only critical point (solution to $\xi = 0$); see further down for an easier approach based on Sturm's theorem, computer-assisted though equally rigorous. We can assume $x, y \ge 0$, since any other solution can be obtained by a quadrant variable change ($\dagger$). Now assume for contradiction that $\xi = 0$ with $y \ne 0$.

1. We first show that $y > 1$. Indeed,
$0 = \xi_2 = y^5 - y - x - \frac{x^4 y}{2(1+y^2)^2} - \frac{y^3}{1+x^2} < y^5 - y = y(y^4 - 1)$
implies $y > 1$ since $y \ge 0$.

2. We now show that $y < 1.5$.
First assume for contradiction that $x \ge y$. Then
$\xi_1 = y - x + x^5 - \frac{xy^4}{2(1+x^2)^2} - \frac{x^3}{1+y^2} > 1 - x + x^5 - x^5/8 - x^3/2 =: h(x).$
Now $h'(x) = \tfrac{35}{8}x^4 - \tfrac32 x^2 - 1$ has unique positive root $x_0 = \sqrt{(6 + 2\sqrt{79})/35}$ and $h(x) \to \infty$ as $x \to \infty$, hence $h$ attains its minimum at $x_0$, and plugging in $x_0$ yields a contradiction $\xi_1 > h(x_0) > 0$. We conclude that $x < y$, but combining this with $x \ge 0$ yields
$\xi_2 > -2y + y^5 - y^5/8 - y^3 = y(7y^4/8 - y^2 - 2) > 7y^4/8 - y^2 - 2 > 0$
for all $y \ge 1.5$, since the rightmost polynomial is positive at $y = 1.5$ and has positive derivative
$7y^3/2 - 2y = y(7y^2/2 - 2) \ge 7(1.5)^2/2 - 2 > 0.$
We must therefore have $y < 1.5$ as required.

3. It remains only to show that $\xi_1 > 0$ for all $1 < y < 1.5$. First notice that $f_x(y) = \xi_1(x, y)$ is concave in $y$ for any fixed $x \ge 0$, since
$f_x'(y) = 1 - \frac{2y^3 x}{(1+x^2)^2} + \frac{2x^3 y}{(1+y^2)^2}$
and so
$f_x''(y) = -\frac{6y^2 x}{(1+x^2)^2} + 2x^3\,\frac{1 + y^2 - 4y^2}{(1+y^2)^3} = -\frac{6y^2 x}{(1+x^2)^2} - 2x^3\,\frac{3y^2 - 1}{(1+y^2)^3} \le 0$
for $y > 1$. It follows that $f_x$ attains its infimum on the boundary $y \in \{1, 1.5\}$, so it suffices to check that $\xi_1(x, 1) > 0$ and $\xi_1(x, 1.5) > 0$ for all $x \ge 0$. First notice that
$g(x) := \frac{x}{2(1+x^2)^2} \quad\text{satisfies}\quad g'(x) = \frac{1 + x^2 - 4x^2}{2(1+x^2)^3} = \frac{1 - 3x^2}{2(1+x^2)^3},$
which has a unique positive root at $x_0 = 1/\sqrt3$. This critical point of $g$ must be a maximum since $g(x) > 0$ for $x > 0$ and $g(x) \to 0$ as $x \to \infty$. It follows that
$g(x) \le g(x_0) = \frac{1/\sqrt3}{2(1 + 1/3)^2} = 3\sqrt3/32.$
We now obtain
$\xi_1(x, 1) \ge x^5 - x^3/2 - x + 1 - 3\sqrt3/32 =: p(x)$
and
$\xi_1(x, 1.5) \ge x^5 - 4x^3/13 - x + 1.5 - (1.5)^4\, 3\sqrt3/32 =: q(x).$
Notice that $p'(x) = 5x^4 - 3x^2/2 - 1$ has unique positive root $x_0 = \sqrt{(3 + \sqrt{89})/20}$ and $p(x) \to \infty$ as $x \to \infty$, hence $p$ attains its minimum at $x_0$, and plugging in $x_0$ yields $\xi_1(x, 1) \ge p(x_0) > 0$. Similarly, $q'(x) = 5x^4 - 12x^2/13 - 1$ has unique positive root $x_0 = \sqrt{(6 + \sqrt{881})/65}$, and plugging in $x_0$ yields $\xi_1(x, 1.5) \ge q(x_0) > 0$. We conclude that $\xi_1(x, y) \ge \min(\xi_1(x, 1), \xi_1(x, 1.5)) > 0$ and the contradiction is complete, hence $y = 0$. Finally, $y = 0$ implies $\xi_2 = -x = 0$, so $\bar\theta = 0$ is the unique critical point as required. Now the Hessian at $\bar\theta$ is
$H(\bar\theta) = \begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix},$
which is negative definite since $S(\bar\theta) = -I \prec 0$, so $\bar\theta$ is a nondegenerate strict maximum and M is nondegenerate. It remains only to prove coercivity of M, namely coercivity of $L^1$ and $L^2$. Coercivity of $L^1$ follows by noticing that the dominant terms are $x^6/6$ and $y^4/(4(1+x^2))$. Formally, first note that $x^4/(1+y^2) \le x^4$, hence
$L^1 \ge x^6/6 - x^4/4 - x^2/2 + xy + \frac14\,\frac{y^4}{1+x^2}.$
Now $xy \ge -|xy| \ge -(2x^2 + y^2/8)$ by Young's inequality, hence
$L^1 \ge x^6/6 - x^4/4 - 5x^2/2 - y^2/8 + \frac14\,\frac{y^4}{1+x^2}.$
For any sequence $\|\theta\| \to \infty$, either $|x| \to \infty$, or $|x|$ is bounded above by some $k \in \mathbb R$ and $|y| \to \infty$. In the latter case we have
$\lim_{\|\theta\|\to\infty} L^1 \ge \lim_{|y|\to\infty}\left(-k^4/4 - 5k^2/2 - y^2/8 + \frac{y^4}{4(1+k^2)}\right) = \infty$
since the leading term $y^4$ has even degree and positive coefficient, so we are done. Otherwise, for $|x| \to \infty$, we pursue the previous inequality to obtain
$L^1 \ge x^6/6 - x^4/4 - 5x^2/2 + \frac{y^2}{8}\left(\frac{2y^2}{1+x^2} - 1\right).$
Now notice that $y^2 \ge x^2 \ge 1$ implies
$L^1 \ge x^6/6 - x^4/4 - 5x^2/2 + \frac{x^2}{8}\,\frac{x^2 - 1}{1+x^2} \ge x^6/6 - x^4/4 - 5x^2/2 - x^2/8.$
On the other hand, $x^2 \ge y^2$ also implies $L^1 \ge x^6/6 - x^4/4 - 5x^2/2 - x^2/8$, by discarding the first (positive) term in the brackets. Both cases lead to the same inequality and hence, for any sequence with $|x| \to \infty$,
$\lim_{\|\theta\|\to\infty} L^1 \ge \lim_{|x|\to\infty}\left(x^6/6 - x^4/4 - 5x^2/2 - x^2/8\right) = \infty$
since the leading term $x^6$ has even degree and positive coefficient. Hence $L^1$ is coercive, and the same argument holds for $L^2$ by swapping $x$ and $y$. As required, we have constructed a coercive, nondegenerate, analytic two-player market M whose only critical point is a strict maximum. In particular, any algorithm either has unbounded iterates with infinite losses, or bounded iterates. If bounded, the iterates either fail to converge or converge; if they converge, they either converge to a non-critical point or to a critical point, which can only be the strict maximum.

[For an alternative proof that $\bar\theta = 0$ is the only critical point, we may take advantage of computer algebra systems to find the exact number of real roots using the resultant matrix and Sturm's theorem. Singular (Decker et al., 2019) is one such free and open-source system for polynomial computations, backed by published computer algebra references. In particular, the rootsur library used below is based on the book by Basu et al. (2006).
First convert the equations into polynomials:
$2(1+x^2)^2(1+y^2)(x^5 - x + y) - y^4 x(1+y^2) - 2x^3(1+x^2)^2 = 0$
$2(1+y^2)^2(1+x^2)(y^5 - y - x) - x^4 y(1+x^2) - 2y^3(1+y^2)^2 = 0.$
We compute the determinant of the resultant matrix of the system with respect to $y$: a univariate polynomial $P$ in $x$ whose zeros are guaranteed to contain all solutions in $x$ of the initial system. We then use the Sturm sequence of $P$ to find its exact number of real roots. This is implemented with the Singular code below, whose output is 1. We know that $\bar\theta = 0$ is a real solution, so $\bar\theta$ must be the unique critical point.]

E PROOF OF THEOREM 2

Theorem 2. Given a reasonable algorithm with bounded continuous distribution on $\theta_0$ and a real number $\epsilon > 0$, there exists a coercive, nondegenerate, almost-everywhere analytic two-player market $M_\sigma$ with a strict minimum and no other critical points, such that $\theta_k$ either cycles or diverges to infinite losses for both players with probability at least $1 - \epsilon$.

Proof. We modify the construction from Theorem 1 by deforming a small region around the maximum to replace it with a minimum. First let $0 < \sigma < 0.1$ and define
$f_\sigma(\theta) = \begin{cases} (x^2 + y^2 - \sigma^2)/2 & \text{if } \|\theta\| \ge \sigma \\ (y^2 - 3x^2)(x^2 + y^2 - \sigma^2)/(2\sigma^2) & \text{otherwise,} \end{cases}$
where $\theta = (x, y)$ and $\|\theta\| = \sqrt{x^2 + y^2}$ is the standard L2-norm. Note that $f_\sigma$ is continuous since
$\lim_{\|\theta\|\to\sigma^+} f_\sigma(\theta) = 0 = \lim_{\|\theta\|\to\sigma^-} f_\sigma(\theta).$
Now consider the two-player market $M_\sigma$ given by
$L^1 = x^6/6 - x^2 + f_\sigma + xy + \tfrac14\left(\tfrac{y^4}{1+x^2} - \tfrac{x^4}{1+y^2}\right)$
$L^2 = y^6/6 - f_\sigma - xy - \tfrac14\left(\tfrac{y^4}{1+x^2} - \tfrac{x^4}{1+y^2}\right).$
The resulting losses are continuous but not differentiable; however, they are analytic (in particular smooth) almost everywhere, namely for all $\theta$ not on the circle of radius $\sigma$. This is sufficient for the purposes of gradient-based optimization, noting that neural nets also fail to be everywhere-differentiable in the presence of rectified linear units. We claim that $M_\sigma$ has a single critical point at the origin $\bar\theta = 0$.
First note that
$\xi_{M_\sigma} = \xi_{M_0} = \begin{pmatrix} x^5 - x + y - \frac{y^4 x}{2(1+x^2)^2} - \frac{x^3}{1+y^2} \\ y^5 - y - x - \frac{x^4 y}{2(1+y^2)^2} - \frac{y^3}{1+x^2} \end{pmatrix} = \xi_M$
for all $\|\theta\| \ge \sigma$, where M is the game from Theorem 1. It was proved there that the only real solution to $\xi_M = 0$ is the origin, which does not satisfy $\|\theta\| \ge \sigma$. Any critical point must therefore satisfy $\|\theta\| < \sigma$, for which
$\xi = \xi_{M_\sigma} = \begin{pmatrix} x^5 + x + y - 2x(3x^2 + y^2)/\sigma^2 - \frac{y^4 x}{2(1+x^2)^2} - \frac{x^3}{1+y^2} \\ y^5 + y - x - 2y(y^2 - x^2)/\sigma^2 - \frac{x^4 y}{2(1+y^2)^2} - \frac{y^3}{1+x^2} \end{pmatrix}.$
First note that $\bar\theta = 0$ is a critical point; we prove that there are no others. The continuous parameter $\sigma$ prevents us from using a formal verification system, so we must work 'by hand'. Warning: the proof is a long, inelegant string of case-by-case inequalities. Assume for contradiction that $\xi = 0$ with $\theta \ne 0$. First note that $\|\theta\| < \sigma$ implies $|x|, |y| < \sigma$, and $x = 0$ or $y = 0$ implies $x = y = 0$, using $\xi_1 = 0$ or $\xi_2 = 0$ respectively. We can therefore assume $0 < |x|, |y| < \sigma$. We can moreover assume that $x > 0$, the opposite case following by the quadrant change of variables $(x', y') = (-x, -y)$.

1. We begin with the case $\sigma/2 \le x < \sigma$. First notice that
$x + y - 2x(3x^2 + y^2)/\sigma^2 = x(1 - 6x^2/\sigma^2) + y(1 - 2xy/\sigma^2) \le x(1 - 3/2) + y(1 - y/\sigma),$
and the rightmost term attains its maximum value at $y = \sigma/2$, hence
$x + y - 2x(3x^2 + y^2)/\sigma^2 \le -x/2 + \sigma/4 \le 0.$
This implies
$\xi_1 \le x^5 - \frac{y^4 x}{2(1+x^2)^2} - \frac{x^3}{1+y^2} < x^5 - \frac{x^3}{1+y^2} < x^3\left((1 - y^2) - \frac{1}{1+y^2}\right) = -\frac{x^3 y^4}{1+y^2} < 0,$
using $x^2 + y^2 < 1$, which contradicts $\xi = 0$.

2. We proceed with the case $x < \sigma/2$ and $|y| \le \sigma/2$. First, $y < 0$ implies the contradiction
$\xi_2 < y - 2y^3/\sigma^2 - \frac{x^4 y}{2(1+y^2)^2} - \frac{y^3}{1+x^2} \le y/2 - y\left(\frac{\sigma^4}{2^5} + \frac{\sigma^2}{2^2}\right) < y\left(\frac12 - \frac{1}{2^5} - \frac{1}{2^2}\right) < 0,$
so we can assume $y > 0$. In particular we have $1 - 2y(y + x)/\sigma^2 > 0$. If $y \le x$, we also obtain
$\xi_2 < y^5 + (y - x)\left(1 - 2y(y + x)/\sigma^2\right) - \frac{y^3}{1+x^2} < y^3\left(y^2 - \frac{1}{1+x^2}\right) < -\frac{y^3 x^4}{1+x^2} < 0,$
so we can assume $x < y$. There are again two cases to distinguish.
If $x < \sigma/2 - b\sigma^2$ with $b = 0.08$, then
$x(1 - 6x^2/\sigma^2) + y(1 - 2xy/\sigma^2) > x\left(1 - 3(1/2 - \sigma b)\right) + x\left(1 - (1/2 - \sigma b)\right) > 4\sigma b x,$
which implies the contradiction
$\xi_1 > 4\sigma b x - \frac{y^4 x}{2(1+x^2)^2} - \frac{x^3}{1+y^2} > 4\sigma b x - \frac{x\sigma^4}{2^5} - \frac{x\sigma^2}{2^2} > \sigma x\left(4b - \frac{1}{2^5} - \frac{1}{2^2}\right) > 0.$
Finally, assume $x \ge \sigma/2 - b\sigma^2$. Then we have
$(y - x)\left(1 - 2y(x + y)/\sigma^2\right) < b\sigma^2\left(1 - 4x^2/\sigma^2\right) < b\sigma^2\left(1 - (1 - 2\sigma b)^2\right) = 4\sigma^3 b^2(1 - \sigma b) < 4\sigma^3 b^2$
and obtain
$\xi_2 < y^5 + 4\sigma^3 b^2 - \frac{y^3}{1+x^2} < \sigma^3\left(\frac{\sigma^2}{2^5} + 4b^2 - \frac{(1/2 - \sigma b)^3}{1 + \sigma^2/4}\right).$
We claim that the quantity inside the brackets is negative. Indeed, its derivative in $\sigma$ is
$\frac{\sigma}{2^4} + \frac{(1/2 - \sigma b)^2}{(1 + \sigma^2/4)^2}\left(3b(1 + \sigma^2/4) + \sigma(1/2 - \sigma b)/2\right) > 0,$
and so its supremum across $\sigma \in [0, 0.1]$ must be attained at $\sigma = 0.1$. We obtain the contradiction
$\xi_2 < \sigma^3\left(\frac{0.01}{2^5} + 4b^2 - \frac{(1/2 - b)^3}{1 + 0.01/4}\right) < 0$
for $b = 0.08$ and $\sigma > 0$, as required.

3. Finally, consider the case $x < \sigma/2$ and $|y| > \sigma/2$. First, $y < 0$ implies the contradiction
$\xi_1 < x^5 + x + y - 2xy^2/\sigma^2 < x^5 + (x + y) - x/2 < 0,$
since $x + y < 0$ and $x^5 < x\sigma^4/2^4 < x/2$, so we can assume $y > 0$. Now assume $y < \sigma - x(1 + \sigma^2)$. Then
$x(1 - 6x^2/\sigma^2) + y(1 - 2xy/\sigma^2) > -x/2 + y(1 - y/\sigma) > -x/2 + x(1 + \sigma^2) > x(1/2 + \sigma^2),$
which yields the contradiction
$\xi_1 > x\left(\frac12 + \sigma^2 - \frac{y^4}{2(1+x^2)^2} - \frac{x^2}{1+y^2}\right) > x\left(1/2 + \sigma^2 - \sigma^4 - \sigma^2/4\right) > x(1/2 - 1/4) > 0.$
We can therefore assume $y \ge \sigma - x(1 + \sigma^2)$. We have
$(y - x)\left(1 - 2y(y + x)/\sigma^2\right) < (y - x)(1 - (y + x)/\sigma) \le (y - x)(1 - (1 - \sigma x)) < \sigma x(y - x),$
which attains its maximum in $x$ at $x = y/2$, hence
$\xi_2 < y^5 - \frac{y^3}{1+x^2} + \frac{\sigma y^2}{4} < \frac{\sigma y^2}{4}\left(4\sigma^2 - \frac{2}{1+\sigma^2} + 1\right).$
Finally we obtain the contradiction
$\xi_2 < \frac{\sigma y^2}{4}\cdot\frac{5\sigma^2 + 4\sigma^4 - 1}{1 + \sigma^2} < 0$
for all $\sigma < 0.1$. All cases lead to contradictions, so we conclude that $\bar\theta$ is the only critical point, with positive-definite Hessian
$H(\bar\theta) = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} \succ 0,$
hence $\bar\theta$ is a strict minimum. Now notice that $M_0$ has the same dominant terms as M from Theorem 1, so coercivity of $M_0$ follows from the same argument.
Since $M_\sigma$ is identical to $M_0$ outside the $\sigma$-ball $B_\sigma = \{(x, y) \in \mathbb R^2 \mid \|\theta\| < \sigma\}$, coercivity of $M_0$ implies coercivity of $M_\sigma$ for any $\sigma$. Fix any reasonable algorithm $F$, any bounded continuous measure $\nu$ on $\mathbb R^d$ with initial region $U$, and any $\epsilon > 0$. We abuse notation somewhat and write $F_\sigma^k(\theta_0)$ for the $k$th iterate of $F$ in $M_\sigma$ with initial parameters $\theta_0$. We claim that there exists $\sigma > 0$ such that
$P_\nu\left(\theta_0 \in U \text{ and } \lim_k F_\sigma^k(\theta_0) = \bar\theta\right) < \epsilon.$
Since $\bar\theta$ is the only critical point and $M_\sigma$ is coercive, this implies bounded but non-convergent iterates, or divergent iterates with infinite losses, with probability at least $1 - \epsilon$, proving the theorem. To begin, $\mu(B_\sigma) \to 0$ as $\sigma \to 0$ implies that we can pick $\sigma' > 0$ such that $P_\nu(\theta_0 \in B_{\sigma'}) < \epsilon/2$ by continuity of $\nu$ with respect to the Lebesgue measure. Now let $\bar U$ be the closure of $U$ and define $D = \bar U \cap \{\|\theta\| \ge \sigma'\}$. Note that $D$ is compact, since $\bar U$ is compact and closed subsets of a compact set are compact. $F$ is reasonable, $D$ is bounded and $\bar\theta = 0$ is a strict maximum in $M_0$, so there are hyperparameters such that the stable set $Z = \{\theta_0 \in D \mid \lim_k F_0^k(\theta_0) = 0\}$ has zero measure. We claim that $Z_\delta := \{\theta_0 \in D \mid \inf_{k \in \mathbb N} \|F_0^k(\theta_0)\| < \delta\}$ has arbitrarily small measure as $\delta \to 0$. Assume for contradiction that there exists $\alpha > 0$ such that $\mu(Z_\delta) \ge \alpha$ for all $\delta > 0$. Then $Z_\delta \subset Z_{\delta'}$ and $\mu(Z_{\delta'}) \le \mu(D) < \infty$ for all $\delta < \delta'$ implies
$\mu\left(\bigcap_{n \in \mathbb N} Z_{1/n}\right) = \lim_{n \to \infty} \mu\left(Z_{1/n}\right) \ge \alpha$
by Nelson (2015, Exercise 1.19). On the other hand, $\bigcap_{n \in \mathbb N} Z_{1/n} = Z_0$ yields the contradiction $0 = \mu(Z_0) \ge \alpha$. We conclude that $Z_\delta$ has arbitrarily small measure, hence there exists $\delta > 0$ such that $P_\nu(\theta_0 \in Z_\delta) < \epsilon/2$ by continuity of $\nu$. Now let $\sigma = \min\{\sigma', \delta\}$ and notice that
$\theta_0 \in D \setminus Z_\delta \implies \inf_k \|F_0^k(\theta_0)\| \ge \delta \ge \sigma \implies \inf_k \|F_\sigma^k(\theta_0)\| \ge \sigma,$
where the last implication holds since $M_\sigma$ and $M_0$ are indistinguishable in $\{\|\theta\| \ge \sigma\}$, so the algorithm must have identical iterates $F_\sigma^k(\theta_0) = F_0^k(\theta_0)$ for all $k$.
It follows by contraposition that $\lim_k F_\sigma^k(\theta_0) = \bar\theta$ implies $\inf_k \|F_\sigma^k(\theta_0)\| < \sigma$, and so $\theta_0 \in Z_\delta$ or $\theta_0 \notin D$. Finally we obtain
$P_\nu\left(\theta_0 \in U \text{ and } \lim_k F_\sigma^k(\theta_0) = \bar\theta\right) \le P_\nu\left(\theta_0 \in U \cap Z_\delta \text{ or } \theta_0 \in U \setminus D\right) \le P_\nu(\theta_0 \in Z_\delta) + P_\nu(\theta_0 \in B_{\sigma'}) < \epsilon/2 + \epsilon/2 = \epsilon$
as required. We plot iterates for a single run of each algorithm in Figure 2 with $\alpha = \gamma = 0.01$.
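The claimed behavior is straightforward to observe empirically. Below is a minimal sketch of simultaneous gradient descent on $M_\sigma$, using the two gradient branches displayed above (the step size, initial point and iteration count are illustrative):

```python
import numpy as np

sigma = 0.01

def xi(th):
    """Simultaneous gradient of M_sigma, with the two branches from the text."""
    x, y = th
    fx = y**4 * x / (2 * (1 + x**2)**2) + x**3 / (1 + y**2)
    fy = x**4 * y / (2 * (1 + y**2)**2) + y**3 / (1 + x**2)
    if x * x + y * y >= sigma**2:       # outside B_sigma: identical to M
        return np.array([x**5 - x + y - fx, y**5 - y - x - fy])
    return np.array([x**5 + x + y - 2*x*(3*x**2 + y**2)/sigma**2 - fx,
                     y**5 + y - x - 2*y*(y**2 - x**2)/sigma**2 - fy])

alpha, theta = 0.01, np.array([1.0, 1.0])
norms = []
for _ in range(3000):
    theta = theta - alpha * xi(theta)   # simultaneous gradient descent
    norms.append(np.linalg.norm(theta))

assert max(norms) < 5                   # iterates remain bounded...
assert norms[-1] > 0.1                  # ...but do not approach the minimum at 0
```

With standard normal initialisation the iterates start outside $B_\sigma$ almost surely, so the dynamics coincide with those of M until (if ever) the $\sigma$-ball is entered; here they settle into a cycle well away from the origin, as in Figure 2.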

F PROOF OF THEOREM 3

Theorem 3. There is a weakly-coercive, nondegenerate, analytic two-player zero-sum game N whose only critical point is a strict maximum. Algorithms in A almost surely have bounded non-convergent iterates in N for $\alpha, \gamma$ sufficiently small.

Proof. Consider the analytic zero-sum game N given by
$L^1 = xy - x^2/2 + y^2/2 + x^4/4 - y^4/4 = -L^2$
with simultaneous gradient
$\xi = \begin{pmatrix} y - x + x^3 \\ -x - y + y^3 \end{pmatrix}$
and Hessian
$H = \begin{pmatrix} -1 + 3x^2 & 1 \\ -1 & -1 + 3y^2 \end{pmatrix}.$
We show that the only solution to $\xi = 0$ is the origin. First, we can assume $x, y \ge 0$ since any other solution can be obtained by a quadrant variable change ($\dagger$). Now assume for contradiction that $y \ne 0$. Then
$0 = \xi_2 = -x - y + y^3 \le -y + y^3 = y(y^2 - 1)$
implies $y \ge 1$, and hence
$0 = \xi_1 = y - x + x^3 \ge 1 - x + x^3 = (x + 1)(x - 1)^2 + x^2 > 0,$
which is a contradiction. It follows that $y = 0$, and hence $\xi_2 = -x = 0$ gives $x = 0$ as required. Now the origin has invertible, negative-definite Hessian
$H(0) = \begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix} \prec 0,$
so the unique critical point is a strict maximum. The game is nondegenerate since the only critical point has invertible Hessian. The game is weakly-coercive since $L^1(x, \bar y) \to \infty$ as $|x| \to \infty$ for any fixed $\bar y$, by domination of the $x^4$ term; similarly for $L^2(\bar x, y)$ by domination of the $y^4$ term.

Bounded iterates: strategy. We begin by showing that all algorithms have bounded iterates in N for $\alpha, \gamma$ sufficiently small. For each algorithm $F$, our strategy is to show that there exists $r > 0$ such that for any $s > 0$ we have $\|F(\theta)\| < \|\theta\|$ for all $r < \|\theta\| < s$ and $\alpha, \gamma$ sufficiently small. This will be enough to prove bounded iteration upon bounded initialisation. Denote by $B_r$ the ball of radius $r$ centered at the origin.

GD. We have
$\theta^T\xi = x(y - x + x^3) + y(-x - y + y^3) = x^4 - x^2 + y^4 - y^2 = (x^2 - 1)^2 + (y^2 - 1)^2 + x^2 + y^2 - 2 > 1$
for all $\|\theta\|^2 = x^2 + y^2 > 3$. For any $s > 0$ we obtain
$\|F(\theta)\|^2 = \|\theta - \alpha\xi\|^2 = \|\theta\|^2 - 2\alpha\theta^T\xi + \alpha^2\|\xi\|^2 < \|\theta\|^2 - \alpha\left(2 - \alpha\|\xi\|^2\right) < \|\theta\|^2$
for all $\sqrt3 < \|\theta\| < s$ and $\alpha$ sufficiently small, namely $0 < \alpha < 2/\sup_{\theta \in B_s}\|\xi\|^2$.

EG.
For any $s > 0$ and $2 < \|\theta\| < s$ we have $\|\theta - \alpha\xi(\theta)\|^2 > 4 - 2\alpha\theta^T\xi > 3$ for $\alpha < 1/\sup_{\theta \in B_s}(2\theta^T\xi)$. Now using $\theta^T\xi > 1$ for all $\|\theta\|^2 > 3$, by the argument for GD above,
$\|F(\theta)\|^2 = \|\theta\|^2 - 2\alpha\theta^T\xi(\theta - \alpha\xi(\theta)) + \alpha^2\|\xi(\theta - \alpha\xi(\theta))\|^2 = \|\theta\|^2 - 2\alpha(\theta - \alpha\xi(\theta))^T\xi(\theta - \alpha\xi(\theta)) + O(\alpha^2) < \|\theta\|^2 - \alpha(2 - O(\alpha)) < \|\theta\|^2$
for $\alpha$ sufficiently small. The same argument applies to the remaining algorithms with adjusted gradients, e.g. for all $\sqrt3 < \|\theta\| < s$ and $\alpha < 2/\sup_{\theta \in B_s}\|G_{SGA}\|^2$.

Bounded iterates: conclusion. Now assume as usual that $\theta_0$ is initialised in a bounded region $U$. For each algorithm we have found $r$ such that for any $s > 0$ we have $\|F(\theta)\| < \|\theta\|$ for all $r < \|\theta\| < s$ and $\alpha, \gamma$ sufficiently small. Now pick $r' \ge r$ such that $U \subset B_{r'}$, define the bounded region
$V = \{\theta - tG(\theta) \mid t \in [0, 1],\ \theta \in B_{r'}\},$
and pick $s \ge r'$ such that $V \subset B_s$. By the above we have $\|F(\theta)\| < \|\theta\|$ for all $r < \|\theta\| < s$ and $\alpha, \gamma$ sufficiently small; in particular, fix any $\alpha, \gamma < 1$ satisfying this condition. We claim that $F(\theta) \in B_s$ for all $\theta \in B_s$. Indeed, either $\theta \in B_{r'}$, which implies $F(\theta) = \theta - \alpha G(\theta) \in V \subset B_s$, or $\theta \notin B_{r'}$, which implies $\|F(\theta)\| < \|\theta\| < s$ and so $F(\theta) \in B_s$. We conclude that $\theta_0 \in U \subset B_s$ implies bounded iterates $\theta_k = F^k(\theta_0) \in B_s$ for all $k$.

Non-convergence: strategy. We show that all methods in A have the origin as unique fixed point for $\alpha, \gamma$ sufficiently small. Fixed points of each gradient-based method are given by $G = 0$, where $G$ is given in Appendix A, and we moreover show that the Jacobian $\nabla G$ at the origin is negative-definite. Non-convergence will follow from this for $\alpha$ sufficiently small.

GD. Fixed points of simultaneous GD correspond by definition to critical points: $G_{GD} = \xi = 0 \iff \theta = 0$. The Jacobian of $G$ at $0$ is
$\nabla\xi = H(0) = \begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix} \prec 0.$

AGD. We have
$G_{AGD} = 0 \iff \begin{cases} \xi_1(\theta) = 0 \\ \xi_2(\theta_1 - \alpha\xi_1, \theta_2) = 0 \end{cases} \iff \xi = 0 \iff \theta = 0.$
Now
$\xi_2(x - \alpha\xi_1(x, y), y) = -(x - \alpha(y - x + x^3)) - y + y^3 = x(-1 - \alpha) + y(-1 + \alpha) + \alpha x^3 + y^3,$
so the Jacobian at the origin is
$J_{AGD} = \begin{pmatrix} -1 & 1 \\ -1 - \alpha & -1 + \alpha \end{pmatrix} \quad\text{with symmetric part}\quad S_{AGD} = \begin{pmatrix} -1 & -\alpha/2 \\ -\alpha/2 & -1 + \alpha \end{pmatrix},$
which has negative trace for all $\alpha < 2$ and positive determinant
$1 - \alpha - \alpha^2/4 \ge 1 - \alpha - \alpha^2/2 = 3/2 - (\alpha + 1)^2/2 > 3/2 - 9/8 > 0$
for all $\alpha < 1/2$; together these imply negative eigenvalues and hence $S_{AGD} \prec 0$. Recall that a matrix is negative-definite iff its symmetric part is, hence $J_{AGD} \prec 0$ for all $\alpha < 1/2$.
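The AGD Jacobian can be cross-checked by finite differences on the corresponding update adjustment for the game N; a sketch with an illustrative $\alpha$:

```python
import numpy as np

alpha = 0.1

def G_agd(th):
    """AGD adjustment for N: player 1 moves first, player 2 sees the update."""
    x, y = th
    xi1 = y - x + x**3
    x_new = x - alpha * xi1
    xi2 = -x_new - y + y**3
    return np.array([xi1, xi2])

# Finite-difference Jacobian of G_AGD at the origin (central differences).
eps, J = 1e-6, np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = eps
    J[:, j] = (G_agd(e) - G_agd(-e)) / (2 * eps)

J_expected = np.array([[-1, 1], [-1 - alpha, -1 + alpha]])
assert np.allclose(J, J_expected, atol=1e-6)

# Its symmetric part is negative definite for α < 1/2, as claimed.
S = (J + J.T) / 2
assert max(np.linalg.eigvalsh(S)) < 0
```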

EG. We have

$G_{EG} = \xi \circ (\mathrm{id} - \alpha\xi) = 0 \iff \mathrm{id} - \alpha\xi = 0 \iff \begin{cases} x - \alpha(y - x + x^3) = 0 \\ y - \alpha(-x - y + y^3) = 0. \end{cases}$
We have shown that any bounded initialisation results in bounded iterates for EG for $\alpha$ sufficiently small; let $U$ be this bounded region, and assume for contradiction that $\mathrm{id} - \alpha\xi = 0$ with $x, y \ne 0$ (noting that $x = 0$ implies $y = 0$ by the first equation, and vice-versa). We can assume $x, y > 0$ since any other solution can be obtained by a quadrant change of variable ($\dagger$). We first prove that $x, y < 1$ for $0 < \alpha < 1/\sup_{\theta \in U}\{y - x + x^3\}$. Indeed, we have $0 = x - \alpha(y - x + x^3) > x - 1$, hence $x < 1$; a similar derivation holds for $y$, hence $0 < x, y < 1$. But now $x \ge y$ implies
$0 = x - \alpha(y - x + x^3) \ge x - \alpha x^3 = x(1 - \alpha x^2) \ge x(1 - \alpha) > 0$
for $\alpha < 1$, while $x < y$ implies
$0 = y - \alpha(-x - y + y^3) \ge y - \alpha y^3 = y(1 - \alpha y^2) \ge y(1 - \alpha) > 0,$
and the contradiction is complete, hence $\theta = 0$ is the only fixed point of EG. Now
$J_{EG} = H(I - \alpha H) = \begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix}\begin{pmatrix} 1 + \alpha & -\alpha \\ \alpha & 1 + \alpha \end{pmatrix} = \begin{pmatrix} -1 & 1 + 2\alpha \\ -1 - 2\alpha & -1 \end{pmatrix}$
with $S_{EG} = -I \prec 0$, hence $J_{EG} \prec 0$ for all $\alpha$.

OMD. By Daskalakis & Panageas (2018, Remark 1.5), fixed points of OMD must satisfy $\xi = 0$, by viewing OMD as mapping pairs $(\theta_k, \theta_{k-1})$ to pairs $(\theta_{k+1}, \theta_k)$; hence $\theta = 0$. Now
$J_{OMD} = 2H - H(I - \alpha H)^{-1} = 2\begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix} - \frac{1}{1 + 2\alpha + 2\alpha^2}\begin{pmatrix} -1 - 2\alpha & 1 \\ -1 & -1 - 2\alpha \end{pmatrix}.$
Now notice that $\frac{1 + 2\alpha}{1 + 2\alpha + 2\alpha^2} \le 1$, and so
$S_{OMD} = \begin{pmatrix} -2 + \frac{1 + 2\alpha}{1 + 2\alpha + 2\alpha^2} & 0 \\ 0 & -2 + \frac{1 + 2\alpha}{1 + 2\alpha + 2\alpha^2} \end{pmatrix} \prec 0$
for all $\alpha$.

CO. We have $G_{CO} = (I + \gamma H^T)\xi = 0 \iff \xi = 0 \iff \theta = 0$ for all $\gamma$, since the matrix
$I + \gamma H^T = \begin{pmatrix} 1 - \gamma & -\gamma \\ \gamma & 1 - \gamma \end{pmatrix}$
is always invertible, with determinant $(1 - \gamma)^2 + \gamma^2 > 0$. Now
$J_{CO} = (I + \gamma H^T)H = \begin{pmatrix} 1 - \gamma & -\gamma \\ \gamma & 1 - \gamma \end{pmatrix}\begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} -1 + 2\gamma & 1 \\ -1 & -1 + 2\gamma \end{pmatrix} \prec 0$
for all $\gamma < 1/2$.

SGA. We have $G_{SGA} = (I + \lambda A^T)\xi = 0 \iff \xi = 0 \iff \theta = 0$, since antisymmetric $A$ with eigenvalues $\pm ia$, $a \in \mathbb R$, implies that $I + \lambda A^T$ is always invertible, with eigenvalues $1 \pm i\lambda a \ne 0$. Now recall that $\lambda$ is given by
$\lambda = \mathrm{sign}\left(\langle \xi, H^T\xi\rangle \langle A^T\xi, H^T\xi\rangle\right) = \mathrm{sign}\left(\xi^T H^T\xi \cdot \xi^T A H^T\xi\right).$
We have
$H^T = \begin{pmatrix} -1 + 3x^2 & -1 \\ 1 & -1 + 3y^2 \end{pmatrix} \prec 0 \quad\text{and}\quad A H^T = \begin{pmatrix} 1 & -1 + 3y^2 \\ 1 - 3x^2 & 1 \end{pmatrix} \succeq 0$
for all $\theta$ sufficiently small, hence $\xi^T H^T\xi \le 0$ and $\xi^T A H^T\xi \ge 0$, and thus
$\lambda = \mathrm{sign}\left(\xi^T H^T\xi \cdot \xi^T A H^T\xi\right) \le 0$
around the origin. Now
$J_{SGA} = (I + \lambda A^T)H = \begin{pmatrix} 1 & -\lambda \\ \lambda & 1 \end{pmatrix}\begin{pmatrix} -1 & 1 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} -1 + \lambda & 1 + \lambda \\ -1 - \lambda & -1 + \lambda \end{pmatrix} \prec 0$
for all $\lambda < 1$, which holds in particular for $\lambda \le 0$.

CGD. Note that
$H_o = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = A.$
The remaining algorithms CGD, LA, LOLA and SOS each likewise have the origin as unique fixed point with negative-definite Jacobian; in particular
$J_{SOS} = J_{LA} = \begin{pmatrix} -1 + \alpha & 1 + \alpha \\ -1 - \alpha & -1 + \alpha \end{pmatrix} \prec 0$
for all $\alpha < 1$.

Non-convergence: conclusion. We conclude that all algorithms in A have the origin as unique fixed point, with negative-definite Jacobian, for $\alpha, \gamma$ sufficiently small. If a method converges, it must therefore converge to the origin; we show that this occurs with zero probability. One may invoke the Stable Manifold Theorem from dynamical systems, but there is a more direct proof. Take any algorithm $F$ in A and let $U$ be the initialisation region. We prove that the stable set $Z = \{\theta_0 \in U \mid \lim_k F^k(\theta_0) = 0\}$ has Lebesgue measure zero for $\alpha$ sufficiently small. First assume for contradiction that $\theta_k \to 0$ with $\theta_k \ne 0$ for all $k$. Then
$G(\theta_k) = G(0) + \nabla G(0)\theta_k + O(\|\theta_k\|^2) = \nabla G(0)\theta_k + O(\|\theta_k\|^2)$
since $G(0) = 0$, and we obtain
$\|\theta_{k+1}\|^2 = \|\theta_k - \alpha G(\theta_k)\|^2 = \|\theta_k\|^2 - 2\alpha\theta_k^T G(\theta_k) + \alpha^2\|G(\theta_k)\|^2 \ge \|\theta_k\|^2 - 2\alpha\theta_k^T\nabla G(0)\theta_k + O(\|\theta_k\|^3) > \|\theta_k\|^2$
for all $k$ sufficiently large, since $\nabla G(0) \prec 0$. This contradicts $\theta_k \to 0$, so $\theta_k \to 0$ implies $\theta_k = 0$ for some $k$ and so, writing $F_U : U \to \mathbb R^d$ for the restriction of $F$ to $U$,
$Z \subset \bigcup_{k=0}^\infty F_U^{-k}(\{0\}).$
We claim that $F_U$ is a $C^1$ local diffeomorphism, and a diffeomorphism onto its image. Now $G_U$ is $C^1$ with bounded domain, hence $L$-Lipschitz for some finite $L$. By Lemma 0, the eigenvalues of $\nabla G$ in $U$ satisfy $|\lambda| \le \|\nabla G\| \le L$, hence $\nabla F_U = I - \alpha\nabla G_U$ has eigenvalues satisfying $|1 - \alpha\lambda| \ge 1 - \alpha|\lambda| \ge 1 - \alpha L > 0$ for $\alpha < 1/L$. It follows that $\nabla F_U$ is invertible everywhere, so $F_U$ is a local diffeomorphism by the Inverse Function Theorem (Spivak, 1971, Th. 2.11).
To prove that $F_U : U \to F(U)$ is a diffeomorphism, it is sufficient to show injectivity of $F_U$. Assume for contradiction that $F_U(\theta) = F_U(\theta')$ with $\theta \ne \theta'$. Then by definition $\theta' - \theta = \alpha(G_U(\theta') - G_U(\theta))$, and so
$\|\theta' - \theta\| = \alpha\|G_U(\theta') - G_U(\theta)\| \le \alpha L\|\theta' - \theta\| < \|\theta' - \theta\|,$
a contradiction. We conclude that $F_U$ is a diffeomorphism onto its image with continuously differentiable inverse $F_U^{-1}$, hence $F_U^{-1}$ is locally Lipschitz and preserves measure-zero sets. It follows by induction that $\mu(F_U^{-k}(\{0\})) = 0$ for all $k$, and so
$\mu(Z) \le \mu\left(\bigcup_{k=0}^\infty F_U^{-k}(\{0\})\right) = 0$
since countable unions of measure-zero sets have measure zero. Since $\theta_0$ follows a continuous distribution $\nu$, we conclude that
$P_\nu\left(\lim_k F^k(\theta_0) = 0\right) = 0$
as required. Since all algorithms were also shown to produce bounded iterates, they almost surely have bounded non-convergent iterates for $\alpha, \gamma$ sufficiently small. The proof is complete; iterates are plotted for a single run of each algorithm in Figure 3 with $\alpha = \gamma = 0.01$.
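As an empirical companion to the proof, a short simulation of GD and EG on N shows bounded, non-convergent iterates (step size and initialisation illustrative):

```python
import numpy as np

# Simultaneous gradient of the zero-sum game N = xy - x²/2 + y²/2 + x⁴/4 - y⁴/4.
xi = lambda th: np.array([th[1] - th[0] + th[0]**3,
                          -th[0] - th[1] + th[1]**3])
alpha = 0.01

def run(step, theta0, iters=5000):
    th, norms = np.array(theta0), []
    for _ in range(iters):
        th = step(th)
        norms.append(np.linalg.norm(th))
    return th, norms

gd = lambda th: th - alpha * xi(th)
eg = lambda th: th - alpha * xi(th - alpha * xi(th))   # extragradient

for step in (gd, eg):
    th, norms = run(step, [0.5, 0.5])
    assert max(norms) < 3                    # bounded iterates
    assert np.linalg.norm(xi(th)) > 1e-3     # gradient does not vanish: no convergence
```

Both methods spiral away from the repelling maximum at the origin and settle into a cycle of radius around one, consistent with Figure 3.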

G PROOF OF COROLLARY 1

Corollary 1. There are no measures of progress for reasonable algorithms which produce bounded iterates in M or N .



For non-symmetric matrices, positive definiteness is defined as $H \succ 0$ iff $u^T H u > 0$ for all non-zero $u \in \mathbb R^d$. This is equivalent to the symmetric part $S$ of $H$ being positive definite.
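A quick numerical illustration of this footnote on an arbitrary non-symmetric matrix (chosen here for illustration):

```python
import numpy as np

# uᵀHu > 0 for all u ≠ 0 iff the symmetric part S = (H + Hᵀ)/2 is positive
# definite, since uᵀHu = uᵀSu (the antisymmetric part contributes zero).
H = np.array([[2., 3.], [-1., 2.]])       # non-symmetric example
S = (H + H.T) / 2
assert min(np.linalg.eigvalsh(S)) > 0     # S ≻ 0

rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.standard_normal(2)
    assert u @ H @ u > 0                  # quadratic form positive on samples
```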



Figure 1: Algorithms in A fail to converge in M with α = γ = 0.01. Single run with standard normal initialisation, 3000 iterations. The behavior of SGA is slightly different, explained by the presence of a non-continuous parameter λ jumping between ±1 according to an alignment criterion.

LIB "solve.lib";
LIB "rootsur.lib";
ring r = (0,x),(y),dp;
poly p1 = 2*(1+x^2)^2*(1+y^2)*(x^5-x+y) - y^4*x*(1+y^2) - 2*x^3*(1+x^2)^2;
poly p2 = 2*(1+y^2)^2*(1+x^2)*(y^5-y-x) - x^4*y*(1+x^2) - 2*y^3*(1+y^2)^2;
ideal i = p1,p2;
poly f = det(mp_res_mat(i));
ring s = 0,(x,y),dp;
poly f = imap(r, f);
nrroots(f);
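For readers without Singular, the conclusion can be sanity-checked (though not proved) numerically: the simultaneous gradient $\xi$ of M has no zeros on a fine grid away from the origin, and coercivity confines any critical point to a bounded region. A sketch with NumPy:

```python
import numpy as np

# Evaluate ‖ξ‖ for the market M of Theorem 1 on a grid over [-2, 2]².
# This is a sanity check, not a proof: it only rules out grid-visible zeros.
x, y = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
xi1 = x**5 - x + y - y**4 * x / (2 * (1 + x**2)**2) - x**3 / (1 + y**2)
xi2 = y**5 - y - x - x**4 * y / (2 * (1 + y**2)**2) - y**3 / (1 + x**2)
norm = np.sqrt(xi1**2 + xi2**2)

assert norm[200, 200] < 1e-12         # the origin (grid center) is critical
mask = x**2 + y**2 >= 0.3**2          # exclude a neighbourhood of the origin
assert norm[mask].min() > 1e-3        # no other near-zero of ξ on the grid
```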

Figure 2: Algorithms in A fail to converge in M σ with σ = α = γ = 0.01. Single run with standard normal initialisation, 3000 iterations.

CGD. The matrix $H_o = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = A$ is antisymmetric, hence $I + \alpha H_o$ is always invertible as for SGA, and
$G_{CGD} = (I + \alpha H_o)^{-1}\xi = 0 \iff \xi = 0 \iff \theta = 0.$
Now
$J_{CGD} = (I + \alpha H_o)^{-1}H = \frac{1}{1 + \alpha^2}(I - \alpha H_o)H \prec 0$
for all $\alpha < 1$, as for LA.

LA. As above, $G_{LA} = (I - \alpha H_o)\xi = 0 \iff \xi = 0 \iff \theta = 0$ since $I - \alpha H_o$ is always invertible. Now
$J_{LA} = (I - \alpha H_o)H = (I - \alpha A)H = \begin{pmatrix} -1 + \alpha & 1 + \alpha \\ -1 - \alpha & -1 + \alpha \end{pmatrix} \prec 0$
for all $\alpha < 1$.

LOLA. We have
$\mathrm{diag}\left(H_o^T\nabla L\right) = \begin{pmatrix} -x - y + y^3 \\ -y + x - x^3 \end{pmatrix} = H_o\xi,$
and so
$G_{LOLA} = (I - \alpha H_o)\xi - \alpha\,\mathrm{diag}\left(H_o^T\nabla L\right) = (I - 2\alpha H_o)\xi = 0 \iff \xi = 0 \iff \theta = 0$
as for LA. Similarly, substituting $2\alpha$ for $\alpha$ in the derivation for LA yields $J_{LOLA} = (I - 2\alpha H_o)H \prec 0$ for all $\alpha < 1/2$.

SOS. As for LOLA we have
$G_{SOS} = (I - \alpha H_o)\xi - p\alpha\,\mathrm{diag}\left(H_o^T\nabla L\right) = (I - \alpha(1 + p)H_o)\xi = 0 \iff \xi = 0 \iff \theta = 0$
for any $\alpha, p$. Now $p(\bar\theta) = 0$ for fixed points $\bar\theta$ by Letcher et al. (2019b, Lemma D.7), hence $J_{SOS} = J_{LA} \prec 0$ for all $\alpha < 1$.

Figure 3: Algorithms in A fail to converge in N with α = γ = 0.01. Single run with standard normal initialisation, 3000 iterations.

