ORDER MATTERS: AGENT-BY-AGENT POLICY OPTIMIZATION

Abstract

While multi-agent trust region algorithms have achieved great empirical success in solving coordination tasks, most of them suffer from a non-stationarity problem because agents update their policies simultaneously. In contrast, a sequential scheme that updates policies agent-by-agent provides another perspective and shows strong performance. However, sample inefficiency and the lack of monotonic improvement guarantees for each agent remain two significant challenges for the sequential scheme. In this paper, we propose the Agent-by-agent Policy Optimization (A2PO) algorithm, which improves sample efficiency and retains the guarantee of monotonic improvement for each agent during training. We justify the tightness of the monotonic improvement bound compared with other trust region algorithms. From the perspective of sequentially updating agents, we further consider the effect of the agent update order and extend the theory of non-stationarity to the sequential update scheme. To evaluate A2PO, we conduct a comprehensive empirical study on four benchmarks: StarCraft II, Multi-agent MuJoCo, Multi-agent Particle Environment, and Google Research Football full-game scenarios. A2PO consistently outperforms strong baselines.

1. INTRODUCTION

Trust region learning methods in reinforcement learning (RL) (Kakade & Langford, 2002) have achieved great success in solving complex tasks, from single-agent control tasks (Andrychowicz et al., 2020) to multi-agent applications (Albrecht & Stone, 2018; Ye et al., 2020). These methods deliver superior and stable performance because of their theoretical guarantee of monotonic policy improvement. Recently, several works that adopt trust region learning in multi-agent reinforcement learning (MARL) have been proposed, including algorithms in which agents independently update their policies using trust region methods (de Witt et al., 2020; Yu et al., 2022) and algorithms that coordinate agents' policies during the update process (Wu et al., 2021; Kuba et al., 2022). Most algorithms update the agents simultaneously; that is, all agents perform policy improvement at the same time and cannot observe the changes of other agents, as shown in Fig. 1c. The simultaneous update scheme brings about the non-stationarity problem, i.e., the environment dynamics change from one agent's perspective as other agents also change their policies (Hernandez-Leal et al., 2017).

Figure 1: The taxonomy of rollout schemes and policy update schemes.


In contrast to the simultaneous update scheme, algorithms that sequentially execute agent-by-agent updates allow agents to perceive changes made by preceding agents, presenting another perspective for analyzing inter-agent interaction (Gemp et al., 2022). Bertsekas (2021) proposed a sequential update framework, named Rollout and Policy Iteration for a Single Agent (RPISA) in this paper, which performs a rollout every time an agent updates its policy (Fig. 1a). RPISA effectively turns non-stationary MARL problems into stationary single-agent reinforcement learning (SARL) ones. It retains the theoretical properties of the chosen SARL base algorithm, such as monotonic improvement (Kakade & Langford, 2002). However, it is sample-inefficient, since it uses only 1/n of the collected samples to update each of the n agents' policies. On the other hand, Heterogeneous-Agent Proximal Policy Optimization (HAPPO) (Kuba et al., 2022) sequentially updates agents based on their local advantages estimated from the same rollout samples (Fig. 1b). Although it avoids wasting collected samples and achieves monotonic improvement of the joint policy, the policy improvement of a single agent is not theoretically guaranteed. Consequently, one update may offset preceding agents' policy improvements, reducing the overall joint policy improvement.

In this paper, we aim to combine the merits of the existing single-rollout and sequential policy update schemes. First, we show that naive sequential update algorithms with a single rollout can lose the monotonic improvement guarantee of PPO for a single agent's policy. To tackle this problem, we propose a surrogate objective with a novel off-policy correction method, preceding-agent off-policy correction (PreOPC), which retains the monotonic improvement guarantee on both the joint policy and each agent's policy.
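To make the single-rollout sequential scheme concrete, the sketch below evaluates each agent's clipped surrogate in turn over one shared batch, folding every updated predecessor's importance ratio into the advantage target. This is a simplified, hypothetical rendering of the idea behind PreOPC: the function names, the plain product-of-ratios correction, and the shared advantage estimate are our illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, adv, preceding_ratio, clip_eps=0.2):
    """PPO-style clipped surrogate for one agent under a shared rollout.
    `preceding_ratio` reweights the advantages by the importance ratio of
    the agents already updated earlier in the same stage (a simplified
    stand-in for preceding-agent off-policy correction)."""
    adv = adv * preceding_ratio                       # correct for predecessors' updates
    ratio = np.exp(logp_new - logp_old)               # this agent's own policy ratio
    return np.minimum(ratio * adv,
                      np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv).mean()

def stage_surrogates(logps_new, logps_old, adv, clip_eps=0.2):
    """Evaluate every agent's corrected surrogate sequentially for one stage.
    All agents reuse the same rollout; after agent i's surrogate is formed,
    its ratio is folded into the correction used by its successors."""
    preceding = np.ones_like(adv)
    objectives = []
    for lp_new, lp_old in zip(logps_new, logps_old):
        objectives.append(clipped_surrogate(lp_new, lp_old, adv, preceding, clip_eps))
        preceding = preceding * np.exp(lp_new - lp_old)  # agent i now counts as updated
    return objectives
```

For instance, with two agents whose new policies each put 10% more probability on the sampled actions, the second agent's objective is evaluated against advantages already scaled by the first agent's ratio of 1.1, so later agents optimize against their already-updated predecessors rather than the stale behavior policy.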
Then we further show that the joint monotonic bound built on the single-agent bound is tighter than those of other simultaneous update algorithms and is tightened as the agents within a stage are updated. This leads to Agent-by-agent Policy Optimization (A2PO), a novel sequential update algorithm with a single-rollout scheme (Fig. 1b). Further, we study the significance of the agent update order and extend the theory of non-stationarity to the sequential update scheme. We test A2PO on four popular cooperative multi-agent benchmarks: StarCraft II, Multi-agent MuJoCo, Multi-agent Particle Environment, and Google Research Football full-game scenarios. On all benchmark tasks, A2PO consistently outperforms strong baselines by a large margin in both performance and sample efficiency and shows an advantage in encouraging inter-agent coordination. To sum up, the main contributions of this work are as follows:

1. Monotonic improvement bound. We prove that the guarantee of monotonic improvement on each agent's policy can be retained under the single-rollout scheme with our proposed off-policy correction method, PreOPC. We further prove that the resulting monotonic bound on the joint policy is the tightest among single-rollout algorithms, yielding effective policy optimization.

2. A2PO algorithm. We propose A2PO, the first agent-by-agent sequential update algorithm that retains monotonic policy improvement on both each agent's policy and the joint policy and does not require multiple rollouts when performing policy improvement.

3. Agent update order. We further investigate the connections between the sequential policy update scheme, the agent update order, and the non-stationarity problem, which motivates two novel methods: a semi-greedy agent selection rule for optimization acceleration and an adaptive clipping parameter method for alleviating the non-stationarity problem.
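As a rough illustration of the third contribution, the sketch below gives one plausible instantiation of the two ideas: an order rule that usually updates the highest-scoring agents first but occasionally randomizes, and a clipping parameter that shrinks for agents updated later in a stage. The scoring input, the greedy probability, and the linear decay schedule are all our assumptions for illustration; the paper's actual semi-greedy rule and adaptive clipping method are specified in its method section.

```python
import numpy as np

def semi_greedy_order(agent_scores, rng, greedy_prob=0.9):
    """Pick the agent update order for one stage: with probability
    `greedy_prob`, sort agents by descending score (e.g., some proxy for
    expected improvement); otherwise use a random permutation so that
    every order is still explored occasionally."""
    if rng.random() < greedy_prob:
        return list(np.argsort(-np.asarray(agent_scores)))
    return list(rng.permutation(len(agent_scores)))

def adaptive_clip(base_eps, position, n_agents):
    """Shrink the PPO clipping range for agents updated later in the stage,
    so late updates perturb the joint policy less and the non-stationarity
    introduced by predecessors is damped (hypothetical linear schedule)."""
    return base_eps * (1.0 - position / (2.0 * n_agents))
```

Under this schedule, the first agent in a four-agent stage keeps the full clipping range (e.g., 0.2), while the last agent's range is reduced to 0.125, limiting how far it can drift from the joint policy its predecessors were already optimized against.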

2. RELATED WORKS

Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017) are popular trust region algorithms with strong performance, benefiting from the guarantee of monotonic policy improvement (Kakade & Langford, 2002). Several recent works delve deeper into understanding these methods (Wang et al., 2019; Liu et al., 2019; Wang et al., 2020). In multi-agent scenarios, de Witt et al. (2020) and Papoudakis et al. (2020) empirically studied the performance of Independent PPO in multi-agent tasks. Yu et al. (2022) conducted a comprehensive benchmark and analyzed the factors influencing the performance of Multi-agent PPO (MAPPO), a variant of PPO with centralized critics. Coordinate PPO (CoPPO) (Wu et al., 2021) integrates value decomposition (Sunehag et al., 2017) and approximately performs a joint policy improvement with monotonic improvement. Several further attempts to implement trust region methods are discussed in Wen et al. (2021); Li & He (2020); Sun et al. (2022); Ye et al. (2022). However, these MARL algorithms suffer from the non-stationarity problem as they update agents simultaneously: the environment dynamics change from one agent's perspective as others also change their policies. Consequently, agents suffer from high-variance gradients and require more samples for convergence (Hernandez-Leal et al., 2017). To alleviate the non-stationarity problem, the Multi-Agent Mirror descent policy algorithm with Trust region decomposition (MAMT) (Li et al., 2022b) factorizes the trust regions of the joint policy and constructs connections among the factorized trust regions, approximately constraining the diversity of the joint policy.

(We define a stage as a period during which all the agents have been updated once; see Fig. 1.)

