ORDER MATTERS: AGENT-BY-AGENT POLICY OPTIMIZATION

Abstract

While multi-agent trust region algorithms have achieved great empirical success in solving coordination tasks, most of them suffer from a non-stationarity problem because agents update their policies simultaneously. In contrast, a sequential scheme that updates policies agent-by-agent offers another perspective and shows strong performance. However, sample inefficiency and the lack of monotonic improvement guarantees for each agent remain two significant challenges for the sequential scheme. In this paper, we propose the Agent-by-agent Policy Optimization (A2PO) algorithm to improve sample efficiency and retain the guarantee of monotonic improvement for each agent during training. We justify the tightness of the monotonic improvement bound compared with other trust region algorithms. From the perspective of sequentially updating agents, we further consider the effect of the agent updating order and extend the theory of non-stationarity to the sequential update scheme. To evaluate A2PO, we conduct a comprehensive empirical study on four benchmarks: StarCraft II, Multi-Agent MuJoCo, Multi-Agent Particle Environment, and Google Research Football full-game scenarios. A2PO consistently outperforms strong baselines.

1. INTRODUCTION

Trust region learning methods in reinforcement learning (RL) (Kakade & Langford, 2002) have achieved great success in solving complex tasks, from single-agent control tasks (Andrychowicz et al., 2020) to multi-agent applications (Albrecht & Stone, 2018; Ye et al., 2020). These methods deliver superior and stable performance because of their theoretical guarantees of monotonic policy improvement. Recently, several works that adopt trust region learning in multi-agent reinforcement learning (MARL) have been proposed, including algorithms in which agents independently update their policies using trust region methods (de Witt et al., 2020; Yu et al., 2022) and algorithms that coordinate agents' policies during the update process (Wu et al., 2021; Kuba et al., 2022). Most of these algorithms update the agents simultaneously: all agents perform policy improvement at the same time and cannot observe the changes made by other agents, as shown in Fig. 1c. The simultaneous update scheme brings about the non-stationarity problem, i.e., the environment dynamics change from one agent's perspective as other agents also change their policies (Hernandez-Leal et al., 2017).

Figure 1: The taxonomy of the rollout scheme and the policy update scheme.

In contrast to the simultaneous update scheme, algorithms that sequentially execute agent-by-agent updates allow agents to perceive changes made by preceding agents, presenting another perspective for analyzing inter-agent interaction (Gemp et al., 2022). Bertsekas (2021) proposed a sequential update framework, named Rollout and Policy Iteration for a Single Agent (RPISA) in this paper, which performs a rollout every time an agent updates its policy (Fig. 1a). RPISA effectively turns non-stationary MARL problems into stationary single-agent reinforcement learning (SARL) ones. It retains the theo-
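The contrast between the two update schemes can be sketched in a few lines of code. This is a toy illustration only: `rollout` and `update_policy` are hypothetical stand-ins, not the paper's or RPISA's actual procedures. The point is structural: the simultaneous scheme collects one batch and updates every agent against it, while the sequential scheme re-collects data after each individual agent's update, so every agent optimizes against the already-updated policies of its predecessors.

```python
import random

def rollout(policies):
    """Collect experience under the current joint policy (toy stand-in)."""
    return [random.random() for _ in policies]

def update_policy(policy, data):
    """One trust-region-style improvement step (toy stand-in)."""
    return policy + 0.1 * sum(data) / len(data)

def simultaneous_update(policies):
    # All agents improve against the same batch; no agent observes the
    # others' new policies -- the source of non-stationarity (Fig. 1c).
    data = rollout(policies)
    return [update_policy(p, data) for p in policies]

def sequential_update(policies):
    # RPISA-style scheme (Fig. 1a): a fresh rollout precedes each agent's
    # update, so each agent faces a stationary single-agent problem that
    # already reflects the preceding agents' changes.
    policies = list(policies)
    for i in range(len(policies)):
        data = rollout(policies)  # fresh rollout per agent update
        policies[i] = update_policy(policies[i], data)
    return policies
```

Note the cost asymmetry this exposes: for n agents, the sequential scheme performs n rollouts per round versus one for the simultaneous scheme, which is exactly the sample-inefficiency challenge the abstract attributes to sequential updates.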

