BOOSTING MULTIAGENT REINFORCEMENT LEARNING VIA PERMUTATION INVARIANT AND PERMUTATION EQUIVARIANT NETWORKS

Abstract

The state space in Multiagent Reinforcement Learning (MARL) grows exponentially with the number of agents. This curse of dimensionality results in poor scalability and low sample efficiency, and has inhibited MARL for decades. To break this curse, we propose a unified agent permutation framework that exploits the permutation invariance (PI) and permutation equivariance (PE) inductive biases to reduce the multiagent state space. Our insight is that permuting the order of entities in the factored multiagent state space does not change the information. Specifically, we propose two novel implementations: a Dynamic Permutation Network (DPN) and a Hyper Policy Network (HPN). The core idea is to build separate entity-wise PI input and PE output network modules that connect the entity-factored state space and action space in an end-to-end way. DPN achieves such connections through two separate module selection networks, which consistently assign the same input module to the same input entity (guaranteeing PI) and the same output module to the same entity-related output (guaranteeing PE). To enhance the representational capability, HPN replaces the module selection networks of DPN with hypernetworks that directly generate the corresponding module weights. Extensive experiments in SMAC, SMACv2, Google Research Football, and MPE validate that the proposed methods significantly boost the performance and learning efficiency of existing MARL algorithms. Remarkably, in SMAC, we achieve 100% win rates in almost all hard and super-hard scenarios (never achieved before).

1. INTRODUCTION

Multiagent Reinforcement Learning (MARL) has successfully addressed many real-world problems (Vinyals et al., 2019; Berner et al., 2019; Hüttenrauch et al., 2017). However, MARL algorithms still suffer from poor sample efficiency and poor scalability due to the curse of dimensionality, i.e., the joint state-action space grows exponentially as the number of agents increases (Li et al., 2022). One way to alleviate this problem is to properly reduce the size of the state-action space (van der Pol et al., 2021; Li et al., 2021). In this paper, we study how to utilize the permutation invariance (PI) and permutation equivariance (PE)¹ inductive biases to reduce the state space in MARL.

Let G be the set of all permutation matrices² of size m × m and g ∈ G be a specific permutation matrix. A function f : 𝒳 → 𝒴, where the input X = [x_1, ..., x_m]^T, is PI if permuting the input components does not change the function output, i.e., f(g[x_1, ..., x_m]^T) = f([x_1, ..., x_m]^T), ∀g ∈ G. In contrast, a function f : 𝒳 → 𝒴, where X = [x_1, ..., x_m]^T and Y = [y_1, ..., y_m]^T, is PE if permuting the input components also permutes the outputs by the same permutation g, i.e., f(g[x_1, ..., x_m]^T) = g[y_1, ..., y_m]^T, ∀g ∈ G. Functions that are neither PI nor PE are uniformly denoted as permutation-sensitive.

A multiagent environment typically consists of m individual entities, including n learning agents and m − n non-player objects. The observation o_i of each agent i is usually composed of the features of the m entities, i.e., [x_1, ..., x_m], where each x_j ∈ 𝒳 represents an entity's features and 𝒳 is the feature space. If o_i is simply represented as a concatenation of [x_1, ..., x_m] in a fixed order, the observation space will be of size |𝒳|^m. A piece of prior knowledge is that although these entities can be arranged in m! different orders, all orderings inherently carry the same information.
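The PI and PE definitions above can be checked numerically. The following minimal sketch (not from the paper; a simple sum and a shared entity-wise linear map serve as illustrative f) verifies both identities with a random permutation matrix g:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 4, 3
X = rng.standard_normal((m, d))      # stacked entity features [x_1, ..., x_m]^T

g = np.eye(m)[rng.permutation(m)]    # a random permutation matrix g in G

f_pi = lambda X: X.sum(axis=0)       # sum over entities: PI, f(gX) == f(X)
W = rng.standard_normal((d, d))
f_pe = lambda X: X @ W               # shared entity-wise map: PE, f(gX) == g f(X)

assert np.allclose(f_pi(g @ X), f_pi(X))
assert np.allclose(f_pe(g @ X), g @ f_pe(X))
```

A concatenation-based MLP over the flattened X, by contrast, would be permutation-sensitive: reordering rows changes which weights each entity meets.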
Thus, building functions that are insensitive to the entities' order can reduce the observation space by a factor of 1/m!. To this end, in this paper, we exploit both PI and PE functions to design more sample-efficient MARL algorithms.

To achieve PI, there are two types of previous methods. The first employs data augmentation, e.g., Ye et al. (2020) propose data-augmented MADDPG, which generates more training data by shuffling the order of the input components and forcibly maps the generated data to the same output through training. However, it is inefficient to train a permutation-sensitive function to output the same value when taking features in different orders as inputs. The second type applies inherently PI architectures, such as Deep Sets (Li et al., 2021) and GNNs (Wang et al., 2020b; Liu et al., 2020), to MARL. These models use shared input embedding layers and entity-wise pooling layers to achieve PI. However, using shared embedding layers limits the model's representational capacity and may result in poor performance (Wagstaff et al., 2019). As for PE, to the best of our knowledge, it has drawn relatively little attention in the MARL community, and few works exploit this property.

In general, the architecture of an agent's policy network can be considered as three parts: ❶ an input layer, ❷ a backbone network (the main architecture) and ❸ an output layer. To achieve PI and PE, we follow a minimal-modification principle and propose a light-yet-efficient agent permutation framework, in which we only modify the input and output layers while keeping the backbone unchanged. Our method can thus be easily plugged into existing MARL methods. The core idea is that, instead of using shared embedding layers, we build non-shared entity-wise PI input and PE output network modules to connect the entity-factored state space and action space in an end-to-end way.
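For concreteness, the shared-embedding design of the second type of method can be sketched as follows. This is an assumed, minimal Deep-Sets-style PI input layer (not the cited papers' code): every entity passes through the same embedding phi, and sum pooling discards the order:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, h = 5, 4, 8
W1 = rng.standard_normal((d, h))
b1 = rng.standard_normal(h)

def phi(x):
    """Shared entity embedding: the SAME weights for every entity."""
    return np.tanh(x @ W1 + b1)

def pi_input_layer(X):
    """X: (m, d) stacked entity features; sum pooling makes the layer PI."""
    return sum(phi(x) for x in X)

X = rng.standard_normal((m, d))
g = np.eye(m)[rng.permutation(m)]
assert np.allclose(pi_input_layer(g @ X), pi_input_layer(X))
```

Because W1 and b1 are shared across all entities, heterogeneous entities (e.g., allies vs. enemies) are forced through the same map, which is the representational bottleneck our non-shared modules are designed to remove.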
Specifically, we propose two novel implementations: a Dynamic Permutation Network (DPN) and a Hyper Policy Network (HPN). To achieve PI, DPN builds a separate module selection network, which consistently selects the same input module for the same input entity, no matter where the entity is arranged, and then merges the outputs of all input modules by sum pooling. Similarly, to achieve PE, it builds a second module selection network, which always assigns the same output module to the same entity-related output. However, one restriction of DPN is that the number of network modules is limited; as a result, the module assigned to each entity may not be the best fit. To relax this restriction and enhance the representational capability, we further propose HPN, which replaces the module selection networks of DPN with hypernetworks that directly generate the network parameters of the corresponding modules (taking each entity's own features as input). Entities with different features are thus processed by modules with entity-specific parameters, which improves the model's representational capability while preserving the PI and PE properties. Extensive evaluations in SMAC, SMACv2, Google Research Football and MPE validate that DPN and HPN can be easily integrated into many existing MARL algorithms (both value-based and policy-based) and significantly boost their learning efficiency and converged performance. Remarkably, we achieve 100% win rates in almost all hard and super-hard scenarios of SMAC.
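The hypernetwork idea behind HPN can be illustrated with a toy sketch. All names and the single-linear-map hypernetworks below are our own illustrative assumptions, not the paper's actual architecture: each entity's features x_i generate that entity's input-module weights W_i (so different entities get different, data-dependent parameters), sum pooling keeps the input layer PI, and an entity-wise generated output head yields one entity-related logit per entity, which is PE by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
m, d, h = 4, 3, 6
Wh = rng.standard_normal((d, d * h))     # input-layer hypernetwork (a toy linear map)

def hyper_input_layer(X):
    """PI input layer: entity-specific weights W_i = hyper(x_i), then sum pool."""
    out = np.zeros(h)
    for x in X:
        W_i = (x @ Wh).reshape(d, h)     # weights generated from the entity itself
        out += x @ W_i                   # entity-wise embed; sum pooling -> PI
    return out

Vh = rng.standard_normal((d + h, d))     # output-head hypernetwork

def hyper_output_layer(X):
    """PE output layer: one logit per entity from entity-generated weights."""
    z = hyper_input_layer(X)             # PI latent state shared by all heads
    return np.array([np.concatenate([x, z]) @ Vh @ x for x in X])

X = rng.standard_normal((m, d))
g = np.eye(m)[rng.permutation(m)]
assert np.allclose(hyper_input_layer(g @ X), hyper_input_layer(X))
assert np.allclose(hyper_output_layer(g @ X), g @ hyper_output_layer(X))
```

Because the weights are a function of each entity's own features rather than a selection from a fixed module pool, reordering entities reorders (input layer) or permutes (output layer) the computation without changing it, while distinct entities still receive distinct parameters.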



¹ For brevity, we use PI/PE as abbreviations of permutation invariance/permutation equivariance (nouns) or permutation-invariant/permutation-equivariant (adjectives), depending on the context.
² A permutation matrix has exactly a single unit value in every row and column and zeros everywhere else.



Figure 1: A motivation example in SMAC.

