ERL-RE²: EFFICIENT EVOLUTIONARY REINFORCEMENT LEARNING WITH SHARED STATE REPRESENTATION AND INDIVIDUAL POLICY REPRESENTATION

Abstract

Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EA) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is integrating Deep RL and EA to devise new methods by fusing their complementary advantages. However, existing works on combining Deep RL and EA have two common drawbacks: 1) the RL agent and EA agents learn their policies individually, neglecting efficient sharing of useful common knowledge; 2) parameter-level policy optimization provides no guarantee of semantically meaningful behavior evolution on the EA side. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re²), a novel solution to the aforementioned two drawbacks. The key idea of ERL-Re² is two-scale representation: all EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations. The state representation conveys expressive common features of the environment learned collectively by all the agents; the linear policy representation provides a favorable space for efficient policy optimization, where novel behavior-level crossover and mutation operations can be performed. Moreover, the linear policy representation allows convenient generalization of policy fitness with the help of the Policy-extended Value Function Approximator (PeVFA), further improving the sample efficiency of fitness estimation. Experiments on a range of continuous control tasks show that ERL-Re² consistently outperforms strong baselines and achieves state-of-the-art (SOTA) performance. Our code is available at https://github.com/yeshenpy/ERL-Re2.
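To make the two-scale representation concrete, the following is a minimal, illustrative sketch of the structural idea only: a single nonlinear state representation shared by every agent, with each EA individual and the RL agent holding its own linear policy matrix on top of it. The feature network, dimensions, and `tanh` squashing here are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEATURE_DIM, ACTION_DIM, POP_SIZE = 8, 16, 2, 5

# Shared nonlinear state representation Z(s): one hidden layer here
# for illustration; all EA individuals and the RL agent use the same weights.
W_shared = rng.standard_normal((FEATURE_DIM, STATE_DIM)) * 0.1

def shared_features(state):
    """Nonlinear state features Z(s), learned collectively by all agents."""
    return np.tanh(W_shared @ state)

# Individual linear policy representations: each agent i keeps only a
# matrix W_i, so its policy is a = tanh(W_i @ Z(s)).
population = [rng.standard_normal((ACTION_DIM, FEATURE_DIM)) * 0.1
              for _ in range(POP_SIZE)]

def act(W_i, state):
    """Action of agent i: a linear map applied to the shared features."""
    return np.tanh(W_i @ shared_features(state))

state = rng.standard_normal(STATE_DIM)
actions = [act(W_i, state) for W_i in population]
```

Because each policy is fully described by its small linear matrix `W_i`, crossover and mutation can operate directly on these matrices (or, as the paper proposes, at the behavior level), and a policy-conditioned value function such as PeVFA can take `W_i` as a compact policy embedding.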

1. INTRODUCTION

Reinforcement learning (RL) has achieved many successes in robot control (Yuan et al., 2022), game AI (Hao et al., 2022; 2019), supply chain management (Ni et al., 2021), and other domains (Hao et al., 2020). With function approximators such as deep neural networks, a policy can be learned efficiently by trial and error with reliable gradient updates. However, RL is widely known to be unstable, poor in exploration, and prone to struggle when the gradient signals are noisy and uninformative. By contrast, Evolutionary Algorithms (EA) (Bäck & Schwefel, 1993) are a class of black-box optimization methods that have been demonstrated to be competitive with RL (Such et al., 2017). EA model natural evolution by maintaining a population of individuals and searching for favorable solutions iteratively. In each iteration, individuals with high fitness are selected to produce offspring by inheritance and variation, while those with low fitness are eliminated. Unlike RL, EA are gradient-free and offer several strengths: strong exploration ability, robustness, and stable convergence (Sigaud, 2022). Despite these advantages, one major bottleneck of EA is low sample efficiency due to the iterative evaluation of the population. This issue becomes more severe when the policy space is large (Sigaud, 2022).

Since EA and RL have distinct and complementary advantages, a natural idea is to combine these two heterogeneous policy optimization approaches into better policy optimization algorithms. Many efforts have been made in this direction in recent years (Khadka & Tumer, 2018; Khadka et al., 2019; Bodnar et al., 2020; Wang et al., 2022; Shen et al., 2020). One representative work is ERL (Khadka & Tumer, 2018), which combines the Genetic Algorithm (GA) (Mitchell, 1998) and DDPG (Lillicrap et al., 2016). ERL maintains both an evolutionary population and an RL agent. The population and the RL agent interact with each other in a coherent framework: the RL agent

