CAUSALWORLD: A ROBOTIC MANIPULATION BENCHMARK FOR CAUSAL STRUCTURE AND TRANSFER LEARNING

Abstract

Despite recent successes of reinforcement learning (RL), it remains a challenge for agents to transfer learned skills to related environments. To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer. Tasks consist of constructing 3D shapes from a set of blocks, inspired by how children learn to build complex structures. The key strength of CausalWorld is that it provides a combinatorial family of such tasks with common causal structure and underlying factors (including, e.g., robot and object masses, colors, sizes). The user (or the agent) may intervene on all causal variables, which allows for fine-grained control over how similar different tasks (or task distributions) are. One can thus easily define training and evaluation distributions of a desired difficulty level, targeting a specific form of generalization (e.g., only changes in appearance or object mass). Further, this common parametrization facilitates defining curricula by interpolating between an initial and a target task. While users may define their own task distributions, we present eight meaningful distributions as concrete benchmarks, ranging from simple to very challenging, all of which require long-horizon planning as well as precise low-level motor control. Finally, we provide baseline results for a subset of these tasks on distinct training curricula and corresponding evaluation protocols, verifying the feasibility of the tasks in this benchmark.

1. INTRODUCTION

Benchmarks have played a crucial role in advancing entire research fields, for instance computer vision with the introduction of CIFAR-10 and ImageNet (Krizhevsky et al., 2009; 2012). In the field of reinforcement learning (RL), similar breakthroughs have been achieved in domains such as game playing (Mnih et al., 2013; Silver et al., 2017), learning motor control for high-dimensional simulated robots (Akkaya et al., 2019), multi-agent settings (Baker et al., 2019; Berner et al., 2019), and transfer in the context of meta-learning (Yu et al., 2019). Nevertheless, trained agents often fail to transfer learned skills from a training environment to a different but related environment that shares part of the underlying structure. This can be attributed to two common practices: agents are often evaluated on the training environments themselves, which leads to overfitting on these narrowly defined environments (Whiteson et al., 2011), and algorithms are compared using highly engineered and biased reward functions, which may result in policies that are suboptimal with respect to the desired behaviour; this is particularly evident in robotics. In existing benchmarks (Yu et al., 2019; Goyal et al., 2019a; Cobbe et al., 2018; Bellemare et al., 2013; James et al., 2020), the amount of causal structure shared between the different environments is mostly unknown. For instance, in the Atari games of the Arcade Learning Environment, it is unclear how to quantify the underlying similarities between different games, so we generally do not know to what degree an agent can be expected to generalize. To overcome these limitations, we introduce CausalWorld, a novel benchmark in a robotic manipulation environment. It features a diverse set of environments that, in contrast to previous designs, share a large set of parameters and parts of the causal structure.
Being able to intervene on these parameters (individually or collectively) permits the experimenter to evaluate agents' generalization abilities with respect to different types and magnitudes of changes in the environment. These parameters can be varied gradually, which yields a continuum of similar environments and allows for fine-grained control of training and test distributions as well as the design of learning curricula.

A remarkable skill that humans master early in life is building complex structures using spatial-reasoning and dexterous manipulation abilities (Casey et al., 2008; Caldera et al., 1999; Kamii et al., 2004). Playing with toy blocks constitutes a natural environment for children to develop important visual-spatial skills, helping them 'generalize' in building complex composition designs from presented or imagined goal structures (Verdine et al., 2017; Nath & Szücs, 2014; Dewar, 2018; Richardson et al., 2014). Inspired by this, CausalWorld is designed to aid in learning and investigating these skills in a simulated robotic manipulation environment corresponding to the open-source TriFinger robot platform of Wüthrich et al. (2020), which can be built in the real world. Tasks are formulated as building 3D goal shapes by manipulating a set of available blocks, as seen in Fig. 1. This yields a diverse family of tasks, ranging from relatively simple (e.g., pushing a single object) to extremely hard (e.g., building a complex structure from a large number of objects). CausalWorld improves upon previous benchmarks by exposing a large set of parameters in the causal generative model of the environments, such as the weight, shape, and appearance of the building blocks and of the robot itself. The possibility of intervening on any of these properties at any point in time allows one to set up training curricula or to evaluate an agent's generalization capability with respect to different parameters.
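The intervention mechanism described above can be sketched in a few lines of Python. This is a minimal, self-contained illustration of the idea of exposing causal variables and performing do()-style interventions on them; the class, variable names, and value ranges below are assumptions for illustration, not the actual CausalWorld API.

```python
import random


class ParametrizedEnv:
    """Toy environment whose generative factors are exposed as a dict.

    Illustrative sketch only: variable names and defaults are assumed,
    not taken from CausalWorld.
    """

    def __init__(self):
        # Exposed causal variables with (assumed) default values.
        self.variables = {
            'floor_color': 'grey',
            'block_size': 0.065,
            'block_mass': 0.08,
        }

    def do_intervention(self, **changes):
        """Fix variables to given values, mimicking a do() intervention."""
        for name, value in changes.items():
            if name not in self.variables:
                raise KeyError(f'unknown variable: {name}')
            self.variables[name] = value

    def sample_task(self, rng, intervention_space):
        """Sample a task by intervening uniformly within given ranges."""
        for name, (low, high) in intervention_space.items():
            self.do_intervention(**{name: rng.uniform(low, high)})
        return dict(self.variables)


rng = random.Random(0)
env = ParametrizedEnv()
env.do_intervention(floor_color='white', block_size=0.065)

# A training distribution that only varies block mass, leaving all
# other factors fixed -- the kind of targeted generalization test
# described in the text.
train_space = {'block_mass': (0.05, 0.10)}
task = env.sample_task(rng, train_space)
```

Because every environment in such a family is a point in the same parameter space, "how similar two tasks are" reduces to how far apart their variable settings lie, which is what makes fine-grained evaluation protocols possible.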
Furthermore, in contrast to previous benchmarks (Chevalier-Boisvert et al., 2018; Cobbe et al., 2018), researchers may build the real-world counterpart of this simulated platform at low cost, as detailed in Wüthrich et al. (2020), and transfer their trained policies to the real world.
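The curriculum idea mentioned above (interpolating between an initial and a target task in the shared parameter space) can be sketched as follows. The function and the example variable settings are illustrative assumptions, not part of the CausalWorld API; it assumes purely numeric variables and at least two stages.

```python
def interpolate_tasks(initial, target, num_stages):
    """Yield variable settings blending linearly from initial to target.

    `initial` and `target` map variable names to numeric values and
    share the same keys; `num_stages` must be >= 2.
    """
    for stage in range(num_stages):
        alpha = stage / (num_stages - 1)  # 0.0 at the start, 1.0 at the end
        yield {
            name: (1 - alpha) * initial[name] + alpha * target[name]
            for name in initial
        }


# Hypothetical easy and hard settings of two exposed variables.
easy = {'block_size': 0.10, 'block_mass': 0.05}
hard = {'block_size': 0.04, 'block_mass': 0.20}
curriculum = list(interpolate_tasks(easy, hard, num_stages=5))
# First stage matches `easy`, last stage matches `hard`.
```

Training then proceeds stage by stage, applying each intermediate setting as an intervention before continuing, so the agent reaches the target task through a sequence of gradually harder neighbours.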



Figure 1: Example of do-interventions on exposed variables in CausalWorld.

Figure 2: Example tasks from the task generators provided in the benchmark. The goal shape is visualized in opaque red and the blocks in blue.

