ATTENTION-DRIVEN ROBOTIC MANIPULATION

Abstract

Despite the success of reinforcement learning methods, they have yet to have their breakthrough moment when applied to a broad range of robotic manipulation tasks. This is partly because reinforcement learning algorithms are notoriously difficult and time-consuming to train, a problem that is exacerbated when training from images rather than full-state inputs. As humans perform manipulation tasks, our eyes closely monitor every step of the process, with our gaze focusing sequentially on the objects being manipulated. With this in mind, we present our Attention-driven Robotic Manipulation (ARM) algorithm: a general manipulation algorithm that can be applied to a range of real-world sparse-rewarded tasks without any prior task knowledge. ARM splits the complex task of manipulation into a 3-stage pipeline: (1) a Q-attention agent that extracts interesting pixel locations from RGB and point cloud inputs, (2) a next-best pose agent that accepts crops from the Q-attention agent and outputs poses, and (3) a control agent that takes the goal pose and outputs joint actions. We show that current state-of-the-art reinforcement learning algorithms catastrophically fail on a range of RLBench tasks, whilst ARM succeeds within a few hours.

1. INTRODUCTION

Despite their potential, continuous-control reinforcement learning (RL) algorithms have many flaws: they are notoriously data hungry, often fail with sparse rewards, and struggle with long-horizon tasks. Algorithms for both discrete and continuous RL are almost always evaluated on benchmarks that give shaped rewards (Brockman et al., 2016; Tassa et al., 2018), a privilege that is not feasible when training real-world robotic applications across a broad range of tasks. Motivated by the observation that humans focus their gaze close to objects being manipulated (Land et al., 1999), we propose an Attention-driven Robotic Manipulation (ARM) algorithm that consists of a series of algorithm-agnostic components that, when combined, yield a method able to perform a range of challenging, sparsely-rewarded manipulation tasks. Our algorithm operates through a pipeline of modules: our novel Q-attention module first extracts interesting pixel locations from RGB and point cloud inputs by treating images as an environment, and pixel locations as actions. Using these pixel locations, we crop the RGB and point cloud inputs, significantly reducing input size, and feed this to a next-best-pose continuous-control agent that outputs 6D poses, trained with our novel confidence-aware critic. These goal poses are then used by a control algorithm that continuously outputs motor velocities. As is common with sparsely-rewarded tasks, we improve initial exploration through the use of demonstrations. However, rather than simply inserting these directly into the replay buffer, we use a keyframe discovery strategy that chooses interesting keyframes along demonstration trajectories, which is fundamental to training our Q-attention module.
Rather than storing only the transition from an initial state to a keyframe state, we use our demo augmentation method, which also stores the transitions from intermediate points along a trajectory to the keyframe states, thus greatly increasing the proportion of initial demo transitions in the replay buffer. All of these improvements result in an algorithm that starkly outperforms other state-of-the-art methods when evaluated on 10 RLBench (James et al., 2020) tasks (Figure 1) that range in difficulty. To summarise, we propose the following contributions: (1) an off-policy hard attention mechanism that is learned via Q-learning, rather than the on-policy hard attention and soft attention commonly seen in the NLP and vision communities; (2) a confidence-aware Q function that predicts both Q and confidence values for each pixel, leading to improved training stability.
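The three-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `q_net`, `policy`, and `controller` are hypothetical stand-ins for the learned components, and for brevity the sketch attends over the RGB image only, whereas ARM also crops the point cloud.

```python
import numpy as np

def q_attention(rgb, q_net, crop_size=16):
    """Stage 1: treat the image as an 'environment' whose actions are
    pixel coordinates; pick the argmax-Q pixel and crop around it."""
    q_map = q_net(rgb)  # (H, W) Q-value per pixel
    y, x = np.unravel_index(np.argmax(q_map), q_map.shape)
    h = crop_size // 2
    # Pad so the crop stays valid when the chosen pixel is near a border.
    padded = np.pad(rgb, ((h, h), (h, h), (0, 0)), mode="edge")
    crop = padded[y:y + crop_size, x:x + crop_size]
    return (y, x), crop

def next_best_pose(crop, policy):
    """Stage 2: a continuous-control agent maps the small crop to a
    6D goal pose (plus, e.g., a gripper action)."""
    return policy(crop)

def pipeline_step(rgb, q_net, policy, controller):
    """One pass through the 3-stage pipeline."""
    _, crop = q_attention(rgb, q_net)
    pose = next_best_pose(crop, policy)
    # Stage 3: a control agent turns the goal pose into joint actions.
    return controller(pose)
```

Note how the crop drastically shrinks the input seen by the pose agent: the continuous-control stage only ever observes a `crop_size × crop_size` window rather than the full image.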

2. RELATED WORK

The use of reinforcement learning (RL) is prevalent in many areas of robotics, including legged robots (Kohl & Stone, 2004; Hwangbo et al., 2019), aerial vehicles (Sadeghi & Levine, 2017), and manipulation tasks, such as pushing (Finn & Levine, 2017), peg insertion (Levine et al., 2016; Zeng et al., 2018; Lee et al., 2019), throwing (Ghadirzadeh et al., 2017; Zeng et al., 2020), ball-in-cup (Kober & Peters, 2009), cloth manipulation (Matas et al., 2018), and grasping (Kalashnikov et al., 2018; James et al., 2019b). Despite the abundance of work in this area, there has yet to be a general manipulation method that can tackle a range of challenging, sparsely-rewarded tasks without needing access to privileged simulation-only abilities (e.g. resetting to demonstrations (Nair et al., 2018), asymmetric actor-critic (Pinto et al., 2018), reward shaping (Rajeswaran et al., 2018), and auxiliary tasks (James et al., 2017)). Crucial to our method is the proposed Q-attention. Soft and hard attention are prominent methods in both natural language processing (NLP) (Bahdanau et al., 2015; Vaswani et al., 2017; Devlin et al., 2018) and computer vision (Xu et al., 2015; Zhang et al., 2019). Soft attention deterministically multiplies an attention map over the image feature map, whilst hard attention uses the attention map stochastically to sample one or a few features on the feature map (optimised by maximising an approximate variational lower bound or, equivalently, via (on-policy) REINFORCE (Williams, 1992)). Given that we perform non-differentiable cropping, our Q-attention is closest to hard attention, but with the distinction that we learn it in an off-policy way. This is key, as 'traditional' hard attention cannot be used in an off-policy setting. We therefore see Q-attention as an off-policy hard attention. We elaborate further on these differences in Section 4.1.
Our proposed confidence-aware critic (used to train the next-best pose agent) takes its inspiration from the pose estimation community (Wang et al., 2019; Wada et al., 2020). There exists a small amount of work on estimating uncertainty with Q-learning in discrete domains (Clements et al., 2019; Hoel et al., 2020); our work instead uses a continuous Q-function to predict both Q and confidence values for each pixel, which leads to improved stability during training, and is not used during action selection. Our approach makes use of demonstrations, as do a number of prior works (Vecerik et al., 2017; Matas et al., 2018; Kalashnikov et al., 2018; Nair et al., 2018); while successful, these methods make limited use of the demonstrations and can still take many samples to converge. Rather than simply inserting demonstrations directly into the replay buffer, we instead make use of our keyframe discovery and demo augmentation to maximise demonstration utility.
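The keyframe discovery and demo augmentation ideas can be illustrated with a short sketch. The keyframe heuristic used here (a gripper-state change) is an assumption chosen for illustration; the text above only says keyframes are "interesting" points along a demonstration, and the exact heuristic is described later in the paper.

```python
def discover_keyframes(demo, is_keyframe):
    """Keyframe discovery: return indices of 'interesting' frames
    along a demonstration, as judged by a supplied heuristic."""
    return [t for t, obs in enumerate(demo) if is_keyframe(obs, t)]

def augmented_transitions(demo, keyframes, stride=1):
    """Demo augmentation: instead of storing only (start -> keyframe),
    also store transitions from every intermediate frame to its next
    keyframe, boosting the share of demo transitions in the buffer."""
    transitions = []
    start = 0
    for k in keyframes:
        for t in range(start, k, stride):
            transitions.append((demo[t], demo[k]))
        start = k
    return transitions
```

For a 100-step demonstration with a handful of keyframes, this yields on the order of 100 demo transitions rather than a handful, which is what "greatly increasing the proportion of initial demo transitions in the replay buffer" refers to.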



Figure 1: The 10 RLBench tasks used for evaluation. Current state-of-the-art reinforcement learning algorithms catastrophically fail on all tasks, whilst our method succeeds within a modest number of steps. Note that object positions are randomised at the beginning of each episode.

