PROGRAMMATICALLY GROUNDED, COMPOSITIONALLY GENERALIZABLE ROBOTIC MANIPULATION

Abstract

Robots operating in the real world require both rich manipulation skills and the ability to semantically reason about when to apply those skills. Towards this goal, recent works have integrated semantic representations from large-scale pretrained vision-language (VL) models into manipulation models, endowing them with more general reasoning capabilities. However, we show that the conventional pretraining-finetuning pipeline for integrating such representations entangles the learning of domain-specific action information and domain-general visual information, leading to less data-efficient training and poor generalization to unseen objects and tasks. To address this, we propose PROGRAMPORT, a modular approach that better leverages pretrained VL models by exploiting the syntactic and semantic structures of language instructions. Our framework uses a semantic parser to recover an executable program, composed of functional modules grounded in vision and action across different modalities. Each functional module is realized as a combination of deterministic computation and learnable neural networks. Program execution produces parameters for general manipulation primitives executed by a robotic end-effector. The entire modular network can be trained with end-to-end imitation learning objectives. Experiments show that our model successfully disentangles action and perception, translating to improved zero-shot and compositional generalization across a variety of manipulation behaviors.

1. INTRODUCTION

Robotic manipulation models that map directly from raw pixels to actions are capable of learning diverse and complex behaviors through imitation. To enable more abstract goal specification, many such models also take natural language instructions as input. However, this vision-language manipulation setting introduces a new problem: the agent must jointly learn to ground language tokens in its perceptual inputs, and associate this grounded understanding with the desired actions. Moreover, to fully leverage the flexibility of language, the agent must handle novel vocabulary and compositions not explicitly seen during training, but specified at test time (Fig. 1).
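To make the parse-then-execute idea concrete, the following is a minimal, purely illustrative sketch, not the authors' implementation: an instruction is treated as a nested program of functional modules whose execution against a toy scene produces parameters (here, a pick location) for a motor primitive. All names (`Scene`, `filter_color`, `locate`, `execute`) and the tuple-based program encoding are hypothetical simplifications; in the real system the grounding modules would be learned networks operating on pixels.

```python
# Hypothetical sketch of neuro-symbolic program execution for manipulation.
# A parsed instruction like "pick up the red block" becomes a nested program
# ("locate", ("filter_color", "red")) executed against a symbolic scene.

from dataclasses import dataclass


@dataclass
class Scene:
    # Toy "perception": each object is (name, color, (x, y)).
    objects: list


def filter_color(scene, color):
    """Deterministic module: keep objects matching a color attribute."""
    return Scene([o for o in scene.objects if o[1] == color])


def locate(scene):
    """Grounding module: return the position of the single remaining object.
    In a learned system this would be an attention map over image pixels."""
    assert len(scene.objects) == 1, "expected exactly one object after filtering"
    return scene.objects[0][2]


def execute(program, scene):
    """Recursively execute a nested (op, *args) program against the scene."""
    op, *args = program
    if op == "filter_color":
        return filter_color(scene, args[0])
    if op == "locate":
        return locate(execute(args[0], scene))
    raise ValueError(f"unknown module: {op}")


scene = Scene([("block", "red", (3, 4)), ("bowl", "blue", (7, 1))])
# "pick up the red block" -> locate(filter_color(scene, "red"))
pick_xy = execute(("locate", ("filter_color", "red")), scene)
print(pick_xy)  # (3, 4)
```

The pick location produced by `execute` would then parameterize a generic pick-and-place primitive; swapping "red" for an unseen color only changes a module argument, which is the source of the compositional generalization the paper targets.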



Figure 1: Zero-Shot and Compositional Generalization: Our framework, PROGRAMPORT, is capable of generalizing to combinations of unseen objects and manipulation behaviors at test time.

