LEARNING TO GENERATE ALL FEASIBLE ACTIONS

Abstract

Several machine learning (ML) applications are characterized by searching for an optimal solution to a complex task. The search space for this solution is often so large that computing the optimum is intractable. Part of the problem is that many candidate solutions found via ML are actually infeasible and have to be discarded. Restricting the search space to only the feasible candidates simplifies the search, and the set of feasible solutions can be reused across problems characterized by different tasks. In particular, we observe that complex tasks can be decomposed into subtasks with corresponding skills. We propose to learn a reusable and transferable skill by training an actor to generate all feasible actions. The trained actor can then propose feasible actions, among which an optimal one can be chosen according to a specific task. The actor is trained by interpreting the feasibility of each action as a target distribution, and the training procedure minimizes a divergence between the actor's output distribution and this target. We derive the general optimization target for arbitrary f-divergences using a combination of kernel density estimates, resampling, and importance sampling, and we further utilize an auxiliary critic to reduce the number of interactions with the environment. A preliminary comparison to related strategies shows that our approach learns to visit all the modes in the feasible action space, demonstrating the framework's potential for learning skills that can be used in various downstream tasks.

1. INTRODUCTION

Complex tasks can often be decomposed into multiple subtasks, with corresponding skills that solve these subtasks. Learning reusable and transferable skills is an active area of research (Kalashnikov et al., 2021; Chebotar et al., 2021; Deisenroth et al., 2014). However, given a subtask, learning or even defining the corresponding skill is not straightforward. Consider a robotic scenario where a robot is tasked to grasp an object and handle it in downstream tasks. Different downstream tasks can have different optimal grasps if the object has multiple feasible grasping poses. A grasping skill can therefore not be trained based on the optimality definitions of individual tasks. However, a grasping algorithm that has learned all feasible grasps can support all possible downstream tasks even without explicit knowledge thereof during training: each downstream task selects its respective optimal grasp among the proposed feasible options. We therefore consider a skill to be defined by the set of all feasible actions of a subtask.

We propose a novel method to train a generative neural network to generate all feasible actions of a subtask by interacting with an environment. The interaction loop is adopted from Contextual Bandits (CB) (Langford et al., 2008) and Reinforcement Learning (RL) (Sutton & Barto, 2018): the environment presents a state for which the actor selects an action; the action is tested in the environment, yielding either a success or a failure outcome. As in CB, we limit ourselves to one-step interactions, as opposed to the sequential multi-step interactions common in RL. However, we do not minimize regret, as is typically done in CB; instead, we optimize the final policy, as in RL. Unlike both CB and RL, the approach does not aim to find one optimal solution for a given problem but aims to learn all feasible ones. A schematic code sketch of this loop is given below.

By interpreting the feasibility of each action given a state as a posterior probability distribution over the actions, a target probability density function (pdf) is defined. The actor is trained to minimize a divergence of its output distribution to this target pdf. The proposed training algorithm can be used with any f-divergence, including the Reverse Kullback-Leibler (RKL), Forward Kullback-Leibler (FKL), and Jensen-Shannon (JS) divergences. The possibility to use FKL and JS is instrumental for learning to cover all the modes of the feasible action space.
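As an illustrative sketch of the optimization target (the notation here is provisional; the formal derivation follows later in the paper), let q_\theta(a \mid s) denote the actor's output distribution and p(a \mid s) the target pdf induced by the feasibility labels. For any convex generator f with f(1) = 0, the actor parameters \theta are trained to minimize

    D_f(p \,\|\, q_\theta) = \int q_\theta(a \mid s) \, f\!\left( \frac{p(a \mid s)}{q_\theta(a \mid s)} \right) \mathrm{d}a .

The divergences named above follow from the standard generator choices: f(t) = t \log t recovers FKL, f(t) = -\log t recovers RKL, and f(t) = t \log t - (t + 1) \log \frac{t + 1}{2} recovers twice the JS divergence.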
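The one-step interaction loop referenced above can be sketched as follows. The class and method names and the toy feasibility rule are illustrative placeholders, not the actual experimental setup; the sketch only shows how binary feasibility labels are collected from one-step interactions.

import numpy as np

class Environment:
    def sample_state(self):
        # Present a state (the context of the subtask).
        return np.random.uniform(-1.0, 1.0, size=2)

    def evaluate(self, state, action):
        # Test an action; return success (1.0) or failure (0.0).
        # Toy rule: feasible iff the action lies close to the state.
        return float(np.linalg.norm(action - state) < 0.5)

class Actor:
    def propose(self, state, noise):
        # Generative actor: map state and latent noise to an action
        # (stand-in for a neural network).
        return state + 0.1 * noise

def collect_batch(env, actor, batch_size=64):
    # One-step interactions only: no multi-step rollouts and no regret
    # minimization. The binary feasibility labels gathered here define
    # the target distribution that the actor is trained towards.
    batch = []
    for _ in range(batch_size):
        state = env.sample_state()
        noise = np.random.normal(size=2)
        action = actor.propose(state, noise)
        feasible = env.evaluate(state, action)
        batch.append((state, action, feasible))
    return batch

batch = collect_batch(Environment(), Actor())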

