MIND THE GAP: OFFLINE POLICY OPTIMIZATION FOR IMPERFECT REWARDS

Abstract

The reward function is essential in reinforcement learning (RL), serving as the guiding signal that incentivizes agents to solve given tasks; it is, however, notoriously difficult to design. In many cases, only imperfect rewards are available, which inflicts substantial performance loss on RL agents. In this study, we propose a unified offline policy optimization approach, RGM (Reward Gap Minimization), which can smartly handle diverse types of imperfect rewards. RGM is formulated as a bi-level optimization problem: the upper layer optimizes a reward correction term that performs visitation distribution matching w.r.t. some expert data; the lower layer solves a pessimistic RL problem with the corrected rewards. By exploiting the duality of the lower layer, we derive a tractable algorithm that enables sample-based learning without any online interactions. Comprehensive experiments demonstrate that RGM achieves superior performance to existing methods under diverse settings of imperfect rewards. Furthermore, RGM can effectively correct rewards that are wrong or inconsistent with expert preferences and retrieve useful information from biased rewards.

1. INTRODUCTION

Reward plays an imperative role in every reinforcement learning (RL) problem. It encodes the desired task behaviors, serving as a guiding signal that incentivizes agents to learn and solve a given task. As widely recognized in RL studies, a desirable reward function should not only define the task the agent learns to solve, but also offer the "bread crumbs" that allow the agent to learn to solve it efficiently (Abel et al., 2021; Singh et al., 2009; Sorg, 2011). However, due to task complexity and human cognitive biases (Hadfield-Menell et al., 2017), accurately describing a complex task using numerical rewards is often difficult or impossible (Abel et al., 2021; Li et al., 2019). In most practical settings, rewards are typically "imperfect" and hard to fix through reward tuning when online interactions are costly or dangerous (Zhan et al., 2022). Such imperfect rewards are widespread in real-world applications and can appear as partially correct rewards, sparse rewards, mismatched rewards from other tasks, or completely incorrect rewards (see Figure 1 for an intuitive illustration). These rewards either fail to incentivize agents to learn correct behaviors or cannot provide effective signals to speed up the learning process. Consequently, it is of great importance and practical value to devise a versatile method that can perform robust offline policy optimization under diverse settings of imperfect rewards. Reward shaping (Ng et al., 1999) is the most common approach to tackling imperfect rewards, but it requires tremendous human effort and numerous online evaluations. Another possible avenue is imitation learning (IL) (Pomerleau, 1988; Kostrikov et al., 2019) or offline inverse reinforcement learning (IRL) (Jarboui & Perchet, 2021), which directly imitates expert behaviors or derives new rewards from them.
However, these methods depend heavily on the quantity and quality of expert demonstrations and offline datasets, which are often beyond reach in practice. Another key challenge is how to precisely measure the discrepancy between the reward given in the data and the true reward of the task: in the offline setting, evaluating the learned policy's behavior under a specific reward function through environment interactions is impossible, let alone revising the reward accordingly. In this paper, we investigate the challenge of learning effective offline RL policies under imperfect rewards, when environment interactions are not possible. We first formally define the relative gap between the given and perfect rewards based on state-action visitation distribution matching (referred to as the reward gap), and cast the problem as a bi-level optimization problem. In the upper layer, the imperfect rewards are adjusted by a reward correction term, which is learned by minimizing the reward gap toward expert behaviors. In the lower layer, we solve a pessimistic RL problem to obtain the optimized policy under the corrected rewards. By exploiting Lagrangian duality of the lower-level problem, the overall optimization procedure can be solved tractably in a fully offline manner without any online interactions. We call this approach Reward Gap Minimization (RGM). Compared to existing methods, RGM can: 1) evaluate and minimize the reward gap without any online interactions; 2) eliminate the strong dependency on human effort and numerous expert demonstrations; and 3) handle diverse types of reward settings (e.g., perfect, partially correct, sparse, multi-task data sharing, incorrect) in a unified framework for reliable offline policy optimization.
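Schematically, the bi-level structure described above can be sketched as follows. The notation is illustrative rather than the paper's exact formulation: $d^{\pi}$ and $d^{E}$ denote the state-action visitation distributions of the learned policy and the expert, $d^{D}$ that of the offline dataset, and $D_f$ a divergence between distributions.

```latex
\begin{align*}
\min_{\Delta r}\quad & D_f\big(d^{\pi^*_{\Delta r}} \,\|\, d^{E}\big)
  && \text{(upper layer: reward gap w.r.t.\ expert visitations)} \\
\text{s.t.}\quad & \pi^*_{\Delta r} \in \arg\max_{\pi}\;
  \mathbb{E}_{(s,a)\sim d^{\pi}}\big[\tilde r(s,a) + \Delta r(s,a)\big]
  - \alpha\, D_f\big(d^{\pi} \,\|\, d^{D}\big)
  && \text{(lower layer: pessimistic RL)}
\end{align*}
```

Here $\tilde r$ is the given imperfect reward, $\Delta r$ the learned correction term, and $\alpha > 0$ a pessimism coefficient penalizing deviation from the dataset distribution; dualizing the lower-level problem is what permits fully offline, sample-based learning.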
Through extensive experiments on D4RL datasets (Fu et al., 2020), sparse reward tasks, multi-task data sharing tasks, and a discrete-space navigation task, we demonstrate that RGM achieves superior performance across diverse settings of imperfect rewards. Furthermore, we show that RGM effectively corrects rewards that are wrong or inconsistent with expert preferences and retrieves useful information from biased rewards, making it an ideal tool for practical applications where reward functions are difficult to design.

2. RELATED WORK ON DIFFERENT REWARD SETTINGS

Here we briefly summarize relevant methodological approaches that handle different types of rewards.

Perfect rewards. Directly applying offline RL algorithms is a natural choice when rewards are assumed to be perfect for the given task (Fujimoto et al., 2019; Kumar et al., 2019; 2020; Xu et al., 2021; Fujimoto & Gu, 2021; Kostrikov et al., 2021a;b; Xu et al., 2022a; Li et al., 2023; Lee et al., 2021; Bai et al., 2021; Xu et al., 2023). However, specifying perfect rewards requires a deep understanding of the task and domain expertise. Even given perfect rewards, some offline RL methods still need to shift the rewards to achieve the best performance (Kostrikov et al., 2021a; Kumar et al., 2020), which has been shown to be equivalent to engineering the initialization of the Q-function estimate to encourage conservative exploitation under offline learning (Sun et al., 2022).

Partially correct rewards. Reward shaping is the most common approach to handling partially correct rewards, modifying the original reward function to incorporate task-specific domain knowledge (Dorigo & Colombetti, 1994; Randløv & Alstrøm, 1998; Ng et al., 1999; Marom & Rosman, 2018; Wu & Lin, 2018). However, these approaches follow a trial-and-error paradigm and require tremendous human effort. Recent approaches such as population-based methods (Jaderberg et al., 2019), the optimal reward framework (Chentanez et al., 2004; Sorg et al., 2010; Zheng et al., 2018), and automatic reward shaping (Hu et al., 2020; Devidze et al., 2021; Marthi, 2007) can shape the rewards automatically when online interaction is allowed. However, to the best of the authors' knowledge, no reward shaping or correction mechanism exists for offline policy optimization; under offline settings, researchers have to discard the given imperfect rewards and resort to other stopgaps such as offline IL.

Sparse rewards. Sparse rewards can be seen as a special case of partially correct rewards.
The key challenge of offline policy optimization for sparse rewards is how to effectively back-propagate the sparse signals to stitch up suboptimal trajectories (Levine et al., 2020) . Recent works (Kostrikov et al., 2021b; Kumar et al., 2020) use reward shaping to densify the sparse rewards for better performance.
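As a concrete illustration of the shaping idea mentioned above, the classic potential-based form (Ng et al., 1999) adds $F(s, a, s') = \gamma\Phi(s') - \Phi(s)$ to the reward, densifying a sparse signal while leaving the optimal policy unchanged. The sketch below is a minimal toy example; the chain environment, goal position, and potential function are our own illustrative choices, not those used by the cited methods.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    """Potential-based reward shaping (Ng et al., 1999):
    r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s).
    The added term F = gamma * Phi(s') - Phi(s) preserves the optimal
    policy while providing a dense learning signal."""
    return r + gamma * potential(s_next) - potential(s)

# Toy example: a 1-D chain with the goal at position 10; the original
# reward is sparse (nonzero only at the goal). A negative-distance
# potential supplies dense progress feedback on every transition.
goal = 10.0
potential = lambda s: -abs(goal - s)

sparse_r = 0.0  # no reward for this intermediate transition
dense_r = shaped_reward(sparse_r, s=3.0, s_next=4.0,
                        potential=potential, gamma=1.0)
# With gamma = 1, the shaping term is Phi(4) - Phi(3) = (-6) - (-7) = 1,
# so moving toward the goal now yields an immediate reward of 1.0.
```

Note that such hand-crafted potentials still encode domain knowledge about the task; the appeal of an offline correction mechanism is precisely that it avoids this manual design step.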



Figure 1: Diverse settings of imperfect rewards.

Availability: //github.com/Facebear

