THRESHOLDED LEXICOGRAPHIC ORDERED MULTI-OBJECTIVE REINFORCEMENT LEARNING

Abstract

Lexicographic multi-objective problems, which impose a lexicographic importance order over the objectives, arise in many real-life scenarios. Existing Reinforcement Learning work directly addressing lexicographic tasks has been scarce. The few proposed approaches are heuristics without theoretical guarantees, as the Bellman equation does not apply to them. In practice, these prior approaches also suffer from various issues, such as failing to reach the goal state. While some of these issues were known before, in this work we investigate further shortcomings and propose fixes that improve practical performance in many cases. We also present a policy optimization approach using our Lexicographic Projection Optimization (LPO) algorithm that has the potential to address these theoretical and practical concerns. Finally, we demonstrate our proposed algorithms on benchmark problems.

1. INTRODUCTION

The need for multi-objective reinforcement learning (MORL) arises in many real-life scenarios, and the setting cannot, in general, be reduced to a single-objective reinforcement learning task (Vamplew et al., 2022). However, solving multiple objectives requires overcoming certain inherent difficulties. In order to compare candidate solutions, we need to incorporate the given user preferences with respect to the different objectives. Otherwise, we are left with a set of Pareto optimal, or non-inferior, solutions in which no solution is better than another in terms of all objectives. Various methods of specifying user preferences have been proposed and evaluated along three main fronts: (a) expressive power, (b) ease of writing, and (c) the availability of methods for solving problems with such preferences. For example, a preference specification that results in a partial order of solutions instead of a total order is easier for the user to write but may not be enough to describe a unique preference. Three main motivating scenarios, differing in when the user preference becomes available or is used, have been studied in the literature: (1) the user preference is known beforehand and is incorporated into the problem a priori; (2) the user preference is used a posteriori, i.e., a set of representative Pareto optimal solutions is generated first and the user preference is then specified over it; (3) an interactive setting where the user preference is specified gradually during the search and the search is guided accordingly.

The most common specification method for the a priori scenario is linear scalarization, which requires the designer to assign weights to the objectives and take a weighted sum of the objectives, thus making solutions comparable (Feinberg & Shwartz, 1994). The main benefit of this technique is that it allows the use of many standard off-the-shelf algorithms, as it preserves the additivity of the reward functions. However, expressing user preference with this technique requires significant domain knowledge and preliminary work in most scenarios (Li & Czarnecki, 2019). While it can be the preferred method when the objectives can be expressed in comparable quantities, e.g., when all objectives have a monetary value, this is rarely the case. Usually, the objectives are expressed in incomparable quantities like money, time, and carbon emissions. Additionally, approximating a composite utility over the objectives with linear scalarization limits us to a subset of the Pareto optimal set.

To address these drawbacks of linear scalarization, several other approaches have been proposed and studied. Nonlinear scalarization methods such as Chebyshev scalarization (Perny & Weng, 2010) are more expressive and can capture all of the solutions in the Pareto optimal set; however, they do not address the user-friendliness requirement. In this paper, we will focus on an alternative specification method that overcomes both limitations of linear scalarization, named Thresholded Lexicographic Ordering
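To make the contrast between the two specification styles concrete, the short Python sketch below orders a few hypothetical return vectors first by a weighted sum and then by a thresholded lexicographic rule. The clipping-based comparison is one common formulation of thresholded lexicographic ordering from the literature, not necessarily the exact formulation developed in this paper, and the policy names, weights, and thresholds are illustrative assumptions only.

import numpy as np

# Return vectors for three candidate policies, in incomparable units:
# (negative monetary cost, negative travel time, negative CO2 emissions).
# All numbers are illustrative, not taken from the paper.
candidates = {
    "policy_A": np.array([-10.0, -30.0, -5.0]),
    "policy_B": np.array([-12.0, -20.0, -6.0]),
    "policy_C": np.array([-8.0, -45.0, -4.0]),
}

def linear_scalarize(returns, weights):
    # A priori linear scalarization: a weighted sum collapses the vector into
    # a single scalar, so standard single-objective RL machinery applies, but
    # the designer must choose weights that trade dollars for minutes, etc.
    return float(np.dot(weights, returns))

def tlo_key(returns, thresholds):
    # One common formulation of thresholded lexicographic ordering: every
    # objective except the last is clipped at its threshold (values beyond the
    # threshold count as "good enough"), and the clipped vector is compared
    # lexicographically in importance order.
    clipped = [min(r, t) for r, t in zip(returns[:-1], thresholds)]
    return tuple(clipped) + (returns[-1],)

weights = np.array([0.5, 0.3, 0.2])   # requires comparable units across objectives
thresholds = [-11.0, -40.0]           # "cost at most 11, time at most 40"

best_linear = max(candidates, key=lambda k: linear_scalarize(candidates[k], weights))
best_tlo = max(candidates, key=lambda k: tlo_key(candidates[k], thresholds))
print(best_linear, best_tlo)          # the two orderings need not agree

With these illustrative numbers, the weighted sum prefers policy_B while the thresholded lexicographic rule prefers policy_A, showing how the two preference specifications can rank the same candidates differently.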

