SAMPLE COMPLEXITY OF NONPARAMETRIC OFF-POLICY EVALUATION ON LOW-DIMENSIONAL MANIFOLDS USING DEEP NETWORKS

Abstract

We consider the off-policy evaluation problem of reinforcement learning using deep convolutional neural networks. We analyze the deep fitted Q-evaluation method for estimating the expected cumulative reward of a target policy, when the data are generated by an unknown behavior policy. We show that, by choosing the network size appropriately, one can leverage any low-dimensional manifold structure in the Markov decision process and obtain a sample-efficient estimator that does not suffer from the curse of the data's high ambient dimensionality. Specifically, we establish a sharp error bound for fitted Q-evaluation that depends on the intrinsic dimension of the state-action space, the smoothness of the Bellman operator, and a function-class-restricted χ²-divergence. Notably, the restricted χ²-divergence measures the mismatch between the behavior and target policies in the function space, which can be small even when the two policies are far apart in their tabular forms. We also develop a novel approximation result for convolutional neural networks in Q-function estimation. Numerical experiments are provided to support our theoretical analysis.

1. INTRODUCTION

Off-policy Reinforcement Learning (RL) [38, 40] is an important area in decision-making applications where data cannot be acquired under arbitrary policies. For example, in clinical decision-making problems, experimenting with new treatment policies on patients is risky and may raise ethical concerns. Therefore, we are only allowed to generate data using certain policies (or sampling distributions) that have been approved by medical professionals. These so-called "behavior policies" are unknown, and they induce distribution shift and insufficient coverage of the problem space. In general, the goal is to design algorithms that need as little data as possible to attain a desired accuracy.

A crucial problem in off-policy RL is policy evaluation. The goal of Off-Policy Evaluation (OPE) is to estimate the value of a new target policy from experience data generated by existing behavior policies. Due to the mismatch between the behavior and target policies, the off-policy setting differs fundamentally from the on-policy one, in which the policy value can be estimated easily via Monte Carlo. A popular algorithm for OPE is the fitted Q-evaluation method (FQE), an off-policy variant of fitted Q-iteration [28, 15, 75]. FQE iteratively estimates Q-functions by supervised regression with various function approximators, e.g., linear function approximation, and has achieved great empirical success [65, 20, 21], especially in large-scale Markov decision problems. Complementary to these empirical studies, several works theoretically justify the success of FQE. Under linear function approximation, [31] show that FQE is asymptotically efficient, [15] further provide a minimax-optimal non-asymptotic bound, and [47] give a variance-aware characterization of the distribution shift via a weighted variant of FQE. [75] analyze FQE with realizable, general differentiable function approximation.
[37, 64] tackle OPE with even more general function approximation, but they require stronger assumptions such as full data coverage. [16] focus on on-policy estimation and study a kernel least-squares temporal difference estimator. Recently, deploying neural networks in FQE has achieved great empirical success, largely owing to networks' superior flexibility in modeling high-dimensional complex environments.
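To make the FQE procedure discussed above concrete, the following is a minimal sketch of fitted Q-evaluation with linear function approximation. It is not the deep-network method analyzed in this paper; the function names, the deterministic target policy, and the feature map `phi` are illustrative assumptions chosen for brevity. Each iteration forms Bellman regression targets under the *target* policy from transitions collected by the behavior policy, then refits the Q-function by least squares.

```python
import numpy as np

def fitted_q_evaluation(transitions, target_policy, phi, gamma=0.9, n_iters=50):
    """Illustrative FQE sketch (linear function approximation).

    transitions   : list of (s, a, r, s_next) tuples from the behavior policy
    target_policy : deterministic map s -> a (simplifying assumption)
    phi           : feature map phi(s, a) -> 1-D numpy array
    Returns a weight vector w with Q(s, a) ~= phi(s, a) @ w.
    """
    X = np.array([phi(s, a) for (s, a, _, _) in transitions])
    r = np.array([t[2] for t in transitions])
    # Next-state features under the TARGET policy: this is where the
    # off-policy mismatch enters, since the data came from the behavior policy.
    X_next = np.array([phi(s2, target_policy(s2))
                       for (_, _, _, s2) in transitions])
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        y = r + gamma * (X_next @ w)               # Bellman targets under pi
        w, *_ = np.linalg.lstsq(X, y, rcond=None)  # supervised regression step
    return w
```

With tabular (one-hot) features and full coverage of the state-action pairs, the iteration contracts to the fixed point of the Bellman evaluation operator, mirroring the role the regression step plays in the analysis above.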

