SAMPLE COMPLEXITY OF NONPARAMETRIC OFF-POLICY EVALUATION ON LOW-DIMENSIONAL MANIFOLDS USING DEEP NETWORKS

Abstract

We consider the off-policy evaluation problem of reinforcement learning using deep convolutional neural networks. We analyze the deep fitted Q-evaluation method for estimating the expected cumulative reward of a target policy, when the data are generated from an unknown behavior policy. We show that, by choosing the network size appropriately, one can leverage any low-dimensional manifold structure in the Markov decision process and obtain a sample-efficient estimator without suffering from the curse of high ambient dimensionality of the data. Specifically, we establish a sharp error bound for fitted Q-evaluation that depends on the intrinsic dimension of the state-action space, the smoothness of the Bellman operator, and a function class-restricted $\chi^2$-divergence. It is noteworthy that the restricted $\chi^2$-divergence measures the mismatch between the behavior and target policies in the function space, which can be small even if the two policies are not close to each other in their tabular forms. We also develop a novel approximation result for convolutional neural networks in Q-function estimation. Numerical experiments are provided to support our theoretical analysis.

1. INTRODUCTION

Off-policy Reinforcement Learning (RL) [38, 40] is an important area in decision-making applications where data cannot be acquired with arbitrary policies. For example, in clinical decision-making problems, experimenting with new treatment policies on patients is risky and may raise ethical concerns. Therefore, we are only allowed to generate data using certain policies (or sampling distributions) that have been approved by medical professionals. These so-called "behavior policies" are unknown but can impact our problem of interest, resulting in distribution shift and insufficient data coverage of the problem space. In general, the goal is to design algorithms that need as little data as possible to attain a desired accuracy.

A crucial problem in off-policy RL is policy evaluation. The goal of Off-Policy Evaluation (OPE) is to estimate the value of a new target policy based on experience data generated by existing behavior policies. Due to the mismatch between behavior and target policies, the off-policy setting is entirely different from the on-policy one, in which the policy value can be easily estimated via Monte Carlo. A popular algorithm for OPE is the fitted Q-evaluation method (FQE), an off-policy variant of fitted Q-iteration [28, 15, 75]. FQE iteratively estimates Q-functions by supervised regression using various function approximation methods, e.g., linear function approximation, and has achieved great empirical success [65, 20, 21], especially in large-scale Markov decision problems.

Complementary to the empirical studies, several works theoretically justify the success of FQE. Under linear function approximation, [31] show that FQE is asymptotically efficient, [15] further provide a minimax optimal non-asymptotic bound, and [47] provide a variance-aware characterization of the distribution shift via a weighted variant of FQE. [75] analyze FQE with realizable, general differentiable function approximation.
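To make the regression recursion behind FQE concrete, here is a minimal finite-horizon sketch. Everything here is an illustrative stand-in, not the paper's method: the linear least-squares fit and the simple `features` map replace the deep-network regression analyzed in the paper, and the names `dataset`, `pi`, and `fitted_q_evaluation` are our own.

```python
import numpy as np

def fitted_q_evaluation(dataset, pi, H):
    """Finite-horizon fitted Q-evaluation (FQE) sketch.

    dataset[h-1] holds arrays (S, A, R, S_next) of step-h transitions
    collected by an unknown behavior policy; pi(s) returns the target
    policy's action for state s. A linear least-squares fit stands in
    for the deep-network regression of the paper.
    """
    def features(S, A):
        # simple (state, action, bias) features; a CNN would replace this
        return np.hstack([S, A.reshape(len(S), -1), np.ones((len(S), 1))])

    w_next = None        # weights of Q_{h+1}-hat; by convention Q_{H+1} = 0
    weights = {}
    for h in range(H, 0, -1):            # backward recursion h = H, ..., 1
        S, A, R, S_next = dataset[h - 1]
        y = R.astype(float).copy()       # regression target r + Q_{h+1}(s', pi(s'))
        if w_next is not None:
            A_next = np.array([pi(s) for s in S_next])
            y += features(S_next, A_next) @ w_next   # bootstrapped value
        w, *_ = np.linalg.lstsq(features(S, A), y, rcond=None)  # regression step
        weights[h] = w_next = w
    return weights       # weights[h] parameterizes the estimate of Q_h under pi
```

The target-policy value estimate is then the average of the fitted first-step Q-function over initial states and the target action, mirroring how FQE turns H supervised-regression problems into a value estimate.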
[37, 64] tackle OPE with even more general function approximation, but they require stronger assumptions such as full data coverage. [16] focus on on-policy estimation and study a kernel least-squares temporal difference estimator. Nonetheless, the theory of FQE with deep neural networks is not yet fully understood. While there are existing results on FQE with various function approximators [28, 15, 75], many of them are not immediately applicable to neural network approximation. [18] focus on the online policy learning problem and study DQN with feed-forward ReLU networks; a concurrent work [73] studies offline policy learning with realizable, general differentiable function approximation. Notably, a recent study [51] provides an analysis of the estimation error of nonparametric FQE using feed-forward ReLU networks, yet this error bound grows quickly when the data dimension is high. Moreover, their result requires full data coverage, i.e., every state-action pair has to eventually be visited in the experience data. Indeed, beyond universal function approximation, there are other properties that contribute to the success of neural networks in supervised learning, for example, their ability to adapt to intrinsic low-dimensional structure in the data. While these properties are actively studied in the deep supervised learning literature, they have not been reflected in RL theory. Hence, it is of interest to examine whether these properties still hold in a problem of sequential nature under standard assumptions, and how neural networks can take advantage of such low-dimensional structures in OPE.

Main results. This paper establishes sample complexity bounds for deep FQE using convolutional neural networks (CNNs). Different from existing results, our theory exploits the intrinsic geometric structure of the state-action space.
This is motivated by the fact that in many practical high-dimensional applications, especially image-based ones [59, 11, 76], the data are actually governed by a much smaller number of intrinsic free parameters [2, 55, 32]; see an example in Figure 1. Consequently, we model the state-action space as a $d$-dimensional Riemannian manifold embedded in $\mathbb{R}^D$ with $d \ll D$. Under some standard regularity conditions, we show that CNNs can efficiently approximate Q-functions and allow for fast-rate policy value estimation, free of the curse of ambient dimensionality $D$. Moreover, our results do not need strong data coverage assumptions. In particular, we develop a function class-restricted $\chi^2$-divergence to quantify the mismatch between the visitation distributions induced by the behavior and target policies. The function class can be viewed as a smoothing factor of the distribution mismatch, since the function class may be insensitive to certain differences between the two distributions. Our approximation theory and mismatch characterization significantly sharpen the dimension dependence of deep FQE. In detail, our theoretical results are summarized as follows:

(I) Given a target policy $\pi$, we measure the distribution shift between the experience data distributions $\{q_h^{\mathrm{data}}\}_{h=1}^H$ and the visitation distributions of the target policy $\{q_h^{\pi}\}_{h=1}^H$ by
$$\kappa = \frac{1}{H}\sum_{h=1}^{H} \chi^2_{\mathcal{Q}}\big(q_h^{\pi}, q_h^{\mathrm{data}}\big) + 1,$$
where $\chi^2_{\mathcal{Q}}(q_h^{\pi}, q_h^{\mathrm{data}})$ is the restricted $\chi^2$-divergence between $q_h^{\pi}$ and $q_h^{\mathrm{data}}$, defined as
$$\chi^2_{\mathcal{Q}}\big(q_h^{\pi}, q_h^{\mathrm{data}}\big) = \sup_{f \in \mathcal{Q}} \frac{\big(\mathbb{E}_{q_h^{\pi}}[f]\big)^2}{\mathbb{E}_{q_h^{\mathrm{data}}}[f^2]} - 1,$$
with $\mathcal{Q}$ being a function space relevant to our algorithm.

(II) We prove that the value estimation error of a target policy $\pi$ is
$$\mathbb{E}\big|\widehat{v}^{\pi} - v^{\pi}\big| = O\Big(\kappa H^2 K^{-\frac{\alpha}{2\alpha+d}}\Big),$$
where $K$ is the effective sample size of experience data sampled by the behavior policy (more details in Section 3), $H$ is the length of the horizon, $\alpha$ is the smoothness parameter of the Bellman operator, and $d$ is the intrinsic dimension of the state-action space.
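To build intuition for the restricted $\chi^2$-divergence, the following toy sketch estimates it from samples, taking the supremum over a finite list of candidate functions. The finite function class and sample-based expectations are our simplification of the population quantity; the function name and arguments are illustrative.

```python
import numpy as np

def restricted_chi_square(samples_p, samples_q, function_class):
    """Sample-based estimate of the function-class-restricted divergence

        chi^2_Q(p, q) = sup_{f in Q}  E_p[f]^2 / E_q[f^2]  -  1,

    with the sup taken over a finite list of vectorized functions.
    """
    best = 0.0
    for f in function_class:
        num = np.mean(f(samples_p)) ** 2        # (E_p[f])^2
        den = np.mean(f(samples_q) ** 2)        # E_q[f^2]
        if den > 0:
            best = max(best, num / den)
    return best - 1.0
```

Note that when the class contains the constant function and $p = q$, the ratio is maximized at 1, so the divergence is 0; a richer class $\mathcal{Q}$ can only increase the estimate, which is how the function class acts as a smoothing factor on the distribution mismatch.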



Figure 1: An example of state-action space with low-dimensional structures. The states of OpenAI Gym Bipedal Walker can be visually displayed in high resolution (e.g., 200 × 300), while they are internally represented by a 24-tuple [29].

