PAC REINFORCEMENT LEARNING FOR PREDICTIVE STATE REPRESENTATIONS

Abstract

In this paper we study online Reinforcement Learning (RL) in partially observable dynamical systems. We focus on the Predictive State Representations (PSRs) model, an expressive model that captures other well-known models such as Partially Observable Markov Decision Processes (POMDPs). A PSR represents the state using a set of predictions of future observations and is defined entirely in terms of observable quantities. We develop a novel model-based algorithm for PSRs that can learn a near-optimal policy with sample complexity scaling polynomially in all the relevant parameters of the system. Our algorithm naturally works with function approximation, extending to systems with potentially large state and observation spaces. We show that given a realizable model class, the sample complexity of learning a near-optimal policy scales polynomially only in the statistical complexity of the model class, without any explicit polynomial dependence on the size of the state and observation spaces. Notably, ours is the first work to show polynomial sample complexity for competing with the globally optimal policy in PSRs. Finally, we demonstrate how our general theorem can be directly used to derive sample complexity bounds for special models, including m-step weakly revealing and m-step decodable tabular POMDPs, POMDPs with low-rank latent transitions, and POMDPs with linear emissions and latent transitions.

1. INTRODUCTION

Efficient exploration strategies in reinforcement learning have been well investigated for many models, from tabular models [25, 2] to models with general function approximation [10, 27, 30, 16, 42]. These works focus on fully observable Markov decision processes (MDPs); their techniques, however, do not yield statistically efficient algorithms for partially observable Markov decision processes (POMDPs). Since the Markovian assumption on the dynamics is often questionable in practice, POMDPs are useful models for capturing real-world environments. While strategic exploration in POMDPs was long under-investigated due to its difficulty, it has been actively studied in recent years [20, 3, 29]. In this work, we consider Predictive State Representations (PSRs) [36, 41, 24], a more general model of controlled dynamical systems than POMDPs. PSRs are specified by the probabilities of sequences of future observations and actions (referred to as tests) conditioned on the past history. Unlike the POMDP model, a PSR directly predicts the future given the past without modeling latent states or dynamics. PSRs can model every POMDP, yet can yield much more compact representations: there are dynamical systems with finite PSR rank that cannot be modeled by any POMDP with finitely many latent states [36, 24]. PSRs are not only general but also amenable to learning and scalable. First, PSRs can be efficiently learned from exploratory data using a spectral learning algorithm [6] motivated by method-of-moments estimation [23]. This learning approach enables fast closed-form sequential filtering, unlike the EM-type algorithms that arise most naturally from the POMDP perspective. Second, while the original PSRs were defined in the tabular setting, PSRs also support rich functional forms through kernel mean embeddings [4]. Variants of PSRs equipped with neural networks have been proposed as well [43, 9, 46, 49].
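To make the closed-form filtering property concrete, here is a minimal sketch of a PSR-style predictive-state update. All dimensions, the operator tensor `M`, the normalizer `b_inf`, and the helper `filter_step` are illustrative toys chosen for this sketch, not a learned model from the paper; the point is only that, once the observable operators are known, filtering after each action-observation pair is a single matrix-vector product followed by normalization, with no latent-state inference.

```python
import numpy as np

# Toy setup: a predictive state b is a vector of predicted probabilities
# for a small core set of tests. After executing action a and observing o,
# the state updates linearly via an observable operator M[a, o]:
#     b' = M[a, o] @ b / (b_inf @ M[a, o] @ b)
# The operators below are random positive matrices (a toy, not a valid
# learned PSR); b_inf is taken as the all-ones vector for simplicity.
rng = np.random.default_rng(0)
dim, n_actions, n_obs = 3, 2, 2

M = rng.random((n_actions, n_obs, dim, dim))  # toy observable operators
b_inf = np.ones(dim)                          # toy normalization vector
b = np.full(dim, 1.0 / dim)                   # initial predictive state

def filter_step(b, a, o):
    """One closed-form filtering update after action a, observation o."""
    v = M[a, o] @ b
    return v / (b_inf @ v)

# Filter through a short action-observation trajectory.
for a, o in [(0, 1), (1, 0), (0, 0)]:
    b = filter_step(b, a, o)

print(np.round(b, 3))
```

Each update costs one matrix-vector product, which is what makes spectral-style PSR filtering fast compared with iterative EM-style latent-state inference.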
Despite these advances in PSR research over the past two decades, strategic exploration without pre-collected exploratory data has barely been investigated. To make PSRs more practical, it is of great importance to understand how to perform efficient strategic exploration.

