IMPACT OF REPRESENTATION LEARNING IN LINEAR BANDITS

Abstract

We study how representation learning can improve the efficiency of bandit problems. We consider the setting where we play T linear bandits with dimension d concurrently, and these T bandit tasks share a common k (≪ d) dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves Õ(T√(kN) + √(dkNT)) regret, where N is the number of rounds we play for each bandit. When T is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing T bandits independently), which achieves Õ(T√(dN)) regret. We also provide an Ω(T√(kN) + √(dkNT)) regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes. Finally, we present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms.

1. INTRODUCTION

This paper investigates the benefit of using representation learning for sequential decision-making problems. Representation learning learns a joint low-dimensional embedding (feature extractor) from different but related tasks and then uses a simple function (often a linear one) on top of the embedding (Baxter, 2000; Caruana, 1997; Li et al., 2010). The underlying mechanism is that, since the tasks are related, we can extract the common information more efficiently than by treating each task independently. Empirically, representation learning has become a popular approach for improving sample efficiency across various machine learning tasks (Bengio et al., 2013). In particular, representation learning has recently become increasingly popular in sequential decision-making problems (Teh et al., 2017; Taylor & Stone, 2009; Lazaric & Restelli, 2011; Rusu et al., 2015; Liu et al., 2016; Parisotto et al., 2015; Higgins et al., 2017; Hessel et al., 2019; Arora et al., 2020; D'Eramo et al., 2020). For example, many sequential decision-making tasks share the same environment but have different reward functions. A natural approach is thus to learn a succinct representation that describes the environment and then make decisions for different tasks on top of the learned representation.

While representation learning is already widely applied to sequential decision-making problems empirically, its theoretical foundation is still limited. One important problem remains open: When does representation learning provably improve the efficiency of sequential decision-making problems? We take a step toward characterizing the benefit of representation learning in sequential decision-making problems. We tackle the above problem in the linear bandit setting, one of the most fundamental and popular settings for sequential decision-making.
This model is widely used in applications such as clinical treatment, manufacturing processes, job scheduling, and recommendation systems (Dani et al., 2008; Chu et al., 2011). We study the multi-task version of linear bandits, which naturally models the scenario where one needs to deal with multiple different but closely related sequential decision-making problems concurrently. We will mostly focus on the finite-action setting. Specifically, we have T tasks, each of which is governed by an unknown linear coefficient vector θ_t ∈ ℝ^d. At the n-th round, for each task t ∈ [T], the player chooses an action a_{n,t} that belongs to a finite set and receives a reward r_{n,t} with expectation E[r_{n,t}] = ⟨θ_t, x_{n,t,a_{n,t}}⟩, where x_{n,t,a_{n,t}} ∈ ℝ^d represents the context of action a_{n,t}. For this problem, a straightforward approach is to treat each task independently, which leads to Õ(T√(dN))¹ total regret. Can we do better? Clearly, if the tasks are independent, then by the classical Ω(√(dN)) per-task lower bound for linear bandits, it is impossible to do better. We therefore investigate how representation learning can help when the tasks are related. Our main assumption is the existence of an unknown linear feature extractor B ∈ ℝ^{d×k} with k ≪ d and a set of linear coefficients {w_t}_{t=1}^T such that θ_t = B w_t. Under this assumption, the tasks are closely related, as B is a shared linear feature extractor that maps the raw contexts x_{n,t,a} ∈ ℝ^d to a low-dimensional embedding B^⊤ x_{n,t,a} ∈ ℝ^k. In this paper, we focus on the regime where k ≪ d, N, T. This regime is common in real-world problems, e.g., computer vision, where the input dimension is high, the amount of data is large, many tasks are related, and there exists a low-dimensional representation shared among these tasks that we can utilize. Problems with similar assumptions have been studied in the supervised learning setting (Ando & Zhang, 2005). However, to our knowledge, this formulation has not been studied in the bandit setting.
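To make the model above concrete, the following is a minimal simulation sketch (not the paper's algorithm) of the multi-task linear bandit environment: a shared feature extractor B ∈ ℝ^{d×k}, per-task coefficients w_t, and expected rewards ⟨θ_t, x⟩. All variable names and the dimension choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, T, A = 20, 3, 50, 5   # ambient dim, shared dim, number of tasks, actions per round

# Shared linear feature extractor B (d x k, orthonormal columns) and
# per-task coefficients w_t, so that theta_t = B @ w_t lies in a
# common k-dimensional subspace of R^d.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))
W = rng.standard_normal((k, T))
Theta = B @ W                      # column t is theta_t, shape (d, T)

def expected_rewards(contexts: np.ndarray, t: int) -> np.ndarray:
    """E[r_{n,t} | action a] = <theta_t, x_{n,t,a}> for each action's context."""
    return contexts @ Theta[:, t]  # contexts: (A, d) -> (A,)

# One round of task t: observe A contexts, and pull the greedy action.
# (Here an oracle knows theta_t; a learner must estimate B and w_t.)
contexts = rng.standard_normal((A, d))
t = 0
best_action = int(np.argmax(expected_rewards(contexts, t)))
```

Note that every θ_t has rank-k structure: stacking all T coefficient vectors into a d × T matrix yields a matrix of rank at most k, which is the property a multi-task learner can exploit.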
Our Contributions  We give the first rigorous characterization of the benefit of representation learning for multi-task linear bandits. Our contributions are summarized below.

• We design a new algorithm for the aforementioned problem. Theoretically, we show our algorithm incurs Õ(√(dkTN) + T√(kN)) total regret in N rounds for all T tasks. Therefore, our algorithm outperforms the naive approach with Õ(T√(dN)) regret. To our knowledge, this is the first theoretical result demonstrating the benefit of representation learning for bandit problems.

• To complement our upper bound, we also provide an Ω(√(dkTN) + T√(kN)) lower bound, showing our regret bound is tight up to polylogarithmic factors.

• We further design a new algorithm for the infinite-action setting, which achieves Õ(d^{1.5} k √(TN) + kT√N) regret. This outperforms the naive approach with Õ(Td√N) regret in the regime where T = Ω(dk²).

• We provide simulations and an experiment on the MNIST dataset to illustrate the effectiveness of our algorithms and the benefits of representation learning.

Organization  This paper is organized as follows. In Section 2, we discuss related work. In Section 3, we introduce necessary notation, formally set up our problem, and describe our assumptions. In Section 4, we present our main algorithm for the finite-action setting and its performance guarantee. In Section 5, we describe our algorithm and its theoretical guarantee for the infinite-action setting. In Section 6, we provide simulation studies and real-world experiments to validate the effectiveness of our approach. We conclude in Section 7 and defer all proofs to the Appendix.

2. RELATED WORK

Here we mainly focus on related theoretical results; we refer readers to Bengio et al. (2013) for empirical results on representation learning. For supervised learning, there is a long line of work on multi-task learning and representation learning under various assumptions (Baxter, 2000; Ando & Zhang, 2005; Ben-David & Schuller, 2003; Maurer, 2006; Cavallanti et al., 2010; Maurer et al., 2016; Du et al., 2020; Tripuraneni et al., 2020).



¹ Õ(·) omits logarithmic factors.

