VARIANCE-AWARE SPARSE LINEAR BANDITS

Abstract

It is well-known that for sparse linear bandits, when ignoring the dependency on the sparsity, which is much smaller than the ambient dimension, the worst-case minimax regret is Θ(√(dT)), where d is the ambient dimension and T is the number of rounds. On the other hand, in the benign setting where there is no noise and the action set is the unit sphere, one can use divide-and-conquer to achieve an O(1) regret, which is (nearly) independent of d and T. In this paper, we present the first variance-aware regret guarantee for sparse linear bandits: O(√(d Σ_{t=1}^T σ_t²) + 1), where σ_t² is the variance of the noise at the t-th round. This bound naturally interpolates between the regret bounds for the worst-case constant-variance regime (i.e., σ_t ≡ Ω(1)) and the benign deterministic regime (i.e., σ_t ≡ 0). To achieve this variance-aware regret guarantee, we develop a general framework that converts any variance-aware linear bandit algorithm into a variance-aware algorithm for sparse linear bandits in a "black-box" manner. Specifically, we take two recent algorithms as black boxes to illustrate that the claimed bounds indeed hold, where the first algorithm can handle unknown-variance cases and the second one is more efficient.
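To see the interpolation concretely, the following sketch evaluates the claimed bound O(√(d Σ_{t=1}^T σ_t²) + 1) numerically (constants and logarithmic factors dropped; the function name is illustrative, not from the paper). With σ_t ≡ 1 the bound recovers the worst-case √(dT) rate, and with σ_t ≡ 0 it collapses to a constant:

```python
import math

def variance_aware_bound(d, sigmas):
    # Illustrative evaluation of sqrt(d * sum_t sigma_t^2) + 1,
    # ignoring constants and log factors hidden by the O(.) notation.
    return math.sqrt(d * sum(s * s for s in sigmas)) + 1

d, T = 100, 10_000
worst = variance_aware_bound(d, [1.0] * T)   # sigma_t ≡ 1: matches sqrt(d*T) up to +1
benign = variance_aware_bound(d, [0.0] * T)  # sigma_t ≡ 0: constant, independent of d and T
print(worst, math.sqrt(d * T), benign)
```

Note that the benign value does not grow with T, while the worst-case value scales as √(dT), matching the two extreme regimes described above.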

1. INTRODUCTION

This paper studies the sparse linear stochastic bandit problem, which is a special case of linear stochastic bandits. In linear bandits (Dani et al., 2008), the agent faces a sequential decision-making problem lasting for T rounds. At the t-th round, the agent chooses an action x_t ∈ X ⊆ R^d, where X is an action set, and receives a noisy reward r_t = ⟨θ*, x_t⟩ + η_t, where θ* ∈ R^d is the (hidden) parameter of the game and η_t is zero-mean random noise. The goal of the agent is to minimize her regret R_T, that is, the difference between her cumulative reward Σ_{t=1}^T ⟨θ*, x_t⟩ and max_{x∈X} Σ_{t=1}^T ⟨θ*, x⟩ (see Eq. (1) for a formal definition). Dani et al. (2008) proved that the minimax optimal regret for linear bandits is Θ(d√T) when the noises are independent Gaussian random variables with mean 0 and variance 1 and both θ* and the actions x_t lie in the unit sphere in R^d.¹

In real-world applications such as recommendation systems, only a few features may be relevant despite a large candidate feature space. In other words, the high-dimensional linear regime may actually admit a low-dimensional structure. As a result, if we still use the linear bandit model, we will always suffer Ω(d√T) regret no matter how few features are actually useful. Motivated by this, the sparse linear stochastic bandit problem was introduced (Abbasi-Yadkori et al., 2012; Carpentier & Munos, 2012). This problem has the additional constraint that the hidden parameter θ* is sparse, i.e., ∥θ*∥₀ ≤ s for some s ≪ d. However, the agent has no prior knowledge of s, and thus the interaction protocol is exactly the same as that of linear bandits. The minimax optimal regret for
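The interaction protocol and the regret R_T described above can be sketched as a simulation loop. This is a minimal illustration under assumed specifics (a finite action set on the unit sphere, a uniformly random placeholder policy, and a fixed noise level σ_t = 0.1), none of which come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, T = 20, 3, 1000

# Hypothetical s-sparse hidden parameter theta*, normalized to the unit sphere.
theta = np.zeros(d)
theta[:s] = rng.normal(size=s)
theta /= np.linalg.norm(theta)

# Finite action set X of unit-norm vectors (an assumption for this sketch).
actions = rng.normal(size=(50, d))
actions /= np.linalg.norm(actions, axis=1, keepdims=True)
opt = (actions @ theta).max()  # best achievable per-round reward max_{x in X} <theta*, x>

regret = 0.0
for t in range(T):
    x = actions[rng.integers(len(actions))]  # placeholder policy: uniform exploration
    sigma_t = 0.1                            # per-round noise standard deviation
    r = theta @ x + rng.normal(0.0, sigma_t) # observed noisy reward r_t = <theta*, x_t> + eta_t
    regret += opt - theta @ x                # regret is measured on expected rewards
print(regret)
```

A real algorithm would of course use the observed rewards r to steer its action choices; here the loop only demonstrates the protocol and how R_T accumulates.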



¹ Throughout the paper, we use the notations O(·) and Θ(·) to hide log T, log d, log s (where s is the sparsity parameter, introduced later), and log log(1/δ) factors (where δ is the failure probability).

