LEARNING TO REPRESENT ACTION VALUES AS A HYPERGRAPH ON THE ACTION VERTICES

Abstract

Action-value estimation is a critical component of many reinforcement learning (RL) methods, whereby sample complexity relies heavily on how quickly a good estimator of action value can be learned. Viewed through the lens of representation learning, good representations of both state and action can facilitate action-value estimation. While advances in deep learning have seamlessly driven progress in learning state representations, given the specificity of the notion of agency to RL, little attention has been paid to learning action representations. We conjecture that leveraging the combinatorial structure of multi-dimensional action spaces is a key ingredient for learning good representations of action. To test this, we set forth the action hypergraph networks framework: a class of functions for learning action representations in multi-dimensional discrete action spaces with a structural inductive bias. Using this framework we realise an agent class based on a combination with deep Q-networks, which we dub hypergraph Q-networks. We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and discretised physical control benchmarks.

1. INTRODUCTION

Representation learning methods have helped shape recent progress in RL by enabling a capacity for learning good representations of state. This is despite the fact that, traditionally, representation learning was less often explored in the RL context. As such, the de facto representation learning techniques widely used in RL were developed under other machine learning paradigms (Bengio et al., 2013). Nevertheless, RL brings some unique problems to the topic of representation learning, with exciting headway being made in identifying and exploring them. Action-value estimation is a critical component of the RL paradigm (Sutton & Barto, 2018). Hence, how to effectively learn estimators for action value from training samples is one of the major problems studied in RL. We set out to study this problem through the lens of representation learning, focusing particularly on learning representations of action in multi-dimensional discrete action spaces. While action values are conditioned on both state and action, and as such good representations of both would be beneficial, there has been comparatively little research on learning action representations. We frame this problem as learning a decomposition of the action-value function that is structured in such a way as to leverage the combinatorial structure of multi-dimensional discrete action spaces. This structure is an inductive bias which we incorporate in the form of architectural assumptions. We present this approach as a framework to flexibly build architectures for learning representations of multi-dimensional discrete actions by leveraging various orders of their underlying sub-action combinations. Our architectures can be combined in succession with any other architecture for learning state representations and trained end-to-end using backpropagation, without imposing any change to the RL algorithm.
We remark that designing representation learning methods by incorporating some form of structural inductive bias is common practice in deep learning, having produced widely publicised architectures such as convolutional, recurrent, and graph networks (Battaglia et al., 2018). We first demonstrate the effectiveness of our approach in illustrative, structured prediction problems. Then, we argue for the ubiquity of similar structures and test our approach in standard RL problems. Our results advocate for the general usefulness of leveraging the combinatorial structure of multi-dimensional discrete action spaces, especially in problems with larger action spaces.

2.1. REINFORCEMENT LEARNING

We consider the RL problem in which the interaction of an agent and the environment is modelled as a Markov decision process (MDP) $(\mathcal{S}, \mathcal{A}, P, R, S_0)$, where $\mathcal{S}$ denotes the state space, $\mathcal{A}$ the action space, $P$ the state-transition distribution, $R$ the reward distribution, and $S_0$ the initial-state distribution (Sutton & Barto, 2018). At each step $t$ the agent observes a state $s_t \in \mathcal{S}$ and produces an action $a_t \in \mathcal{A}$ drawn from its policy $\pi(\cdot \mid s_t)$. The agent then transitions to and observes the next state $s_{t+1} \in \mathcal{S}$, drawn from $P(\cdot \mid s_t, a_t)$, and receives a reward $r_{t+1}$, drawn from $R(\cdot \mid s_t, a_t, s_{t+1})$. The standard MDP formulation generally abstracts away the combination of sub-actions that are activated when an action $a_t$ is chosen. That is, if a problem has an $N_v$-dimensional action space, each action $a_t$ maps onto an $N_v$-tuple $(a_t^1, a_t^2, \ldots, a_t^{N_v})$, where each $a_t^i$ is a sub-action from the $i$th sub-action space. Therefore, the action space could have an underlying combinatorial structure where the set of actions is formed as a Cartesian product of the sub-action spaces. To make this explicit, we express the action space as $\mathcal{A} \doteq \mathcal{A}^1 \times \mathcal{A}^2 \times \cdots \times \mathcal{A}^{N_v}$, where each $\mathcal{A}^i$ is a finite set of sub-actions. Furthermore, we amend our notation for the actions $a_t$ into $\mathbf{a}_t$ (in bold) to reflect that actions are generally combinations of several sub-actions. Within our framework, we refer to each sub-action space $\mathcal{A}^i$ as an action vertex. As such, the cardinality of the set of action vertices is equal to the number of action dimensions $N_v$. Given a policy $\pi$ that maps states onto distributions over the actions, the discounted sum of future rewards under $\pi$ is denoted by the random variable $Z^\pi(s, \mathbf{a}) = \sum_{t=0}^{\infty} \gamma^t r_{t+1}$, where $s_0 = s$, $\mathbf{a}_0 = \mathbf{a}$, $s_{t+1} \sim P(\cdot \mid s_t, \mathbf{a}_t)$, $r_{t+1} \sim R(\cdot \mid s_t, \mathbf{a}_t, s_{t+1})$, $\mathbf{a}_t \sim \pi(\cdot \mid s_t)$, and $0 \le \gamma \le 1$ is a discount factor. The action-value function is defined as $Q^\pi(s, \mathbf{a}) \doteq \mathbb{E}[Z^\pi(s, \mathbf{a})]$.
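To make the Cartesian-product structure of a multi-dimensional discrete action space concrete, the following minimal sketch (with hypothetical sub-action names, not taken from the paper) enumerates the full action set from its sub-action spaces:

```python
from itertools import product

# Hypothetical sub-action spaces for a 3-dimensional discrete action
# space, e.g. a controller with three independent degrees of freedom.
sub_action_spaces = [
    ["noop", "left", "right"],   # A^1
    ["noop", "up", "down"],      # A^2
    ["release", "fire"],         # A^3
]

# The full action space A = A^1 x A^2 x A^3 is the Cartesian product of
# the sub-action spaces: every action is an N_v-tuple of sub-actions.
actions = list(product(*sub_action_spaces))

print(len(actions))   # |A| = 3 * 3 * 2 = 18
print(actions[0])     # ('noop', 'noop', 'release')
```

Note how the number of actions grows multiplicatively in the number of action dimensions, which is precisely why structure-agnostic action-value estimators scale poorly in such spaces.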
Evaluating the action-value function $Q^\pi$ of a policy $\pi$ is referred to as a prediction problem. In a control problem the objective is to find an optimal policy $\pi^*$ which maximises the action-value function. The thesis of this paper applies to any method for prediction or control, provided that it involves estimating an action-value function. A canonical example of such a method for control is Q-learning (Watkins, 1989; Watkins & Dayan, 1992), which iteratively improves an estimate $Q$ of the optimal action-value function $Q^*$ via $$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right], \quad (1)$$ where $0 \le \alpha \le 1$ is a learning rate. The action-value function is typically approximated using a parameterised function $Q_\theta$, where $\theta$ is a vector of parameters, trained by minimising a sequence of squared temporal-difference errors $$\delta_t^2 \doteq \left( r_{t+1} + \gamma \max_{a'} Q_\theta(s_{t+1}, a') - Q_\theta(s_t, a_t) \right)^2 \quad (2)$$ over samples $(s_t, a_t, r_{t+1}, s_{t+1})$. Deep Q-networks (DQN) (Mnih et al., 2015) combine Q-learning with deep neural networks to achieve human-level performance on Atari 2600 games.
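The Q-learning update above can be sketched in tabular form as follows; this is a minimal illustration of the update rule, with a toy two-state example of our own invention rather than any environment from the paper:

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]."""
    td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    td_error = td_target - Q[(s, a)]
    Q[(s, a)] += alpha * td_error
    return td_error

# Toy check: zero-initialised table, a single transition with reward 1.
Q = defaultdict(float)          # Q(s, a) = 0 for unseen pairs
actions = [0, 1]
err = q_learning_update(Q, s=0, a=1, r=1.0, s_next=1, actions=actions)
print(Q[(0, 1)])                # 0 + 0.1 * (1.0 + 0.99 * 0 - 0) = 0.1
```

With function approximation, the same temporal-difference error is instead used as the training signal for the parameters $\theta$ of $Q_\theta$, as in Eq. (2).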

2.2. DEFINITION OF HYPERGRAPH

A hypergraph (Berge, 1989) is a generalisation of a graph in which an edge, also known as a hyperedge, can join any number of vertices. Let $V = \{\mathcal{A}^1, \mathcal{A}^2, \ldots, \mathcal{A}^{N_v}\}$ be a finite set representing the set of action vertices $\mathcal{A}^i$. A hypergraph on $V$ is a family of subsets, or hyperedges, $H = \{E_1, E_2, \ldots, E_{N_e}\}$ such that $$E_j \neq \emptyset \quad (j = 1, 2, \ldots, N_e), \quad (3)$$ $$\bigcup_{j=1}^{N_e} E_j = V. \quad (4)$$ According to Eq. (3), each hyperedge $E_j$ is a member of $\mathcal{E} = \mathcal{P}(V) \setminus \{\emptyset\}$, where $\mathcal{P}(V)$, called the powerset of $V$, is the set of possible subsets on $V$. The rank $r$ of a hypergraph is defined as the maximum cardinality of any of its hyperedges. We define a $c$-hyperedge, where $c \in \{1, 2, \ldots, N_v\}$, as a hyperedge with cardinality $c$.
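A minimal sketch of these definitions, with illustrative vertex names of our own choosing: the two checks mirror conditions (3) and (4), and `c_hyperedges` enumerates all hyperedges of a given cardinality $c$:

```python
from itertools import combinations

def is_hypergraph(vertices, hyperedges):
    """Check the two defining conditions: every hyperedge is non-empty
    (Eq. 3), and the union of all hyperedges covers V (Eq. 4)."""
    if not hyperedges:
        return False
    nonempty = all(len(E) > 0 for E in hyperedges)
    covers = set().union(*hyperedges) == set(vertices)
    return nonempty and covers

def c_hyperedges(vertices, c):
    """All c-hyperedges: subsets of V with cardinality exactly c."""
    return [frozenset(E) for E in combinations(vertices, c)]

# Action vertices of a 3-dimensional action space (illustrative labels).
V = ["A1", "A2", "A3"]
H = c_hyperedges(V, 1) + c_hyperedges(V, 2)   # all 1- and 2-hyperedges
print(is_hypergraph(V, H))                    # True
print(max(len(E) for E in H))                 # rank r = 2
```

For $N_v$ action vertices there are $\binom{N_v}{c}$ distinct $c$-hyperedges, and $2^{N_v} - 1$ possible hyperedges in total.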

