NEURAL CONSTRAINT INFERENCE: INFERRING ENERGY CONSTRAINTS IN INTERACTING SYSTEMS

Abstract

Systems consisting of interacting agents are prevalent in the world, ranging from dynamical systems in physics to complex biological networks. To build systems which can interact robustly in the real world, it is thus important to be able to infer the precise interactions governing such systems. Existing approaches typically discover such interactions by explicitly modeling the feedforward dynamics of the trajectories. In this work, we propose the Neural Constraint Inference (NCI) model as an alternative approach for discovering such interactions: it discovers a set of relational constraints, represented as energy functions, which when optimized reconstruct the original trajectory. We illustrate how NCI can faithfully predict future trajectory dynamics, achieving more consistent long-horizon rollouts than existing approaches. We show that the constraints discovered by NCI are disentangled and may be intermixed with constraints from other trajectories. Finally, we illustrate how these constraints enable the incorporation of external constraints at test time.

1. INTRODUCTION

Dynamical systems are ubiquitous in both nature and everyday life. Such systems emerge naturally in scientific settings such as chemical pathways and particle dynamics, as well as in everyday settings such as sports teams or social events. Such dynamical systems may be decomposed into a set of interacting components, whose interactions with one another lead to complex dynamics. Modeling the underlying dynamics of such systems is hard: often we only have access to example trajectories, without knowledge of the underlying interactions or the dynamics that govern them. Consider the scenario given in Figure 1, consisting of a set of NBA players playing a basketball game. While the motion of individual players may appear stochastic in nature, each player aims to score a basket on the opposing team's side of the court. Thus, we may utilize sets of interactions to explain their behaviors: a group of players on the defending team serves as a zone defense, preventing players from the opposing team from getting close to the basket. Simultaneously, a group of offensive players moves towards the goal, while a group of defensive players moves to intercept them and prevent them from scoring. By applying our underlying knowledge of these interactions between players, we may forecast the future dynamics of the basketball game significantly more accurately.

Most works modeling such complex dynamics do not explicitly disentangle individual interactions between objects. Instead, they rely on a learned network to implicitly disentangle them (Battaglia et al., 2016; Gilmer et al., 2017; van Steenkiste et al., 2018). In contrast, Kipf et al. (2018) propose Neural Relational Inference (NRI), which learns a structured set of explicit interaction models between objects, and show how such explicit interaction modeling enables more effective downstream predictions.
In this work, we argue that we should instead model and disentangle interactions between objects as a set of learned relational constraints, with dynamical prediction corresponding to a constraint satisfaction problem. To this end, we propose Neural Constraint Inference (NCI), where we encode each of these constraints as an energy function (Du et al., 2021). To predict future dynamics with NCI, we then solve a constraint satisfaction problem, where we optimize for a trajectory prediction which minimizes our predicted energy. Prior work on implicit physical simulation has suggested that such implicit physics modeling (i.e. modeling dynamics through a constraint satisfaction problem) is significantly more accurate at simulating strong interactions than explicit physics models (i.e. modeling dynamics as explicit feed-forward roll-outs) (Rubanova et al., 2022). Across experiments, we illustrate how our constraint-based decomposition of interactions provides unique benefits over prior learned approaches for decomposing dynamics. First, we illustrate how such a decomposition improves temporal consistency, achieving significantly lower long-term prediction error. We show that the decomposition is disentangled, enabling us to intermix interactions between separate trajectories. We further show that constraints can be linearly decoded into the underlying ground-truth interactions. Finally, we illustrate that such a decomposition enables us to add flexible test-time constraints to incorporate new changes in the environment. In summary, in this work, we contribute the following: (i) we propose Neural Constraint Inference (NCI), which discovers, in an unsupervised manner, the underlying interactions between particles in a system as a set of energy constraints; (ii) we illustrate how such a constraint decomposition of interactions enables more accurate long-horizon trajectory prediction than prior methods; and (iii) we illustrate how such a constraint decomposition of interactions is disentangled and enables the recombination of constraints between separate trajectories, as well as the addition of novel test-time constraints.
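The constraint-satisfaction view of prediction can be made concrete with a small sketch: below, a trajectory is refined by gradient descent on a sum of energy terms. The smoothness and pairwise-distance energies here are hand-written quadratic stand-ins chosen purely for illustration (NCI learns its energy functions from data, and `num_grad` is a finite-difference substitute for backpropagation).

```python
import numpy as np

def smoothness_energy(x):
    """Penalize large frame-to-frame jumps; x has shape (T, N, D)."""
    return np.sum((x[1:] - x[:-1]) ** 2)

def spring_energy(x, i, j, rest=1.0):
    """Soft relational constraint: particles i and j stay near distance `rest`."""
    d = np.linalg.norm(x[:, i] - x[:, j], axis=-1)
    return np.sum((d - rest) ** 2)

def total_energy(x):
    # A trajectory's total energy is the sum of its (soft) constraints.
    return smoothness_energy(x) + spring_energy(x, 0, 1)

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient; adequate for this tiny illustrative problem."""
    g = np.zeros_like(x)
    flat_x, flat_g = x.ravel(), g.ravel()
    for k in range(flat_x.size):
        old = flat_x[k]
        flat_x[k] = old + eps; hi = f(x)
        flat_x[k] = old - eps; lo = f(x)
        flat_g[k] = (hi - lo) / (2 * eps)
        flat_x[k] = old
    return g

def minimize(x, steps=200, lr=0.05):
    """Prediction as constraint satisfaction: descend the total energy."""
    for _ in range(steps):
        x = x - lr * num_grad(total_energy, x)
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=(5, 2, 2))   # T=5 frames, N=2 particles, D=2 dimensions
x_opt = minimize(x0.copy())
print(total_energy(x0), "->", total_energy(x_opt))
```

The key property this illustrates is compositionality: because constraints combine by summation inside `total_energy`, terms discovered from different trajectories can in principle be mixed and matched, and new test-time constraints can be added as extra summands.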

2. LITERATURE

Dynamics and Relational Inference Several works in recent years have studied the problem of learning the dynamics of a physical system from simulated trajectories with graph neural networks (GNNs) (Guttenberg et al., 2016; Gilmer et al., 2017; van Steenkiste et al., 2018; Lu et al., 2021; Li et al., 2018; Yang et al., 2022). Building on the foundational work of Battaglia et al. (2016) on interaction networks, Kipf et al. (2018) propose to infer an explicit interaction structure while simultaneously learning the dynamical model of the interacting system in an unsupervised manner, by inferring edge classes with a classifier. Selecting models based on observed trajectories also underlies Alet et al. (2019); Goyal et al. (2019); Graber & Schwing (2020). Graber & Schwing (2020) extend Kipf et al. (2018) to temporally dynamic edge constraints, which yields better results on real-world datasets. NCI differs from these approaches in that its generation procedure uses an optimization solver to satisfy a set of soft constraints. Recent work by Rubanova et al. (2022) also explores combining graph networks with energy optimization. However, it lacks the modularity of NCI and the ability to infer edge types from observations. Instead, a global energy function is learned for all nodes by leveraging ground-truth attributes such as mass. Thus, it has no mechanism to predict trajectories in the absence of those attributes, nor when different types of relations are present. In this work, we extend ideas of unsupervised concept learning in EBMs to constraints and apply them to dynamical modeling and relational inference.

3. CONSTRAINTS AS ENERGY BASED MODELS

We will consider constraints as specifying a set X of underlying trajectories x ∈ R^{T×D} which have an underlying property we desire. In Section 3.1, we discuss how we can represent constraints



Figure 1: Interactions between NBA players. Complex dynamics, such as the player trajectories in the NBA, may be explained using a simple set of interactions. In this setting, one team of players aims to block a separate team from scoring.

Energy-based models have a long history in machine learning. Early work focuses on density modeling (Hinton, 2002; Du & Mordatch, 2019; Nijkamp et al., 2020), aiming to learn a function that assigns low energy values to data belonging to the input distribution. To successfully sample data points, EBMs have recently relied on gradient-based Langevin dynamics (Du & Mordatch, 2019). Recent works have illustrated that such a gradient-based optimization procedure can enable the composition of energy functions representing different concepts (Du et al., 2020), and have successfully scaled to high-dimensional domains such as images (Liu et al., 2021; Nie et al., 2021). Unsupervised discovery of composable energy functions on images was explored in Du et al. (2021).
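The Langevin sampling procedure referenced above can be sketched for a toy analytic energy. The 1-D double-well energy below is an illustrative assumption; in the learned setting, `energy_grad` would be replaced by the gradient of a neural energy function obtained via backpropagation.

```python
import numpy as np

def energy(x):
    # Toy 1-D double-well energy with low-energy modes near x = +/-1.
    return (x ** 2 - 1.0) ** 2

def energy_grad(x):
    # Analytic gradient of the double-well energy.
    return 4.0 * x * (x ** 2 - 1.0)

def langevin_sample(x0, steps=500, step_size=0.01, rng=None):
    """Gradient-based Langevin dynamics:
    x_{k+1} = x_k - (step/2) * grad E(x_k) + sqrt(step) * noise,
    with noise drawn from a standard normal at every iteration."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0
    for _ in range(steps):
        noise = rng.normal(size=np.shape(x))
        x = x - 0.5 * step_size * energy_grad(x) + np.sqrt(step_size) * noise
    return x

# 1000 parallel chains, all initialized at the high-energy point x = 0;
# after mixing, samples should concentrate near the modes at +/-1.
samples = langevin_sample(np.zeros(1000))
```

Because the update only touches the energy through its gradient, sums of energy functions can be sampled with the same loop, which is what makes composition of separately learned concepts possible.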

