AUTOREGRESSIVE GRAPH NETWORK FOR LEARNING MULTI-STEP PHYSICS

Abstract

In this work, we propose an Autoregressive Graph Network (AGN) that learns forward physics using a temporal inductive bias. Currently, temporal state-space information is provided as additional input to a GN when generating roll-out physics simulations. While this improves the network's predictive performance over multiple time steps relative to one-step inputs, a temporal model additionally enables the network to induce and learn temporal biases. In dynamical systems, the arrow of time simplifies the space of possible interactions: we can assume that current observations depend only on preceding states. Our proposed GN encodes temporal state information using an autoregressive encoder that computes latent temporal embeddings over multiple time steps in parallel during a single forward pass. We perform case studies that compare multi-step forward predictions against baseline data-driven one-step GNs as well as multi-step sequential models across diverse datasets featuring different particle interactions. When conditioned on optimal historical states, our approach outperforms both the baseline GN and physics-induced GNs in 8 out of 10 particle physics datasets. Further, through an energy analysis we find that our method not only accumulates the least roll-out error but also conserves energy more efficiently than a baseline Graph Transformer Network while having an order of magnitude fewer parameters.

1. INTRODUCTION

In recent years, there has been growing interest in learning physics with deep learning coupled with techniques such as inductive biases, physics-informed loss functions, and meta-learning (Fragkiadaki et al. (2016); Battaglia et al. (2016); Xu et al. (2019); Hall et al. (2021)). Relational networks such as Graph Networks (GNs) can decompose and learn the dynamics of a physical system on the basis of particle interactions within local neighborhoods (Battaglia et al. (2016); Li et al. (2018); Sanchez-Gonzalez et al. (2020)). Across science and engineering, particle states often contain system- and particle-specific properties such as mass, density, velocity, and particle type that are required to approximate the dynamics of a system. In general, given the current state of a system of particles along with particle-specific local properties and global system properties, it is possible to apply GNs to predict the trajectory of the system (Sanchez-Gonzalez et al. (2018; 2020)). Often referred to as the forward problem, this setting assumes knowledge of the physical properties of the particles and uses observations to construct a model that predicts the trajectory of the system. The solution to a typical forward dynamics problem governed by an ODE over particles can be parameterized with a GNN by learning from the current state alone or from a history of previous particle states. There are strong benefits to training on entire sequences or multiple time steps (Mohajerin (2017); Xu et al. (2019)), as one-step GNs tend to be unstable and accumulate error over long roll-outs. While prior work has shown that concatenating a history of previous states enables a trained simulator such as a GNN to predict the next state more accurately, a sequential model can additionally capture temporal symmetries, e.g., the arrow of time and the conservation of energy and momentum.
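As a concrete illustration of the history-concatenation scheme discussed above, the sketch below builds per-particle node features by stacking the last H finite-difference velocities together with the current position, in the spirit of data-driven one-step simulators. This is a generic sketch, not the paper's exact pipeline; the function name and feature layout are illustrative assumptions.

```python
import numpy as np

def make_node_features(positions, H=3):
    """Illustrative (not the paper's) feature construction.
    positions: (T, N, D) array of N particle positions over T time steps.
    Returns (N, H*D + D) node features for predicting the next step:
    the last H finite-difference velocities plus the current position."""
    T, N, D = positions.shape
    assert T >= H + 1, "need H+1 positions to form H velocities"
    vels = positions[1:] - positions[:-1]       # (T-1, N, D) velocities
    hist = vels[-H:]                            # keep only the last H steps
    hist = np.transpose(hist, (1, 0, 2)).reshape(N, H * D)
    return np.concatenate([hist, positions[-1]], axis=1)

traj = np.random.rand(6, 5, 2)                  # 6 steps, 5 particles, 2-D
feats = make_node_features(traj, H=3)
print(feats.shape)                              # (5, 8): 3*2 velocity dims + 2 position dims
```

These node features would then be passed, together with the interaction graph, to the GN's message-passing layers.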
Sequential models such as RNNs, LSTMs, GRUs, and Transformers have been applied to 1D time series and N-body systems (Chen et al. (2018); Zhang et al. (2020); Han et al. (2022)). While they are appealing choices for modeling dynamical systems due to their implicit memory mechanisms, they require sequential computations that incur significant memory overhead as the lookback length and/or the dimensionality of the problem increases.
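The parallel temporal encoding mentioned in the abstract can be contrasted with step-by-step recurrence using a minimal causally-masked attention sketch: all T embeddings are computed in one pass, while the mask enforces the arrow of time (step t attends only to steps ≤ t). This is a generic sketch under our own assumptions, not the AGN architecture itself.

```python
import numpy as np

def causal_temporal_encoder(x):
    """Illustrative single-head causal self-attention over one particle's
    state sequence. x: (T, d). Returns (T, d) latent temporal embeddings,
    one per time step, computed in a single parallel pass."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                       # (T, T) attention logits
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)    # True above the diagonal
    scores[mask] = -np.inf                              # block attention to future steps
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ x                                        # weighted sum of past states

x = np.random.rand(4, 3)
z = causal_temporal_encoder(x)
# The first embedding can only depend on the first state:
print(np.allclose(z[0], x[0]))                          # True
```

Unlike an RNN, which must unroll T sequential updates, all rows of the masked attention matrix are computed at once, which is the property that allows latent temporal embeddings over multiple time steps in a single forward pass.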