Symmetry Control Neural Networks

Abstract

This paper continues the quest for designing the optimal physics bias for neural networks that predict the dynamics of systems when the underlying dynamics must be inferred directly from data. The description of physical systems is greatly simplified when the underlying symmetries of the system are taken into account. In classical systems described via Hamiltonian dynamics, this is achieved by using appropriate coordinates, so-called cyclic coordinates, which reveal conserved quantities directly. Without changing the Hamiltonian, these coordinates can be obtained via canonical transformations. We show that such coordinates can be searched for automatically with appropriate loss functions that arise naturally from Hamiltonian dynamics. As a proof of principle, we test our method on standard classical physics systems using synthetic and experimental data; our network identifies the conserved quantities in an unsupervised way, and we find improved performance in predicting the dynamics of the system compared to networks biased only towards the Hamiltonian. Effectively, these new coordinates guarantee that motion takes place on symmetry orbits in phase space, i.e. on appropriate lower-dimensional subspaces of phase space. By fitting analytic formulae, we confirm that our networks utilise conserved quantities such as (angular) momentum.

1. Introduction

Building a bias into neural networks has been a key mechanism for achieving extraordinary performance in tasks such as classification. A standard example is the use of translation invariance in convolutional neural networks Krizhevsky et al. (2012), and by now building in equivariance to other symmetries, such as rotational symmetries, has proven very successful (e.g. Cohen & Welling (2016)). Symmetries of a system constrain its possible motions; in technical terms, motion takes place on a subspace of phase space. Energy conservation, related to invariance under time translation, has been utilised in the context of Hamiltonian Neural Networks (HNNs) Greydanus et al. (2019), where the energy functional, i.e. the Hamiltonian, is inferred from data. This approach brings large improvements in predicting the dynamics over baseline neural networks which simply try to predict the change of phase-space coordinates in time. Here we extend this approach by learning and incorporating additional constraints due to further symmetries of the system. Coarsely speaking, finding symmetries corresponds to finding good coordinates. In classical mechanics this is achieved by performing canonical transformations and identifying cyclic coordinates, which reveal conserved quantities. The aim of this paper is to demonstrate that multiple conserved quantities can indeed be found automatically in this way, which has not been demonstrated before. Similar in spirit to learning the Hamiltonian, we formulate loss functions that enforce a representation in terms of cyclic coordinates and use these coordinates as the input to our Hamiltonian, differing from previous flow-based approaches that search for such coordinates (Bondesan & Lamacraft, 2019; Li et al., 2020).
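To make the HNN baseline concrete, the following is a minimal PyTorch sketch of the idea in Greydanus et al. (2019): a network parameterises a scalar Hamiltonian H(q, p), and the phase-space time derivatives are obtained from it via Hamilton's equations using automatic differentiation. The architecture (layer sizes, activations) is an illustrative assumption, not the configuration used in any cited work.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Learns a scalar Hamiltonian H(q, p); dynamics follow Hamilton's equations."""
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.H = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def time_derivatives(self, x):
        # x = (q, p); Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        x = x.clone().requires_grad_(True)
        H = self.H(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH.chunk(2, dim=-1)
        return torch.cat([dHdp, -dHdq], dim=-1)

# Training would match these predicted derivatives to observed ones:
model = HNN(dim=1)
x = torch.randn(8, 2)          # batch of (q, p) states
xdot_pred = model.time_derivatives(x)
```

Because the derivatives are generated by a single scalar H, energy conservation is built in by construction; the extension discussed in this paper adds further conserved quantities on top of this baseline.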
As a proof of principle, we find experimentally that this mechanism identifies underlying conserved quantities such as momentum and angular momentum, discovers the splitting into decoupled subsystems, and can determine the number of conserved quantities.
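The notion of enforcing cyclic coordinates through a loss can be sketched as follows. A coordinate is cyclic when the Hamiltonian does not depend on it, so its conjugate momentum is conserved; one can therefore penalise the gradient of a learned Hamiltonian with respect to designated coordinates. This is a hypothetical illustration of the idea, not the specific loss used in the paper.

```python
import torch

def cyclicity_loss(H_net, z, n_cyclic):
    """Penalise dependence of H on the first n_cyclic coordinates of z,
    so that their conjugate momenta are conserved (illustrative loss)."""
    z = z.clone().requires_grad_(True)
    H = H_net(z).sum()
    dH = torch.autograd.grad(H, z, create_graph=True)[0]
    return (dH[:, :n_cyclic] ** 2).mean()

# A Hamiltonian that ignores its first coordinate is exactly cyclic in it:
H_indep = lambda z: (z[:, 1:] ** 2).sum(dim=1, keepdim=True)
z = torch.randn(16, 4)
loss = cyclicity_loss(H_indep, z, n_cyclic=1)  # zero by construction
```

Driving such a term to zero forces the learned transformation to express the dynamics in coordinates where motion is confined to symmetry orbits.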

