GATED NEURAL ODES: TRAINABILITY, EXPRESSIVITY AND INTERPRETABILITY

Abstract

Understanding how the dynamics in biological and artificial neural networks implement the computations required for a task is a salient open question in machine learning and neuroscience. In particular, computations requiring complex memory storage and retrieval pose a significant challenge for these networks to implement or learn. Recently, a family of models described by neural ordinary differential equations (nODEs) has emerged as powerful dynamical neural network models capable of capturing complex dynamics. Here, we extend nODEs by endowing them with adaptive timescales using gating interactions. We refer to these as gated neural ODEs (gnODEs). Using a task that requires memory of continuous quantities, we demonstrate the inductive bias of the gnODEs to learn (approximate) continuous attractors. We further show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability, even allowing explicit visualization of the structure of learned attractors. We introduce a novel measure of expressivity which probes the capacity of a neural network to generate complex trajectories. Using this measure, we explore how the phase-space dimension of the nODEs and the complexity of the function modeling the flow field contribute to expressivity. We find that a more complex function for modeling the flow field allows a lower-dimensional nODE to capture a given target dynamics. Finally, we demonstrate the benefit of gating in nODEs on several real-world tasks.

1. INTRODUCTION

How can the dynamical motifs exhibited by an artificial or a biological network implement the computations required for a task? This is a long-standing question in computational neuroscience and machine learning (Vyas et al., 2020; Khona & Fiete, 2022). Recurrent neural networks (RNNs) have often been used to probe this question (Mante et al., 2013; Vyas et al., 2020; Driscoll et al., 2022), as they are flexible dynamical systems that can be easily trained (Rumelhart et al., 1986) to perform computational tasks. RNNs, particularly those that incorporate gating interactions (Hochreiter & Schmidhuber, 1997; Cho et al., 2014), have been wildly successful in solving complex real-world tasks (Jozefowicz et al., 2015). While RNN models provide a link between dynamics and computation, how their (typically) high-dimensional dynamics implement computation remains hard to interpret. On this note, we may turn to neural ordinary differential equations (nODEs), a class of dynamical models with a velocity field parametrized by a deep neural network (DNN), which can potentially implement more complex computations in lower dimensions than classical RNNs (Chen et al., 2018; Kidger, 2022).¹ This increased complexity in lower latent/phase-space dimensions subsequently helps in extracting interpretable, effective low-dimensional dynamics that may underlie a dataset or task (Kim et al., 2021). Despite their promise, nODEs remain under-explored in the following crucial aspects. Trainability: Can we improve the performance of nODEs by introducing gating interactions (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) to tame gradients in dynamical systems? Expressivity: How does



¹ By classical RNNs, we mean the form of RNNs often considered in the neuroscience, physics, and cognitive-science literature, where the interactions between units are additive and the interaction strengths are represented by a matrix (McCulloch & Pitts, 1943; Sompolinsky et al., 1988; Elman, 1990; Vogels et al., 2005; Sussillo & Abbott, 2009; Song et al., 2016; Yang et al., 2019).
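To make the gating idea concrete, the following is a minimal sketch (not the paper's implementation) of a gated-nODE velocity field: a sigmoidal gate multiplicatively rescales a leaky flow field, so that each unit acquires a state-dependent, adaptive timescale. All dimensions, weight scales, and function names here are illustrative assumptions, and forward Euler stands in for a proper ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 16  # phase-space and hidden dimensions (illustrative choices)

def mlp_params(d_in, d_hid, d_out):
    # small random MLP parameterizing a component of the flow field
    return (rng.normal(size=(d_hid, d_in)) / np.sqrt(d_in), np.zeros(d_hid),
            rng.normal(size=(d_out, d_hid)) / np.sqrt(d_hid), np.zeros(d_out))

def mlp(x, params):
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ x + b1) + b2

flow_params = mlp_params(d, h, d)  # models the flow f(x), as in a nODE
gate_params = mlp_params(d, h, d)  # models the gate g(x), the gnODE addition

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gnode_velocity(x):
    # gate in (0, 1)^d rescales the leaky flow -x + f(x) elementwise,
    # giving each unit an adaptive, state-dependent timescale
    gate = sigmoid(mlp(x, gate_params))
    return gate * (-x + mlp(x, flow_params))

# forward Euler rollout of the gated neural ODE
x, dt = rng.normal(size=d), 0.01
for _ in range(1000):
    x = x + dt * gnode_velocity(x)
```

A closed gate (g ≈ 0) freezes the corresponding unit, which is how gating can slow dynamics enough to hold continuous quantities in memory; setting the gate identically to one recovers a standard leaky nODE.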

