NEUROMECHANICAL AUTOENCODERS: LEARNING TO COUPLE ELASTIC AND NEURAL NETWORK NONLINEARITY

Abstract

Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as "mechanical intelligence" or "morphological computation". In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures, which we refer to as neuromechanical autoencoders, we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials (cellular solids, in particular) as the morphological substrate. Just as deep neural networks provide flexible and massively parametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a "digital MNIST" task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.

1. INTRODUCTION

Mechanical intelligence, or morphological computation (Paul, 2006; Hauser et al., 2011), is the idea that the physical dynamics of an actuator may interact with a control system to effectively reduce the computational burden of solving the control task. Biological systems perform morphological computation in a variety of ways, from the compliance of digits in primate grasping (Jeannerod, 2009; Heinemann et al., 2015), to the natural frequencies of legged locomotion (Collins et al., 2005; Holmes et al., 2006; Ting & McKay, 2007), to dead fish being able to "swim" in vortices (Beal et al., 2006; Lauder et al., 2007; Eldredge & Pisani, 2008). Both early (Sims, 1994) and modern (Gupta et al., 2021) work have used artificial evolutionary methods to design mechanical intelligence, but it has remained difficult to design systems de novo that are comparable to biological systems that have evolved over millions of years. We ask: can we instead learn morphological computation using gradient descent?

Morphological computation requires that a physical system be capable of performing complex tasks using, e.g., elastic deformation. The mechanical system's nonlinear properties work in tandem with neural information processing so that challenging motor tasks require less computation. To learn an artificial mechanically-intelligent system, we must therefore be able to parameterize a rich space of mechanisms capable of implementing nonlinear physical "functions" that connect input forces or displacements to the desired output behaviors. There are various desiderata for such a mechanical design space: 1) it must contain a wide variety of structures with complex nonlinear elastic deformation patterns; 2) its parameters should be differentiable and of fixed cardinality; and 3) the designs should be easily realizable with standard manufacturing techniques and materials.

Figure 1: A schematic depiction of a neuromechanical autoencoder. A neural encoder is parameterized by θ, while a mechanical decoder has geometry (morphology) parameterized by ϕ. A task is sampled from a distribution and is fed into the neural network encoder. The neural network produces actuations which displace the mechanical structure to perform the task, in this case shape matching (Section 3.2). Given a task loss, θ and ϕ are optimized jointly by gradient descent.

These characteristics are achieved by mechanical metamaterials. Metamaterials are structured materials that have properties unavailable from natural materials. Although metamaterials are often discussed in the context of electromagnetic phenomena, there is substantial interest in the development of mechanical metamaterials in which geometric heterogeneity achieves unusual macroscopic behavior such as a negative Poisson's ratio (Bertoldi et al., 2010). In biological systems, morphological computation often takes the form of sophisticated nonlinear compliance and deformation, resulting in a physical system that is more robust and easier to control for a variety of tasks (Paul, 2006; Hauser et al., 2011). This type of behavior is typically not present in off-the-shelf robotic systems and is difficult to design a priori. Mechanical metamaterials, on the other hand, offer a platform for mechanically-intelligent systems using relatively accessible manufacturing techniques, such as 3-D printing. The mechanical metamaterials we explore in this paper are cellular solids: porous structures where different patterns of macroscopic pores can lead to different nonlinear deformation behaviors.
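To make the idea of a differentiable, fixed-cardinality pore representation concrete, the sketch below shows one common (hypothetical, illustrative) choice that is not necessarily the parameterization used in this work: describing a star-shaped pore boundary by a truncated Fourier series for its radius, so every pore has the same fixed-length parameter vector regardless of its shape.

```python
import numpy as np

def pore_radius(params, theta):
    """Radius of a star-shaped pore boundary at angles `theta`.

    params: [r0, a1, b1, a2, b2, ...] -- a mean radius plus Fourier
    coefficients. The map is smooth in `params` and has fixed
    cardinality no matter how the pore is shaped, which is what makes
    it amenable to gradient-based design. (Illustrative sketch only.)
    """
    r0, coeffs = params[0], params[1:]
    r = np.full_like(theta, r0)
    for k in range(len(coeffs) // 2):
        a, b = coeffs[2 * k], coeffs[2 * k + 1]
        r += a * np.cos((k + 1) * theta) + b * np.sin((k + 1) * theta)
    return r

# A circular pore of radius 0.3 perturbed into a four-lobed shape.
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
params = np.array([0.3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.05, 0.0])
r = pore_radius(params, theta)
boundary = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=-1)
```

Tiling a solid with many such pores, each with its own parameter vector, yields the kind of large but differentiable design space described above.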
By constructing a solid with a large number of such pores, and then parameterizing the pore shapes nonuniformly across the solid, it is possible to achieve a large design space of nonlinear mechanical structures while nevertheless having a differentiable representation of fixed cardinality. The key to modern machine learning has been the development of massively parametric, composable function approximators in the form of deep neural networks; cellular solids provide a natural physical analog and, as we show in this work, can also be learned with automatic differentiation.

To make progress towards the goal of learnable morphological computation, in this paper we combine metamaterials with deep neural networks into a framework we refer to as a neuromechanical autoencoder (NMA). While traditional mechanical metamaterials are designed for single tasks and actuations, here we propose designs that can solve problems drawn from a distribution over tasks, using a neural network to determine the appropriate actuations. The neural network "encoder" consumes a representation of the task (in this case, achieving a particular deformation) and nonlinearly transforms it into a set of linear actuations, which play the role of the latent encoding. These actuations then displace the boundaries of the mechanical metamaterial, inducing another nonlinear transformation due to the complex learned geometry of the pores; the resulting deformation corresponds to the "decoder". By using a differentiable simulator of cellular solids, we are able to learn end-to-end both the neural network parameters and the pore shapes so that they work in tandem. The resulting system exhibits morphological computation in that it learns to split the processing task across the neural network and the physical mechanism.

The paper is structured as follows.
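The joint optimization of θ and ϕ can be caricatured in a few lines. The sketch below is a toy, not the method itself: the elastic "decoder" is replaced by a simple analytic nonlinearity with a scalar morphology parameter `phi`, the encoder is a scalar gain `theta`, and the gradients are derived by hand rather than obtained from a differentiable simulator. All names and the specific loss are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

theta, phi = 0.1, 0.1  # neural "encoder" gain and "morphology" parameter
lr = 0.05

def loss_and_grads(theta, phi, t):
    """One NMA-style forward/backward pass on a toy problem.

    Encoder: task t -> actuation u (the latent code).
    Decoder: a stand-in nonlinear "mechanism" y = phi * tanh(u).
    Task loss: squared error between the deformation y and target t.
    """
    u = theta * t                # encoder
    y = phi * np.tanh(u)         # toy mechanical decoder
    err = y - t
    dL_dy = 2.0 * err
    g_phi = dL_dy * np.tanh(u)                          # dL/dphi
    g_theta = dL_dy * phi * (1.0 - np.tanh(u) ** 2) * t  # dL/dtheta
    return err ** 2, g_theta, g_phi

losses = []
for step in range(2000):
    t = rng.uniform(-1.0, 1.0)   # sample a task from the distribution
    L, g_t, g_p = loss_and_grads(theta, phi, t)
    theta -= lr * g_t            # joint gradient descent on both
    phi -= lr * g_p              # the controller and the morphology
    losses.append(L)
```

In the actual framework, the same loop structure would apply, but the decoder gradient comes from differentiating through an elasticity simulation and ϕ parameterizes pore geometry rather than a single scalar.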
We first introduce the abstract setup for the neuromechanical autoencoder, followed by a brief description of our mechanics model, geometry representation, and differentiable simulation. Although important for the success of our method, the details of our dis-

