MACHINE LEARNING FORCE FIELDS WITH DATA COST AWARE TRAINING

Abstract

Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation, which finds widespread applications in chemistry and biomedical research. Even for the most data-efficient MLFF models, reaching chemical accuracy can require hundreds of frames of force and energy labels generated by expensive quantum mechanical algorithms, which may scale as O(n^3) to O(n^7), with n being the number of basis functions used, typically proportional to the number of atoms. To address this issue, we propose a multi-stage computational framework, ASTEROID, which enjoys low training data generation cost without significantly sacrificing MLFFs' accuracy. Specifically, ASTEROID leverages a combination of both large cheap inaccurate data and small expensive accurate data. The motivation behind ASTEROID is that inaccurate data, though incurring large bias, can help capture the sophisticated structures of the underlying force field. Therefore, we first train an MLFF model on a large amount of inaccurate training data, employing a bias-aware loss function to prevent the model from overfitting to the bias of the inaccurate training data. We then fine-tune the obtained model using a small amount of accurate training data, which preserves the knowledge learned from the inaccurate training data while significantly improving the model's accuracy. Moreover, we propose a variant of ASTEROID based on score matching for the setting where the inaccurate training data are unlabelled. Extensive experiments on MD simulation datasets show that ASTEROID can significantly reduce data generation costs while improving the accuracy of MLFFs.

1. INTRODUCTION

Molecular dynamics (MD) simulation is a key technology driving scientific discovery in fields such as chemistry, biophysics, and materials science (Alder & Wainwright, 1960; McCammon et al., 1977). By simulating the dynamics of molecules, important macro statistics such as the folding probability of a protein (Tuckerman, 2010) or the density of new materials (Varshney et al., 2008) can be estimated. These macro statistics are an essential part of many important applications such as structure-driven drug design (Hospital et al., 2015) and battery development (Leung & Budzien, 2010). Most MD simulation techniques share a common iterative structure: at each step, the forces on each atom in the molecule are calculated and then used to advance the molecule to its next state. The fundamental challenge of MD simulation is how to efficiently calculate the forces at each iteration. An exact calculation requires solving the Schrödinger equation, which is not feasible for many-body systems (Berezin & Shubin, 2012). Instead, approximation methods such as the Lennard-Jones potential (Johnson et al., 1993), Density Functional Theory (DFT, Kohn (2019)), or Coupled Cluster Single-Double-Triple (CCSD(T), Scuseria et al. (1988)) are used. CCSD(T) is seen as the gold standard for force calculation, but is computationally expensive. In particular, CCSD(T) has complexity O(n^7) with respect to the number of basis functions used, along with a huge storage requirement (Chen et al., 2020). To accelerate MD simulation while maintaining high accuracy, machine learning based force fields have been proposed. These machine learning models take a molecular configuration as input and predict the forces on each atom in the molecule. These models have been successful, producing force fields with moderate accuracy while drastically reducing computation time (Chmiela et al., 2017).
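The force-then-update loop described above can be sketched with a standard velocity Verlet integrator. The harmonic force function below is an illustrative stand-in for the expensive quantum mechanical force calculation; all names and parameters here are our own, not from the paper:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
    """Advance an MD system: compute forces, then update positions and velocities."""
    forces = force_fn(pos)
    for _ in range(n_steps):
        # Half-step velocity update, then full position update.
        vel = vel + 0.5 * dt * forces / mass
        pos = pos + dt * vel
        # Recompute forces at the new configuration -- in real MD, this is the
        # expensive step (e.g. DFT or CCSD(T)) that an MLFF would replace.
        forces = force_fn(pos)
        vel = vel + 0.5 * dt * forces / mass
    return pos, vel

# Toy example: one particle in a harmonic well, force = -k * x with k = 4.
pos0, vel0 = np.array([1.0]), np.array([0.0])
pos, vel = velocity_verlet(pos0, vel0, lambda x: -4.0 * x,
                           mass=1.0, dt=0.01, n_steps=10)
```

Because every iteration calls `force_fn`, replacing a quantum mechanical force routine with a fast learned model directly cuts the per-step cost of the simulation.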
Built upon the success of machine learning force fields, deep learning techniques for force fields have been developed, resulting in highly accurate force fields parameterized by large neural networks (Gasteiger et al., 2021; Batzner et al., 2022). Despite this empirical success, a key drawback is rarely discussed in existing literature: in order to train state-of-the-art machine learning force field models, a large amount of costly training data must be generated. For example, to train a model at the CCSD(T) level of accuracy, at least a thousand CCSD(T) calculations must be done to construct the training set. This is computationally expensive due to the method's O(n^7) cost. A natural solution to this problem is to train on fewer data points. However, if the number of training points is decreased, the accuracy of the learned force fields quickly deteriorates. In our experiments, we empirically find that the prediction error and the number of training points roughly follow a power law relationship, with prediction error ∼ 1/(number of training points). This can be seen in Figure 1a, where prediction error and train set size are observed to have a linear relationship with a slope of -1 when plotted on a log scale. Another option is to train the force field model on less accurate but computationally cheap reference forces calculated using DFT (Kohn, 2019) or empirical force field methods (Johnson et al., 1993). However, these algorithms introduce undesirable bias into the force labels, meaning that the trained models will have poor performance. This phenomenon can be seen in Figure 1b, where models trained on large quantities of DFT reference forces are shown to perform poorly relative to force fields trained on moderate quantities of CCSD(T) reference forces. Therefore, current methodologies are not sufficient for training force field models in low resource settings, as training on either small amounts of accurate data (i.e. from CCSD(T)) or large amounts of inaccurate data (i.e. from DFT or empirical force fields) will result in inaccurate force fields.

To address this issue, we propose to use both large amounts of inaccurate force field data (i.e. DFT) and small amounts of accurate data (i.e. CCSD(T)) to significantly reduce the cost of the data needed to achieve highly accurate force fields. Our motivation is that computationally cheap inaccurate data, though incurring large bias, can help capture the sophisticated structures of the underlying force field. Moreover, if treated properly, we can further reduce the bias of the obtained model by taking advantage of the accurate data. More specifically, we propose a multi-stage computational framework, datA coST awarE tRaining of fOrce fIelDs (ASTEROID). In the first stage, small amounts of accurate data are used to identify the bias of force labels in a large but inaccurate dataset. In the second stage, the model is trained on the large inaccurate dataset with a bias-aware loss function. Specifically, the loss function assigns smaller weights to data points with larger bias, suppressing the effect of label noise on training. This inaccurately trained model serves as a warm start for the third stage, where the force field model is fine-tuned on the small and accurate dataset. Together, these stages allow the model to learn



Figure 1: (a) Log-log plot of the number of training points versus the prediction error for deep force fields. (b) Prediction error on CCSD labelled molecules for force fields trained on large amounts of DFT reference forces (100,000 configurations) and moderate amounts of CCSD reference forces (1,000 configurations). In both cases the model architecture used is GemNet (Gasteiger et al., 2021).
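The three stages described above can be sketched on a toy regression problem. The exponential weighting scheme, the closed-form ridge model standing in for a neural force field, and the oracle bias estimates are all illustrative assumptions of ours, not the paper's exact formulation:

```python
import numpy as np

def bias_aware_weights(bias_estimates, temperature=1.0):
    """Down-weight inaccurate labels with larger estimated bias (illustrative scheme)."""
    return np.exp(-np.abs(bias_estimates) / temperature)

def weighted_ridge(X, y, weights, lam=1e-3):
    """Closed-form weighted ridge regression as a stand-in for MLFF training."""
    W = np.diag(weights)
    d = X.shape[1]
    return np.linalg.solve(X.T @ W @ X + lam * np.eye(d), X.T @ W @ y)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])          # ground-truth "force field"

# Large cheap dataset with systematic label bias (e.g. DFT-level labels).
X_cheap = rng.normal(size=(200, 3))
bias = rng.normal(scale=0.5, size=200)
y_cheap = X_cheap @ true_w + bias

# Stage 1: estimate per-point bias; here we use the true bias as an oracle proxy.
# Stage 2: train on the cheap data with bias-aware weights (warm start).
w_warm = weighted_ridge(X_cheap, y_cheap, bias_aware_weights(bias))

# Stage 3: fine-tune on a small accurate dataset (e.g. CCSD(T)-level labels),
# starting from the warm-started model and fitting only the residual.
X_acc = rng.normal(size=(20, 3))
y_acc = X_acc @ true_w
w_final = w_warm + weighted_ridge(X_acc, y_acc - X_acc @ w_warm, np.ones(20))
```

The key design point mirrored here is that points with large estimated bias contribute little to the warm-start fit, so the cheap data shapes the model without dominating it, and the small accurate dataset only needs to correct a residual rather than learn the force field from scratch.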

