KOOPMAN OPERATOR LEARNING FOR ACCELERATING QUANTUM OPTIMIZATION AND MACHINE LEARNING

Abstract

Finding efficient optimization methods plays an important role in quantum optimization and quantum machine learning on near-term quantum computers. While backpropagation on classical computers is computationally efficient, obtaining gradients on quantum computers is not, because the computational cost scales linearly with the number of parameters and measurements. In this paper, we connect Koopman operator theory, which has been successful in predicting nonlinear dynamics, with natural gradient methods in quantum optimization. We propose a data-driven approach using Koopman operator learning to accelerate quantum optimization and quantum machine learning. We develop two new families of methods, the sliding window dynamic mode decomposition (DMD) and the neural DMD, for efficiently updating parameters on quantum computers. We show that our methods can predict gradient dynamics on quantum computers and accelerate both the variational quantum eigensolver used in quantum optimization and quantum machine learning. We further implement our Koopman operator learning algorithms on a real IBM quantum computer and demonstrate their practical effectiveness.

1. INTRODUCTION

There has been rapid development of quantum technologies and quantum computation in recent years, with a number of efforts devoted to demonstrating quantum advantages and speedups. Quantum optimization (Moll et al., 2018) and quantum machine learning (QML) (Biamonte et al., 2017), as important applications of quantum technologies, have received increasing interest. The Variational Quantum Eigensolver (VQE) (Peruzzo et al., 2014b; Tilly et al., 2022), a quantum optimization algorithm, has been developed and applied to problems in high energy physics (Klco et al., 2018; Rinaldi et al., 2022), condensed matter physics (Wecker et al., 2015), and quantum chemistry (Peruzzo et al., 2014a). Variational Quantum Algorithms (VQAs) (Cerezo et al., 2021) such as the Quantum Approximate Optimization Algorithm (QAOA) (Farhi et al., 2014; Harrigan et al., 2021) have been applied to the max-cut problem. A recent experiment on 289 qubits demonstrated a powerful VQA application to classical optimization by benchmarking against a variety of classical algorithms (Ebadi et al., 2022). QML has also been developed for various tasks including supervised learning (Havlíček et al., 2019), unsupervised learning (Kerenidis et al., 2019), and reinforcement learning (Dong et al., 2008). Theoretical advantages of quantum machine learning have been investigated (Huang et al., 2022b; Liu et al., 2021; 2022), and experiments on real quantum computers have demonstrated encouraging progress (Huang et al., 2022a; Rudolph et al., 2022). In the noisy intermediate-scale quantum (NISQ) era (Preskill, 2018), owing to the noisy nature of current quantum computer architectures, hybrid classical-quantum schemes have been proposed for quantum optimization and quantum machine learning and have become a prominent approach.
The key idea of the hybrid approach is to perform optimization and machine learning on parameterized quantum circuits with quantum features, while the circuit parameters are updated on classical computers. In classical machine learning, backpropagation only requires vector-Jacobian products, which share the same complexity as the forward evaluation. Obtaining gradients under the hybrid scheme is much more challenging, for two reasons: (1) gradient calculation typically scales linearly in the number of parameters as O(n_params); and (2) the quantum nature of the gradient itself entails sampling over repeated measurements. Despite extensive research on quantum optimization and quantum machine learning in simulation, the implementation of gradient-based methods on real quantum computers is computationally inefficient, which limits their applications in practice. Developing scalable and efficient optimization methods for quantum optimization applications and quantum machine learning tasks is an important open problem in the field. In this work, we propose Koopman operator learning for accelerating quantum optimization and QML. Koopman operator theory is a powerful framework for understanding and predicting nonlinear dynamics through linear dynamics embedded in a higher-dimensional space (Mezic, 1994; Mezić & Banaszuk, 2004; Mezic, 2005; Rowley et al., 2009; Brunton et al., 2021). By viewing parameter optimization on quantum computers as a nonlinear dynamical evolution in parameter space, we connect gradient dynamics in quantum optimization to Koopman operator theory. In particular, the quantum natural gradient provides a natural embedding of the original parameters, through the quantum-computer parameterization, into a higher-dimensional space related to linear imaginary-time evolution.
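The O(n_params) scaling above can be illustrated with a minimal sketch of the parameter-shift rule, a standard way to obtain gradients of quantum circuits; here `expectation` is a hypothetical classical stand-in for a measured expectation value, which on hardware would itself require many repeated measurement shots per call:

```python
import numpy as np

def expectation(theta):
    # Stand-in for a quantum expectation value <H>(theta); on a real device
    # each call is estimated from repeated measurements (shots).
    return np.cos(theta).sum()

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Parameter-shift rule: each partial derivative costs two circuit
    # evaluations, so the total cost scales as O(n_params).
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = shift
        grad[i] = (f(theta + e) - f(theta - e)) / 2.0
    return grad

theta = np.array([0.1, 0.2, 0.3])
g = parameter_shift_grad(expectation, theta)  # 2 * 3 = 6 evaluations
```

For this toy `expectation` the rule is exact, recovering the analytic gradient -sin(theta) componentwise.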
We develop new Koopman operator learning algorithms for quantum optimization and QML, including the sliding window dynamic mode decomposition (SW-DMD) and neural-network-based DMD, which learns the Koopman embedding via a neural network parameterization. Our approach is data-driven and based on only a few recorded gradient steps, so the cost of prediction does not scale with n_params. Our methods enable efficient learning of gradient dynamics for accelerating quantum optimization and quantum machine learning. We conduct experiments both in numerical simulation and on a real quantum computer. We first demonstrate our Koopman operator learning algorithms for VQE, an important application in quantum optimization. We test the methods with the natural gradient and Adam (Kingma & Ba, 2014b) optimizers on quantum Ising model simulations, and demonstrate their success on a real IBM quantum computer with the quasi-gradient-based Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer (Spall, 1992). Finally, we apply our methods to accelerate QML on the MNIST dataset.
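As a rough illustration of the underlying idea (a plain DMD fit, not the paper's exact SW-DMD algorithm), a linear propagator estimated from a short window of recorded parameter snapshots can extrapolate future optimization steps without further gradient evaluations:

```python
import numpy as np

def dmd_predict(snapshots, n_future):
    # snapshots: (n_params, n_steps) matrix whose columns are parameter
    # vectors recorded over a few gradient-descent steps, ordered in time.
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    A = Y @ np.linalg.pinv(X)   # best-fit linear propagator: x_{t+1} ~ A x_t
    x = snapshots[:, -1]
    preds = []
    for _ in range(n_future):
        x = A @ x               # roll the linear model forward in time
        preds.append(x)
    return np.stack(preds, axis=1)

# Toy usage: parameters decaying geometrically toward an optimum at zero
# follow exactly linear dynamics, so DMD predicts them perfectly.
t = np.arange(6)
snaps = np.vstack([0.9 ** t, 0.8 ** t])  # each row: one parameter's trajectory
future = dmd_predict(snaps, 3)           # predicts steps 6, 7, 8
```

Real gradient dynamics are nonlinear, which is why the paper lifts them to a higher-dimensional space (via sliding windows or a learned neural embedding) before fitting the linear model.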

2. RELATED WORK

Koopman operators. Koopman operator theory (Koopman, 1931; v. Neumann, 1932) was first proposed by Koopman and von Neumann in the early 1930s to understand dynamical systems. Dynamic mode decomposition (DMD) (Schmid, 2010) was developed to learn the Koopman operator under a linear dynamics assumption on the observed data. Later, more advanced methods such as extended DMD based on time-delay embedding (Brunton et al., 2017; Arbabi & Mezic, 2017; Kamb et al., 2020; Tu et al., 2014; Brunton et al., 2016), kernel methods (Baddoo et al., 2022), and dictionary learning (Li et al., 2017) were introduced to go beyond the linear dynamics assumption and achieved better performance. Recently, machine learning methods were integrated into Koopman operator learning, where neural networks learn the mapping to a high-dimensional space in which the dynamics become linear (Lusch et al., 2018; Li et al., 2019; Azencot et al., 2020; Rice et al., 2020). These machine-learning-based Koopman operator methods were shown to learn nonlinear differential equation dynamics successfully. Beyond predicting nonlinear dynamics, Koopman operator theory has recently been applied to optimize neural network training (Dogra & Redman, 2020; Tano et al., 2020) and pruning (Redman et al., 2021). These works view the optimization process of neural networks as a nonlinear dynamical evolution and use dynamic mode decomposition to predict future parameter updates. In a more empirical study, Sinha et al. (2017) trained a convolutional neural network (CNN) to predict the future weights of neural networks trained on standard vision tasks. Besides important applications to classical systems, Koopman operator theory has natural connections to quantum mechanics. Recently, researchers have considered Koopman operator theory for quantum control (Goldschmidt et al., 2021) and for predicting the evolution of one-particle quantum systems (Klus et al., 2022).
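As a minimal illustration of the time-delay embedding mentioned above, a scalar signal whose one-step map is not linear in the original coordinate can become exactly linear after stacking delayed copies into a Hankel matrix, at which point plain DMD applies (a sketch, not any cited method's implementation):

```python
import numpy as np

def hankel_embed(series, delays):
    # Time-delay (Hankel) embedding: stack `delays` shifted copies of a
    # scalar series, lifting it so a linear model can capture dynamics that
    # are not linear in the original single coordinate.
    n = len(series) - delays + 1
    return np.stack([series[i:i + n] for i in range(delays)])

# A single cosine has no linear one-step map in one coordinate, but with two
# delays the shift map becomes an exact linear rotation, which DMD recovers.
s = np.cos(0.3 * np.arange(50))
H = hankel_embed(s, 2)
X, Y = H[:, :-1], H[:, 1:]
A = Y @ np.linalg.pinv(X)
eigvals = np.linalg.eigvals(A)  # lie on the unit circle at angles +/-0.3 rad
```

The eigenvalues of the fitted propagator encode the oscillation frequency, which is the sense in which delay coordinates expose linear Koopman structure.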
Since quantum mechanical systems provide a natural high-dimensional Hilbert space through the wave function, the theory has also been considered for embedding classical equations for learning and solving differential equations (Lin et al., 2022; Giannakis et al., 2022).

