FIDELITY-BASED DEEP ADIABATIC SCHEDULING

Abstract

Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy-to-prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time point the evolution is too rapid, the system has a high probability of transferring to a higher-energy state, which does not represent a solution to the problem. On the other hand, an evolution that is too slow leads to a loss of computation time and increases the probability of failure due to decoherence. In this work, we train deep neural models to produce optimal schedules that are conditioned on the problem at hand. We consider two types of problem representation: the Hamiltonian form and the Quadratic Unconstrained Binary Optimization (QUBO) form. A novel loss function that scores schedules according to their approximated success probability is introduced. We benchmark our approach on random QUBO problems, Grover search, 3-SAT, and MAX-CUT problems, and show that our approach outperforms, by a sizable margin, the linear schedule as well as alternative approaches that were very recently proposed.

1. INTRODUCTION

Many of the algorithms developed for quantum computing employ the quantum circuit model, in which a quantum state involving multiple qubits undergoes a series of invertible transformations. However, an alternative model, called Adiabatic Quantum Computation (AQC) (Farhi et al., 2000; McGeoch, 2014), is used in some of the leading quantum computers, such as those manufactured by D-Wave Systems (Boixo et al., 2014). AQC algorithms can achieve quantum speedups over classical algorithms (Albash & Lidar, 2018) and are polynomially equivalent to the quantum circuit model (Aharonov et al., 2008). In AQC, given a computational problem Q, e.g., a specific instance of a 3-SAT problem, a physical system is slowly evolved until a specific quantum state that represents a proper solution is reached. Each AQC run involves three components:

1. An initial Hamiltonian H_b, chosen such that its ground state (in matrix terms, the eigenvector of H_b with the smallest eigenvalue) is easy to prepare and there is a large spectral gap. This is typically independent of the specific instance of Q.

2. A final Hamiltonian H_p, designed such that its ground state corresponds to the solution of the problem instance Q.

3. An adiabatic schedule, which is a strictly increasing function s(t) that maps a point in time 0 ≤ t ≤ t_f, where t_f is the total computation time, onto the interval [0, 1] (i.e., s(0) = 0, s(t_f) = 1, and s(t_1) < s(t_2) iff t_1 < t_2).

These three components define a single time-dependent Hamiltonian H(t), which can be seen as an algorithm for solving Q:

H(t) = (1 − s(t)) · H_b + s(t) · H_p    (1)

At the end of the adiabatic calculation, the quantum state is measured. The square of the overlap between the quantum state and the ground state of the final Hamiltonian is the fidelity, and represents the probability of success in finding the correct solution. An AQC algorithm that is evolved over an insufficient time period (a schedule that is too fast) will have low fidelity. Finding the optimal schedule, i.e., the one that leads to a high fidelity while keeping the time complexity of the algorithm minimal, is therefore of great value. However, for most problems, an analytical solution for the optimal schedule does not exist (Albash & Lidar, 2018). Attempts were made to optimize specific aspects of the adiabatic schedule by using iterative methods (Zeng et al., 2015) or by direct derivations (Susa et al., 2018). Performance was evaluated by examining characteristics of the resulting dynamics (e.g., the minimum energy gap), and no improvement was demonstrated on the full quantum calculation. Previous attempts to employ AI for the task of finding the optimal schedule have relied on Reinforcement Learning (Lin et al., 2020; Chen et al., 2020). While these methods were able to find schedules that are better than the linear path, they are limited to either learning one path for a family of problems (without considering the specific instance) or to rerunning the AQC of a specific instance Q multiple times in order to optimize the schedule.
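The interpolation of Eq. (1) and the resulting fidelity can be illustrated numerically for a tiny system. The following sketch is not the paper's method; it assumes a hypothetical 2-qubit instance (a transverse-field H_b and a diagonal toy H_p) and simulates the evolution under a given schedule to estimate the success probability:

```python
import numpy as np
from scipy.linalg import expm

n = 2           # two qubits, illustrative only
dim = 2 ** n

# Transverse-field initial Hamiltonian: its ground state is the
# uniform superposition, which is easy to prepare.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H_b = np.zeros((dim, dim), dtype=complex)
for q in range(n):
    op = np.array([[1.0 + 0j]])
    for k in range(n):
        op = np.kron(op, sx if k == q else np.eye(2))
    H_b -= op

# Toy problem Hamiltonian (assumed): diagonal, ground state |11> is the answer.
H_p = np.diag([3.0, 2.0, 1.0, 0.0]).astype(complex)

def evolve(schedule, t_f, steps=2000):
    """Step-wise integration of H(t) = (1 - s(t)) H_b + s(t) H_p."""
    dt = t_f / steps
    psi = np.linalg.eigh(H_b)[1][:, 0]          # ground state of H_b
    for j in range(steps):
        s = schedule((j + 0.5) * dt / t_f)      # s evaluated at mid-step
        H = (1 - s) * H_b + s * H_p
        psi = expm(-1j * H * dt) @ psi
    return psi

def fidelity(psi, H_p):
    """Squared overlap with the ground state of H_p: success probability."""
    ground = np.linalg.eigh(H_p)[1][:, 0]
    return abs(ground.conj() @ psi) ** 2

psi = evolve(lambda u: u, t_f=20.0)   # linear schedule s(t) = t / t_f
print(fidelity(psi, H_p))             # high for sufficiently slow evolution
```

Replacing the `lambda u: u` argument with a different monotone function is all that is needed to compare schedules on this toy instance.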
In our work, supervised learning is employed in order to generalize from a training set of problems and their optimal paths to new problem instances. Training is done offline, and the schedule our neural model outputs is a function of the specific problem instance. The problem instance is encoded in our model either based on the final Hamiltonian H_p or directly based on the problem. The suggested neural models are tested on several problem types: Grover search, 3-SAT, MAX-CUT, and randomized QUBO problems. We show that the evolution schedules suggested by our model greatly outperform the naive linear evolution schedule, as well as the schedules provided by the recent RL methods, and allow for much shorter total evolution times.

2. BACKGROUND

The goal of the scheduling task is to find a schedule s(t) that maximizes the probability of getting the correct answer for instance Q, using H_b and H_p on an adiabatic quantum computer. The solution to Q is encoded as the lowest-energy eigenstate of H_p. In order to reach the solution state with high probability, the system must be evolved "sufficiently slowly". The adiabatic theorem (Roland & Cerf, 2002; Albash & Lidar, 2018; Rezakhani et al., 2009) is used to analyze how fast this evolution can be. It states that the probability to reach the desired state at the end of the adiabatic calculation is 1 − ε² for ε ≪ 1 if

|⟨E_1(t)| (d/dt)H(t) |E_0(t)⟩| / g²(t) ≤ ε,

where the Dirac notation (Tumulka, 2009) is used, |E_0(t)⟩ and |E_1(t)⟩ are the ground state and first excited state of H(t), and g(t) is the spectral gap between their energies. See Appendix A for the conventional matrix notation.
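For a small instance, the left-hand side of the adiabatic condition can be evaluated exactly by diagonalization, which shows where along the interpolation a schedule must slow down. The sketch below assumes the same hypothetical 2-qubit H_b and H_p as above (not the paper's instances) and scans the ratio over interpolation points s, using dH/ds = H_p − H_b:

```python
import numpy as np

# Hypothetical toy 2-qubit instance, for illustration only.
sx = np.array([[0, 1], [1, 0]], dtype=float)
H_b = -(np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))
H_p = np.diag([3.0, 2.0, 1.0, 0.0])

def adiabatic_ratio(s):
    """|<E_1(s)| dH/ds |E_0(s)>| / g(s)^2 at interpolation point s,
    where dH/ds = H_p - H_b for the interpolation of Eq. (1)."""
    H = (1 - s) * H_b + s * H_p
    w, v = np.linalg.eigh(H)                  # ascending eigenvalues
    e0, e1 = v[:, 0], v[:, 1]
    gap = w[1] - w[0]                         # spectral gap g(s)
    elem = abs(e1 @ (H_p - H_b) @ e0)
    return elem / gap ** 2

# The schedule must advance slowly where this ratio peaks
# (i.e., where the gap is smallest).
ratios = [adiabatic_ratio(s) for s in np.linspace(0.01, 0.99, 99)]
print(max(ratios))
```

For realistic problem sizes this exact diagonalization is infeasible, which is precisely why learned, instance-conditioned schedules are attractive.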



Let t_f be the total calculation time, and let s(t) be an evolution schedule such that s(0) = 0, s(t_f) = 1. Since H(t) depends on t only through s(t), we have (d/dt)H(t) = (ds/dt) · (H_p − H_b). Applying the adiabatic condition for s(t), we get

|⟨E_1(s)| H_p − H_b |E_0(s)⟩| · (ds/dt) / g²(s) ≤ ε.

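One classical baseline implied by this condition is the "local" schedule that saturates it, ds/dt = ε g²(s) / |⟨E_1(s)| H_p − H_b |E_0(s)⟩| (in the spirit of Roland & Cerf, 2002). The sketch below, again assuming the hypothetical toy instance rather than the paper's benchmarks, integrates dt = ds / (ds/dt) on a grid to obtain t(s) and then inverts it to a schedule s(t):

```python
import numpy as np

# Hypothetical toy 2-qubit instance, for illustration only.
sx = np.array([[0, 1], [1, 0]], dtype=float)
H_b = -(np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx))
H_p = np.diag([3.0, 2.0, 1.0, 0.0])

def rate(s, eps=0.1):
    """ds/dt that saturates the adiabatic condition at point s."""
    H = (1 - s) * H_b + s * H_p
    w, v = np.linalg.eigh(H)
    gap = w[1] - w[0]
    elem = abs(v[:, 1] @ (H_p - H_b) @ v[:, 0])
    return eps * gap ** 2 / max(elem, 1e-12)   # guard against elem = 0

# Accumulate t(s) = integral of ds / rate(s) over a grid of s values.
grid = np.linspace(0.0, 1.0, 1001)
mids = 0.5 * (grid[:-1] + grid[1:])
dt = np.diff(grid) / np.array([rate(m) for m in mids])
t = np.concatenate([[0.0], np.cumsum(dt)])
t_f = t[-1]                                    # resulting total time

def schedule(time):
    """Invert t(s) by interpolation to obtain s(t)."""
    return np.interp(time, t, grid)
```

The resulting schedule advances quickly where the gap is large and slowly near the minimum gap; it relies on exact diagonalization, so, unlike the learned schedules studied in this work, it does not scale beyond small systems.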