ON THE UNIVERSAL APPROXIMABILITY AND COMPLEXITY BOUNDS OF DEEP LEARNING IN HYBRID QUANTUM-CLASSICAL COMPUTING

Abstract

With the number of quantum bits in quantum computers continuously increasing, there is growing interest in exploring applications that can harness their power. Recently, several attempts were made to implement neural networks, known to be computationally intensive, in a hybrid quantum-classical computing scheme. While encouraging results have been shown, two fundamental questions need to be answered: (1) whether neural networks in hybrid quantum-classical computing can leverage quantum power while approximating any function within a given error bound, i.e., universal approximability; and (2) how these neural networks compare with their counterparts on a classical computer in terms of representation power. This work sheds light on both questions from a theoretical perspective.

1. INTRODUCTION

Quantum computing has been rapidly evolving (e.g., IBM (2020) recently announced plans to debut a quantum computer with 1,121 quantum bits (qubits) in 2023), but the development of quantum applications is far behind; in particular, it is still unclear which applications can take quantum advantage and how. Deep learning, one of the most prevalent applications, is well known to be computation-intensive, and therefore its backbone, neural network computation, is regarded as an important task that could potentially take quantum advantage. Recent works (Francesco et al., 2019; Tacchino et al., 2020; Jiang et al., 2020) have demonstrated that shallow neural networks with limited functionality can be implemented directly on quantum computers without involving classical computers; but, as pointed out by Broughton et al. (2020), near-term Noisy Intermediate-Scale Quantum (NISQ) devices can hardly disentangle and generalize data in general applications using quantum computers alone. Recently, Google (2020) put forward a library for hybrid quantum-classical neural networks, attracting attention from both industry and academia to accelerate quantum deep learning. In a hybrid quantum-classical computing scheme, quantum computers act as hardware accelerators, working together with classical computers to speed up neural network computation. Incorporating classical computers makes it possible to conduct operations that are hard or costly to implement on quantum computers; however, it incurs high data-communication costs at the interface between quantum and classical computers. Therefore, instead of continuous communication during execution, a better practice is a "prologue-acceleration-epilogue" scheme: the classical computer prepares data in the prologue and post-processes data in the epilogue, while only the quantum computer is active during the acceleration stage, which performs the main computations.
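As an illustrative sketch (not the implementation of any particular library), the prologue-acceleration-epilogue flow can be mimicked in plain NumPy, with the quantum stage simulated classically as a unitary acting on a state vector; all function names below are hypothetical:

```python
import numpy as np

def prologue(x):
    """Classical prologue: encode a real feature vector into the
    amplitudes of an n-qubit state (amplitude encoding)."""
    state = np.asarray(x, dtype=complex)
    return state / np.linalg.norm(state)

def quantum_acceleration(state, n_qubits):
    """Quantum stage (simulated here): apply a Hadamard gate to every
    qubit, i.e., the unitary H^{(x)n} on the 2^n-dim state vector."""
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    U = np.array([[1]], dtype=complex)
    for _ in range(n_qubits):
        U = np.kron(U, H)          # build the tensor-product unitary
    return U @ state

def epilogue(state):
    """Classical epilogue: read out measurement probabilities and
    post-process (here, pick the most likely basis state)."""
    probs = np.abs(state) ** 2
    return int(np.argmax(probs)), probs

n_qubits = 2
x = [3.0, 1.0, 0.0, 0.0]                        # 2^2 = 4 amplitudes
state = prologue(x)                             # classical prologue
state = quantum_acceleration(state, n_qubits)   # quantum acceleration
label, probs = epilogue(state)                  # classical epilogue
```

Note that data crosses the quantum-classical interface only twice (once into `prologue`'s output, once into `epilogue`), which is exactly the communication pattern the scheme is designed to preserve.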
Unless otherwise stated, "hybrid model" refers to the prologue-acceleration-epilogue scheme in the rest of the paper. In the classical computing scheme, the universal approximability, i.e., the ability to approximate a wide class of functions with arbitrarily small error, and the complexity bounds of different types of neural networks have been well studied (Cybenko, 1989; Hornik et al., 1989; Mhaskar & Micchelli, 1992; Sonoda & Murata, 2017; Yarotsky, 2017; Ding et al., 2019; Wang et al., 2019; Fan et al., 2020). However, due to differences in computing paradigms, not all types of neural networks can be directly implemented on quantum computers. As such, it is still unclear whether such networks can work with hybrid quantum-classical computing and still attain universal approximability. In addition, as quantum computing limits the types of computations that can be handled, it is also unknown whether the

