GENQU: A HYBRID SYSTEM FOR LEARNING CLASSICAL DATA IN QUANTUM STATES

Abstract

Deep neural network-powered artificial intelligence has rapidly changed our daily life with various applications. However, as one of the essential steps of deep learning, training a heavily-weighted network requires a tremendous amount of computing resources. Especially in the post-Moore's-Law era, the limits of semiconductor fabrication technology have restricted the development of learning algorithms that must cope with increasingly intensive training data. Meanwhile, quantum computing has exhibited significant potential for speeding up traditionally compute-intensive workloads. For example, Google demonstrated quantum supremacy by completing a sampling task in 200 seconds that is otherwise impracticable on the world's largest supercomputers. To this end, quantum-based learning has become an area of interest, with the promise of a quantum speedup. In this paper, we propose GenQu, a hybrid and general-purpose quantum framework for learning classical data through quantum states. We evaluate GenQu with real datasets and conduct experiments on both simulations and the real quantum computer IBM-Q. Our evaluation demonstrates that, compared with classical solutions, the proposed models running on the GenQu framework achieve similar accuracy with a much smaller number of qubits, while reducing the parameter size by up to 95.86% and converging up to 66.67% faster.

1. INTRODUCTION

In the past decade, machine learning and artificial intelligence powered applications have dramatically changed our daily life. Many novel algorithms and models have achieved widespread practical success in a variety of domains such as autonomous cars, healthcare, and manufacturing. Despite the wide adoption of ML models, training machine learning models such as DNNs requires a tremendous amount of computing resources to tune millions of parameters. Especially in the post-Moore's-Law era, the limits of semiconductor fabrication technology cannot keep pace with the rapidly increasing data volume needed for training, which restricts the development of this field (Thompson et al., 2020). Encouraged by the recent demonstration of quantum supremacy (Arute et al., 2019), researchers are seeking a transition from classical learning to quantum learning, with the promise of a quantum speedup over classical learning. The current state of quantum-based learning inspires alternative architectures to classical learning's sub-fields, such as Deep Learning (DL) and Support Vector Machines (SVM) (Garg & Ramakrishnan, 2020; Beer et al., 2020; Potok et al., 2018; Levine et al., 2019), where the quantum algorithm provides improvements over its classical counterpart. For example, quantum learning algorithms have been adopted for expectation maximization (QEM) (Kerenidis et al., 2019), speeding up kernel methods to sub-linear time (Li et al., 2019), Quantum-SVM (Ding et al., 2019), and NLP (Panahi et al., 2019). Employing quantum systems to train deep learning models is relatively well developed, with a multitude of approaches to creating and mimicking aspects of classical deep learning systems (Verdon et al., 2019; Beer et al., 2020; Chen et al., 2020; Kerenidis et al., 2019), yet the following challenges remain: (i) such systems are held back by the low qubit count of current quantum computers.
(ii) Learning in a quantum computer becomes even more difficult due to the lack of efficient classical-to-quantum data encoding methodologies (Zoufal et al., 2019; Cortese & Braje, 2019). (iii) Most existing studies are based on purely theoretical analysis or simulations, lacking practical usability on near-term (NISQ) quantum devices (Preskill, 2018). More importantly, the above challenges would persist even if the number of qubits supported by quantum machines increased significantly: as the number of qubits in a quantum system increases, the computational complexity grows exponentially (Kaye et al., 2007), which quickly leads to tasks that become completely infeasible for simulation and for near-term quantum computers. Therefore, discovering the representative power of qubits in quantum-based learning systems is extremely important: not only does it allow near-term devices to tackle more complex learning problems, but it also reduces the complexity of the quantum state exponentially. However, research tackling the low qubit counts of current quantum machines is rather sparse: to the best of our knowledge, there is only one paper on the power of a single qubit (Ghobadi et al., 2019). Within this domain, the learning potential of qubits is under-investigated. In this paper, we propose GenQu, a general-purpose quantum-classical hybrid framework for learning classical data in quantum states. We demonstrate the power of qubits in machine learning by encoding data onto a single qubit and accomplishing tasks that are impossible for comparable data streams on classical machines, which addresses challenges (i) and (ii).
Enabled by GenQu, we develop a deep neural network architecture for classification problems with only 2 qubits and a quantum generative architecture for learning distributions with only 1 qubit. Additionally, we evaluate GenQu with intensive experiments on both IBM-Q real quantum computers and simulators (addressing challenge (iii)). Our major contributions include:
• We propose GenQu, a hybrid and general-purpose quantum framework that works with near-term quantum computers and has the potential to fit various learning models with a very low qubit count.
• Based on GenQu, we propose three different quantum-based learning models to demonstrate the potential of learning data in quantum states.
• Through experiments on both simulators and IBM-Q real quantum computers, we show that models in GenQu are able to reduce parameters by up to 95.86% while still achieving similar classification accuracy on the MNIST dataset reduced with Principal Component Analysis (PCA) (Hoffmann, 2007), and converge up to 66.67% faster than traditional neural networks.

2. PRELIMINARIES

2.1 THE QUANTUM BIT (QUBIT)

Quantum computers operate on a fundamentally different architecture compared to classical computers. Classical computers operate on binary digits (bits), each represented by a 1 or a 0. Quantum computers, however, operate on quantum bits (qubits). A qubit can represent a 1 or a 0, or can be placed into a probabilistic mixture of both 1 and 0 simultaneously, called superposition. Superposition is one of the core principles that allows quantum computers to perform certain tasks significantly faster than their classical counterparts. When discussing a quantum framework, we make use of the $\langle\mathrm{bra}|$ and $|\mathrm{ket}\rangle$ notation, where $\langle\mathrm{bra}|$ denotes a horizontal quantum state vector ($1 \times n$) and $|\mathrm{ket}\rangle$ denotes a vertical quantum state vector ($n \times 1$). A qubit, being some combination of both $|1\rangle$ and $|0\rangle$ simultaneously, is described as a linear combination of $|0\rangle$ and $|1\rangle$, where the complex amplitudes $\alpha$ and $\beta$ satisfy $|\alpha|^2 + |\beta|^2 = 1$. This combination is described in Equation 1.

$$|\Psi\rangle = \alpha|0\rangle + \beta|1\rangle, \quad |\Psi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \quad |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (1)$$
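The state-vector picture in Equation 1 can be sketched directly with NumPy; this is an illustrative toy (not part of GenQu itself), representing a single qubit as a 2-dimensional complex vector and computing measurement probabilities from the amplitudes:

```python
import numpy as np

# Computational basis states from Equation 1
ket0 = np.array([1, 0], dtype=complex)  # |0> = (1, 0)^T
ket1 = np.array([0, 1], dtype=complex)  # |1> = (0, 1)^T

# An equal superposition: alpha = beta = 1/sqrt(2)
alpha = beta = 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# Normalization constraint: |alpha|^2 + |beta|^2 = 1
norm = np.vdot(psi, psi).real  # vdot conjugates the first argument
assert np.isclose(norm, 1.0)

# Born rule: probability of measuring 0 or 1 is the squared amplitude
p0 = abs(np.vdot(ket0, psi)) ** 2
p1 = abs(np.vdot(ket1, psi)) ** 2
print(p0, p1)  # ~0.5 each for an equal superposition
```

On a real device the amplitudes are not directly observable; repeated measurements only estimate these probabilities, which is why the simulation above is a convenient sanity check before running on hardware.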



Figure 1: Bloch Sphere

