Transfer Learning for Quantum Classifiers: An Information-Theoretic Generalization Analysis

17 Jan 2022 · Sharu Theresa Jose, Osvaldo Simeone

A key component of a quantum machine learning model operating on classical inputs is the design of an embedding circuit that maps inputs to a quantum state. This paper studies a transfer learning setting in which the classical-to-quantum embedding is carried out by an arbitrary parametric quantum circuit that is pre-trained on data from a source task. At run time, a binary quantum classifier acting on the embedding is optimized using data from the target task of interest. The average excess risk, i.e., the optimality gap, of the resulting classifier depends on how (dis)similar the source and target tasks are. We introduce a new measure of (dis)similarity between the binary quantum classification tasks based on trace distances. An upper bound on the optimality gap is derived in terms of the proposed task (dis)similarity measure, two Rényi mutual information terms between the classical input and the quantum embedding under the source and target tasks, and a measure of complexity of the combined space of quantum embeddings and classifiers under the source task. The theoretical results are validated on a simple binary classification example.
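As a rough illustration of the basic ingredient behind the proposed task (dis)similarity measure, the sketch below computes the trace distance T(rho, sigma) = (1/2)||rho - sigma||_1 between two density matrices. This is a generic NumPy implementation of the standard definition, not the paper's code; the two example states stand in for hypothetical quantum embeddings of the same classical input under the source and target tasks.

    import numpy as np

    def trace_distance(rho, sigma):
        """Trace distance T(rho, sigma) = 0.5 * ||rho - sigma||_1.
        For a Hermitian difference, the trace norm equals the sum of
        the absolute eigenvalues."""
        delta = rho - sigma
        eigvals = np.linalg.eigvalsh(delta)  # delta is Hermitian
        return 0.5 * np.sum(np.abs(eigvals))

    # Placeholder single-qubit density matrices, standing in for the
    # embedding states produced under the source and target tasks.
    rho_source = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
    rho_target = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)

    print(trace_distance(rho_source, rho_target))

The value lies in [0, 1]: it is 0 when the two embedding states coincide and 1 when they are perfectly distinguishable, which is why trace distances are a natural building block for quantifying how far apart the source and target tasks are.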



