Clifford Circuits can be Properly PAC Learned if and only if $\textsf{RP}=\textsf{NP}$

Bibliographic Details
Main Author: Daniel Liang
Format: Article
Language: English
Published: Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften 2023-06-01
Series: Quantum
Online Access:https://quantum-journal.org/papers/q-2023-06-07-1036/pdf/
Description
Summary: Given a dataset of input states, measurements, and probabilities, is it possible to efficiently predict the measurement probabilities associated with a quantum circuit? Recent work of Caro and Datta \cite{2020Caro} studied the problem of PAC learning quantum circuits in an information-theoretic sense, leaving open questions of computational efficiency. In particular, one candidate class of circuits for which an efficient learner might have been possible was that of Clifford circuits, since the corresponding set of states generated by such circuits, called stabilizer states, is known to be efficiently PAC learnable \cite{rocchetto2018stabiliser}. Here we provide a negative result, showing that proper learning of CNOT circuits with $1/\mathrm{poly}(n)$ error is hard for classical learners unless $\textsf{RP} = \textsf{NP}$, ruling out the possibility of strong learners under standard complexity-theoretic assumptions. Since CNOT circuits are the classical analogue and a subset of Clifford circuits, this naturally yields a hardness result for Clifford circuits as well. Additionally, we show that if $\textsf{RP} = \textsf{NP}$, then efficient proper learning algorithms for CNOT and Clifford circuits would exist. By similar arguments, we also find that an efficient proper quantum learner for such circuits exists if and only if $\textsf{NP} \subseteq \textsf{RQP}$. We leave the problem of hardness for improper learning or $\mathcal{O}(1)$ error open for future work.
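The hardness result centers on CNOT circuits, which act on computational basis states as invertible linear maps over $\mathrm{GF}(2)$: each CNOT gate XORs the control bit into the target bit, so a whole circuit corresponds to an invertible 0/1 matrix and composition corresponds to matrix multiplication mod 2. A minimal sketch of this standard correspondence (the helper names here are our own, not from the paper):

```python
import numpy as np

def cnot_matrix(n, control, target):
    """GF(2) matrix of an n-qubit CNOT: target bit becomes target XOR control."""
    M = np.eye(n, dtype=np.uint8)
    M[target, control] = 1
    return M

def circuit_matrix(n, gates):
    """Compose a list of (control, target) CNOT gates into one GF(2) matrix."""
    M = np.eye(n, dtype=np.uint8)
    for c, t in gates:
        M = cnot_matrix(n, c, t) @ M % 2  # composition = matrix product mod 2
    return M

def apply(M, bits):
    """Apply the circuit's linear map to a basis-state bit string."""
    return tuple(M @ np.array(bits, dtype=np.uint8) % 2)

# A 3-qubit circuit: CNOT(0->1) then CNOT(1->2) maps |100> to |111>.
M = circuit_matrix(3, [(0, 1), (1, 2)])
print(apply(M, (1, 0, 0)))
```

The learning problem in the abstract amounts to recovering (or matching the input-output behavior of) such a matrix from examples, which is where the connection to $\textsf{NP}$-hardness for proper learners enters.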
ISSN:2521-327X