BosonSampling with lost photons
Main Authors:
Other Authors:
Format: Article
Language: English
Published: American Physical Society, 2016
Online Access: http://hdl.handle.net/1721.1/100976 https://orcid.org/0000-0003-1333-4045
Summary: BosonSampling is an intermediate model of quantum computation in which linear-optical networks are used to solve sampling problems expected to be hard for classical computers. Since these devices are not expected to be universal for quantum computation, it remains an open question whether any error-correction techniques can be applied to them, and thus it is important to investigate how robust the model is under natural experimental imperfections, such as losses and imperfect control of parameters. Here, we investigate the complexity of BosonSampling under photon losses; more specifically, the case where an unknown subset of the photons is randomly lost at the sources. We show that, if k out of n photons are lost, then we cannot sample classically from a distribution that is 1/n^Θ(k)-close (in total variation distance) to the ideal distribution, unless a BPP^NP machine can estimate the permanents of Gaussian matrices in n^O(k) time. In particular, if k is constant, this implies that simulating lossy BosonSampling is hard for a classical computer, under exactly the same complexity assumption used for the original lossless case.
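To make the quantities in the abstract concrete, here is a minimal Python sketch (not from the paper) of the lost-at-the-sources model it studies: the permanent is computed with Ryser's formula, and each collision-free outcome probability under random loss is a uniform average, over which n−k photons survive, of |Per|² of the corresponding submatrix of the interferometer unitary. The function names and the small example interferometer are illustrative assumptions, not the authors' code.

```python
import itertools
import numpy as np

def permanent(M):
    """Matrix permanent via Ryser's formula, O(2^n * n^2) time."""
    n = M.shape[0]
    total = 0j
    for mask in range(1, 2 ** n):                  # nonempty column subsets S
        cols = [j for j in range(n) if (mask >> j) & 1]
        row_sums = M[:, cols].sum(axis=1)          # sum_{j in S} M[i, j] per row i
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

def lossy_outcome_prob(U, inputs, outputs, k):
    """Probability of a collision-free outcome (photons in `outputs` modes)
    when k of the n photons prepared in `inputs` modes are lost uniformly
    at random at the sources: a uniform mixture over which (n-k)-photon
    subset survives, each term contributing |Per|^2 of a submatrix of U.
    Illustrative sketch; assumes single photons and collision-free outputs."""
    m = len(inputs) - k
    assert len(outputs) == m
    survivor_sets = list(itertools.combinations(inputs, m))
    total = 0.0
    for surv in survivor_sets:
        A = U[np.ix_(outputs, surv)]               # (n-k) x (n-k) submatrix
        total += abs(permanent(A)) ** 2
    return total / len(survivor_sets)

# Example: 4-mode Haar-random interferometer, 3 photons in, 1 lost.
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(G)                             # Haar-random unitary
print(lossy_outcome_prob(U, inputs=(0, 1, 2), outputs=(1, 3), k=1))
```

Even this toy sketch makes the scaling visible: each lossy outcome probability averages C(n, k) permanents of (n−k)×(n−k) submatrices, which for constant k is only a polynomial (n^O(k)) blow-up over the lossless case, consistent with the overhead in the hardness statement above.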