Data compression for quantum machine learning
The advent of noisy intermediate-scale quantum computers has introduced the exciting possibility of achieving quantum speedups in machine learning tasks. These devices, however, are composed of a small number of qubits and can faithfully run only short circuits. This puts many proposed approaches for quantum machine learning beyond currently available devices. We address the problem of compressing classical data into efficient representations on quantum devices. Our proposed methods allow both the required number of qubits and depth of the quantum circuit to be tuned. We achieve this by using a correspondence between matrix-product states and quantum circuits and further propose a hardware-efficient quantum circuit approach, which we benchmark on the Fashion-MNIST dataset. Finally, we demonstrate that a quantum circuit-based classifier can achieve competitive accuracy with current tensor learning methods using only 11 qubits.
Main Authors: | Rohit Dilip, Yu-Jie Liu, Adam Smith, Frank Pollmann |
---|---|
Format: | Article |
Language: | English |
Published: | American Physical Society, 2022-10-01 |
Series: | Physical Review Research |
Online Access: | http://doi.org/10.1103/PhysRevResearch.4.043007 |
author | Rohit Dilip, Yu-Jie Liu, Adam Smith, Frank Pollmann |
collection | DOAJ |
description | The advent of noisy intermediate-scale quantum computers has introduced the exciting possibility of achieving quantum speedups in machine learning tasks. These devices, however, are composed of a small number of qubits and can faithfully run only short circuits. This puts many proposed approaches for quantum machine learning beyond currently available devices. We address the problem of compressing classical data into efficient representations on quantum devices. Our proposed methods allow both the required number of qubits and depth of the quantum circuit to be tuned. We achieve this by using a correspondence between matrix-product states and quantum circuits and further propose a hardware-efficient quantum circuit approach, which we benchmark on the Fashion-MNIST dataset. Finally, we demonstrate that a quantum circuit-based classifier can achieve competitive accuracy with current tensor learning methods using only 11 qubits. |
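The matrix-product-state compression the abstract describes can be illustrated with a minimal classical sketch: a normalized data vector (e.g. a flattened image row) is split into an MPS by successive singular value decompositions, with each bond truncated to a maximum dimension `chi_max`. This is an illustrative NumPy example, not the authors' code; the function names and the choice `chi_max = 4` are hypothetical.

```python
import numpy as np

def to_mps(vec, chi_max):
    """Decompose a length-2**n vector into a matrix-product state
    by successive SVDs, truncating each bond to chi_max singular values."""
    n = int(np.log2(len(vec)))
    tensors = []
    rest = vec.reshape(1, -1)  # start as a 1 x 2**n matrix
    for _ in range(n - 1):
        chi_left = rest.shape[0]
        rest = rest.reshape(chi_left * 2, -1)  # peel off one physical index
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        chi = min(chi_max, len(s))             # truncate the bond
        tensors.append(u[:, :chi].reshape(chi_left, 2, chi))
        rest = np.diag(s[:chi]) @ vh[:chi]
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def from_mps(tensors):
    """Contract the MPS back into a dense vector."""
    vec = tensors[0]
    for t in tensors[1:]:
        vec = np.tensordot(vec, t, axes=([-1], [0]))
    return vec.reshape(-1)

rng = np.random.default_rng(0)
x = rng.normal(size=256)       # a 2**8-entry data vector
x /= np.linalg.norm(x)         # normalize, as for a quantum state
mps = to_mps(x, chi_max=4)     # compress: bond dimension capped at 4
x_approx = from_mps(mps)
fidelity = abs(x @ x_approx)   # overlap between original and compressed data
```

A truncated MPS of this form can then be mapped to a shallow quantum circuit, which is the correspondence the paper exploits; tuning `chi_max` trades circuit depth against reconstruction fidelity.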
format | Article |
id | doaj.art-9c53bb544f8c491ca6afa38ec2a9e8cc |
institution | Directory Open Access Journal |
issn | 2643-1564 |
language | English |
publishDate | 2022-10-01 |
publisher | American Physical Society |
record_format | Article |
series | Physical Review Research |
title | Data compression for quantum machine learning |
url | http://doi.org/10.1103/PhysRevResearch.4.043007 |