Reinforcement Learning for Many-Body Ground-State Preparation Inspired by Counterdiabatic Driving

The quantum alternating operator ansatz (QAOA) is a prominent example of variational quantum algorithms. We propose a generalized QAOA called CD-QAOA, which is inspired by the counterdiabatic driving procedure, designed for quantum many-body systems and optimized using a reinforcement learning (RL) approach.


Bibliographic Details
Main Authors: Jiahao Yao, Lin Lin, Marin Bukov
Format: Article
Language: English
Published: American Physical Society, 2021-09-01
Series: Physical Review X
Online Access: http://doi.org/10.1103/PhysRevX.11.031070
Collection: DOAJ
Description: The quantum alternating operator ansatz (QAOA) is a prominent example of variational quantum algorithms. We propose a generalized QAOA called CD-QAOA, which is inspired by the counterdiabatic driving procedure, designed for quantum many-body systems and optimized using a reinforcement learning (RL) approach. The resulting hybrid control algorithm proves versatile in preparing the ground state of quantum-chaotic many-body spin chains by minimizing the energy. We show that using terms occurring in the adiabatic gauge potential as generators of additional control unitaries, it is possible to achieve fast high-fidelity many-body control away from the adiabatic regime. While each unitary retains the conventional QAOA-intrinsic continuous control degree of freedom such as the time duration, we consider the order of the multiple available unitaries appearing in the control sequence as an additional discrete optimization problem. Endowing the policy gradient algorithm with an autoregressive deep learning architecture to capture causality, we train the RL agent to construct optimal sequences of unitaries. The algorithm has no access to the quantum state, and we find that the protocol learned on small systems may generalize to larger systems. By scanning a range of protocol durations, we present numerical evidence for a finite quantum speed limit in the nonintegrable mixed-field spin-1/2 Ising and Lipkin-Meshkov-Glick models, and for the suitability to prepare ground states of the spin-1 Heisenberg chain in the long-range and topologically ordered parameter regimes. This work paves the way to incorporate recent success from deep learning for the purpose of quantum many-body control.
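The variational structure the description outlines — a product of unitaries exp(-i a_q G_q) whose durations a_q are continuous degrees of freedom and whose ordering is a discrete optimization problem — can be sketched on a toy two-spin mixed-field Ising model. Everything below (couplings, the choice of generator pool, the brute-force search over orderings in place of the paper's RL agent) is an illustrative assumption, not the paper's setup:

```python
import itertools
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Single-spin Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy mixed-field Ising model on two spins; couplings are illustrative only.
hx, hz = 0.9045, 0.809
H_zz = np.kron(Z, Z)
H_x = np.kron(X, I2) + np.kron(I2, X)
H_z = np.kron(Z, I2) + np.kron(I2, Z)
H = H_zz + hx * H_x + hz * H_z

# Pool of control generators: the usual QAOA pair plus a collective-Y term
# standing in for an adiabatic-gauge-potential contribution.
generators = {
    "zz+z": H_zz + hz * H_z,
    "x": H_x,
    "y": np.kron(Y, I2) + np.kron(I2, Y),
}

psi0 = np.linalg.eigh(H_x)[1][:, 0]   # start in the ground state of the X term
E_gs = np.linalg.eigvalsh(H)[0]       # exact ground-state energy, for reference

def evolve(durations, sequence, psi):
    """Apply exp(-i * a_q * G_{s_q}) in order; the a_q are the continuous DOFs."""
    for a, key in zip(durations, sequence):
        psi = expm(-1j * a * generators[key]) @ psi
    return psi

def energy(durations, sequence):
    psi = evolve(durations, sequence, psi0)
    return float(np.real(np.vdot(psi, H @ psi)))

# Discrete part of the problem (delegated to an RL agent in the paper):
# here simply brute-force every ordering of the three generators.
best_E, best_seq = np.inf, None
for seq in itertools.permutations(generators):
    res = minimize(energy, x0=np.zeros(3), args=(seq,), method="Nelder-Mead")
    if res.fun < best_E:
        best_E, best_seq = res.fun, seq

print(f"best variational energy {best_E:.4f} vs exact {E_gs:.4f}, order {best_seq}")
```

Brute force over orderings is only viable at this toy scale; the number of orderings grows factorially with the pool size and sequence length, which is why the paper treats it as a sequential decision problem for an RL agent.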
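The discrete ordering problem is handled in the paper by a policy-gradient agent with an autoregressive architecture, so that each chosen unitary conditions the choice of the next. A minimal tabular sketch of that idea, on a hypothetical stand-in task (the reward here is a sequence-matching score, not the paper's energy objective, and all hyperparameters are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
A, Q = 3, 3              # generator-pool size and sequence length (toy values)
target = [2, 0, 1]       # hypothetical "best" ordering the agent should discover

# Tabular autoregressive policy: logits for the next token given (step, previous
# token); prev == A encodes "start of sequence".
logits = np.zeros((Q, A + 1, A))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sample_episode():
    seq, steps, prev = [], [], A
    for q in range(Q):
        p = softmax(logits[q, prev])
        a = int(rng.choice(A, p=p))
        seq.append(a)
        steps.append((q, prev, a))
        prev = a
    return seq, steps

def reward(seq):
    # Toy stand-in for the negative energy: fraction of positions matching target.
    return sum(s == t for s, t in zip(seq, target)) / Q

baseline, lr = 0.0, 0.5
for _ in range(2000):
    seq, steps = sample_episode()
    R = reward(seq)
    baseline += 0.05 * (R - baseline)   # running-mean baseline reduces variance
    for q, prev, a in steps:
        p = softmax(logits[q, prev])
        grad = -p
        grad[a] += 1.0                  # d log pi(a) / d logits = onehot(a) - p
        logits[q, prev] += lr * (R - baseline) * grad

# Greedy decode of the trained policy.
greedy, prev = [], A
for q in range(Q):
    a = int(np.argmax(logits[q, prev]))
    greedy.append(a)
    prev = a
print("greedy sequence:", greedy)
```

The paper replaces the lookup table with a deep autoregressive network and the matching score with the measured energy of the prepared state, but the REINFORCE update with a baseline is the same mechanism.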
ISSN: 2160-3308
Record ID: doaj.art-fba1d9d8fbc84730afc49b3402694c83