Reactive Power Control of a Converter in a Hardware-Based Environment Using Deep Reinforcement Learning

Due to the increasing penetration of the power grid by renewable, distributed energy resources, new strategies for voltage stabilization in low voltage distribution grids must be developed. One approach to autonomous voltage control is to apply reinforcement learning (RL) for reactive power injection by converters. In this work, to provide a secure test environment that includes real hardware influences for such intelligent algorithms, a power hardware-in-the-loop (PHIL) approach is used to combine a virtually simulated grid with real hardware devices and emulate grid states as realistically as possible. The PHIL environment is validated by identifying its system limits and analyzing deviations from a software model of the test grid. Finally, an adaptive volt–var control algorithm using RL is implemented to control the reactive power injection of a real converter within the test environment. Despite facing more difficult conditions in the hardware than in the software environment, the algorithm is successfully integrated and controls the voltage at a grid connection point in a low voltage grid. Thus, the study underlines the potential of RL for voltage stabilization in future power grids.
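To illustrate the volt–var control idea summarized in the abstract, the minimal sketch below trains a tabular Q-learning agent to choose a reactive power set-point for a converter from the voltage measured at its grid connection point. It is not the authors' deep-RL algorithm or PHIL setup: the one-line feeder model, the line parameters R and X, the discretization, and the reward are illustrative assumptions chosen only to keep the example self-contained and runnable.

```python
"""
Minimal, illustrative sketch of RL-based volt-var control.
NOT the paper's implementation: the grid model, reward, and tabular
Q-learning agent are simplifying assumptions. The paper uses deep RL
against a power hardware-in-the-loop (PHIL) test bench with a real
converter; here the "grid" is a one-line approximation.
"""
import numpy as np

# --- simplified grid model --------------------------------------------------
V_NOM = 400.0           # nominal voltage of the LV feeder [V] (assumed)
R, X = 0.4, 0.3         # assumed line resistance/reactance to the PCC [Ohm]
P_INJ = 8_000.0         # active power injected by the converter [W] (assumed)

def pcc_voltage(v_grid, q_inj):
    """Approximate voltage at the point of common coupling (PCC):
    background grid voltage plus the rise caused by injected P and Q."""
    return v_grid + (R * P_INJ + X * q_inj) / V_NOM

# --- discretised state/action spaces for tabular Q-learning -----------------
V_BINS = np.linspace(0.9 * V_NOM, 1.1 * V_NOM, 21)   # voltage bins (states)
Q_ACTIONS = np.linspace(-5_000.0, 5_000.0, 11)       # reactive power set-points [var]

def state_of(voltage):
    """Map a measured voltage to a discrete state index."""
    return int(np.clip(np.digitize(voltage, V_BINS), 0, len(V_BINS) - 1))

q_table = np.zeros((len(V_BINS), len(Q_ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(2000):
    # random background grid voltage, e.g. caused by other feeders and loads
    v_grid = rng.uniform(0.95 * V_NOM, 1.05 * V_NOM)
    s = state_of(pcc_voltage(v_grid, 0.0))           # start with no Q injection
    for _ in range(20):
        # epsilon-greedy choice of a reactive power set-point
        a = rng.integers(len(Q_ACTIONS)) if rng.random() < eps else int(np.argmax(q_table[s]))
        v_pcc = pcc_voltage(v_grid, Q_ACTIONS[a])
        reward = -abs(v_pcc - V_NOM) / V_NOM         # penalise deviation from nominal
        s_next = state_of(v_pcc)
        # standard Q-learning update (the toy environment is effectively a
        # contextual bandit, but the same update rule applies)
        q_table[s, a] += alpha * (reward + gamma * q_table[s_next].max() - q_table[s, a])
        s = s_next

# greedy policy after training: reactive power chosen for an over-voltage state
v_test = 1.04 * V_NOM
best_q = Q_ACTIONS[int(np.argmax(q_table[state_of(v_test)]))]
print(f"measured {v_test:.1f} V -> inject {best_q:+.0f} var")
```

After training, the greedy policy absorbs reactive power (negative set-points) when the measured voltage is above nominal and injects it when the voltage sags, which is the behaviour a volt–var controller is expected to learn; in the paper this mapping is learned by a deep-RL agent acting on a real converter inside the PHIL environment.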

Bibliographic Details
Main Authors: Ode Bokker, Henning Schlachter, Vanessa Beutel, Stefan Geißendörfer, Karsten von Maydell (all: German Aerospace Center (DLR), Institute of Networked Energy Systems, Carl-von-Ossietzky-Str. 15, 26129 Oldenburg, Germany)
Format: Article
Language: English
Published: MDPI AG, 2022-12-01
Series: Energies, Vol. 16, Issue 1, Article 78
ISSN: 1996-1073
DOI: 10.3390/en16010078
Subjects: power grid; reactive power; voltage control; power hardware-in-the-loop
Online Access: https://www.mdpi.com/1996-1073/16/1/78