Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand Using Deep Reinforcement Learning
Smart energy networks provide an effective means to accommodate high penetrations of variable renewable energy sources like solar and wind, which are key for the deep decarbonisation of energy production. However, given the variability of the renewables as well as the energy demand, it is imperative...
Main Authors: | Cephas Samende, Zhong Fan, Jun Cao, Renzo Fabián, Gregory N. Baltas, Pedro Rodriguez |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-09-01 |
Series: | Energies |
Subjects: | deep reinforcement learning; multi-agent deep deterministic policy gradient; battery and hydrogen energy storage systems; decarbonisation; renewable energy; carbon emissions |
Online Access: | https://www.mdpi.com/1996-1073/16/19/6770 |
_version_ | 1797575941793251328 |
author | Cephas Samende; Zhong Fan; Jun Cao; Renzo Fabián; Gregory N. Baltas; Pedro Rodriguez |
collection | DOAJ |
description | Smart energy networks provide an effective means to accommodate high penetrations of variable renewable energy sources like solar and wind, which are key for the deep decarbonisation of energy production. However, given the variability of the renewables as well as the energy demand, it is imperative to develop effective control and energy storage schemes to manage the variable energy generation and achieve desired system economics and environmental goals. In this paper, we introduce a hybrid energy storage system composed of battery and hydrogen energy storage to handle the uncertainties related to electricity prices, renewable energy production, and consumption. We aim to improve renewable energy utilisation and minimise energy costs and carbon emissions while ensuring energy reliability and stability within the network. To achieve this, we propose a multi-agent deep deterministic policy gradient approach, which is a deep reinforcement learning-based control strategy to optimise the scheduling of the hybrid energy storage system and energy demand in real time. The proposed approach is model-free and does not require explicit knowledge or rigorous mathematical models of the smart energy network environment. Simulation results based on real-world data show that (i) integration and optimised operation of the hybrid energy storage system and energy demand reduce carbon emissions by 78.69%, improve cost savings by 23.5%, and improve renewable energy utilisation by over 13.2% compared to other baseline models; and (ii) the proposed algorithm outperforms state-of-the-art self-learning algorithms such as the deep Q-network. |
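The description names the control method (a multi-agent deep deterministic policy gradient, MADDPG) but, being a metadata record, gives no implementation details. For orientation only, the sketch below shows the general shape of such a set-up for two storage agents (battery and hydrogen): decentralised actors that act on local observations and centralised critics that see all agents' observations and actions during training. The state/action dimensions, network sizes, and all variable names are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only, not the authors' implementation.
# Minimal MADDPG-style skeleton for a battery agent and a hydrogen-storage agent.
import torch
import torch.nn as nn

STATE_DIM = 6    # assumed local observation: price, PV, demand, SoC/H2 level, time features
ACTION_DIM = 1   # assumed action: (dis)charge power set-point, normalised to [-1, 1]
N_AGENTS = 2     # battery agent and hydrogen agent

class Actor(nn.Module):
    """Decentralised actor: maps an agent's local observation to a bounded action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),  # keeps action in [-1, 1]
        )
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralised critic (MADDPG): scores the joint observation-action vector."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (STATE_DIM + ACTION_DIM)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, all_obs, all_actions):
        return self.net(torch.cat([all_obs, all_actions], dim=-1))

# One actor per storage device, one centralised critic per agent.
actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]

# Execution: each agent acts on its own observation only.
obs = torch.randn(N_AGENTS, STATE_DIM)                     # placeholder observations
actions = torch.stack([actors[i](obs[i]) for i in range(N_AGENTS)])

# Training: each critic receives every agent's observation and action.
q_value = critics[0](obs.flatten(), actions.flatten())
print(q_value.shape)  # torch.Size([1])
```

In a full training loop, each critic would be regressed against a temporal-difference target built from target networks and a replay buffer, and each actor would be updated by ascending its own critic's estimate; those pieces are omitted here for brevity.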
first_indexed | 2024-03-10T21:45:28Z |
format | Article |
id | doaj.art-262689d2e64944e6a664218b6debb823 |
institution | Directory Open Access Journal |
issn | 1996-1073 |
language | English |
last_indexed | 2024-03-10T21:45:28Z |
publishDate | 2023-09-01 |
publisher | MDPI AG |
record_format | Article |
series | Energies |
spelling | doaj.art-262689d2e64944e6a664218b6debb823; Energies, vol. 16, no. 19, article 6770, published 2023-09-01 by MDPI AG (ISSN 1996-1073); DOI: 10.3390/en16196770. Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand Using Deep Reinforcement Learning. Authors and affiliations: Cephas Samende (Power Networks Demonstration Centre, University of Strathclyde, Glasgow G1 1XQ, UK); Zhong Fan (Engineering Department, University of Exeter, Exeter EX4 4PY, UK); Jun Cao, Renzo Fabián, Gregory N. Baltas, Pedro Rodriguez (Environmental Research and Innovation Department, Sustainable Energy Systems Group, Luxembourg Institute of Science and Technology, 4362 Esch-sur-Alzette, Luxembourg). Online access: https://www.mdpi.com/1996-1073/16/19/6770. Keywords: deep reinforcement learning; multi-agent deep deterministic policy gradient; battery and hydrogen energy storage systems; decarbonisation; renewable energy; carbon emissions |
title | Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand Using Deep Reinforcement Learning |
topic | deep reinforcement learning; multi-agent deep deterministic policy gradient; battery and hydrogen energy storage systems; decarbonisation; renewable energy; carbon emissions |
url | https://www.mdpi.com/1996-1073/16/19/6770 |