Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization

Environmental and climate change concerns are pushing the rapid development of distributed energy resources (DERs). The Energy Internet (EI), with the power-sharing functionality introduced by energy routers (ERs), offers an appealing alternative for DER systems. However, previous centralized control schemes for EI systems that follow a top-down architecture are unreliable for future power systems. First, this study proposes a distributed control scheme for a bottom-up EI architecture. Second, model-based distributed control methods are not sufficiently flexible to deal with the complex uncertainties associated with multi-energy demands and DERs, so a novel model-free, data-driven multi-agent deep reinforcement learning (MADRL) method is proposed to learn the optimal operation strategy for the bottom-layer microgrid (MG) cluster. Unlike existing single-agent deep reinforcement learning methods that rely on homogeneous MG settings, the proposed MADRL adopts decentralized execution, in which agents operate independently to meet local customized energy demands while preserving privacy. Third, an attention mechanism is added to the centralized critic, which effectively accelerates learning. Considering the bottom-layer power-exchange requests and the predicted electricity price, a model predictive control scheme in the upper layer determines the optimal power dispatch between the ERs and the main grid. Simulation comparisons with alternative methods demonstrate the effectiveness of the proposed control scheme.
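
The attention-augmented centralized critic with decentralized execution mentioned above can be pictured with a short sketch. The following PyTorch code is not the authors' implementation; the class name, layer sizes, and single-head scaled dot-product attention are illustrative assumptions. Each agent's action-value estimate attends over the other agents' encoded observation-action pairs, which is what allows one shared critic to handle heterogeneous microgrid agents while each actor still executes on local information only.

# Minimal sketch (assumed details): attention-based centralized critic for
# centralized-training / decentralized-execution multi-agent actor-critic.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCritic(nn.Module):
    """Q_i(o, a) for each agent i, attending over the other agents' encodings."""

    def __init__(self, obs_dim: int, act_dim: int, n_agents: int, hidden: int = 64):
        super().__init__()
        self.n_agents = n_agents
        self.encoder = nn.Linear(obs_dim + act_dim, hidden)   # per-agent (o_i, a_i) encoder
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.q_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))

    def forward(self, obs, acts):
        # obs: (batch, n_agents, obs_dim), acts: (batch, n_agents, act_dim)
        e = F.relu(self.encoder(torch.cat([obs, acts], dim=-1)))      # (B, N, H)
        q_vals = []
        for i in range(self.n_agents):
            others = torch.cat([e[:, :i], e[:, i + 1:]], dim=1)       # (B, N-1, H)
            q = self.query(e[:, i:i + 1])                             # (B, 1, H)
            k, v = self.key(others), self.value(others)
            attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
            context = (attn @ v).squeeze(1)                           # (B, H)
            q_vals.append(self.q_head(torch.cat([e[:, i], context], dim=-1)))
        return torch.cat(q_vals, dim=-1)                              # (B, N)

# Example: 3 microgrid agents, batch of 8 transitions -> Q-values of shape (8, 3)
critic = AttentionCritic(obs_dim=10, act_dim=4, n_agents=3)
q = critic(torch.randn(8, 3, 10), torch.randn(8, 3, 4))

During training such a critic sees all agents' observations and actions; at execution time each microgrid agent acts only on its own local state, consistent with the privacy-preserving decentralized execution described in the abstract.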

Bibliographic Details
Main Authors: Zhang, Bin, Hu, Weihao, Ghias, Amer M. Y. M., Xu, Xiao, Chen, Zhe
Other Authors: School of Electrical and Electronic Engineering
Format: Journal Article
Language: English
Published: 2023
Subjects: Engineering::Electrical and electronic engineering; Energy Management; Energy Internet
Online Access: https://hdl.handle.net/10356/172264
_version_ 1811697500993617920
author Zhang, Bin
Hu, Weihao
Ghias, Amer M. Y. M.
Xu, Xiao
Chen, Zhe
author2 School of Electrical and Electronic Engineering
author_facet School of Electrical and Electronic Engineering
Zhang, Bin
Hu, Weihao
Ghias, Amer M. Y. M.
Xu, Xiao
Chen, Zhe
author_sort Zhang, Bin
collection NTU
description Environmental and climate change concerns are pushing the rapid development of distributed energy resources (DERs). The Energy Internet (EI), with the power-sharing functionality introduced by energy routers (ERs), offers an appealing alternative for DER systems. However, previous centralized control schemes for EI systems that follow a top-down architecture are unreliable for future power systems. First, this study proposes a distributed control scheme for a bottom-up EI architecture. Second, model-based distributed control methods are not sufficiently flexible to deal with the complex uncertainties associated with multi-energy demands and DERs, so a novel model-free, data-driven multi-agent deep reinforcement learning (MADRL) method is proposed to learn the optimal operation strategy for the bottom-layer microgrid (MG) cluster. Unlike existing single-agent deep reinforcement learning methods that rely on homogeneous MG settings, the proposed MADRL adopts decentralized execution, in which agents operate independently to meet local customized energy demands while preserving privacy. Third, an attention mechanism is added to the centralized critic, which effectively accelerates learning. Considering the bottom-layer power-exchange requests and the predicted electricity price, a model predictive control scheme in the upper layer determines the optimal power dispatch between the ERs and the main grid. Simulation comparisons with alternative methods demonstrate the effectiveness of the proposed control scheme.
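
The upper-layer model predictive control step mentioned in the description can be sketched as a small receding-horizon optimization. The snippet below uses cvxpy; the horizon length, the tracking-penalty formulation, the transfer limit, and the placeholder price and request profiles are assumptions made for illustration, not the paper's exact model.

# Minimal sketch of an upper-layer receding-horizon (MPC) dispatch between the
# energy routers and the main grid. All numerical values are placeholders.
import cvxpy as cp
import numpy as np

H = 24                                    # look-ahead horizon in hours (assumed)
price = np.random.uniform(0.05, 0.20, H)  # predicted electricity price profile (placeholder)
request = np.random.uniform(-50, 50, H)   # net power-exchange request from the MG cluster (kW)
P_MAX = 100.0                             # assumed ER <-> main-grid transfer limit (kW)

p_grid = cp.Variable(H)                   # power drawn from (+) or sold to (-) the main grid
slack = cp.Variable(H, nonneg=True)       # unmet portion of the exchange request

cost = price @ p_grid + 1e3 * cp.sum(slack)   # energy cost plus penalty on unserved requests
constraints = [
    cp.abs(p_grid - request) <= slack,        # track the bottom-layer exchange requests
    cp.abs(p_grid) <= P_MAX,                  # router/line capacity
]
cp.Problem(cp.Minimize(cost), constraints).solve()

At each control interval only the first-step decision (p_grid.value[0]) would be applied, and the problem is re-solved at the next step with updated bottom-layer requests and refreshed price forecasts, which is the receding-horizon behaviour the description refers to.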
first_indexed 2024-10-01T07:56:15Z
format Journal Article
id ntu-10356/172264
institution Nanyang Technological University
language English
last_indexed 2024-10-01T07:56:15Z
publishDate 2023
record_format dspace
spelling ntu-10356/172264 2023-12-04T07:50:56Z Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization Zhang, Bin Hu, Weihao Ghias, Amer M. Y. M. Xu, Xiao Chen, Zhe School of Electrical and Electronic Engineering Engineering::Electrical and electronic engineering Energy Management Energy Internet Environmental and climate change concerns are pushing the rapid development of distributed energy resources (DERs). The Energy Internet (EI), with the power-sharing functionality introduced by energy routers (ERs), offers an appealing alternative for DER systems. However, previous centralized control schemes for EI systems that follow a top-down architecture are unreliable for future power systems. First, this study proposes a distributed control scheme for a bottom-up EI architecture. Second, model-based distributed control methods are not sufficiently flexible to deal with the complex uncertainties associated with multi-energy demands and DERs, so a novel model-free, data-driven multi-agent deep reinforcement learning (MADRL) method is proposed to learn the optimal operation strategy for the bottom-layer microgrid (MG) cluster. Unlike existing single-agent deep reinforcement learning methods that rely on homogeneous MG settings, the proposed MADRL adopts decentralized execution, in which agents operate independently to meet local customized energy demands while preserving privacy. Third, an attention mechanism is added to the centralized critic, which effectively accelerates learning. Considering the bottom-layer power-exchange requests and the predicted electricity price, a model predictive control scheme in the upper layer determines the optimal power dispatch between the ERs and the main grid. Simulation comparisons with alternative methods demonstrate the effectiveness of the proposed control scheme. 2023-12-04T07:50:56Z 2023-12-04T07:50:56Z 2023 Journal Article Zhang, B., Hu, W., Ghias, A. M. Y. M., Xu, X. & Chen, Z. (2023). Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization. Energy Conversion and Management, 277, 116647-. https://dx.doi.org/10.1016/j.enconman.2022.116647 0196-8904 https://hdl.handle.net/10356/172264 10.1016/j.enconman.2022.116647 2-s2.0-85146045427 277 116647 en Energy Conversion and Management © 2022 Elsevier Ltd. All rights reserved.
spellingShingle Engineering::Electrical and electronic engineering
Energy Management
Energy Internet
Zhang, Bin
Hu, Weihao
Ghias, Amer M. Y. M.
Xu, Xiao
Chen, Zhe
Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title_full Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title_fullStr Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title_full_unstemmed Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title_short Multi-agent deep reinforcement learning based distributed control architecture for interconnected multi-energy microgrid energy management and optimization
title_sort multi agent deep reinforcement learning based distributed control architecture for interconnected multi energy microgrid energy management and optimization
topic Engineering::Electrical and electronic engineering
Energy Management
Energy Internet
url https://hdl.handle.net/10356/172264
work_keys_str_mv AT zhangbin multiagentdeepreinforcementlearningbaseddistributedcontrolarchitectureforinterconnectedmultienergymicrogridenergymanagementandoptimization
AT huweihao multiagentdeepreinforcementlearningbaseddistributedcontrolarchitectureforinterconnectedmultienergymicrogridenergymanagementandoptimization
AT ghiasamermym multiagentdeepreinforcementlearningbaseddistributedcontrolarchitectureforinterconnectedmultienergymicrogridenergymanagementandoptimization
AT xuxiao multiagentdeepreinforcementlearningbaseddistributedcontrolarchitectureforinterconnectedmultienergymicrogridenergymanagementandoptimization
AT chenzhe multiagentdeepreinforcementlearningbaseddistributedcontrolarchitectureforinterconnectedmultienergymicrogridenergymanagementandoptimization