Microgrid energy management using deep Q-network reinforcement learning
This paper proposes a deep reinforcement learning-based approach to optimally manage the different energy resources within a microgrid. The proposed methodology considers the stochastic behavior of the main elements, which include load profile, generation profile, and pricing signals. The energy management problem is formulated as a finite horizon Markov Decision Process (MDP)…
Main Authors: | Mohammed H. Alabdullah, Mohammad A. Abido |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2022-11-01 |
Series: | Alexandria Engineering Journal |
Subjects: | Deep reinforcement learning; Deep Q-networks; Energy management; Microgrid |
Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016822001284 |
_version_ | 1811250652319318016 |
---|---|
author | Mohammed H. Alabdullah; Mohammad A. Abido |
author_facet | Mohammed H. Alabdullah; Mohammad A. Abido |
author_sort | Mohammed H. Alabdullah |
collection | DOAJ |
description | This paper proposes a deep reinforcement learning-based approach to optimally manage the different energy resources within a microgrid. The proposed methodology considers the stochastic behavior of the main elements, which include the load profile, generation profile, and pricing signals. The energy management problem is formulated as a finite-horizon Markov Decision Process (MDP) by defining the state, action, reward, and objective functions, without prior knowledge of the transition probabilities. This formulation does not require an explicit model of the microgrid; instead, it uses accumulated data and interaction with the microgrid to derive the optimal policy. An efficient reinforcement learning algorithm based on deep Q-networks is implemented to solve the developed formulation. To confirm the effectiveness of the methodology, a case study based on a real microgrid is implemented. The results demonstrate its capability to obtain online scheduling of various energy resources within a microgrid with cost-effective actions under stochastic conditions. The achieved operating costs are within 2% of those obtained in the optimal schedule. |
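The record does not include source code. As a hedged illustration of the approach the abstract describes (a finite-horizon MDP solved with a deep Q-network), the sketch below trains a tiny two-layer Q-network with a replay buffer and a periodically synced target network on a toy environment. The environment (`ToyMicrogridEnv`), its state variables, action set, and every hyperparameter are invented for illustration only and are not the authors' case study or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyMicrogridEnv:
    """Hypothetical finite-horizon MDP (24 hourly steps), not the paper's model.
    State = (load, pv, price, battery SoC); actions: 0=charge, 1=idle, 2=discharge."""
    HORIZON = 24

    def reset(self):
        self.t = 0
        self.soc = 0.5                                  # battery state of charge in [0, 1]
        return self._obs()

    def _obs(self):
        load = 0.5 + 0.3 * np.sin(2 * np.pi * self.t / 24)
        pv = max(0.0, np.sin(np.pi * (self.t - 6) / 12))
        price = 0.2 + 0.1 * (load > 0.6)                # crude time-of-use tariff
        return np.array([load, pv, price, self.soc])

    def step(self, action):
        load, pv, price, soc = self._obs()
        delta = {0: 0.1, 1: 0.0, 2: -0.1}[action]       # battery charge/idle/discharge
        self.soc = float(np.clip(soc + delta, 0.0, 1.0))
        grid = max(0.0, load - pv + delta)              # net energy bought from the grid
        reward = -price * grid                          # negative operating cost
        self.t += 1
        return self._obs(), reward, self.t >= self.HORIZON

def init_net(n_in=4, n_h=16, n_out=3):
    """Tiny two-layer MLP approximating Q(s, a) for the 3 discrete actions."""
    return {"W1": rng.normal(0, 0.3, (n_in, n_h)), "b1": np.zeros(n_h),
            "W2": rng.normal(0, 0.3, (n_h, n_out)), "b2": np.zeros(n_out)}

def q_values(net, s):
    h = np.maximum(0.0, s @ net["W1"] + net["b1"])      # ReLU hidden layer
    return h @ net["W2"] + net["b2"], h

def train(episodes=200, gamma=0.99, lr=0.01, eps=0.2, sync_every=50):
    env, net = ToyMicrogridEnv(), init_net()
    target = {k: v.copy() for k, v in net.items()}      # frozen target network
    buffer, step = [], 0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            q, _ = q_values(net, s)
            a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q))
            s2, r, done = env.step(a)
            buffer.append((s, a, r, s2, done))          # experience replay buffer
            s = s2
            step += 1
            if step % sync_every == 0:
                target = {k: v.copy() for k, v in net.items()}
            # one semi-gradient step on a random minibatch of stored transitions
            for bs, ba, br, bs2, bdone in [buffer[i] for i in rng.integers(len(buffer), size=8)]:
                q2, _ = q_values(target, bs2)
                y = br + (0.0 if bdone else gamma * np.max(q2))   # bootstrapped target
                q1, h = q_values(net, bs)
                err = q1[ba] - y                                  # TD error
                dh = err * net["W2"][:, ba] * (h > 0)             # backprop, chosen action only
                net["W2"][:, ba] -= lr * err * h
                net["b2"][ba] -= lr * err
                net["W1"] -= lr * np.outer(bs, dh)
                net["b1"] -= lr * dh
    return net, env

net, env = train()
```

The target network and replay buffer are the two stabilizing ingredients that distinguish DQN from plain Q-learning with function approximation; the learned greedy policy here plays the role of the online scheduler the abstract describes, on a deliberately simplified problem.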
first_indexed | 2024-04-12T16:07:52Z |
format | Article |
id | doaj.art-3ef452066ed0416580c81a3b33378066 |
institution | Directory Open Access Journal |
issn | 1110-0168 |
language | English |
last_indexed | 2024-04-12T16:07:52Z |
publishDate | 2022-11-01 |
publisher | Elsevier |
record_format | Article |
series | Alexandria Engineering Journal |
spelling | doaj.art-3ef452066ed0416580c81a3b33378066 2022-12-22T03:25:59Z eng Elsevier Alexandria Engineering Journal 1110-0168 2022-11-01 Vol. 61, No. 11, pp. 9069–9078 Microgrid energy management using deep Q-network reinforcement learning Mohammed H. Alabdullah (Saudi Aramco, Dhahran, Saudi Arabia; Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia); Mohammad A. Abido (Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia; KACARE Energy Research & Innovation Center (ERIC), KFUPM, Saudi Arabia; Interdisciplinary Research Center in Renewable Energy and Power Systems (IRC-REPS), KFUPM, Saudi Arabia; Corresponding author). This paper proposes a deep reinforcement learning-based approach to optimally manage the different energy resources within a microgrid. The proposed methodology considers the stochastic behavior of the main elements, which include load profile, generation profile, and pricing signals. The energy management problem is formulated as a finite horizon Markov Decision Process (MDP) by defining the state, action, reward, and objective functions, without prior knowledge of the transition probabilities. Such formulation does not require explicit model of the microgrid, making use of the accumulated data and interaction with the microgrid to derive the optimal policy. An efficient reinforcement learning algorithm based on deep Q-networks is implemented to solve the developed formulation. To confirm the effectiveness of such methodology, a case study based on a real microgrid is implemented. The results of the proposed methodology demonstrate its capability to obtain online scheduling of various energy resources within a microgrid with optimal cost-effective actions under stochastic conditions. The achieved costs of operation are within 2% of those obtained in the optimal schedule. http://www.sciencedirect.com/science/article/pii/S1110016822001284 Deep reinforcement learning; Deep Q-networks; Energy management; Microgrid |
spellingShingle | Mohammed H. Alabdullah; Mohammad A. Abido; Microgrid energy management using deep Q-network reinforcement learning; Alexandria Engineering Journal; Deep reinforcement learning; Deep Q-networks; Energy management; Microgrid |
title | Microgrid energy management using deep Q-network reinforcement learning |
title_full | Microgrid energy management using deep Q-network reinforcement learning |
title_fullStr | Microgrid energy management using deep Q-network reinforcement learning |
title_full_unstemmed | Microgrid energy management using deep Q-network reinforcement learning |
title_short | Microgrid energy management using deep Q-network reinforcement learning |
title_sort | microgrid energy management using deep q network reinforcement learning |
topic | Deep reinforcement learning; Deep Q-networks; Energy management; Microgrid |
url | http://www.sciencedirect.com/science/article/pii/S1110016822001284 |
work_keys_str_mv | AT mohammedhalabdullah microgridenergymanagementusingdeepqnetworkreinforcementlearning AT mohammadaabido microgridenergymanagementusingdeepqnetworkreinforcementlearning |