Environment-Friendly Power Scheduling Based on Deep Contextual Reinforcement Learning

Bibliographic Details
Main Authors: Awol Seid Ebrie, Chunhyun Paik, Yongjoo Chung, Young Jin Kim
Format: Article
Language: English
Published: MDPI AG 2023-08-01
Series: Energies
Online Access: https://www.mdpi.com/1996-1073/16/16/5920
Description
Summary: A novel approach to power scheduling is introduced, focusing on minimizing both economic and environmental impacts. The method uses deep contextual reinforcement learning (RL) within an agent-based simulation environment. Each generating unit is treated as an independent, heterogeneous agent, and the scheduling dynamics are formulated as Markov decision processes (MDPs). The MDPs are then used to train a deep RL model that determines optimal power schedules. The approach is evaluated on power systems ranging from small-scale cases to large-scale systems with up to 100 units. The results show that the proposed method offers superior performance and scalability as the number of units grows.
ISSN: 1996-1073
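The summary above describes framing unit commitment as an MDP and training an RL policy over it. As a rough illustration only, and not the authors' deep contextual multi-agent method, the sketch below applies plain tabular Q-learning to a hypothetical three-unit commitment problem; the unit data, cost figures, penalty value, and proportional-dispatch rule are all invented for the example.

```python
import itertools
import random

# Hypothetical toy data (illustrative only, not from the article):
# each unit is (capacity_mw, fuel_cost_per_mw, emission_cost_per_mw).
UNITS = [
    (100, 20.0, 5.0),
    (80, 15.0, 8.0),
    (50, 30.0, 2.0),
]
DEMAND = [120, 180, 90]  # hourly demand profile in MW

# An action commits each unit on (1) or off (0) for one hour.
ACTIONS = list(itertools.product([0, 1], repeat=len(UNITS)))

def reward(hour, action):
    """Negative economic-plus-environmental cost; large penalty if demand is unmet."""
    committed = [u for u, a in zip(UNITS, action) if a]
    cap = sum(u[0] for u in committed)
    if cap < DEMAND[hour]:
        return -10_000.0  # infeasible commitment: unmet demand
    # Dispatch demand proportionally to capacity (a simplification).
    return -sum(DEMAND[hour] * u[0] / cap * (u[1] + u[2]) for u in committed)

# Tabular Q-learning; the MDP state is (hour, previous commitment vector).
random.seed(0)
Q = {}
alpha, gamma, eps = 0.2, 0.95, 0.1
for _ in range(2000):
    state = (0, (0,) * len(UNITS))
    while state[0] < len(DEMAND):
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        r = reward(state[0], action)
        nxt = (state[0] + 1, action)
        best_next = (
            max(Q.get((nxt, a), 0.0) for a in ACTIONS)
            if nxt[0] < len(DEMAND) else 0.0
        )
        q = Q.get((state, action), 0.0)
        Q[(state, action)] = q + alpha * (r + gamma * best_next - q)
        state = nxt

# Extract the greedy commitment schedule from the learned Q-table.
state = (0, (0,) * len(UNITS))
schedule = []
while state[0] < len(DEMAND):
    action = max(ACTIONS, key=lambda a: Q.get((state, a), float("-inf")))
    schedule.append(action)
    state = (state[0] + 1, action)
print(schedule)
```

In the article, the lookup table would be replaced by a deep network conditioned on context, with each generating unit acting as its own heterogeneous agent; the toy above only illustrates the state-action-reward framing of the scheduling MDP.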