Joint Optimization of Energy Efficiency and User Outage Using Multi-Agent Reinforcement Learning in Ultra-Dense Small Cell Networks
With the substantial increase in spatio-temporal mobile traffic, reducing the network-level energy consumption while satisfying various quality-of-service (QoS) requirements has become one of the most important challenges facing sixth-generation (6G) wireless networks. We herein propose a novel multi-agent distributed Q-learning based outage-aware cell breathing (MAQ-OCB) framework to optimize energy efficiency (EE) and user outage jointly. Through extensive simulations, we demonstrate that the proposed MAQ-OCB can achieve the EE-optimal solution obtained by the exhaustive search algorithm. In addition, MAQ-OCB significantly outperforms conventional algorithms such as no transmission-power-control (No TPC), On-Off, centralized Q-learning based outage-aware cell breathing (C-OCB), and random-action algorithms.
Main Authors: | Eunjin Kim, Bang Chul Jung, Chan Yi Park, Howon Lee |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-02-01 |
Series: | Electronics |
Subjects: | joint optimization; energy-efficiency; user outage; cell breathing; multi-agent distributed Q-learning; ultra-dense small cell network |
Online Access: | https://www.mdpi.com/2079-9292/11/4/599 |
author | Eunjin Kim; Bang Chul Jung; Chan Yi Park; Howon Lee |
collection | DOAJ |
description | With the substantial increase in spatio-temporal mobile traffic, reducing the network-level energy consumption while satisfying various quality-of-service (QoS) requirements has become one of the most important challenges facing sixth-generation (6G) wireless networks. We herein propose a novel multi-agent distributed Q-learning based outage-aware cell breathing (MAQ-OCB) framework to optimize energy efficiency (EE) and user outage jointly. Through extensive simulations, we demonstrate that the proposed MAQ-OCB can achieve the EE-optimal solution obtained by the exhaustive search algorithm. In addition, MAQ-OCB significantly outperforms conventional algorithms such as no transmission-power-control (No TPC), On-Off, centralized Q-learning based outage-aware cell breathing (C-OCB), and random-action algorithms. |
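Note: the description names the proposed MAQ-OCB technique but the record contains no algorithmic detail. Purely as an illustration (not the authors' implementation), the sketch below shows a minimal per-cell tabular Q-learning agent for cell breathing, assuming discrete transmit-power actions, an epsilon-greedy policy, and a hypothetical reward that trades energy efficiency against user outage; all names and parameter values (POWER_LEVELS, SmallCellAgent, joint_reward, outage_weight) are assumptions.

```python
import random
from collections import defaultdict

# All constants below are illustrative assumptions, not values from the paper.
POWER_LEVELS = [0.0, 0.25, 0.5, 1.0]   # normalized transmit-power actions (cell breathing)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate


class SmallCellAgent:
    """One distributed Q-learning agent per small-cell base station (hypothetical)."""

    def __init__(self):
        # Tabular Q-values: state -> list of action values, one per power level.
        self.q = defaultdict(lambda: [0.0] * len(POWER_LEVELS))

    def act(self, state):
        # Epsilon-greedy choice over discrete transmit-power levels.
        if random.random() < EPSILON:
            return random.randrange(len(POWER_LEVELS))
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward_value, next_state):
        # Standard tabular Q-learning update.
        best_next = max(self.q[next_state])
        td_target = reward_value + GAMMA * best_next
        self.q[state][action] += ALPHA * (td_target - self.q[state][action])


def joint_reward(energy_efficiency, outage_ratio, outage_weight=10.0):
    # Hypothetical joint objective: reward energy efficiency, penalize user outage.
    return energy_efficiency - outage_weight * outage_ratio


# Toy usage with made-up state labels and measurements.
agent = SmallCellAgent()
state, next_state = "low_load", "high_load"
action = agent.act(state)
agent.update(state, action, joint_reward(energy_efficiency=2.4, outage_ratio=0.05), next_state)
```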
format | Article |
id | doaj.art-0f423dc60c1a4fdc81d5d52fad341b57 |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
publishDate | 2022-02-01 |
publisher | MDPI AG |
series | Electronics |
author_affiliations | Eunjin Kim: School of Electronic and Electrical Engineering and IITC, Hankyong National University, Anseong 17579, Korea; Bang Chul Jung: Department of Electronic Engineering, Chungnam National University, Daejeon 34134, Korea; Chan Yi Park: Agency for Defense Development, Daejeon 34186, Korea; Howon Lee: School of Electronic and Electrical Engineering and IITC, Hankyong National University, Anseong 17579, Korea |
doi | 10.3390/electronics11040599 |
citation | Electronics, vol. 11, no. 4, art. no. 599 (2022-02-01) |
title | Joint Optimization of Energy Efficiency and User Outage Using Multi-Agent Reinforcement Learning in Ultra-Dense Small Cell Networks |
topic | joint optimization; energy-efficiency; user outage; cell breathing; multi-agent distributed Q-learning; ultra-dense small cell network |
url | https://www.mdpi.com/2079-9292/11/4/599 |