DRL-Based Resource Allocation for NOMA-Enabled D2D Communications Underlay Cellular Networks
Since the emergence of device-to-device (D2D) communications, an efficient, low-complexity resource allocation (RA) scheme suited to highly variable network environments has been in continuous demand. As a solution, we propose an RA scheme based on deep reinforcement learning (DRL) for D2D...
Main Authors: | Yun Jae Jeong, Seoyoung Yu, Jeong Woo Lee |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Device-to-device communications; cellular network; deep reinforcement learning; resource allocation; non-orthogonal multiple access |
Online Access: | https://ieeexplore.ieee.org/document/10355922/ |
_version_ | 1797376269471449088 |
author | Yun Jae Jeong; Seoyoung Yu; Jeong Woo Lee |
author_facet | Yun Jae Jeong; Seoyoung Yu; Jeong Woo Lee |
author_sort | Yun Jae Jeong |
collection | DOAJ |
description | Since the emergence of device-to-device (D2D) communications, an efficient, low-complexity resource allocation (RA) scheme suited to highly variable network environments has been in continuous demand. As a solution, we propose an RA scheme based on deep reinforcement learning (DRL) for D2D communications that exploit a cluster-wise non-orthogonal multiple access (NOMA) protocol while underlaying cellular networks. The goal of RA is to allocate transmit power and channel spectrum to D2D links so as to maximize a benefit. We analyze and formulate the outage of NOMA-enabled D2D links and investigate the resulting performance measures. To alleviate system overhead and computational complexity while maintaining a high benefit, we propose a sub-optimal RA scheme under a centralized multi-agent DRL framework, in which the agent corresponding to each D2D cluster trains its own artificial neural networks in a cyclic manner with a timing offset. The proposed DRL-based RA scheme enables prompt allocation of resources to D2D links based on observations of the time-varying environment. The proposed RA scheme outperforms other schemes in terms of benefit, energy efficiency, fairness and coordination of D2D users, and the performance gain becomes significant when the mutual interference among user equipments is severe. In a cell of radius 100 m with target rates for D2D and cellular links of 2 and 8 bits/s/Hz, respectively, the proposed RA scheme improves the normalized benefit, energy efficiency, fairness and coordination of D2D users by 18%, 23%, 75% and 80%, respectively, over a greedy scheme. The improvements in these performance measures over a random RA scheme are 152%, 164%, 87% and 77%, respectively. |
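The description outlines a centralized multi-agent DRL framework in which one agent per D2D cluster picks a transmit power and a channel, and the agents are trained cyclically with a timing offset. The Python sketch below illustrates only that cyclic update pattern under stated assumptions; it is not the authors' implementation, and the tabular Q-learning update, the state/action discretization, and the random "benefit" proxy are all illustrative placeholders.

```python
# Minimal sketch of a cyclic multi-agent allocation loop (illustrative only).
# Assumptions: discretized power levels/channels, a coarse observed state,
# and a random reward proxy standing in for the paper's benefit metric.
import numpy as np

rng = np.random.default_rng(0)

N_CLUSTERS = 4       # one agent per D2D cluster (assumed)
N_POWER_LEVELS = 4   # discretized transmit-power choices (assumed)
N_CHANNELS = 3       # cellular channels available for reuse (assumed)
N_ACTIONS = N_POWER_LEVELS * N_CHANNELS
N_STATES = 8         # coarse quantization of the observed environment (assumed)
EPS, ALPHA, GAMMA = 0.1, 0.05, 0.9

# Each agent keeps its own value estimates. The paper trains neural networks;
# a per-agent Q-table keeps this sketch dependency-free.
q_tables = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_CLUSTERS)]

def observe_state(cluster):
    """Stand-in for observing the time-varying channel/interference state."""
    return int(rng.integers(N_STATES))

def benefit(cluster, state, action):
    """Placeholder reward: a real system would compute an outage/throughput-based
    benefit of the chosen (power level, channel) pair for this cluster."""
    power_level, channel = divmod(action, N_CHANNELS)
    return rng.normal(loc=1.0 + 0.1 * power_level - 0.2 * channel, scale=0.1)

def select_action(q, state):
    """Epsilon-greedy choice over joint (power level, channel) actions."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q[state]))

for step in range(1000):
    # Cyclic training with a timing offset: only one agent updates its model
    # per step, while every agent still acts on its current policy.
    training_agent = step % N_CLUSTERS
    for cluster in range(N_CLUSTERS):
        s = observe_state(cluster)
        a = select_action(q_tables[cluster], s)
        r = benefit(cluster, s, a)
        if cluster == training_agent:
            s_next = observe_state(cluster)
            td_target = r + GAMMA * np.max(q_tables[cluster][s_next])
            q_tables[cluster][s, a] += ALPHA * (td_target - q_tables[cluster][s, a])

# Inspect the learned allocation for one example state of agent 0.
best = int(np.argmax(q_tables[0][0]))
print("agent 0, state 0 -> power level", best // N_CHANNELS, ", channel", best % N_CHANNELS)
```

In the paper, the per-agent learner is a neural network and the reward is derived from the formulated outage and benefit; both are replaced by placeholders here so the cyclic, offset training schedule can be seen in isolation.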
first_indexed | 2024-03-08T19:36:06Z |
format | Article |
id | doaj.art-21d47c07c6a8402e92d6b40db2204099 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-08T19:36:06Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-21d47c07c6a8402e92d6b40db2204099 (2023-12-26T00:09:41Z); eng; IEEE; IEEE Access; ISSN 2169-3536; 2023-01-01; vol. 11, pp. 140270-140286; DOI 10.1109/ACCESS.2023.3341585; IEEE document 10355922; "DRL-Based Resource Allocation for NOMA-Enabled D2D Communications Underlay Cellular Networks"; Yun Jae Jeong (https://orcid.org/0009-0000-4100-6319), Seoyoung Yu, Jeong Woo Lee (https://orcid.org/0000-0002-6117-7489), School of Electrical and Electronics Engineering, Chung-Ang University, Seoul, South Korea; keywords: Device-to-device communications, cellular network, deep reinforcement learning, resource allocation, non-orthogonal multiple access; https://ieeexplore.ieee.org/document/10355922/ |
title | DRL-Based Resource Allocation for NOMA-Enabled D2D Communications Underlay Cellular Networks |
topic | Device-to-device communications; cellular network; deep reinforcement learning; resource allocation; non-orthogonal multiple access |
url | https://ieeexplore.ieee.org/document/10355922/ |