Energy-Aware Dynamic DU Selection and NF Relocation in O-RAN Using Actor–Critic Learning


Bibliographic Details
Main Authors: Shahram Mollahasani, Turgay Pamuklu, Rodney Wilson, Melike Erol-Kantarci
Format: Article
Language: English
Published: MDPI AG, 2022-07-01
Series: Sensors
Subjects: actor–critic learning; energy-efficiency; O-RAN; RAN optimization
Collection: Directory of Open Access Journals (DOAJ)
Online Access: https://www.mdpi.com/1424-8220/22/13/5029
Description: The open radio access network (O-RAN) is a promising candidate for meeting flexibility and cost-effectiveness goals because openness and intelligence are built into its architecture. In O-RAN, the central unit (O-CU) and distributed unit (O-DU) are virtualized and executed on pools of general-purpose processors that can be placed at different locations, so choosing a suitable location for executing network functions (NFs) on these entities is challenging: the choice must balance propagation delay against computational capacity. In this paper, we propose the Soft Actor–Critic Energy-Aware Dynamic DU Selection algorithm (SA2C-EADDUS), which integrates two nested actor–critic agents into the O-RAN architecture. In addition, we formulate an optimization model that minimizes delay and energy consumption, solve it with an MILP solver, and use that solution as a lower bound for comparison with SA2C-EADDUS. We also compare the algorithm with recent work, including RL- and DRL-based resource-allocation algorithms and a heuristic method. We show that, by coordinating A2C agents across layers and dynamically relocating NFs according to service requirements, our scheme improves energy efficiency by 50% with respect to the other schemes and reduces the mean delay by a significant amount.
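The abstract pairs an actor–critic learner with a joint delay-and-energy objective for deciding where NFs run. As a minimal sketch of that idea only (a toy tabular advantage actor–critic over three hypothetical DU sites with made-up delay and energy costs, not the paper's SA2C-EADDUS implementation), the selection loop might look like:

```python
import math
import random

random.seed(0)

# Hypothetical per-DU costs (illustrative only, not from the paper):
# propagation delay [ms] and energy draw [J] of hosting an NF on each site.
DELAY = [2.0, 5.0, 9.0]
ENERGY = [8.0, 3.0, 1.0]
W_DELAY, W_ENergy = 0.5, 0.5  # placeholder weights of the joint objective
W_ENERGY = W_ENergy

def reward(du):
    # Negative weighted cost: the agent maximizes reward, i.e. minimizes cost.
    return -(W_DELAY * DELAY[du] + W_ENERGY * ENERGY[du])

# Actor: softmax over per-DU preferences. Critic: scalar value baseline.
prefs = [0.0, 0.0, 0.0]
value = 0.0
ALPHA_ACTOR, ALPHA_CRITIC = 0.05, 0.1

def policy():
    m = max(prefs)                       # subtract max for numerical stability
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

for step in range(3000):
    probs = policy()
    du = random.choices(range(3), weights=probs)[0]  # sample a DU to host the NF
    adv = reward(du) - value                          # advantage vs. baseline
    value += ALPHA_CRITIC * adv                       # critic update
    for a in range(3):                                # policy-gradient actor update
        grad = (1.0 if a == du else 0.0) - probs[a]
        prefs[a] += ALPHA_ACTOR * adv * grad

best = max(range(3), key=lambda a: policy()[a])
print("selected DU:", best)
```

With these illustrative costs, the policy concentrates on the DU with the lowest weighted delay-plus-energy cost; the paper's nested agents additionally react to time-varying service requirements rather than fixed costs.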
ISSN: 1424-8220
DOI: 10.3390/s22135029
Citation: Sensors, Volume 22, Issue 13, Article 5029 (published 2022-07-01)
Author Affiliations:
Shahram Mollahasani: School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Turgay Pamuklu: School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada
Rodney Wilson: Ciena, Ottawa, ON K2K 0L1, Canada
Melike Erol-Kantarci: School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada