Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning
In the fifth generation (5G) of mobile communications, network slicing provides an optimal network for each of various services as a slice. In this paper, we propose a radio access network (RAN) slicing method that flexibly allocates RAN resources using deep reinforcement learning (DRL). In RANs, the number of slices controlled by a base station fluctuates as users enter and leave the base station coverage area and as services switch on the respective sets of user equipment. Consequently, a resource-allocation scheme that depends on the number of slices fails when that number changes. We therefore consider a method that makes optimal resource allocation independent of the number of slices. Resource allocation is optimized with DRL, which learns the best action for each state through trial and error. To achieve independence from the number of slices, we design a model that manages resources on a one-slice-by-one-agent basis using Ape-X, a distributed DRL method. Because Ape-X runs agents in parallel, models that cover diverse environments can be generated through trial and error across multiple environments. In addition, we design the model to satisfy the slicing requirements without over-allocating resources. With this design, resources can be allocated optimally, independently of the number of slices, simply by changing the number of agents. In the evaluation, we test multiple scenarios and show that the mean satisfaction of the slice requirements is approximately 97%.
Main Authors: | Yu Abiko, Takato Saito, Daizo Ikeda, Ken Ohta, Tadanori Mizuno, Hiroshi Mineno |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Deep reinforcement learning; network slicing; RAN slicing; resource management |
Online Access: | https://ieeexplore.ieee.org/document/9057705/ |
_version_ | 1819276430452719616 |
---|---|
author | Yu Abiko; Takato Saito; Daizo Ikeda; Ken Ohta; Tadanori Mizuno; Hiroshi Mineno |
author_facet | Yu Abiko; Takato Saito; Daizo Ikeda; Ken Ohta; Tadanori Mizuno; Hiroshi Mineno |
author_sort | Yu Abiko |
collection | DOAJ |
description | In the fifth generation (5G) of mobile communications, network slicing provides an optimal network for each of various services as a slice. In this paper, we propose a radio access network (RAN) slicing method that flexibly allocates RAN resources using deep reinforcement learning (DRL). In RANs, the number of slices controlled by a base station fluctuates as users enter and leave the base station coverage area and as services switch on the respective sets of user equipment. Consequently, a resource-allocation scheme that depends on the number of slices fails when that number changes. We therefore consider a method that makes optimal resource allocation independent of the number of slices. Resource allocation is optimized with DRL, which learns the best action for each state through trial and error. To achieve independence from the number of slices, we design a model that manages resources on a one-slice-by-one-agent basis using Ape-X, a distributed DRL method. Because Ape-X runs agents in parallel, models that cover diverse environments can be generated through trial and error across multiple environments. In addition, we design the model to satisfy the slicing requirements without over-allocating resources. With this design, resources can be allocated optimally, independently of the number of slices, simply by changing the number of agents. In the evaluation, we test multiple scenarios and show that the mean satisfaction of the slice requirements is approximately 97%. |
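The one-slice-by-one-agent design summarized above can be sketched in a toy form. This is a hypothetical illustration, not the authors' code: the paper uses Ape-X (a distributed prioritized-replay DQN), which is replaced here by single-state tabular Q-learning to keep the example self-contained, and the resource-block budget, per-slice demands, and reward coefficients are all assumed values. The reward mirrors the stated design goal: satisfy a slice's requirement without over-allocating resources.

```python
import random

MAX_RBS = 10  # assumed resource-block (RB) budget per slice (toy value)

def reward(allocated, required):
    """+1 when the slice requirement is met, minus a penalty per surplus RB."""
    if allocated < required:
        return -1.0                              # requirement not satisfied
    return 1.0 - 0.2 * (allocated - required)    # discourage over-allocation

def train_slice_agent(required, episodes=2000, eps=0.1, alpha=0.5, seed=0):
    """Independent agent for a single slice (one-slice-by-one-agent design)."""
    rng = random.Random(seed)
    q = [0.0] * (MAX_RBS + 1)  # Q-value per action (RB count); single state
    for _ in range(episodes):
        if rng.random() < eps:                   # epsilon-greedy exploration
            a = rng.randrange(MAX_RBS + 1)
        else:
            a = max(range(MAX_RBS + 1), key=q.__getitem__)
        q[a] += alpha * (reward(a, required) - q[a])
    return max(range(MAX_RBS + 1), key=q.__getitem__)

# Because agents are independent, adding or removing a slice just means
# adding or removing an agent; the remaining agents need no retraining.
demands = [3, 7, 5]  # per-slice RB requirements (toy numbers)
allocations = [train_slice_agent(d) for d in demands]
print(allocations)
```

Each agent converges to allocating exactly its slice's requirement, since the reward peaks there; scaling the slice count only changes how many such agents run, which is the independence property the abstract claims.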
first_indexed | 2024-12-23T23:40:06Z |
format | Article |
id | doaj.art-eb43642392fe4792bbe492b42afe54c1 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-23T23:40:06Z |
publishDate | 2020-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-eb43642392fe4792bbe492b42afe54c1 (2022-12-21T17:25:42Z). "Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning." IEEE Access (ISSN 2169-3536), vol. 8, pp. 68183-68198, published 2020-01-01 by IEEE. DOI: 10.1109/ACCESS.2020.2986050; IEEE Xplore document 9057705. Authors: Yu Abiko (https://orcid.org/0000-0002-7165-7531), Takato Saito (https://orcid.org/0000-0002-7243-2270), Daizo Ikeda (https://orcid.org/0000-0002-4022-0505), Ken Ohta (https://orcid.org/0000-0003-2360-9874), Tadanori Mizuno (https://orcid.org/0000-0002-9831-4758), Hiroshi Mineno (https://orcid.org/0000-0002-3921-4298). Affiliations: Graduate School of Integrated Science and Technology, Shizuoka University, Hamamatsu, Japan (Abiko, Mineno); Research Laboratories, NTT DOCOMO, Inc., Yokosuka, Japan (Saito, Ikeda, Ohta); Faculty of Information Science, Aichi Institute of Technology, Toyota, Japan (Mizuno). Keywords: Deep reinforcement learning; network slicing; RAN slicing; resource management. https://ieeexplore.ieee.org/document/9057705/ |
spellingShingle | Yu Abiko; Takato Saito; Daizo Ikeda; Ken Ohta; Tadanori Mizuno; Hiroshi Mineno; Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning; IEEE Access; Deep reinforcement learning; network slicing; RAN slicing; resource management |
title | Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning |
title_full | Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning |
title_fullStr | Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning |
title_full_unstemmed | Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning |
title_short | Flexible Resource Block Allocation to Multiple Slices for Radio Access Network Slicing Using Deep Reinforcement Learning |
title_sort | flexible resource block allocation to multiple slices for radio access network slicing using deep reinforcement learning |
topic | Deep reinforcement learning; network slicing; RAN slicing; resource management |
url | https://ieeexplore.ieee.org/document/9057705/ |
work_keys_str_mv | AT yuabiko flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning AT takatosaito flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning AT daizoikeda flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning AT kenohta flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning AT tadanorimizuno flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning AT hiroshimineno flexibleresourceblockallocationtomultipleslicesforradioaccessnetworkslicingusingdeepreinforcementlearning |