Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks
The paper studies secrecy communication threatened by a single eavesdropper in Energy Harvesting (EH)-based cognitive radio networks, where both the Secondary User (SU) and the jammer harvest, store, and utilize RF energy from the Primary Transmitter (PT). Our main goal is to optimize the time slot...
Main Authors: | Lin, Ruiquan; Qiu, Hangding; Jiang, Weibin; Jiang, Zhenglong; Li, Zhili; Wang, Jun |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Journal Article |
Language: | English |
Published: | 2023 |
Subjects: | Engineering::Electrical and electronic engineering; Cognitive Radio Network; Energy Harvesting |
Online Access: | https://hdl.handle.net/10356/169461 |
_version_ | 1826119607129210880 |
---|---|
author | Lin, Ruiquan Qiu, Hangding Jiang, Weibin Jiang, Zhenglong Li, Zhili Wang, Jun |
author2 | School of Electrical and Electronic Engineering |
author_facet | School of Electrical and Electronic Engineering Lin, Ruiquan Qiu, Hangding Jiang, Weibin Jiang, Zhenglong Li, Zhili Wang, Jun |
author_sort | Lin, Ruiquan |
collection | NTU |
description | This paper studies secrecy communication threatened by a single eavesdropper in Energy Harvesting (EH)-based cognitive radio networks, where both the Secondary User (SU) and the jammer harvest, store, and utilize RF energy from the Primary Transmitter (PT). Our main goal is to optimize the time slots for energy harvesting and wireless communication for both the secondary user and the jammer, so as to maximize the long-term performance of secrecy communication. A multi-agent Deep Reinforcement Learning (DRL) method is proposed to solve this joint resource allocation and performance optimization problem. Specifically, each sub-channel of the Secondary Transmitter (ST) to Secondary Receiver (SR) link, along with the jammer-to-eavesdropper link, is regarded as an agent responsible for exploring an optimal power allocation strategy, while a time allocation network is established to obtain an optimal EH time allocation strategy. Every agent dynamically interacts with the wireless communication environment. Simulation results demonstrate that the proposed DRL-based resource allocation method outperforms existing schemes in terms of secrecy rate, convergence speed, and average number of transition steps. (A minimal illustrative sketch of the agent structure described here follows the record fields below.) |
first_indexed | 2024-10-01T05:03:09Z |
format | Journal Article |
id | ntu-10356/169461 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T05:03:09Z |
publishDate | 2023 |
record_format | dspace |
spelling | ntu-10356/169461 2023-07-21T15:40:27Z Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks Lin, Ruiquan Qiu, Hangding Jiang, Weibin Jiang, Zhenglong Li, Zhili Wang, Jun School of Electrical and Electronic Engineering Engineering::Electrical and electronic engineering Cognitive Radio Network Energy Harvesting This paper studies secrecy communication threatened by a single eavesdropper in Energy Harvesting (EH)-based cognitive radio networks, where both the Secondary User (SU) and the jammer harvest, store, and utilize RF energy from the Primary Transmitter (PT). Our main goal is to optimize the time slots for energy harvesting and wireless communication for both the secondary user and the jammer, so as to maximize the long-term performance of secrecy communication. A multi-agent Deep Reinforcement Learning (DRL) method is proposed to solve this joint resource allocation and performance optimization problem. Specifically, each sub-channel of the Secondary Transmitter (ST) to Secondary Receiver (SR) link, along with the jammer-to-eavesdropper link, is regarded as an agent responsible for exploring an optimal power allocation strategy, while a time allocation network is established to obtain an optimal EH time allocation strategy. Every agent dynamically interacts with the wireless communication environment. Simulation results demonstrate that the proposed DRL-based resource allocation method outperforms existing schemes in terms of secrecy rate, convergence speed, and average number of transition steps. Published version This work was supported in part by the Natural Science Foundation of China under Grant No. 61871133 and in part by the Industry-Academia Collaboration Program of Fujian Universities under Grant No. 2020H6006. 2023-07-19T05:42:57Z 2023-07-19T05:42:57Z 2023 Journal Article Lin, R., Qiu, H., Jiang, W., Jiang, Z., Li, Z. & Wang, J. (2023). Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks. Sensors, 23(2), 807-. https://dx.doi.org/10.3390/s23020807 1424-8220 https://hdl.handle.net/10356/169461 10.3390/s23020807 36679601 2-s2.0-85146705950 2 23 807 en Sensors © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). application/pdf |
spellingShingle | Engineering::Electrical and electronic engineering Cognitive Radio Network Energy Harvesting Lin, Ruiquan Qiu, Hangding Jiang, Weibin Jiang, Zhenglong Li, Zhili Wang, Jun Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title | Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title_full | Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title_fullStr | Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title_full_unstemmed | Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title_short | Deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
title_sort | deep reinforcement learning for physical layer security enhancement in energy harvesting based cognitive radio networks |
topic | Engineering::Electrical and electronic engineering Cognitive Radio Network Energy Harvesting |
url | https://hdl.handle.net/10356/169461 |
work_keys_str_mv | AT linruiquan deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks AT qiuhangding deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks AT jiangweibin deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks AT jiangzhenglong deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks AT lizhili deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks AT wangjun deepreinforcementlearningforphysicallayersecurityenhancementinenergyharvestingbasedcognitiveradionetworks |
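The description above sketches a multi-agent DRL architecture: one learning agent per ST-to-SR sub-channel (and one for the jammer-to-eavesdropper link) exploring a power allocation strategy, plus a separate time allocation network that picks the fraction of each slot devoted to energy harvesting, with the achievable secrecy rate as the reward. The record does not contain the authors' implementation, so the following is only a minimal Python/PyTorch sketch of that structure under stated assumptions: the network sizes, discrete power levels, candidate EH time fractions, the toy fading channel model, the fixed jamming power, and the state definition are all illustrative choices, not values taken from the paper.

```python
# Minimal sketch (not the authors' code): per-sub-channel DQN-style agents pick discrete
# transmit-power levels, and a separate "time allocation network" picks the fraction of
# each slot spent on energy harvesting. All concrete values below are assumptions.
import numpy as np
import torch
import torch.nn as nn

N_SUBCH = 4                            # assumed number of ST->SR sub-channels; the jammer link
                                       # would be one more agent, modeled as a fixed power here
POWER_LEVELS = [0.0, 0.05, 0.1, 0.2]   # assumed discrete transmit powers (W)
EH_FRACTIONS = [0.2, 0.4, 0.6, 0.8]    # assumed candidate EH time fractions
NOISE = 1e-9                           # assumed noise power

class QNet(nn.Module):
    """Small MLP mapping a local state (channel gains, battery level) to action values."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )
    def forward(self, s):
        return self.net(s)

def secrecy_rate(p_tx, g_sr, g_se, p_jam, g_je, noise=NOISE):
    """Per-sub-channel secrecy rate: [log2(1 + SINR at SR) - log2(1 + SINR at eavesdropper)]^+.
    In this toy model the jammer degrades only the eavesdropper's SINR."""
    r_legit = np.log2(1.0 + p_tx * g_sr / noise)
    r_eave = np.log2(1.0 + p_tx * g_se / (noise + p_jam * g_je))
    return max(r_legit - r_eave, 0.0)

# One Q-network per sub-channel agent (power allocation) plus one network for the
# EH/transmission time split, mirroring the structure described in the abstract.
state_dim = 3   # assumed local state: [gain to SR, gain to eavesdropper, battery level]
agents = [QNet(state_dim, len(POWER_LEVELS)) for _ in range(N_SUBCH)]
time_net = QNet(state_dim, len(EH_FRACTIONS))
# Optimizers for a DQN-style update; the training loop itself is omitted in this sketch.
opts = [torch.optim.Adam(net.parameters(), lr=1e-3) for net in agents + [time_net]]

def epsilon_greedy(qnet, state, eps=0.1):
    """Pick a random action with probability eps, otherwise the greedy action."""
    if np.random.rand() < eps:
        return np.random.randint(qnet.net[-1].out_features)
    with torch.no_grad():
        return int(torch.argmax(qnet(torch.tensor(state, dtype=torch.float32))))

# --- one illustrative interaction step with randomly drawn channel realizations ---
rng = np.random.default_rng(0)
battery = 0.5                                   # assumed normalized stored energy
gains = rng.exponential(1.0, (N_SUBCH, 3))      # toy Rayleigh-fading power gains per sub-channel
slot_state = [gains[:, 0].mean(), gains[:, 1].mean(), battery]
tau = EH_FRACTIONS[epsilon_greedy(time_net, slot_state)]   # EH fraction chosen once per slot
reward_total = 0.0
for k, agent in enumerate(agents):
    g_sr, g_se, g_je = gains[k]
    p_tx = POWER_LEVELS[epsilon_greedy(agent, [g_sr, g_se, battery])]
    p_jam = 0.05                                # assumed fixed jamming power in this sketch
    # Transmission only occupies the remaining (1 - tau) of the slot.
    reward_total += (1.0 - tau) * secrecy_rate(p_tx, g_sr, g_se, p_jam, g_je)
print(f"sum secrecy-rate reward for this slot: {reward_total:.3f} bit/s/Hz")
```

In a full DQN-style implementation each agent and the time allocation network would additionally keep a replay buffer and update their Q-networks toward target values computed from the observed reward and next state; only the forward interaction for a single slot is shown here.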