Attention-Shared Multi-Agent Actor–Critic-Based Deep Reinforcement Learning Approach for Mobile Charging Dynamic Scheduling in Wireless Rechargeable Sensor Networks
The breakthrough of wireless energy transmission (WET) technology has greatly promoted the development of wireless rechargeable sensor networks (WRSNs). A promising way to overcome the energy constraint problem in WRSNs is mobile charging, in which a mobile charger charges the sensors via WET. Recently, more and more studies have addressed mobile charging scheduling under dynamic charging environments, but they neglect the joint optimal design of charging sequence scheduling and charging ratio control (JSSRC). This paper proposes a novel attention-shared multi-agent actor–critic-based deep reinforcement learning approach for JSSRC (AMADRL-JSSRC). In AMADRL-JSSRC, we employ two heterogeneous agents, a charging sequence scheduler and a charging ratio controller, each with an independent actor network and critic network. We design a reward function for each agent by considering the tour length and the number of dead sensors. AMADRL-JSSRC trains decentralized policies in the multi-agent environment, using centrally computed critics that share an attention mechanism which selects relevant policy information for each agent at every charging decision. Simulation results demonstrate that the proposed AMADRL-JSSRC can efficiently prolong the lifetime of the network and reduce the number of dead sensors compared with the baseline algorithms.
Main Authors: | Chengpeng Jiang, Ziyang Wang, Shuai Chen, Jinglin Li, Haoran Wang, Jinwei Xiang, Wendong Xiao |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-07-01 |
Series: | Entropy |
Subjects: | wireless rechargeable sensor network; deep reinforcement learning; multi-agent; attention-shared; mobile charging |
Online Access: | https://www.mdpi.com/1099-4300/24/7/965 |
author | Chengpeng Jiang; Ziyang Wang; Shuai Chen; Jinglin Li; Haoran Wang; Jinwei Xiang; Wendong Xiao |
collection | DOAJ |
description | The breakthrough of wireless energy transmission (WET) technology has greatly promoted the development of wireless rechargeable sensor networks (WRSNs). A promising way to overcome the energy constraint problem in WRSNs is mobile charging, in which a mobile charger charges the sensors via WET. Recently, more and more studies have addressed mobile charging scheduling under dynamic charging environments, but they neglect the joint optimal design of charging sequence scheduling and charging ratio control (JSSRC). This paper proposes a novel attention-shared multi-agent actor–critic-based deep reinforcement learning approach for JSSRC (AMADRL-JSSRC). In AMADRL-JSSRC, we employ two heterogeneous agents, a charging sequence scheduler and a charging ratio controller, each with an independent actor network and critic network. We design a reward function for each agent by considering the tour length and the number of dead sensors. AMADRL-JSSRC trains decentralized policies in the multi-agent environment, using centrally computed critics that share an attention mechanism which selects relevant policy information for each agent at every charging decision. Simulation results demonstrate that the proposed AMADRL-JSSRC can efficiently prolong the lifetime of the network and reduce the number of dead sensors compared with the baseline algorithms. |
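The abstract's description of a shared-attention centralized critic for two heterogeneous agents can be made concrete with a small sketch. The following is a minimal illustration only, assuming PyTorch; the class name `AttentionSharedCritic`, the hidden size, and the state-action dimensions are invented for the example and are not taken from the paper. It shows the core idea: each agent's Q-value is computed from its own state-action encoding together with an attention-weighted summary of the other agents' encodings.

```python
# Minimal sketch (not the authors' code) of an attention-shared centralized critic
# for two heterogeneous agents; all dimensions and names are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSharedCritic(nn.Module):
    """Centralized critic: each agent's Q-value is computed from its own
    state-action encoding plus an attention-weighted summary of the other
    agents' encodings (the shared attention mechanism)."""
    def __init__(self, sa_dims, hidden=64):
        super().__init__()
        # One state-action encoder and one Q-head per agent; the attention
        # projections (query/key/value) are shared across agents.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in sa_dims])
        self.query = nn.Linear(hidden, hidden, bias=False)
        self.key = nn.Linear(hidden, hidden, bias=False)
        self.value = nn.Linear(hidden, hidden, bias=False)
        self.q_heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                           nn.Linear(hidden, 1)) for _ in sa_dims])

    def forward(self, sa_list):
        # sa_list[i]: (batch, sa_dims[i]) concatenated state-action of agent i
        enc = [f(x) for f, x in zip(self.encoders, sa_list)]
        q_values = []
        for i, e_i in enumerate(enc):
            others = [e_j for j, e_j in enumerate(enc) if j != i]
            keys = torch.stack([self.key(e) for e in others], dim=1)    # (B, n-1, H)
            vals = torch.stack([self.value(e) for e in others], dim=1)  # (B, n-1, H)
            query = self.query(e_i).unsqueeze(2)                        # (B, H, 1)
            logits = torch.bmm(keys, query) / keys.shape[-1] ** 0.5     # (B, n-1, 1)
            attn = F.softmax(logits, dim=1)
            context = (attn * vals).sum(dim=1)                          # (B, H)
            q_values.append(self.q_heads[i](torch.cat([e_i, context], dim=-1)))
        return q_values  # one Q estimate per agent

# Two heterogeneous agents: a charging sequence scheduler (discrete choice over
# candidate sensors, encoded as a one-hot here) and a charging ratio controller
# (a scalar ratio). The dimensions below are made up for the demo.
critic = AttentionSharedCritic(sa_dims=[20 + 20, 20 + 1])
sched_sa = torch.randn(8, 40)   # batch of scheduler state-action pairs
ratio_sa = torch.randn(8, 21)   # batch of ratio-controller state-action pairs
q_sched, q_ratio = critic([sched_sa, ratio_sa])
```

With only two agents the attention weights are degenerate (there is a single "other" agent), but the same structure extends to more agents without changing the code, which is the point of sharing the attention mechanism in the centralized critic.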
format | Article |
id | doaj.art-5cfdeebde5784d0b90b4491b5c39e2aa |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
publishDate | 2022-07-01 |
publisher | MDPI AG |
series | Entropy |
spelling | Entropy, Vol. 24, Iss. 7, Art. 965 (2022-07-01); doi:10.3390/e24070965. Author affiliations: Chengpeng Jiang, Shuai Chen, Jinglin Li, Haoran Wang, Jinwei Xiang, Wendong Xiao (School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China); Ziyang Wang (Department of Automation, Tsinghua University, Beijing 100084, China) |
title | Attention-Shared Multi-Agent Actor–Critic-Based Deep Reinforcement Learning Approach for Mobile Charging Dynamic Scheduling in Wireless Rechargeable Sensor Networks |
topic | wireless rechargeable sensor network; deep reinforcement learning; multi-agent; attention-shared; mobile charging |
url | https://www.mdpi.com/1099-4300/24/7/965 |