Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution

Significant sampling is an adaptive monitoring technique proposed for highly dynamic networks with centralized network management and control systems. The essential spirit of significant sampling is to collect and disseminate network state information when it is of significant value to the optimal operation of the network, and in particular when it helps identify the shortest routes. Discovering the optimal sampling policy that specifies the optimal sampling frequency is referred to as the significant sampling problem. Modeling the problem as a Markov decision process, this paper puts forth a deep reinforcement learning (DRL) approach to tackle the significant sampling problem. This approach is more flexible and general than prior approaches, as it can accommodate a diverse set of network environments. Experimental results show that: 1) by following the objectives set in the prior work, our DRL approach achieves performance comparable to their analytically derived policy $\phi'$, yet, unlike the prior approach, it is model-free and unaware of the underlying traffic model; 2) by appropriately modifying the objective functions, we obtain a new policy that addresses the never-sample problem of policy $\phi'$, consequently reducing the overall cost; 3) our DRL approach works well under different stochastic variations of the network environment, providing good solutions in complex network environments where analytically tractable solutions are not feasible.
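
To make the Markov-decision-process framing above concrete, the following is a minimal, self-contained sketch of a sampling controller that learns how long to wait before the next network-state sample. It is not the authors' algorithm: the link-cost dynamics (a Gaussian random walk whose volatility follows a hidden two-regime process), the cost model (a fixed per-sample charge plus a stale-route penalty), the discretized state, the candidate intervals, and the use of tabular Q-learning in place of the paper's deep reinforcement learning agent are all illustrative assumptions chosen only to show where the state, action, and cost enter.

    import random

    # All constants below are illustrative assumptions, NOT taken from the paper.
    INTERVALS = [1.0, 2.0, 4.0, 8.0]     # candidate waits before the next sample (actions)
    SAMPLING_COST = 1.0                  # fixed cost charged each time the network is sampled
    SIGMAS = [0.1, 0.6]                  # hidden low/high volatility of the link-cost drift
    SWITCH_PROB = 0.02                   # chance per unit time that the volatility regime flips
    ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1   # learning rate, per-unit-time discount, exploration

    def evolve(regime, dt):
        """Hypothetical dynamics between samples: starting from a freshly re-routed
        (zero-gap) state, the gap between the route in use and the true shortest path
        drifts as a Gaussian random walk whose volatility is set by a hidden regime."""
        if random.random() < 1.0 - (1.0 - SWITCH_PROB) ** dt:
            regime = 1 - regime
        gap = random.gauss(0.0, SIGMAS[regime] * dt ** 0.5)
        return regime, gap

    def state_of(gap):
        """The controller only sees the gap at sampling instants; discretize it as the state."""
        return min(int(abs(gap) / 0.5), 5)

    Q = [[0.0] * len(INTERVALS) for _ in range(6)]  # Q[state][action]: expected discounted cost

    regime, s = 0, 0
    for _ in range(200_000):
        # epsilon-greedy choice of the next sampling interval
        a = (random.randrange(len(INTERVALS)) if random.random() < EPS
             else min(range(len(INTERVALS)), key=lambda i: Q[s][i]))
        dt = INTERVALS[a]

        # Between samples the network evolves unobserved and the chosen route goes stale.
        regime, gap = evolve(regime, dt)
        cost = SAMPLING_COST + abs(gap) * dt      # sampling cost + stale-route penalty

        # Sampling reveals the gap; the controller re-routes, so the next interval
        # again starts from a zero gap.
        s_next = state_of(gap)

        # Q-learning update, discounting by elapsed time so that waiting longer defers cost.
        Q[s][a] += ALPHA * (cost + (GAMMA ** dt) * min(Q[s_next]) - Q[s][a])
        s = s_next

    print("Preferred interval per observed-gap state:",
          [INTERVALS[min(range(len(INTERVALS)), key=lambda i: Q[s][i])] for s in range(6)])

Under these assumed dynamics the learned policy tends to re-sample sooner after a large observed deviation (evidence of the high-volatility regime) and to stretch the interval when the network looks quiet, which is the intuition behind significant sampling; the paper replaces the discretized table with a deep network and a richer network model.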


Bibliographic Details
Main Authors: Shao, Yulin; Rezaee, Arman; Liew, Soung Chang; Chan, Vincent W. S.
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021
Online Access: https://hdl.handle.net/1721.1/131057
Journal: IEEE Journal on Selected Areas in Communications
ISSN: 0733-8716, 1558-0008
Date Issued: June 2020
Citation: Shao, Yulin et al. "Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution." IEEE Journal on Selected Areas in Communications 38, 10 (October 2020): 2234-2248. © 2020 IEEE
DOI: http://dx.doi.org/10.1109/jsac.2020.3000364
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Source: Prof. Chan via Phoebe Ayers