Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution


Bibliographic Details
Main Authors: Shao, Yulin, Rezaee, Arman, Liew, Soung Chang, Chan, Vincent
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE) 2021
Online Access: https://hdl.handle.net/1721.1/131056
description We face a growing ecosystem of applications that produce and consume data at unprecedented rates and with strict latency requirements. Meanwhile, the bursty and unpredictable nature of their traffic can induce highly dynamic network environments that endanger the viability of these very applications. Their unencumbered operation requires rapid (re)actions by Network Management and Control (NMC) systems, which themselves depend on timely collection of network state information. Given the size of today's networks, collecting detailed network state is prohibitively costly in network transport and computational resources. Thus, judicious sampling of network states is necessary for a cost-effective NMC system. This paper proposes a deep reinforcement learning (DRL) solution that learns the principle of significant sampling and effectively balances the need for accurate state information against the cost of sampling. Modeling the problem as a Markov Decision Process, we treat the NMC system as an agent that samples the state of various network elements to make optimal routing decisions. The agent periodically receives a reward commensurate with the quality of its routing decisions, and its decisions on when to sample progressively improve as it learns the relationship between the sampling frequency and the reward function. We show that our solution performs comparably to the recently published analytical optimum without requiring explicit knowledge of the traffic model. Furthermore, we show that our solution can adapt to new environments, a feature largely absent from analytical treatments of the problem.
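The abstract frames sampling as a Markov Decision Process: an agent picks when to sample, and a reward balances decision quality against sampling cost. The record gives no implementation details, so the following is only a toy sketch under our own assumptions: hypothetical congestion regimes ("stable"/"volatile"), a two-interval action set, an invented reward that amortizes a fixed per-sample cost over the interval while penalizing staleness, and plain tabular Q-learning rather than the paper's DRL agent.

```python
import random

# Toy sketch (assumptions ours, not the paper's): a tabular Q-learning agent
# decides how long to wait before sampling a link's state again. The reward
# trades routing quality (which degrades as the state estimate goes stale,
# especially in the volatile regime) against a fixed cost per sample.
random.seed(0)

INTERVALS = [1, 4]                            # candidate sampling intervals (time units)
STATES = ["stable", "volatile"]               # hypothetical congestion regimes
SAMPLE_COST = 1.0                             # cost of taking one sample
STALENESS = {"stable": 0.1, "volatile": 0.8}  # penalty per unit of staleness
ALPHA, GAMMA, EPS = 0.05, 0.8, 0.1            # learning rate, discount, exploration

def reward(state, interval):
    # Sampling cost is amortized over the interval; staleness grows with it.
    return -SAMPLE_COST / interval - STALENESS[state] * interval

def transition(state):
    # Stylized dynamics: the congestion regime flips with probability 0.3.
    if random.random() < 0.3:
        return "volatile" if state == "stable" else "stable"
    return state

Q = {(s, a): 0.0 for s in STATES for a in INTERVALS}
state = "stable"
for _ in range(50000):
    # Epsilon-greedy choice of the next sampling interval.
    if random.random() < EPS:
        action = random.choice(INTERVALS)
    else:
        action = max(INTERVALS, key=lambda a: Q[(state, a)])
    r, nxt = reward(state, action), transition(state)
    best_next = max(Q[(nxt, a)] for a in INTERVALS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(INTERVALS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Under these invented dynamics the agent learns the qualitative behavior the abstract describes: it samples frequently (interval 1) in the volatile regime, where stale state hurts routing decisions most, and infrequently (interval 4) in the stable regime, where sampling cost dominates.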
id mit-1721.1/131056
institution Massachusetts Institute of Technology
Type: Conference paper (http://purl.org/eprint/type/ConferencePaper)
Conference: 2019 IEEE Global Communications Conference (GLOBECOM), December 2019, Waikoloa, HI
Citation: Shao, Yulin et al. "Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution." 2019 IEEE Global Communications Conference, December 2019, Waikoloa, HI, Institute of Electrical and Electronics Engineers, February 2020. © 2019 IEEE
DOI: http://dx.doi.org/10.1109/globecom38437.2019.9013908
ISBN: 9781728109626
ISSN: 2576-6813
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)