Selective network discovery via deep reinforcement learning on embedded spaces

Abstract Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called network actor critic (NAC), which learns a policy and notion of future reward in an offline setting via a deep reinforcement learning algorithm. The NAC paradigm utilizes a task-specific network embedding to reduce the state space complexity. A detailed comparative analysis of popular network embeddings is presented with respect to their role in supporting offline planning. Furthermore, a quantitative study is presented on various synthetic and real benchmarks using NAC and several baselines. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms. Finally, we outline learning regimes where planning is critical in addressing sparse and changing reward signals.
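
The abstract frames selective harvesting as a sequential decision process: starting from a seed vertex, an agent repeatedly chooses which boundary node of the currently observed subgraph to query next, and is rewarded when the revealed node carries the target attribute. The sketch below illustrates only that interaction loop, not the authors' NAC implementation; the function names (harvest, random_policy), the networkx toy graph, and the placeholder scorer are assumptions for illustration. In NAC, the hand-coded scorer would be replaced by an actor-critic model trained offline over a task-specific network embedding of the observed subgraph.

```python
# Minimal, illustrative sketch (not the authors' code) of selective harvesting
# posed as a sequential decision process over a partially observed graph.
import random
import networkx as nx


def harvest(graph, target_attr, seed, budget, score_fn):
    """Sequentially query boundary nodes of the observed subgraph.

    graph       : the full (hidden) graph; nodes are revealed only when queried
    target_attr : node attribute whose truthy value yields reward 1
    seed        : initially observed node
    budget      : number of queries allowed
    score_fn    : policy mapping (observed subgraph, candidate node) -> score
    """
    observed = {seed}
    boundary = set(graph.neighbors(seed)) - observed
    reward = 0
    for _ in range(budget):
        if not boundary:
            break
        # Action: query the boundary node the policy scores highest.
        action = max(boundary, key=lambda v: score_fn(graph.subgraph(observed), v))
        observed.add(action)
        boundary |= set(graph.neighbors(action))
        boundary -= observed
        # Reward: 1 if the newly revealed node carries the target attribute.
        reward += int(graph.nodes[action].get(target_attr, False))
    return reward, observed


def random_policy(observed_subgraph, candidate):
    # Placeholder scorer; NAC would instead score candidates with an actor
    # network learned offline over a task-specific embedding of the state.
    return random.random()


if __name__ == "__main__":
    # Toy attributed graph: every fifth node carries the target attribute.
    g = nx.barabasi_albert_graph(200, 2, seed=1)
    nx.set_node_attributes(g, {v: (v % 5 == 0) for v in g}, "target")
    total, seen = harvest(g, "target", seed=0, budget=50, score_fn=random_policy)
    print(f"harvested {total} target nodes out of {len(seen)} observed")
```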


Bibliographic Details
Main Authors: Morales, Peter, Caceres, Rajmonda S, Eliassi-Rad, Tina
Other Authors: Lincoln Laboratory
Format: Article
Language: English
Published: Springer International Publishing 2021
Online Access: https://hdl.handle.net/1721.1/132068
Published in: Applied Network Science. 2021 Mar 20;6(1):24
DOI: https://doi.org/10.1007/s41109-021-00365-8
Institution: Massachusetts Institute of Technology (DSpace record mit-1721.1/132068)
Rights: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/); copyright The Author(s)