Selective network discovery via deep reinforcement learning on embedded spaces

Abstract Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called network actor critic (NAC), which learns a policy and notion of future reward in an offline setting via a deep reinforcement learning algorithm. The NAC paradigm utilizes a task-specific network embedding to reduce the state space complexity. A detailed comparative analysis of popular network embeddings is presented with respect to their role in supporting offline planning. Furthermore, a quantitative study is presented on various synthetic and real benchmarks using NAC and several baselines. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms. Finally, we outline learning regimes where planning is critical in addressing sparse and changing reward signals.
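As a rough illustration of the selective-harvesting setup the abstract describes, the sketch below trains a softmax policy with a learned value baseline to decide which boundary node of a partially observed graph to probe next, rewarding probes that uncover target-attribute nodes. It is only a minimal sketch, not the paper's NAC implementation: the toy graph, the hand-crafted node features (standing in for a task-specific embedding), and the Monte-Carlo policy-gradient update are all illustrative assumptions.

```python
# Illustrative sketch only; not the authors' NAC algorithm or code.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy ground-truth network with a binary node attribute ("target").
G = nx.barabasi_albert_graph(300, 3, seed=0)
target = {v: int(rng.random() < 0.2) for v in G}   # ~20% of nodes carry the attribute

def node_features(G, observed, v):
    """Hand-crafted stand-in for a task-specific embedding of a boundary node."""
    nbrs = [u for u in G.neighbors(v) if u in observed]
    frac_pos = np.mean([target[u] for u in nbrs]) if nbrs else 0.0
    return np.array([1.0, len(nbrs) / 10.0, frac_pos])   # bias, observed degree, label signal

theta = np.zeros(3)   # actor (softmax policy) weights
w = np.zeros(3)       # critic (state-value baseline) weights
alpha_pi, alpha_v, gamma = 0.05, 0.05, 0.95

def run_episode(budget=30):
    global theta, w
    seed = int(rng.choice([v for v in G if target[v] == 1]))
    observed, trajectory = {seed}, []
    for _ in range(budget):
        boundary = {u for v in observed for u in G.neighbors(v)} - observed
        if not boundary:
            break
        cands = sorted(boundary)
        X = np.array([node_features(G, observed, v) for v in cands])
        logits = X @ theta
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        i = int(rng.choice(len(cands), p=probs))
        reward = float(target[cands[i]])          # +1 for harvesting a target node
        trajectory.append((X, i, probs, X.mean(axis=0), reward))
        observed.add(cands[i])                    # probe the node, expanding the observed graph
    # Monte-Carlo returns with a learned baseline (a simplified actor-critic-style update).
    ret = 0.0
    for X, i, probs, s, r in reversed(trajectory):
        ret = r + gamma * ret
        delta = ret - s @ w                        # advantage against the value baseline
        w += alpha_v * delta * s
        theta += alpha_pi * delta * (X[i] - probs @ X)   # log-softmax policy gradient
    return sum(step[-1] for step in trajectory)

for episode in range(200):
    run_episode()
print("targets harvested in one trained episode:", run_episode())
```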


Bibliographic Details
Main Authors: Peter Morales (MIT Lincoln Laboratory), Rajmonda Sulo Caceres (MIT Lincoln Laboratory), Tina Eliassi-Rad (Northeastern University)
Format: Article
Language: English
Published: SpringerOpen 2021-03-01
Series: Applied Network Science
ISSN: 2364-8228
Subjects: Incomplete networks; Reinforcement learning; Network embedding
Online Access: https://doi.org/10.1007/s41109-021-00365-8