PIANO: influence maximization meets deep reinforcement learning


Bibliographic Details
Main Authors: Li, Hui, Xu, Mengting, Bhowmick, Sourav S., Rayhan, Joty Shafiq, Sun, Changsheng, Cui, Jiangtao
Other Authors: School of Computer Science and Engineering
Format: Journal Article
Language: English
Published: 2022
Subjects: Engineering::Computer science and engineering; Deep Reinforcement Learning; Graph Embedding
Online Access: https://hdl.handle.net/10356/163625
_version_ 1826120419014344704
author Li, Hui
Xu, Mengting
Bhowmick, Sourav S.
Rayhan, Joty Shafiq
Sun, Changsheng
Cui, Jiangtao
author2 School of Computer Science and Engineering
author_facet School of Computer Science and Engineering
Li, Hui
Xu, Mengting
Bhowmick, Sourav S.
Rayhan, Joty Shafiq
Sun, Changsheng
Cui, Jiangtao
author_sort Li, Hui
collection NTU
description Since its introduction in 2003, the influence maximization (IM) problem has drawn significant research attention. The aim of IM, which is NP-hard, is to select a set of k users, known as seed users, who can influence the most individuals in a social network. State-of-the-art algorithms estimate the expected influence of nodes based on sampled diffusion paths. As the number of required samples has recently been proven to be lower bounded by a particular threshold that presets the tradeoff between accuracy and efficiency, the result quality of these traditional solutions is hard to improve further without sacrificing efficiency. In this article, we present an orthogonal and novel paradigm that addresses the IM problem by leveraging deep reinforcement learning (RL) to estimate the expected influence. In particular, we present a novel framework called deeP reInforcement leArning-based iNfluence maximizatiOn (PIANO) that incorporates network embedding and RL techniques to address this problem. To make it practical, we further present PIANO-E and PIANO@⟨d⟩, both of which can be applied directly to answer IM without training the model from scratch. An experimental study on real-world networks demonstrates that PIANO achieves the best performance with respect to efficiency and influence spread quality compared to state-of-the-art classical solutions. We also demonstrate that the learned parametric models generalize well across different networks. In addition, we provide a pool of pretrained PIANO models such that any IM task can be addressed by directly applying a model from the pool, without training over the targeted network.
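For context, the IM objective the abstract describes — choose k seed users maximizing the expected number of influenced individuals, with the expectation estimated from sampled diffusion paths — can be sketched with the classical greedy baseline under the independent cascade model. This is an illustrative sketch of the traditional sampling-based approach the abstract contrasts against, not PIANO's RL estimator; the adjacency-dict graph, activation probability `p`, and `trials` count are hypothetical parameters chosen for the example.

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200):
    """Estimate the expected spread of `seeds` under the independent
    cascade model by Monte Carlo simulation over diffusion paths."""
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for neighbor in graph.get(node, []):
                # each edge fires independently with probability p
                if neighbor not in active and random.random() < p:
                    active.add(neighbor)
                    frontier.append(neighbor)
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1, trials=200):
    """Pick k seeds, each round adding the node with the largest
    marginal gain in estimated spread (the classical greedy scheme)."""
    seeds = []
    for _ in range(k):
        base = simulate_ic(graph, seeds, p, trials)
        best_node, best_gain = None, -1.0
        for node in graph:
            if node in seeds:
                continue
            gain = simulate_ic(graph, seeds + [node], p, trials) - base
            if gain > best_gain:
                best_node, best_gain = node, gain
        seeds.append(best_node)
    return seeds
```

The lower bound the abstract mentions applies to `trials`: shrinking it speeds up each marginal-gain estimate but degrades accuracy, which is precisely the tradeoff PIANO sidesteps by learning the influence estimate instead of sampling it.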
first_indexed 2024-10-01T05:16:08Z
format Journal Article
id ntu-10356/163625
institution Nanyang Technological University
language English
last_indexed 2024-10-01T05:16:08Z
publishDate 2022
record_format dspace
spelling ntu-10356/1636252022-12-13T01:33:54Z PIANO: influence maximization meets deep reinforcement learning Li, Hui Xu, Mengting Bhowmick, Sourav S. Rayhan, Joty Shafiq Sun, Changsheng Cui, Jiangtao School of Computer Science and Engineering Engineering::Computer science and engineering Deep Reinforcement Learning Graph Embedding This work was supported by the National Natural Science Foundation of China under Grant 61972309. 2022-12-13T01:33:54Z 2022-12-13T01:33:54Z 2022 Journal Article Li, H., Xu, M., Bhowmick, S. S., Rayhan, J. S., Sun, C. & Cui, J. (2022). PIANO: influence maximization meets deep reinforcement learning. IEEE Transactions on Computational Social Systems, 1-13. https://dx.doi.org/10.1109/TCSS.2022.3164667 2329-924X https://hdl.handle.net/10356/163625 10.1109/TCSS.2022.3164667 2-s2.0-85132537548 1 13 en IEEE Transactions on Computational Social Systems © 2022 IEEE. All rights reserved.
spellingShingle Engineering::Computer science and engineering
Deep Reinforcement Learning
Graph Embedding
Li, Hui
Xu, Mengting
Bhowmick, Sourav S.
Rayhan, Joty Shafiq
Sun, Changsheng
Cui, Jiangtao
PIANO: influence maximization meets deep reinforcement learning
title PIANO: influence maximization meets deep reinforcement learning
title_full PIANO: influence maximization meets deep reinforcement learning
title_fullStr PIANO: influence maximization meets deep reinforcement learning
title_full_unstemmed PIANO: influence maximization meets deep reinforcement learning
title_short PIANO: influence maximization meets deep reinforcement learning
title_sort piano influence maximization meets deep reinforcement learning
topic Engineering::Computer science and engineering
Deep Reinforcement Learning
Graph Embedding
url https://hdl.handle.net/10356/163625
work_keys_str_mv AT lihui pianoinfluencemaximizationmeetsdeepreinforcementlearning
AT xumengting pianoinfluencemaximizationmeetsdeepreinforcementlearning
AT bhowmicksouravs pianoinfluencemaximizationmeetsdeepreinforcementlearning
AT rayhanjotyshafiq pianoinfluencemaximizationmeetsdeepreinforcementlearning
AT sunchangsheng pianoinfluencemaximizationmeetsdeepreinforcementlearning
AT cuijiangtao pianoinfluencemaximizationmeetsdeepreinforcementlearning