Evolving network representation learning based on random walks
Abstract Large-scale network mining and analysis is key to revealing the underlying dynamics of networks, not easily observable before. Lately, there is a fast-growing interest in learning low-dimensional continuous representations of networks that can be utilized to perform highly accurate and scal...
Main Authors: | Farzaneh Heidari, Manos Papagelis (York University)
---|---|
Format: | Article
Language: | English
Published: | SpringerOpen, 2020-03-01
Series: | Applied Network Science
Subjects: | Network representation learning; Evolving networks; Dynamic random walks; Dynamic graph embedding
Online Access: | http://link.springer.com/article/10.1007/s41109-020-00257-3
author | Farzaneh Heidari; Manos Papagelis
collection | DOAJ |
description | Abstract Large-scale network mining and analysis is key to revealing the underlying dynamics of networks, not easily observable before. Lately, there is fast-growing interest in learning low-dimensional continuous representations of networks that can be utilized to perform highly accurate and scalable graph mining tasks. A family of these methods is based on performing random walks on a network to learn its structural features and providing the sequence of random walks as input to a deep learning architecture to learn a network embedding. While these methods perform well, they can only operate on static networks. In the real world, however, networks evolve, as nodes and edges are continuously added or deleted. As a result, any previously obtained network representation becomes outdated, with an adverse effect on the accuracy of the network mining task at hand. The naive approach to this problem is to re-apply the embedding method of choice every time the network is updated, but this approach has serious drawbacks. First, it is inefficient, because the embedding method itself is computationally expensive. Second, the network mining task outcomes obtained from successive network representations are not directly comparable to each other, due to the randomness in the new set of random walks generated each time. In this paper, we propose EvoNRL, a random-walk-based method for learning representations of evolving networks. The key idea of our approach is to first obtain a set of random walks on the current state of the network. Then, as changes occur in the evolving network’s topology, we dynamically update the random walks kept in reserve, so that they do not introduce any bias. That way we are in a position to utilize the updated set of random walks to continuously learn accurate mappings from the evolving network to a low-dimensional network representation. Moreover, we present an analytical method for determining the right time to obtain a new representation of the evolving network that balances accuracy and time performance. A thorough experimental evaluation demonstrates the effectiveness of our method against sensible baselines and under varying conditions. |
format | Article |
id | doaj.art-7825c85b7c74434db909b4577a176dfc |
institution | Directory Open Access Journal |
issn | 2364-8228 |
language | English |
publishDate | 2020-03-01 |
publisher | SpringerOpen |
record_format | Article |
series | Applied Network Science |
title | Evolving network representation learning based on random walks |
topic | Network representation learning; Evolving networks; Dynamic random walks; Dynamic graph embedding |
url | http://link.springer.com/article/10.1007/s41109-020-00257-3 |
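Note: the abstract describes EvoNRL's key idea of keeping a set of random walks "in reserve" and updating them as the network's topology changes, instead of re-running the embedding method from scratch after every change. The sketch below is a minimal illustration of that general idea under simplifying assumptions, not the authors' implementation: it uses plain unbiased walks, and on an edge insertion it simply re-simulates every walk that visits either endpoint. All function and variable names are illustrative.

```python
# Minimal sketch of maintaining a reservoir of random walks over an evolving
# graph. Assumptions: unbiased walks, and affected walks are re-simulated in
# full on an edge insertion (the paper's actual update rules are finer-grained).
import random
from collections import defaultdict

import networkx as nx


def simulate_walk(graph, start, length, rng):
    """Plain unbiased random walk of fixed maximum length starting at `start`."""
    walk = [start]
    while len(walk) < length:
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk


def init_walks(graph, walks_per_node=10, walk_length=40, seed=0):
    """Build the initial walk reservoir plus an inverted index: node -> walk ids."""
    rng = random.Random(seed)
    walks, index = [], defaultdict(set)
    for node in graph.nodes():
        for _ in range(walks_per_node):
            walk = simulate_walk(graph, node, walk_length, rng)
            walk_id = len(walks)
            walks.append(walk)
            for v in walk:
                index[v].add(walk_id)
    return walks, index


def add_edge_and_update(graph, walks, index, u, v, walk_length=40, seed=0):
    """Add edge (u, v) and re-simulate only the walks that visit u or v.

    Re-walking just the affected walks keeps the reservoir consistent with the
    new topology while leaving every other walk untouched, so the walk set
    changes as little as possible between consecutive snapshots.
    """
    rng = random.Random(seed)
    graph.add_edge(u, v)
    affected = index[u] | index[v]
    for walk_id in affected:
        old_walk = walks[walk_id]
        new_walk = simulate_walk(graph, old_walk[0], walk_length, rng)
        # Keep the inverted index in sync with the replaced walk.
        for node in set(old_walk):
            index[node].discard(walk_id)
        for node in set(new_walk):
            index[node].add(walk_id)
        walks[walk_id] = new_walk
    return walks, index


if __name__ == "__main__":
    g = nx.karate_club_graph()
    walks, index = init_walks(g, walks_per_node=5, walk_length=20)
    walks, index = add_edge_and_update(g, walks, index, 0, 15)
    print(f"{len(walks)} walks maintained after the edge insertion")
```

The maintained walk set can then be fed to any skip-gram style trainer (as in DeepWalk or node2vec) to refresh the embedding, and, in the spirit of the abstract's analytical stopping criterion, a full re-embedding would only be triggered once enough walks have changed to justify the cost.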